As mentioned in my previous blog post, fallback fonts can have a significant impact on the CLS portion of your Web Vitals score. If your main font is a web font from somewhere like Google Fonts, browsers may take some time to fully load it, so the page does its initial render with a fallback font and then swaps in the web font once it is ready. The closer your fallback font is to the web font, the less the page will shift and the lower the impact on your CLS score. You may have selected a fallback font because it most closely resembles your web font, but something about it isn't quite identical and you see a visible shift during testing. The first tool to try is @font-face with adjustments to CSS properties such as letter-spacing, but that will only get you so far, because some characteristics of the fallback font can't be changed with existing CSS. That's where the horribly named font metrics override descriptors (or f-mods) come in.

F-mods are a set of @font-face descriptors added in Chrome 87 which allow changing some of the subtle font metrics, to try to make the fallback font hold the same space as the web font. I am not a font expert by any means, so I apologize if this could be described more accurately with better terminology. I would highly recommend watching one of the videos from Chrome Dev Summit 2020, Beyond Fast by Jake Archibald, which has an excellent brief segment on these.

ascent-override - specifies how much of the font appears above the baseline.

descent-override - specifies how much of the font appears below the baseline.

line-gap-override - specifies the amount of whitespace at the top and bottom of each line.
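As a sketch of how these fit together, an override-adjusted fallback might look like the following. The percentage values here are made-up placeholders, not metrics tuned for any real font pair, and the font names are just examples:

```css
/* Hypothetical example: tune a local fallback to occupy the same
   vertical space as a web font. The percentages are placeholders. */
@font-face {
  font-family: "fallback-adjusted";
  src: local("Arial");
  ascent-override: 90%;
  descent-override: 22%;
  line-gap-override: 0%;
}

body {
  /* Web font first, metric-adjusted local font as the fallback */
  font-family: "Open Sans", "fallback-adjusted", sans-serif;
}
```

The trick is that the overrides live on a named @font-face rule wrapping the local fallback, so the browser applies them only while the real web font is still downloading.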

Good stuff, but only supported by Chrome 87 at the moment. Other browsers will simply ignore them, so there's no harm in using them now. You get the benefits for Chrome users immediately and, most importantly, they will be reflected in the CLS CrUX field data Google will increasingly be using as part of its ranking algorithm. These are new enough that there isn't much information about them yet, and no helpful tools. I needed to develop something to help figure out what values to use for these new bits, which I've made public on GitHub. It lets you feed in a web font URL and a named fallback font, overlaying the two fonts to help align them. My use case was Google Fonts, so the JavaScript may need to be adjusted if you're using something which delivers the actual font file rather than a stylesheet with @font-face values, as Google Fonts does. Hope this helps!

F-mods alignment helper

Wow, it's been a long time since I posted anything. Nothing like having kids to keep you from blogging.

One of the things I keep an eye on for work is our Google Lighthouse scores. The metrics used to calculate the performance score began shifting to Core Web Vitals in v6. PageSpeed Insights uses Lighthouse for this score as well, which will be important later in the post. PageSpeed Insights also gives a good window into the Chrome User Experience Report (CrUX), so I like to check that regularly. The Core Web Vitals switch seems like a good one to me; the metrics are more transparent and seem to have less variance between runs.

We ran into an interesting issue with the CLS, Cumulative Layout Shift, portion of Core Web Vitals on some of our pages. We would run Lighthouse locally in Chrome and see a CLS of basically zero under all network conditions. We would then run it from PageSpeed Insights against the same page and see a failing score of greater than 0.1. The culprit ended up being the fallback fonts set in our font-family declaration. It had probably been at least 20 years since we thought about it, other than adding a Google font to the front of it a few years ago. We had the fallback font installed locally, but it turned out Googlebot didn't, so it fell back to the next font in the list. In hindsight, that next font was not a great alternative to the first two. Under the Lighthouse throttling conditions, the browser initially renders text with one of the fallback fonts and then swaps in the Google font once it finishes downloading. The offending font had much different spacing and letter widths than either the Google font or our first fallback font, so for Googlebot some text wrapped in spots and caused a layout shift when the Google font finished loading and swapped in.

The most interesting part of this was figuring out which fonts Googlebot supports. You can't just change the user agent in Chrome, as that will still use all of the fonts on your machine, and I wasn't able to find anywhere that Google has published a list of the fonts installed on their Googlebot VMs. There were a couple of options I could think of, both of which required publishing something public facing.

  1. Create a page with some client side code which detects the Googlebot user agent, attempts to display fonts, and logs the results somewhere. Could maybe trigger this by doing a live inspection from Google search console.
  2. Create a page with the fonts in question and then run it through Pagespeed Insights and view the resulting screenshot.

Using the PageSpeed Insights method seemed much easier, and the screenshot seemed like a much better way of seeing the results. Beyond the original page, I was curious which web safe fonts Googlebot supports, since our initial fallback font was supposedly web safe. I built a page which tries each of the fonts in the W3Schools list of CSS Web Safe Fonts, so it could be run through PageSpeed Insights with all of them visible. An image of the result is below, along with links to the test page and to run the test page through PageSpeed Insights.
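The test page itself can be as simple as one line per font, each with an intentionally contrasting generic fallback so a substitution is obvious in the screenshot. The markup below is a reconstruction of the idea, not the actual test page:

```html
<!-- Serif fonts get a sans-serif fallback, and vice versa,
     so a missing font is visually obvious in the screenshot. -->
<p style="font-family: Georgia, sans-serif;">Georgia</p>
<p style="font-family: 'Times New Roman', sans-serif;">Times New Roman</p>
<p style="font-family: Arial, serif;">Arial</p>
<p style="font-family: 'Comic Sans MS', serif;">Comic Sans MS</p>
```

If a line renders in the generic family instead of the named font, that font isn't installed on the machine running the test.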

Googlebot installed fonts test page

Run PageSpeed Insights against page

[GooglebotFonts.jpg - screenshot of the Googlebot installed fonts test results]

Sans-serif is used as the fallback for the serif fonts, which are the first six. Conversely, serif is the fallback for the remaining fonts, which are sans-serif. Maybe that's a little weird, but I wanted an obvious visual cue. It's quite surprising which fonts were supported and how many were not: only half of the serif fonts were supported, and less than half of the sans-serif ones. (Comic Sans MS? You've gotta be kidding me, Googlebot!) Hope this helps someone else!

The Development, Staging and Release environments are good enough for most projects. But what if you have one of the situations where you need more?

One of the solutions I work with has some fun requirements, and adding more environments lets us easily cope with the additional scenarios. The documentation Microsoft has put together for this is actually quite good and shows much of its flexibility. Too good a job, actually; it's a little overwhelming. So this is as much to help me remember how to do this in the future as anything.

  1. Add the additional environments to launchSettings.json. These will now show as options in the box next to the play arrow in Visual Studio. We have multiple dev and release scenarios, so we’ll add DBG-X, DBG-Y, REL-X, and REL-Y.
  2. "DBG-X": {
       "commandName": "IISExpress",
       "launchBrowser": true,
       "environmentVariables": {
         "ASPNETCORE_ENVIRONMENT": "DBG-X"
       }
     },

  3. Add appsettings.json files for each of the new environments, such as appsettings.DBG-X.json. The call in Program.cs to WebHost.CreateDefaultBuilder() will now pick up the respective version for each environment. And it even magically knows to nest these under appsettings.json in Visual Studio.
  4. Since we have multiple development environments (DBG-X and DBG-Y) and are no longer using the Development environment, the condition in Startup.Configure using env.IsDevelopment() will no longer work; we need a custom version. ASP.NET Core gives multiple ways of doing this, so you could create custom versions of Startup.cs, Configure(), or ConfigureServices() for each environment if you need to. That's overkill for me at the moment, so I'll just check if the environment name starts with "DBG-".
    if(env.EnvironmentName.StartsWith("DBG-")) {…
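Putting that check in context, the relevant part of Startup.Configure might look something like the sketch below. The middleware calls are illustrative, not prescriptive, and depending on your ASP.NET Core version the environment parameter may be IHostingEnvironment or IWebHostEnvironment:

```csharp
public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
    // Treat any DBG-* environment the way env.IsDevelopment() treats "Development"
    if (env.EnvironmentName.StartsWith("DBG-"))
    {
        app.UseDeveloperExceptionPage();
    }
    else
    {
        app.UseExceptionHandler("/Error");
        app.UseHsts();
    }

    // ...the rest of the pipeline is the same for all environments
}
```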

And that’s really it. Hope this helps!

At my current employer, we moved all of our videos to Azure using Azure Media Services a little under 2 years ago. This allowed us to upgrade them from using a Flash player to an HTML5 based video system, using the Azure Media Player for playback. I can’t say enough good things about using Azure for this. The videos went from looking mediocre and generally only playing on desktop machines to looking crystal clear at multiple resolutions, playable on every desktop and mobile device we throw at it.

We’ve now circled back to fill in a gap we missed at that point in time: captions. (Or subtitles, if you prefer.) Videos without captions or subtitles exclude a portion of users, and that’s not cool. Since we’re already using Media Services for the video encoding, it made sense to use the Azure Media Indexer to generate the captions for us. However, most of the examples out there seem to be targeted at doing the indexing when you upload a video. We are certainly doing that moving forward, but there were a significant number of videos already out there which needed to be processed, and that doesn’t seem to be a well documented scenario. Hopefully I can fill in that gap a little with this post.

First thing, start with the upload scenario from this link:
https://docs.microsoft.com/en-us/azure/media-services/media-services-index-content

That will get you most of the way, but there are a couple of changes when using existing files. The first change is to load the existing video Asset using its Asset ID. Replace the function called CreateAssetAndUploadSingleFile with one which looks something like this:

static IAsset LoadExistingAsset(string AssetId)
{
    // Asset IDs are unique, so the query returns at most one match
    var matchingAssets = from a in _context.Assets
                         where a.Id.Equals(AssetId)
                         select a;

    return matchingAssets.FirstOrDefault();
}

You’ll need to know the Asset ID for the video. Hopefully you’ve been storing those somewhere as you’ve encoded videos; we had them in SQL Azure, so I pulled them back from there. If you don’t have them, playing around with LINQ on _context.Assets will probably return them in some way. I haven’t needed to do that myself.
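If you do need to recover the Asset IDs, a quick-and-dirty enumeration over the same context used above should get you there. I haven't needed this myself, so treat it as a sketch:

```csharp
// List every asset's ID and name so you can match them back up to your videos.
foreach (IAsset a in _context.Assets)
{
    Console.WriteLine("{0}\t{1}", a.Id, a.Name);
}
```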

Now that you have a reference to the video asset, you can work your way down the code in RunIndexingJob and update a few things. I would recommend renaming the job to something which uniquely identifies the video, as that will show up in the Jobs section of the media services account in the Azure portal. If it fails, it makes it much easier to figure out which one to redo. Same thing with the indexing task and output asset name, renaming them makes them easier to track in the logs. For the configuration file, follow the Task Preset for Media Indexer link and load the file from wherever seems appropriate to you. I put some string placeholders into the config file for the metadata fields, which I’m replacing with some data pulled from the same database I’m getting the Asset ID from. So that section for me looks like:

<input>
  <metadata key="title" value="{0}" />
  <metadata key="description" value="{1}" />
</input>

That should get you through RunIndexingJob. This is where the examples really fell flat for me. There are some additional steps required now. I changed RunIndexingJob to return the output media asset, as the caption files now have a different Asset ID than the video. Since Azure Blob Storage underpins Media Services, the Asset ID is actually the blob container name as well. Since the files the indexer generated have a different Asset ID, it means they’re actually in a different container than the video. This is important for actually consuming the captioning file. So rather than returning true like the example code, mine returns job.OutputMediaAssets[0]. There are three steps left to actually be able to use the caption files.

  1. Publish the caption asset.
  2. Change the blob storage permissions on the Asset ID. (Remember the Asset ID is the same as the blob container name.)
  3. Save the path to the published caption file in blob storage.

Publish the Caption Asset

This is really easy, and very similar to publishing the video files after encoding. From code which calls RunIndexingJob:

var asset = RunIndexingJob(AssetId);
ILocator loc = PublishAsset(asset);

The definition for PublishAsset looks something like so:

static ILocator PublishAsset(IAsset asset)
{
    var locator = _context.Locators.Create(LocatorType.Sas, asset, AccessPermissions.Read, TimeSpan.FromDays(35600));
    return locator;
}

The major difference between the video publish and this is the different LocatorType. Using Sas creates a Progressive download locator, whereas OnDemandOrigin creates a streaming locator. If you use the latter, it won’t work. You return the locator back as it has the URL to the container, which will be helpful for the next step.

Change Blob Storage Permissions

Now that the Asset is published, the blob container is out there and available, but requires a SAS token to access it. If that’s what you want, skip this step. I want it to be available publicly, however, so the blob container permissions need a quick update. Since the Asset ID is the same as the blob container name, we’ll use the blob storage API to alter this.

var videoBlobStorageAcct = CloudStorageAccount.Parse(_blobConnStr);
CloudBlobClient videoBlobStorage = videoBlobStorageAcct.CreateCloudBlobClient();
string destinationContainerName = (new Uri(loc.Path)).Segments[1];
CloudBlobContainer assetContainer = videoBlobStorage.GetContainerReference(destinationContainerName);

if (assetContainer.Exists()) // This should always be true
{
     assetContainer.SetPermissions(new BlobContainerPermissions
     {
         PublicAccess = BlobContainerPublicAccessType.Blob
     });
}

I’m setting up the blob container there and grabbing the container name from the locator just to be safe. Then it gets the reference and sets the container access to blob. No more SAS required to get the captions!

Save the path to the caption file

The last thing to do now is build the path to the caption file or files and save them so they can be retrieved and used by the player. I’m only generating the WebVTT format, since I’m only concerned with playing the videos via a website.

string PublishUrl = loc.BaseUri;
string vttFileName = "";

// Loop through the files in the asset to find the vtt file
foreach (IAssetFile file in asset.AssetFiles)
{
    if (file.Name.EndsWith(".vtt"))
    {
        vttFileName = file.Name;
        break;
    }
}

string captionUrl = PublishUrl + "/" + vttFileName;
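Once captionUrl is saved, consuming it from a plain HTML5 video element is just a track element. The Azure Media Player has its own configuration for text tracks, so treat the markup below as an illustrative sketch with made-up URLs rather than a drop-in snippet:

```html
<video controls>
  <source src="https://example.streaming.mediaservices.windows.net/example/manifest"
          type="application/vnd.ms-sstr+xml" />
  <!-- src here is the captionUrl built in the code above -->
  <track kind="captions"
         src="https://examplestorage.blob.core.windows.net/asset-id/captions.vtt"
         srclang="en" label="English" default />
</video>
```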

Now you save the value in captionUrl and you’re good to go! One small additional note which I was stuck on for a little while. If you’re consuming the caption file from a different domain, which is almost guaranteed (unless you’re running your site from static files on blob storage), you’ll need to change the CORS settings for the blob storage account being used by Media Services. The easiest way I’ve found to set this is to use the Azure portal. Browse to the storage account being used via the storage blade and not the media services blade. The storage blade has a nice UI which lets you whip through it in a few seconds. Hope this helps!

(This post refers to Azure Media Indexer v1. At the time of writing, v2 was in preview.)

My original reason for writing the Modern Delicious app for Windows was so I could have a nice way to read through links I had saved on Windows 8. I basically wanted a reading list from my recent Delicious links. I wrote and released the app, but of course soon after, the Reading List app for Windows was announced. My usage scenario was catered for, and there didn’t seem to be a reason for the app to exist any longer. There had been some very kind feedback from several people, however, so I continued development on the app as I had time. This feedback was the reason there were updates to the app for Windows 8.1 and 10, features like tag suggestions were added, and even a Windows Phone 8.1 version happened. Discovering the intricacies of developing for the Windows Store has been an incredibly interesting learning experience. One key point in all this: I am only a third-party developer and have no affiliation with Delicious or any of the companies which have owned it.

For the last two months, part of the API provided by Delicious has not been functioning correctly. Most of the API is working perfectly, except for one method. This one method happens to be the key one which allows the app to retrieve each person’s entire history of posts (or bookmarks, if you prefer). Without it, the best the app can do is to get the last 100 or so posts for each user. As I’ve learned throughout the course of developing this app, many of the users of Modern Delicious are longtime Delicious users, with an extensive bookmark history. Only being able to view the last 100 posts renders the app effectively useless for these users. I’ve tried to contact Delicious through several different methods: email, Twitter, and their Github repo. They’ve been unresponsive thus far, and I cannot allow the app to continue being downloaded and frustrating users who legitimately just want a functional Delicious client app for Windows 10. To that end, and with great sadness, I’ve removed Modern Delicious from the Windows Store.

This is the danger of developing against a third-party API such as the one provided by Delicious. At any time, the company controlling the API can do whatever they want with it. I have no ill will nor any hard feelings towards Delicious for breaking their API; it’s entirely possible they had legitimate engineering reasons for the breaking change. I’m incredibly grateful to Delicious and all of the engineers who designed and maintained the API over the last decade, it’s been a tremendous development resource for many developers. It would have been great if there had been some warning about impending changes potentially breaking the API, or some notice of the API being retired, which would have allowed myself and others to give some warning to our app users. But ultimately, it’s their API and they can break it as they please in order to do what is needed for their company.

To the many Modern Delicious users over the last few years: thank you for your feedback and continued usage. If Delicious fixes the issue, I will restore the app to the Windows Store with any updates required by API changes. Delicious has a functional API of some kind out there, as their official apps for iOS and Android both remain functional. Either they have not made the details of that API public, or I am not clever enough to figure it out. Either way, I feel like I’ve let you all down.

To Delicious: it would be very nice of you to repair the API or publish the details of the API your iOS and Android apps use. You have a long history of providing a wonderful, free service while struggling to find a path to profitability. The website seems adrift after the last changes were released, and I sincerely hope it doesn’t mean the end of Delicious is near. And once again, I continue to extend the same offer I have made many times before: I will gladly give you the app for free so you can have an official Windows Store app. (The Windows 10 version is even a UWP app!)