At my current employer, we moved all of our videos to Azure using Azure Media Services a little under 2 years ago. This allowed us to upgrade them from a Flash player to an HTML5-based video system, using the Azure Media Player for playback. I can’t say enough good things about using Azure for this. The videos went from looking mediocre and generally only playing on desktop machines to looking crystal clear at multiple resolutions, playable on every desktop and mobile device we’ve thrown at them.

We’ve now circled back to fill a gap we missed at that point in time: captions. (Or subtitles, if you prefer.) Videos without captioning or subtitles exclude a portion of users, and that’s not cool. Since we’re already using Media Services for the video encoding, it made sense to use the Azure Media Indexer to generate the captions for us. However, most of the examples out there seem to target indexing a video at the time you upload it. We’re certainly doing that moving forward, but there were a significant number of videos already out there which needed to be processed, and that doesn’t seem to be a well-documented scenario. Hopefully I can fill in that gap a little with this post.

First thing, start with the upload scenario from this link:
https://docs.microsoft.com/en-us/azure/media-services/media-services-index-content

That will get you most of the way, but there are a couple of changes when using existing files. The first change is to load the existing video asset using its Asset ID. Replace the function called CreateAssetAndUploadSingleFile with one which looks something like this:

static IAsset LoadExistingAsset(string assetId)
{
    // Asset IDs are unique, so take the first match (or null if the ID is wrong).
    IAsset asset = (from a in _context.Assets
                    where a.Id.Equals(assetId)
                    select a).FirstOrDefault();

    return asset;
}

You’ll need to know the Asset ID for the video. Hopefully you’ve been storing those someplace as you’ve encoded videos; we had them in SQL Azure, so I pulled them back from there. If you don’t have them, enumerating _context.Assets should turn them up in some way. I haven’t needed to do that myself.
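If you do end up having to hunt for IDs, a minimal sketch would be to dump the ID and name of every asset in the account and match them up by hand (untested on my part):

// List every asset in the Media Services account so the IDs can be
// matched up against the video names.
foreach (IAsset a in _context.Assets)
{
    Console.WriteLine("{0}\t{1}", a.Id, a.Name);
}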

Now that you have a reference to the video asset, you can work your way down the code in RunIndexingJob and update a few things. I would recommend renaming the job to something which uniquely identifies the video, as that will show up in the Jobs section of the media services account in the Azure portal; if it fails, it makes it much easier to figure out which one to redo. The same goes for the indexing task and output asset name: renaming them makes them easier to track in the logs. For the configuration file, follow the Task Preset for Media Indexer link and load the file from wherever seems appropriate to you. I put some string placeholders into the config file for the metadata fields, which I’m replacing with data pulled from the same database I’m getting the Asset ID from. So that section for me looks like:

<input>
    <metadata key="title" value="{0}" />
    <metadata key="description" value="{1}" />
</input>
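To fill those placeholders, I read the preset file and run it through string.Format before creating the indexing task. Roughly like this (configFilePath, videoTitle, and videoDescription are my own variables, and indexer is the Azure Media Indexer processor from the sample; this works because the preset contains no other curly braces):

// Load the Media Indexer preset and drop the video's metadata into it.
string configuration = File.ReadAllText(configFilePath);
configuration = string.Format(configuration, videoTitle, videoDescription);

// Use the video's title in the task name so it's easy to spot in the portal.
ITask task = job.Tasks.AddNew("Indexing Task - " + videoTitle,
    indexer, configuration, TaskOptions.None);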

That should get you through RunIndexingJob. This is where the examples really fell flat for me; there are some additional steps required now. I changed RunIndexingJob to return the output media asset, as the caption files have a different Asset ID than the video. Since Azure Blob Storage underpins Media Services, the Asset ID is actually the blob container name as well. Since the files the indexer generated have a different Asset ID, they’re actually in a different container than the video. This is important for actually consuming the captioning file. So rather than returning true like the example code, mine returns job.OutputMediaAssets[0], as sketched after the list below. There are three steps left to actually be able to use the caption files.

  1. Publish the caption asset.
  2. Change the blob storage permissions on the Asset ID. (Remember the Asset ID is the same as the blob container name.)
  3. Save the path to the published caption file in blob storage.
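Here’s roughly what the tail end of my RunIndexingJob looks like with that change. It’s a sketch: the method now returns IAsset instead of bool, and the job submission and progress checking follow the same approach as the sample.

job.Submit();

// Same as the sample: wait for the indexing job to finish.
Task progressJobTask = job.GetExecutionProgressTask(CancellationToken.None);
progressJobTask.Wait();

if (job.State == JobState.Error)
{
    // The renamed job shows up in the portal's Jobs section, so it's easy to find and redo.
    return null;
}

// The caption files live in a new output asset with its own Asset ID (container).
return job.OutputMediaAssets[0];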

Publish the Caption Asset

This is really easy, and very similar to publishing the video files after encoding. From code which calls RunIndexingJob:

var asset = RunIndexingJob(AssetId);
ILocator loc = PublishAsset(asset);

The definition for PublishAsset looks something like so:

static ILocator PublishAsset(IAsset asset)
{
    var locator = _context.Locators.Create(LocatorType.Sas, asset, AccessPermissions.Read, TimeSpan.FromDays(35600));
    return locator;
}

The major difference between the video publish and this is the LocatorType. Using Sas creates a progressive download locator, whereas OnDemandOrigin creates a streaming locator; if you use the latter, you won’t be able to download the caption file. Return the locator, as it has the URL to the container, which will be helpful for the next step.

Change Blob Storage Permissions

Now that the Asset is published, the blob container is out there and available, but requires a SAS token to access it. If that’s what you want, skip this step. I want it to be available publicly, however, so the blob container permissions need a quick update. Since the Asset ID is the same as the blob container name, we’ll use the blob storage API to alter this.

var videoBlobStorageAcct = CloudStorageAccount.Parse(_blobConnStr);
CloudBlobClient videoBlobStorage = videoBlobStorageAcct.CreateCloudBlobClient();
string destinationContainerName = (new Uri(loc.Path)).Segments[1];
CloudBlobContainer assetContainer = videoBlobStorage.GetContainerReference(destinationContainerName);

if (assetContainer.Exists()) // This should always be true
{
    assetContainer.SetPermissions(new BlobContainerPermissions
    {
        PublicAccess = BlobContainerPublicAccessType.Blob
    });
}

I’m setting up the blob client there and grabbing the container name from the locator path just to be safe. Then it gets the container reference and sets the container access to Blob. No more SAS token required to get the captions!

Save the path to the caption file

The last thing to do now is build the path to the caption file or files and save them so they can be retrieved and used by the player. I’m only generating the WebVTT format, since I’m only concerned with playing the videos via a website.

string PublishUrl = loc.BaseUri;
string vttFileName = "";

// Loop through the files in the asset to find the vtt file
foreach (IAssetFile file in asset.AssetFiles)
{
    if (file.Name.EndsWith(".vtt"))
    {
        vttFileName = file.Name;
        break;
    }
}

string captionUrl = PublishUrl + "/" + vttFileName;

Now you save the value in captionUrl and you’re good to go! One small additional note which I was stuck on for a little while. If you’re consuming the caption file from a different domain, which is almost guaranteed (unless you’re running your site from static files on blob storage), you’ll need to change the CORS settings for the blob storage account being used by Media Services. The easiest way I’ve found to set this is to use the Azure portal. Browse to the storage account being used via the storage blade and not the media services blade. The storage blade has a nice UI which lets you whip through it in a few seconds. Hope this helps!
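If you’d rather script it than click through the portal, something along these lines should work against that same storage account. This is a sketch using the classic storage SDK (Microsoft.WindowsAzure.Storage); the allowed origin is a placeholder for your site’s domain.

// CorsRule and ServiceProperties live in Microsoft.WindowsAzure.Storage.Shared.Protocol.
var storageAccount = CloudStorageAccount.Parse(_blobConnStr);
var blobClient = storageAccount.CreateCloudBlobClient();

ServiceProperties properties = blobClient.GetServiceProperties();
properties.Cors.CorsRules.Add(new CorsRule
{
    AllowedOrigins = new List<string> { "https://www.yoursite.com" }, // placeholder
    AllowedMethods = CorsHttpMethods.Get,
    AllowedHeaders = new List<string> { "*" },
    ExposedHeaders = new List<string> { "*" },
    MaxAgeInSeconds = 3600
});
blobClient.SetServiceProperties(properties);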

(This post refers to Azure Media Indexer v1. At the time of writing, v2 was in preview.)

One of my most frequent issues when releasing to Azure websites using the old build system from VSTS/VSO is that the deployment fails. By old build system, I mean the continuous deployment magic you set up from the Azure portal for an Azure website (now App Service). The worst is on a good sized solution I work on which takes about 20 minutes to build. So I wait the 20 minutes, only for it to fail on the deployment step for reasons beyond my control, with errors like "invalid thumbprint" or "intermittent communications issues." Despite the build succeeding, I'm left kicking off a new build and waiting 20 minutes just for the deployment. So I was really excited to see the new Build and Release Management system in VSTS/VSO, where it looks like retrying just the deployment step is possible. The solution with the 20 minute build time also has four different configurations. (This means there's potentially 80 minutes of wasted build time if the deploy fails for all of them.) The new build and release system also looks to have a multi-configuration option where I can build all four at once.

So diving into the main getting started tutorial, it looks very promising. First thing is to set up the build. I'm following the tutorial exactly as shown by creating a Visual Studio build, but with three small additional steps in order to cope with building all four of my configurations. On the variables tab, I replaced the existing build configuration values with my four configurations, separating each with a comma. If you save and queue a build at this point, it will fail, as it treats the value as a single configuration.
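For reference, the BuildConfiguration variable ends up holding a single comma-separated value along these lines (REL-IE is one of my real configuration names; the rest here are just placeholders):

REL-IE,REL-SiteB,REL-SiteC,REL-SiteD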


The next piece for multiple configurations is on the Options tab. (This has changed, see edit under the image below.) Put a check next to MultiConfiguration, and set Multipliers to "BuildConfiguration". You can optionally choose to check the options below, as I have, to potentially build in parallel and continue with other build configurations if one fails.

*Edit 2018-02-01: Microsoft moved this when multi-phase builds released. It can now be enabled on the Tasks tab by selecting the Phase; I believe this defaults to Phase 1 for both new and converted legacy builds. It's the third item in the list, under Process and Get Sources, and it's not obvious that it can be clicked. From there, it's under Execution Plan > Parallelism. The Multipliers field is still the same, but there's now an additional required field, Maximum Number of Agents. Using 1 will force each configuration to finish building before the next one begins, but you can increase it and it may build in parallel.

The final piece is on the Build tab, selecting the Copy and Publish Build Artifacts step. If you were to build now, it would build all four configurations but drop all four zip files in the same directory, which isn't going to work since they'll all have the same name. My solution is to put each in its own folder, naming the folder for the configuration. This is easy to do: just add "\$(BuildConfiguration)" (without quotes) after the artifact name in the Artifact Name textbox. Now it's safe to save and queue a build!


Interestingly, my build times decreased to around 14-15 minutes for each configuration. I'm pretty sure the deployment step wasn't taking 5 minutes on the old setup, but regardless, I like anything which makes the build faster!

Now switch over to the Release tab from the top row. I've followed the tutorial down to the step where you create the release definition: I've created an Azure Website Deployment, disabled the test step, and renamed the environment to match one of my build configurations. I have four configurations, so I'll need four environments; I'll come back and add those after I get the first one right. Continuing with the tutorial, I also linked to the build definition I created earlier. Once that's done, you can see the artifact name from your build in the Artifacts tab. Back on the Environments tab, we need to line up the configuration name for the release. Click the three dots to the right of the environment name and select Configure Variables. Next to ReleaseConfiguration, I'll set the value to match the first entry in the build configuration list in the first image above, REL-IE.


Hit OK to save and return to the environments. There is a slight adjustment needed to the Web Deploy Package value in order to grab the correct deployment package for the current configuration. The default value of "$(System.DefaultWorkingDirectory)\**\*.zip" will grab all of the zip files in the build artifacts location. I have four configurations, so there are four zip files, each within a folder named for their build configuration. Using a syntax similar to the build step, modify this to look for this configuration folder using the ReleaseConfiguration variable we just set. This will make the value "$(System.DefaultWorkingDirectory)\**\$(ReleaseConfiguration)\**\*.zip". I have a staging slot set up for this web site/app service, so I've also set variables for the release to go to that slot as well.


Check the tutorial again for additional steps for continuous deployment and such. Otherwise, save and run a release - it should be good to go! The first time I tried mine, it failed for no logical reason, much like the old release system. I retried just the release and it worked, proving that I now have an improved way to deploy without rebuilding. Now we need to copy the existing environment over for the remaining three build configurations. The easiest way to do this is by clicking the three dots to the right of the environment name and selecting Clone Environment. It will copy everything from the existing environment and let you change the name right away. The two things which need to be updated now are the ReleaseConfiguration variable value and the Web App Name for each. (If your sites are in different data centers, you may need to update the Web App Location value as well.) So I cloned the first configuration three times, updated those values, and saved, resulting in something like this:


Now select the release option from the menu, select the build which you queued earlier (and is hopefully done by now), and select the sites/configurations you want to deploy from left to right. My one complaint at this point is that you cannot pick and choose which of the environments to deploy; you have to select them in order from left to right. Hopefully that option is added in the future. After selecting all of the configurations, mine worked on the first try, taking a little over 8 minutes.

To work around the limitation I mentioned above about selecting environments out of order, I've also created individual release definitions for each of my build configurations, in the event I need to deploy just one or two of them. All in all though, I'm really happy with how this turned out. While Release Management is still technically in preview, I'll be using this in production as it's such a significant improvement over my current release experience.

In the long list of tips for optimizing your website performance are three things: use a CDN to serve content, make as few requests for Javascript and CSS as possible, and minify your Javascript and CSS files by removing whitespace and comments to make them as small as possible. ASP.Net provides a really excellent Bundling and Minification feature to handle the latter two points. This comes at a cost, however: those static files must live within the project and can no longer be served via a CDN. (Not bundled, anyway.)

For a basic website, ASP.Net Bundling and Minification is fantastic. You set it up in the BundleConfig.cs file as described here, and you get a minified version of your Javascript in as few files as you configure. The Bundling tries to cope with a CDN through the UseCdn property there, but it doesn’t really work, in my opinion. It just spits out the URL on the page to the CDN, so you’re still left with a request for each library if the user doesn’t have the file cached yet. If you’re using something like jQuery UI, for example, it will probably be a request for jQuery, another for the jQuery UI Javascript file, and another for the jQuery UI theme you’ve specified. For these, all you’ve really done is replaced the old ScriptManager with another construct, and the various performance testing tools are still going to shout at you for making too many requests.
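For context, the stock CDN support ends up looking roughly like this (the jQuery CDN URL is just an illustration). Each bundle maps to a single CDN file, so three libraries still mean three requests:

// Stock ASP.Net bundling with UseCdn: the bundle simply renders the CDN URL on the page.
bundles.UseCdn = true;
var jquery = new ScriptBundle("~/bundles/jquery",
    "https://ajax.aspnetcdn.com/ajax/jQuery/jquery-2.1.3.min.js");
jquery.Include("~/Scripts/jquery-{version}.js");
bundles.Add(jquery);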

One of the guys on my team at work, Allan Bybee, took it upon himself to find a solution to this CDN bundling limitation. All of our Javascript and CSS files were previously being served from a CDN, so all of our static content lived out there rather than in the web project. Moving all of those files back into the project would have been a large task with many compromises to the development experience and web performance at the end, the critical one being that we would gain bundling at the expense of serving files from the CDN. After a few weeks of digging around in the ASP.Net source code and diligent trial and error, he ended up with a really elegant solution which gives us the best of all worlds: source files that live in the CDN, which bundling watches for changes and then saves the resulting bundle back to the CDN. He did it by extending the built-in ASP.Net bundling system, so it’s very similar to that - just improved upon. Here’s the best part: it’s now a Nuget package!

 

Azure Bundling on Nuget

Install from Nuget, and there is just a small amount of configuration needed to get things going. There are six settings that need configuring, and you have three options for doing this. (Or read the full documentation.)

  1. Web.config - 6 required app settings
  2. Config.json - A file just for the settings
  3. BundleConfig - A different constructor

1. Web.config

<add key="AzureAccountName" value="your account name" />
<add key="AzureAccessKey" value="your access key" />
<add key="CdnPath" value="your cdn or Azure storage url" />
<add key="SecureCdnPath" value="your secure cdn or Azure storage url" />
<add key="CachePolltime" value="integer value in seconds how often to poll for file changes" />
<add key="BundleCacheTTL" value="integer value in seconds for cache time to live" />

2. Config.json

{
    "AzureAccountName": "your account name",
    "AzureAccessKey": "your access key",
    "CdnPath": "your cdn or Azure storage url",
    "SecureCdnPath": "your secure cdn or Azure storage url",
    "CachePollTime": "integer value in seconds how often to poll for file changes",
    "BundleCacheTTL": "integer value in seconds for cache time to live"
}

If you use either of those first two configuration options, it will pick them up automatically and your BundleConfig code ends up very, very similar to what it was prior. Set up a variable that defines the Azure blob storage container where the source files live and where the bundles will be stored. (Make sure this container’s access level is not set to private, or none of this is going to work.) That variable is called AzureContainer in the code example below.

bundles.UseCdn = true;
BundleTable.EnableOptimizations = true;
BundleTable.VirtualPathProvider = new StorageVirtualPath(AzureContainer);
bundles.Add(new JSBundle("~/your virtual path/jqueryVal", "your folder").Include("~/your folder path/jquery.validate.js", "~/your folder path/jquery.validate.unobtrusive.js"));
bundles.Add(new JSBundle("~/your virtual path/modernizr", "your folder").Include("~/your folder path/modernizr-2.6.2.js"));
bundles.Add(new CSSBundle("~/your virtual path/site", "your folder").Include("~/your folder path/bootstrap.css", "~/your folder path/site.css"));

The key highlights from the code above are setting the VirtualPathProvider property on the BundleTable to an instance of StorageVirtualPath, and changing any ScriptBundle or StyleBundle which you want to be a CDN bundle to a JSBundle or CSSBundle, respectively.

3. BundleConfig
Set up variables for each of the settings in the BundleConfig, and pass them into the constructors of the custom bundling classes (StorageVirtualPath, JSBundle, and CSSBundle).

BundleTable.VirtualPathProvider = new StorageVirtualPath(AzureContainer, AzureAccountName, AzureAccessKey, CachePollTime);
var jquerycombined = new JSBundle("~/bundles/jquerycombined", AzureContainer, AzureAccountName, AzureAccessKey, CdnPath, SecureCdnPath, BundleCacheTTL).Include("~/your folder path/jquery-2.1.3.js", "~/your folder path/bootstrap.js", "~/your folder path/respond.js");
bundles.Add(jquerycombined);

If you compare that code to the snippet above, you’ll notice the additional arguments being passed in on the StorageVirtualPath, JSBundle, and CSSBundle. These are the variables you’ll want to define with the appropriate values.

One last note on this is the order of precedence on the settings. Assuming you set it up in both the web.config and JSON file, the settings from the web.config will win over the JSON file. If you set it in the BundleConfig file, that will trump having either of the other two set.

This could not have been achieved without Microsoft making the move to open source .Net and ASP.Net. So if there’s something you don’t like about this, or find a bug, or want to contribute additional features to it, here’s the best part - it’s all on GitHub!

 

AzureBundling on GitHub

This project fills a big gap in ASP.Net bundling for those of us that are intensely focused on tuning the performance of websites. Allan has done a tremendous job figuring this out and putting it out there for the world (if you’re reading this Allan, thanks again!) and I hope that other developers find this as useful as I do. If you do, please make sure to report any issues you find, or better still, help contribute to this great project and make it even better!

Here is a quick list of the things that have tripped me up since I’ve been playing with Azure.

  1. Paths to blobs are case sensitive. I have a camera that saves .jpg as .JPG, which had me scratching my head for a while when the images wouldn’t work after uploading to Azure. It turned out I was using lower case when trying to access them, so I just had to update my upload code to convert the extension to lower case. (See the sketch after this list.)
  2. SQL Azure doesn’t support full text indexing yet. You have to set up a VM instance running the regular SQL Server if you want this in the Azure cloud.
  3. Always set the blob headers when you upload a file, the content-type header in particular. (Also covered in the sketch below.)
  4. You can’t use a CNAME with SSL for blob storage. You have to use the long, odd URL that Azure provides you with when you set up the blob container. So instead of https://blob.example.com, you end up with https://xx000000.xx.msecnd.net.
  5. There is no way to manually expire a file on the Azure CDN. Once you access it through the CDN, it’s there until the refresh algorithm updates it. So you have to be very sure you’re ready before you hit that “Publish” button.
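For the first and third points, the upload fix ended up being just a couple of lines. Here’s a rough sketch using the classic storage SDK; the container and variable names are placeholders:

// Lower-case the extension so the blob URL casing is predictable (point 1),
// and set the content type so browsers know what they're getting (point 3).
var account = CloudStorageAccount.Parse(connectionString);
var container = account.CreateCloudBlobClient().GetContainerReference("images");

string blobName = Path.GetFileNameWithoutExtension(localPath)
                  + Path.GetExtension(localPath).ToLowerInvariant();

CloudBlockBlob blob = container.GetBlockBlobReference(blobName);
blob.Properties.ContentType = "image/jpeg";

using (var stream = File.OpenRead(localPath))
{
    blob.UploadFromStream(stream);
}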

I’ve been playing with the HTML5 canvas element lately. There are some very interesting things you can do with it with images, a couple of which I was hoping to put on my website. There are some security restrictions, however, one of which is that you cannot use some of its most powerful features if the image comes from another domain.

One of the things I was doing with the images for my website was moving them to Azure blob storage, with the intention of exposing the blob store through a CDN. This is a pretty common decision for any site that needs to scale. My site doesn’t need it, but I wanted to play with the new shiny. However, serving the images from another domain renders the HTML5 canvas fairly useless for anything except small scale experiments like mine.

I can understand the security arguments, but I’m sorry, they’re silly. The canvas is crippled, yet there’s a huge backdoor that renders the restrictions completely pointless: you simply make an AJAX request for the image, convert it to a Base64 string, and use that as the canvas image source. It completely defeats the security, but sacrifices performance to do so. So what was the point of pretending to lock it down? How annoying. I may post some code when I finish working through it.

Source: Cleaning remote images for use with HTML5 canvas