At my current employer, we moved all of our videos to Azure using Azure Media Services a little under two years ago. This allowed us to upgrade from a Flash player to an HTML5-based video system, using the Azure Media Player for playback. I can’t say enough good things about using Azure for this. The videos went from looking mediocre and generally only playing on desktop machines to looking crystal clear at multiple resolutions, playable on every desktop and mobile device we’ve thrown at them.

We’ve now circled back to fill a gap we missed at the time: captions. (Or subtitles, if you prefer.) Videos without captions or subtitles exclude a portion of users, and that’s not cool. Since we’re already using Media Services for video encoding, it made sense to use the Azure Media Indexer to generate the captions for us. However, most of the examples out there seem to be targeted at indexing a video when you upload it. We’re certainly doing that going forward, but there were a significant number of videos already out there that needed to be processed, and that doesn’t seem to be a well-documented scenario. Hopefully I can fill in that gap a little with this post.

First thing, start with the upload scenario from this link:
https://docs.microsoft.com/en-us/azure/media-services/media-services-index-content

That will get you most of the way, but there are a couple of changes when working with existing files. The first change is to load the existing video asset using its Asset ID. Replace the function called CreateAssetAndUploadSingleFile with one that looks something like this:

static IAsset LoadExistingAsset(string AssetId)
{
    // Query the Media Services context for the asset with the matching ID
    var matchingAssets = (from a in _context.Assets
                          where a.Id.Equals(AssetId)
                          select a);

    // There should be zero or one match; this returns null if the ID wasn't found
    IAsset asset = null;
    foreach (IAsset ia in matchingAssets)
    {
        asset = ia;
    }

    return asset;
}

You’ll need to know the Asset ID for the video. Hopefully you’ve been storing those someplace as you’ve encoded videos; we had them in SQL Azure, so I pulled them back from there. If you don’t have them, playing around with the LINQ on _context.Assets will probably surface them in some way. I haven’t needed to do that myself.
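For example, something like this would dump them out so you could match them up by name (a quick sketch, not code from our system):

// List every asset in the Media Services account with its ID and name,
// so existing videos can be matched up to their Asset IDs manually.
static void ListAssets()
{
    foreach (IAsset a in _context.Assets)
    {
        Console.WriteLine("{0}  {1}", a.Id, a.Name);
    }
}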

Now that you have a reference to the video asset, you can work your way down the code in RunIndexingJob and update a few things. I would recommend renaming the job to something that uniquely identifies the video, as that name shows up in the Jobs section of the Media Services account in the Azure portal; if a job fails, it makes it much easier to figure out which one to redo. Same thing with the indexing task and output asset names; renaming them makes them easier to track in the logs. For the configuration file, follow the Task Preset for Media Indexer link and load the file from wherever seems appropriate to you. I put string placeholders into the config file for the metadata fields, which I replace with data pulled from the same database I’m getting the Asset ID from. So that section for me looks like:

<input>
  <metadata key="title" value="{0}" />
  <metadata key="description" value="{1}" />
</input>
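
A rough sketch of what that looks like where the indexing task is created (the file path, videoTitle, and videoDescription variables are placeholders for wherever you keep that data; indexer is the Azure Media Indexer IMediaProcessor reference from the linked example):

// Load the indexer preset and fill in the metadata placeholders before
// creating the indexing task.
string configuration = File.ReadAllText(@"MediaIndexerConfiguration.xml"); // assumed path
configuration = string.Format(configuration, videoTitle, videoDescription);

// Give the task a name tied to the video so it's easy to spot in the logs.
ITask task = job.Tasks.AddNew("Indexing task - " + videoTitle,
    indexer,
    configuration,
    TaskOptions.None);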

That should get you through RunIndexingJob. This is where the examples really fell flat for me; there are some additional steps required now. I changed RunIndexingJob to return the output media asset, because the caption files end up with a different Asset ID than the video. Since Azure Blob Storage underpins Media Services, the Asset ID is also the blob container name, which means the files the indexer generated are actually in a different container than the video. This is important for actually consuming the caption file. So rather than returning true like the example code, mine returns job.OutputMediaAssets[0] (there’s a sketch of that after the list below). There are three steps left to actually be able to use the caption files.

  1. Publish the caption asset.
  2. Change the blob storage permissions on the Asset ID. (Remember the Asset ID is the same as the blob container name.)
  3. Save the path to the published caption file in blob storage.
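
Concretely, RunIndexingJob’s return type changes from bool to IAsset, and the end of the method goes from return true to something like this (a sketch; the progress-task wait is the usual pattern from the Media Services samples):

// Wait for the indexing job to finish, then return the indexer's output asset
// (which holds the caption files) instead of returning true. That output asset
// has its own Asset ID, and therefore its own blob container.
Task progressJobTask = job.GetExecutionProgressTask(CancellationToken.None);
progressJobTask.Wait();

return job.OutputMediaAssets[0];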

Publish the Caption Asset

This is really easy, and very similar to publishing the video files after encoding. From the code that calls RunIndexingJob:

var asset = RunIndexingJob(AssetId);
ILocator loc = PublishAsset(asset);

The definition of PublishAsset looks something like this:

static ILocator PublishAsset(IAsset asset)
{
    // Create a long-lived SAS (progressive download) locator for the caption asset
    var locator = _context.Locators.Create(LocatorType.Sas, asset, AccessPermissions.Read, TimeSpan.FromDays(35600));
    return locator;
}

The major difference between this and publishing the video is the LocatorType. Using Sas creates a progressive download locator, whereas OnDemandOrigin creates a streaming locator; if you use the latter here, it won’t work. The locator is returned because it contains the URL to the container, which will be helpful in the next step.

Change Blob Storage Permissions

Now that the asset is published, the blob container is out there and available, but it still requires a SAS token to access. If that’s what you want, skip this step. I want the captions to be available publicly, however, so the blob container permissions need a quick update. Since the Asset ID is the same as the blob container name, we can use the blob storage API to change this.

// _blobConnStr is the connection string for the storage account backing Media Services
var videoBlobStorageAcct = CloudStorageAccount.Parse(_blobConnStr);
CloudBlobClient videoBlobStorage = videoBlobStorageAcct.CreateCloudBlobClient();

// The container name comes from the locator path
string destinationContainerName = (new Uri(loc.Path)).Segments[1];
CloudBlobContainer assetContainer = videoBlobStorage.GetContainerReference(destinationContainerName);

if (assetContainer.Exists()) // This should always be true
{
    // Allow anonymous read access to individual blobs (but not container listing)
    assetContainer.SetPermissions(new BlobContainerPermissions
    {
        PublicAccess = BlobContainerPublicAccessType.Blob
    });
}

That code parses the storage connection string, grabs the container name from the locator (just to be safe), gets a reference to the container, and sets its access level to Blob. No more SAS token required to get the captions!

Save the Path to the Caption File

The last thing to do is build the path to the caption file (or files) and save it so it can be retrieved and used by the player. I’m only generating the WebVTT format, since I’m only concerned with playing the videos via a website.

string PublishUrl = loc.BaseUri;
string vttFileName = "";

// Loop through the files in the caption asset (returned from RunIndexingJob)
// to find the WebVTT (.vtt) file
foreach (IAssetFile file in asset.AssetFiles)
{
    if (file.Name.EndsWith(".vtt"))
    {
        vttFileName = file.Name;
        break;
    }
}

string captionUrl = PublishUrl + "/" + vttFileName;

Now you save the value in captionUrl and you’re good to go!

One small additional note which had me stuck for a little while: if you’re consuming the caption file from a different domain, which is almost guaranteed (unless you’re running your site from static files on blob storage), you’ll need to change the CORS settings on the blob storage account being used by Media Services. The easiest way I’ve found to set this is through the Azure portal. Browse to the storage account via the storage blade, not the Media Services blade; the storage blade has a nice CORS UI which lets you whip through it in a few seconds.
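If you’d rather set it from code, here’s a minimal sketch using the classic blob storage SDK and the videoBlobStorage client from earlier (the allowed origin and max age are assumptions you’d adjust for your own site):

// Add a CORS rule allowing GET requests from the site's domain, then save the
// service properties back. Types live in Microsoft.WindowsAzure.Storage.Shared.Protocol.
ServiceProperties blobServiceProperties = videoBlobStorage.GetServiceProperties();

blobServiceProperties.Cors.CorsRules.Add(new CorsRule
{
    AllowedOrigins = new List<string> { "https://www.example.com" }, // your site's domain
    AllowedMethods = CorsHttpMethods.Get,
    AllowedHeaders = new List<string> { "*" },
    ExposedHeaders = new List<string>(),
    MaxAgeInSeconds = 3600
});

videoBlobStorage.SetServiceProperties(blobServiceProperties);

Hope this helps!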

(This post refers to Azure Media Indexer v1. At the time of writing, v2 was in preview.)

I’ve been playing with the HTML5 canvas element lately. There are some very interesting things you can do with images in it, a couple of which I was hoping to put on my website. There are some security restrictions, however; one of them is that you can’t use some of its most powerful features if the image comes from another domain.

One of the things I was doing with the images for my website was moving them to Azure blob storage, with the intention of exposing the blob store through a CDN. This is a pretty common decision for any site that needs to scale. My site doesn’t need it, but I wanted to play with the new shiny. However, serving the images from another domain renders the HTML5 canvas element fairly useless for anything except small-scale experiments like mine.

I can understand the security arguments, but I’m sorry, they’re silly. The canvas is crippled, except there’s a huge backdoor that renders the restriction completely pointless: you simply make an AJAX request for the image, convert it to a Base64 string, and use that as the canvas image source. It completely defeats the security, but sacrifices performance to do so. So what was the point of pretending to lock it down? How annoying. I may post some code when I finish working through it.

Source: Cleaning remote images for use with HTML5 canvas