
The default Development, Staging, and Production environments are good enough for most projects. But what if you’re in a situation where you need more?

One of the solutions I work with has some fun requirements, and adding more environments lets us easily cope with additional scenarios. The documentation Microsoft has put together for this is actually quite good and shows off much of the flexibility available. Almost too good a job, actually; it’s a little overwhelming. So this is as much to help me remember how to do this in the future as anything.

  1. Add the additional environments to launchSettings.json. These will now show as options in the box next to the play arrow in Visual Studio. We have multiple dev and release scenarios, so we’ll add DBG-X, DBG-Y, REL-X, and REL-Y. The DBG-X profile looks like this:

     "DBG-X": {
       "commandName": "IISExpress",
       "launchBrowser": true,
       "environmentVariables": {
         "ASPNETCORE_ENVIRONMENT": "DBG-X"
       }
     },

  2. Add appsettings.json files for each of the new environments, such as appsettings.DBG-X.json. The call in Program.cs to WebHost.CreateDefaultBuilder() will now pick up the respective version for each environment. And it even magically knows to nest these under appsettings.json in Visual Studio.
  3. Since we have multiple development environments (DBG-X and DBG-Y) and are no longer using the Development environment, the condition in Startup.Configure using env.IsDevelopment will no longer work. We need a custom check. ASP.NET Core gives you multiple ways of doing this, so you could create custom versions of Startup.cs, Configure(), or ConfigureServices() for each environment if you need to. That’s overkill for me at the moment, so I’ll just check whether the environment name starts with “DBG-” (see the sketch after this list).
    if(env.EnvironmentName.StartsWith("DBG-")) {…
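
A minimal sketch of what that check might look like in Startup.Configure, assuming the usual developer/production middleware split (the error route and middleware calls here are just placeholders for whatever your pipeline actually uses):

public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
    // Any DBG-* environment gets the treatment env.IsDevelopment() used to provide.
    if (env.EnvironmentName.StartsWith("DBG-"))
    {
        app.UseDeveloperExceptionPage();
    }
    else
    {
        // REL-X and REL-Y fall through to production-style error handling.
        app.UseExceptionHandler("/Home/Error");
    }

    app.UseMvcWithDefaultRoute();
}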

And that’s really it. Hope this helps!


At my current employer, we moved all of our videos to Azure using Azure Media Services a little under 2 years ago. This allowed us to upgrade them from using a Flash player to an HTML5 based video system, using the Azure Media Player for playback. I can’t say enough good things about using Azure for this. The videos went from looking mediocre and generally only playing on desktop machines to looking crystal clear at multiple resolutions, playable on every desktop and mobile device we throw at it.

We’ve now circled back to fill in a gap we missed at that point in time: captions. (Or subtitles, if you prefer.) Videos without captions or subtitles exclude a portion of users, and that’s not cool. Since we’re already using Media Services for the video encoding, it made sense to use the Azure Media Indexer to generate the captions for us. However, most of the examples out there seem to be targeted at doing the indexing when you upload a video. We are certainly doing that moving forward, but there were a significant number of videos already out there which needed to be processed, and that doesn’t seem to be a well-documented scenario. Hopefully I can fill in that gap a little with this post.

First thing, start with the upload scenario from this link:
https://docs.microsoft.com/en-us/azure/media-services/media-services-index-content

That will get you most of the way, but there are a couple of changes when working with existing files. The first change is to load the existing video Asset using its Asset ID. Replace the function called CreateAssetAndUploadSingleFile with one which looks something like this:

static IAsset LoadExistingAsset(string AssetId)
{
    // _context is the CloudMediaContext from the linked sample.
    var matchingAssets = from a in _context.Assets
                         where a.Id.Equals(AssetId)
                         select a;

    // Asset IDs are unique, so take the single match (or null if nothing was found).
    return matchingAssets.FirstOrDefault();
}

You’ll need to know the Asset Id for the video. Hopefully you’ve been storing those someplace as you’ve encoded videos; we had them in SQL Azure so I pulled them back from there. If you don’t have them, playing around with the LINQ on _context.Assets will probably return them in some way. I haven’t needed to do that myself.
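
If you do end up needing to, a rough sketch along these lines (just my guess at the simplest approach; I haven’t needed to run it myself) would dump every asset’s Id and Name so you can match them back up with your videos:

static void ListAllAssets()
{
    // Write out the Id and Name of every asset in the Media Services account.
    foreach (IAsset a in _context.Assets)
    {
        Console.WriteLine("{0}  {1}", a.Id, a.Name);
    }
}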

Now that you have a reference to the video asset, you can work your way down the code in RunIndexingJob and update a few things. I would recommend renaming the job to something which uniquely identifies the video, as that name shows up in the Jobs section of the media services account in the Azure portal; if a job fails, it makes it much easier to figure out which one to redo. The same goes for the indexing task and output asset names: renaming them makes them easier to track in the logs. For the configuration file, follow the Task Preset for Media Indexer link and load the file from wherever seems appropriate to you. I put some string placeholders into the config file for the metadata fields, which I’m replacing with data pulled from the same database I’m getting the Asset ID from. So that section for me looks like:

<input>
    <metadata key="title" value="{0}" />
    <metadata key="description" value="{1}" />
  </input>
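
For reference, here’s roughly what the config loading and placeholder replacement look like inside RunIndexingJob. The file path, videoTitle, and videoDescription variables are stand-ins for wherever your config and metadata actually live, and indexerProcessor is the IMediaProcessor the sample resolves for the Azure Media Indexer:

// Load the indexer configuration and fill in the metadata placeholders.
string configuration = File.ReadAllText(@"C:\Configs\MediaIndexerConfig.xml");
configuration = string.Format(configuration, videoTitle, videoDescription);

// Name the job and task after the video so they are easy to spot in the portal and logs.
IJob job = _context.Jobs.Create("Indexing - " + videoTitle);
ITask task = job.Tasks.AddNew("Indexing task - " + videoTitle, indexerProcessor, configuration, TaskOptions.None);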

That should get you through RunIndexingJob. This is where the examples really fell flat for me, because there are some additional steps required now. I changed RunIndexingJob to return the output media asset, as the caption files end up with a different Asset ID than the video. Since Azure Blob Storage underpins Media Services, an Asset ID is also the name of the blob container holding that asset’s files, which means the files the indexer generated actually live in a different container than the video. This is important for actually consuming the caption file. So rather than returning true like the example code, mine returns job.OutputMediaAssets[0]. There are three steps left to actually be able to use the caption files.

  1. Publish the caption asset.
  2. Change the blob storage permissions on the Asset ID. (Remember the Asset ID is the same as the blob container name.)
  3. Save the path to the published caption file in blob storage.

Publish the Caption Asset

This is really easy, and very similar to publishing the video files after encoding. From code which calls RunIndexingJob:

var asset = RunIndexingJob(AssetId);
ILocator loc = PublishAsset(asset);

The definition for PublishAsset looks something like so:

static ILocator PublishAsset(IAsset asset)
{
    var locator = _context.Locators.Create(LocatorType.Sas, asset, AccessPermissions.Read, TimeSpan.FromDays(35600));
    return locator;
}

The major difference between the video publish and this one is the LocatorType. Using Sas creates a progressive download locator, whereas OnDemandOrigin creates a streaming locator; if you use the latter here, the caption URLs won’t work. The locator is returned because it contains the URL to the container, which will be helpful in the next step.

Change Blob Storage Permissions

Now that the Asset is published, the blob container is out there and available, but requires a SAS token to access it. If that’s what you want, skip this step. I want it to be available publicly, however, so the blob container permissions need a quick update. Since the Asset ID is the same as the blob container name, we’ll use the blob storage API to alter this.

var videoBlobStorageAcct = CloudStorageAccount.Parse(_blobConnStr);
CloudBlobClient videoBlobStorage = videoBlobStorageAcct.CreateCloudBlobClient();
string destinationContainerName = (new Uri(loc.Path)).Segments[1];
CloudBlobContainer assetContainer = videoBlobStorage.GetContainerReference(destinationContainerName);

if (assetContainer.Exists()) // This should always be true
{
     assetContainer.SetPermissions(new BlobContainerPermissions
     {
         PublicAccess = BlobContainerPublicAccessType.Blob
     });
}

I’m setting up the blob client there and grabbing the container name from the locator just to be safe. Then it gets the container reference and sets the public access level to Blob. No more SAS token required to get the captions!

Save the Path to the Caption File

The last thing to do now is build the path to the caption file or files and save them so they can be retrieved and used by the player. I’m only generating the WebVTT format, since I’m only concerned with playing the videos via a website.

string PublishUrl = loc.BaseUri;
string vttFileName = "";

// Loop through the files in the asset to find the vtt file
foreach (IAssetFile file in asset.AssetFiles)
{
    if (file.Name.EndsWith(".vtt"))
    {
        vttFileName = file.Name;
        break;
    }
}

string captionUrl = PublishUrl + "/" + vttFileName;

Now you save the value in captionUrl and you’re good to go! One small additional note, which had me stuck for a little while: if you’re consuming the caption file from a different domain, which is almost guaranteed (unless you’re running your site from static files on blob storage), you’ll need to change the CORS settings for the blob storage account being used by Media Services. The easiest way I’ve found to set this is the Azure portal. Browse to the storage account being used via the storage blade, not the media services blade; it has a nice UI which lets you whip through it in a few seconds. Hope this helps!
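
(Postscript: if you’d rather set the CORS rule from code instead of clicking through the portal, something along these lines should work with the same storage SDK used above. The allowed origin is just a placeholder for your site’s domain.)

// Allow the site consuming the captions (placeholder domain) to fetch the .vtt files cross-origin.
ServiceProperties blobServiceProperties = videoBlobStorage.GetServiceProperties();
blobServiceProperties.Cors.CorsRules.Add(new CorsRule
{
    AllowedOrigins = new List<string> { "https://www.example.com" },
    AllowedMethods = CorsHttpMethods.Get,
    AllowedHeaders = new List<string> { "*" },
    ExposedHeaders = new List<string>(),
    MaxAgeInSeconds = 3600
});
videoBlobStorage.SetServiceProperties(blobServiceProperties);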

(This post refers to Azure Media Indexer v1. At the time of writing, v2 was in preview.)


My original reason for writing the Modern Delicious app for Windows was so I could have a nice way to read through links I had saved on Windows 8. I basically wanted a reading list built from my recent Delicious links. I wrote and released the app, but of course soon after, the Reading List app for Windows was announced. My usage scenario was catered for, and there didn’t seem to be a reason for the app to exist any longer. There had been some very kind feedback from several people, however, so I continued development on the app as I had time. That feedback was the reason there were updates to the app for Windows 8.1 and 10, features like tag suggestions were added, and even a Windows Phone 8.1 version happened. Discovering the intricacies of developing for the Windows Store has been an incredibly interesting learning experience. One key point in all of this: I am only a third-party developer and have no affiliation with Delicious or any of the companies which have owned it.

For the last two months, part of the API provided by Delicious has not been functioning correctly. Most of the API is working perfectly, except for one method. This one method happens to be the key one which allows the app to retrieve each person’s entire history of posts (or bookmarks, if you prefer). Without it, the best the app can do is to get the last 100 or so posts for each user. As I’ve learned throughout the course of developing this app, many of the users of Modern Delicious are longtime Delicious users, with an extensive bookmark history. Only being able to view the last 100 posts renders the app effectively useless for these users. I’ve tried to contact Delicious through several different methods: email, Twitter, and their Github repo. They’ve been unresponsive thus far, and I cannot allow the app to continue being downloaded and frustrating users who legitimately just want a functional Delicious client app for Windows 10. To that end, and with great sadness, I’ve removed Modern Delicious from the Windows Store.

This is the danger of developing against a third-party API such as the one provided by Delicious. At any time, the company controlling the API can do whatever they want with it. I have no ill will nor any hard feelings towards Delicious for breaking their API; it’s entirely possible they had legitimate engineering reasons for the breaking change. I’m incredibly grateful to Delicious and all of the engineers who designed and maintained the API over the last decade, it’s been a tremendous development resource for many developers. It would have been great if there had been some warning about impending changes potentially breaking the API, or some notice of the API being retired, which would have allowed myself and others to give some warning to our app users. But ultimately, it’s their API and they can break it as they please in order to do what is needed for their company.

To the many Modern Delicious users over the last few years: thank you for your feedback and continued usage. If Delicious fixes the issue, I will restore the app to the Windows Store with any updates required by API changes. Delicious has a functional API of some kind out there, as their official apps for iOS and Android both remain functional. Either they have not made the details of the API those apps use public, or I am not clever enough to figure it out. Either way, I feel like I’ve let you all down.

To Delicious: it would be very nice of you to repair the API or publish the details of the API your iOS and Android apps use. You have a long history of providing a wonderful, free service while struggling to find a path to profitability. The website seems adrift after the last changes were released, and I sincerely hope it doesn’t mean the end of Delicious is near. And once again, I continue to extend the same offer I have made many times before: I will gladly give you the app for free so you can have an official Windows Store app. (The Windows 10 version is even a UWP app!)

One of my most frequent issues when releasing to Azure websites using the old build system from VSTS/VSO is that the deployment fails. By old build system, I mean the continuous deployment magic you set up from the Azure portal for an Azure website (now App Service). The worst is on a good-sized solution I work on which takes about 20 minutes to build. So I wait the 20 minutes, only for it to fail on the deployment step for reasons beyond my control, with errors like "invalid thumbprint" or "intermittent communications issues." Despite the build succeeding, I'm left kicking off a new build and waiting another 20 minutes just to retry the deployment. So I was really excited to see the new Build and Release Management system in VSTS/VSO, where it looks like retrying just the deployment step is possible. The solution with the 20 minute build time also has four different configurations. (This means there's potentially 80 minutes of wasted build time if the deploy fails for all of them.) The new build and release system also looks to have a multi-configuration option where I can build all four at once.

So diving into the main getting started tutorial, it looks very promising. The first thing is to set up the build. I'm following the tutorial exactly as shown by creating a Visual Studio build, but with three small additional steps in order to cope with building all four of my configurations. On the Variables tab, I replaced the existing build configuration value with my four configurations, separating each with a comma. If you save and queue a build at this point, it will fail, as it treats the value as a single configuration.


The next piece for multiple configurations is on the Options tab. (This has since changed; see the edit below.) Put a check next to MultiConfiguration and set Multipliers to "BuildConfiguration". You can optionally check the options below that, as I have, to potentially build in parallel and to continue with the other build configurations if one fails.

*Edit 2018-02-01: Microsoft moved this when multi-phase builds were released. It can now be enabled on the Tasks tab by selecting the Phase. I believe this defaults to Phase 1 for both new and converted legacy builds. It's the third item in the list, under Process and Get Sources, and it's not obvious that it can be clicked. From there, it's under Execution Plan > Parallelism. The Multipliers field is still the same, but there's now an additional required field, Maximum Number of Agents. Using 1 will force each configuration to finish building before the next one begins, but you can increase it and it might build in parallel.

The final piece is on the Build tab, selecting the Copy and Publish Build Artifacts step. If you were to build now, it would build all four configurations but drop all four zip files in the same directory, which isn't going to work since they'll all have the same name. My solution is to put each in its own folder, naming the folder for the configuration. This is very easy to do: just add "\$(BuildConfiguration)" (without quotes) after the artifact name in the Artifact Name textbox. Now it's safe to save and queue a build!


Interestingly, my build times decreased to around 14-15 minutes for each configuration. I'm pretty sure the deployment step wasn't taking 5 minutes on the old setup, but regardless, I like anything which makes the build faster!

Switching over to the Release tab in the top row now, I've followed the tutorial down to the step where you create the release definition. I've created an Azure Website Deployment, disabled the test step, and renamed the environment to match one of my build configurations. I have four configurations, so I'll need four environments; I'll come back and add those after I get the first one right. Continuing with the tutorial, I also linked to the build definition I created earlier. Once that's done, you can see the artifact name from your build in the Artifacts tab. Back on the Environments tab, we need to line up the configuration name for the release. Click the three dots to the right of the environment name and select Configure Variables. Next to Release Configuration, I'll set the value to match the first entry in my build configuration list, REL-IE.


Hit OK to save and return to the environments. There is a slight adjustment needed to the Web Deploy Package value in order to grab the correct deployment package for the current configuration. The default value of "$(System.DefaultWorkingDirectory)\**\*.zip" will grab all of the zip files in the build artifacts location. I have four configurations, so there are four zip files, each within a folder named for its build configuration. Using a syntax similar to the build step, modify this to look for the configuration folder using the ReleaseConfiguration variable we just set. This makes the value "$(System.DefaultWorkingDirectory)\**\$(ReleaseConfiguration)\**\*.zip". I have a staging slot set up for this web site/app service, so I've also set variables for the release to go to that slot as well.


Check the tutorial again for additional steps for continuous deployment and such. Otherwise, save and run a release - it should be good to go! The first time I tried mine, it failed for no logical reason, much like the old release system. I retried it and it worked, proving that I now have an improved way to release. Now we need to copy the existing environment over for the remaining three build configurations. The easiest way to do this is by clicking the three dots to the right of the environment name and selecting Clone Environment. It will copy everything from the existing environment and let you change the name right away. The two things which need to be updated now are the ReleaseConfiguration variable value and the Web App Name for each. (If your sites are in different data centers, you may need to update the Web App Location value as well.) So I cloned the first configuration three times, updated those values, and saved.


Now select the release option from the menu, pick the build you queued earlier (which is hopefully done by now), and select the sites/configurations you want to deploy. My one complaint at this point is that you cannot pick and choose which of the environments to deploy; you have to select them in order from left to right. Hopefully that option is added in the future. After selecting all of the configurations, mine worked on the first try, taking a little over 8 minutes.

To work around the drawback of having to select environments in order, I've also created individual releases for each of my build configurations in the event I need to deploy just one or two of them. All in all though, I'm really happy with how this turned out. While Release Management is still technically in preview, I'll be using this in production as it's such a significant improvement over my current release experience.


Extremely easy to do, but extremely easy to mess up. I’m assuming with this post you have already configured Azure Mobile Services with your certificates. If not, there are plenty of tutorials around which can help you through the process. (I personally like this one for APNS.)

The classic Mobile Services examples all involve the Todo List app, which expects you to be using the full Mobile Services backend. That’s certainly slick if you can, but what if you only need the push notification capabilities? This is the problem I was trying to solve. The data for my mobile application is already managed elsewhere; I only need Mobile Services to manage the push subscriptions for my end users. The Javascript API portion of Mobile Services will be called from my backend and by some other C# applications to generate the push notifications. It’s maybe not a common scenario, but the platform certainly allows it. No amount of Google/Bing searching landed me on a tutorial which answered my questions, so I’ll do my best here.

Go to the API tab in your mobile service in the portal, and either click the Create option at the bottom or click the big message in the center of the screen. Enter the name you want for the API; I chose Godzilla. (There was a marathon on El Rey last weekend, so I still have Godzilla on the brain.) Set the app permissions appropriately for your API (I’ll stick with “Anybody with the Application Key” for this example) and click the Checkmark button.


Once it’s done creating the API, click on it to view the code. You’ll have the starter API with a Post and a Get.

There’s no need for the Get, so delete it. The Post has a hint of what you need on the third line of comments, “var push =…”. Uncomment that line so we can use it to generate the push. (I’m only going to write the code to send pushes to iOS via APNS to keep the example nice and simple; the code for Android should be fairly trivial to add.) After the line with “var push = …”, paste the following code in over the remaining body of the Post.

push.apns.send("atombomb", {
    alert: request.body.msg
}, {
    success: function(result) {
        //console.log("Godzilla push success");
    },
    error: function(error) {
        console.log("Godzilla push failed, error: " + error);
    }
});
response.send(statusCodes.OK, "{msg: 'Push executed, check log for results'}");

So what is this doing? It will push a notification only to the users subscribed to the “atombomb” tag. If you want to push to all users who have allowed push notifications for your app, change that parameter to null, without quotes around null. The next parameter is the JSON content of the push notification itself. (Here’s a great reference for more options you can put in it.) You’ll see many other examples which only use the payload property; that’s certainly nice if you need to send some data with the push, but it won’t actually pop a message on the user’s device. Alert is what you need to pop a message up, and the contents of the Alert property get displayed at the top of the user’s screen when it is received.

Quick side tangent. If you only provide the Payload property, as I did at first, it’s hugely confusing why there isn’t a notification popping up on the device. The portal will show the push succeeds, but nothing shows up on the device. The reference link above was the light bulb moment when I realized what I had done wrong. If there is no alert property as part of the content parameter, the user will not be notified in any way. You only want to use Payload by itself when you want to push an update in the background, without the user being prompted. You can certainly use both Alert and Payload at the same time, however.

On my alert property, you can see I’m actually sending in the content as part of the body of the API call. If you know your push message will always be the same, you can swap this out for a string right there, but I think a parameter from the body makes for a better real-world example. The final parameter contains the success and error functions, which are really useful for debugging; they write to the mobile service log using console.log, which I’ll get to next.


I also edited the response message, but you can leave it as the default if you like. Now click the save button at the bottom to save your changes. If you want to check the logs once you have called your new API, click the back arrow after saving; the logs are the last option in the top menu. If everything works right away, you may need to uncomment the body of the success function to get anything to appear there.


So now we need to call our API. This is where Mobile Services really flexes its muscles; it’s so incredibly simple. I’ve got a fairly reusable method which lets me pass in the API I want to call and the message I want to send. When invoking the method below, I use the parameters “Godzilla” for Api and “I knew that tuna-eating monster was useless!” for Message. If you’ve swapped out the message in the body for a string as I mentioned earlier, it gets even easier, as you can then remove the two lines which deal with the body variable.

public async Task<JToken> SendPushNotification(string Api, string Message)
{
    // Build the body the API script reads from request.body.msg.
    JObject body = new JObject();
    body.Add("msg", Message);

    // appUrl and appKey are the mobile service URL and application key from the portal.
    MobileServiceClient client = new MobileServiceClient(appUrl, appKey);
    return await client.InvokeApiAsync(Api, body);
}
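
Invoking it with the values mentioned above is then a one-liner:

await SendPushNotification("Godzilla", "I knew that tuna-eating monster was useless!");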

You can go much further with that method if you want, such as passing in your own custom object rather than using a JObject. If you’re looking to do something like that, I would recommend reading “Custom API in Mobile Services”; it goes nicely into that scenario and includes a little bit on handling authenticated users.
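
As a rough sketch of that idea (PushMessage is a hypothetical DTO I’ve made up purely for illustration), the generic overload of InvokeApiAsync lets the client serialize the object for you:

// Hypothetical DTO matching what the Godzilla script reads from request.body.
public class PushMessage
{
    [JsonProperty("msg")]
    public string Msg { get; set; }
}

// Alternate take on SendPushNotification using the typed overload instead of a JObject.
public async Task<JToken> SendPushNotification(string Api, string Message)
{
    MobileServiceClient client = new MobileServiceClient(appUrl, appKey);
    return await client.InvokeApiAsync<PushMessage, JToken>(Api, new PushMessage { Msg = Message });
}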