I’ve been struggling to make semantic zoom work in a Windows 8 app that I’ve been working on. I’m using it as an excuse to learn the MVVM pattern, but I ran into some serious challenges making the two play nicely.

If you Google for the problem, the post that inevitably comes back is this one by Mikael Koskinen. It all hinges on the following code:

// Grab the groups from the CollectionViewSource and hand them directly
// to the zoomed-out view from the code behind.
var collectionGroups = groupedItemsViewSource.View.CollectionGroups;
((ListViewBase)this.Zoom.ZoomedOutView).ItemsSource = collectionGroups;

While this approach works, it doesn’t feel like a proper MVVM implementation, for three reasons.

  1. You end up with code in the code behind for the view, which, as I understand it, violates the MVVM pattern.
  2. If you load the bound data asynchronously, which is pretty much required to get the app approved for the Windows Store, your bound collection will likely be empty when the code above runs, so your semantic zoom source will still be empty after the data finishes loading.
  3. Once you start loading your views dynamically, this approach gets really difficult to implement. I’m sure there’s a way to do it, but I couldn’t figure it out.

I found a better option while digging into the MVVM Light framework, specifically in one of the samples it ships with. It seems the SemanticZoom control is pretty picky and needs its data source to be of type IGrouping. If you structure your data anything like mine, you’re probably in the habit of binding to ObservableCollections most or all of the time, and those do not implement IGrouping. So you have to implement your POCO classes/models in a way that IGrouping can understand (a sketch follows the converter below), and then run them through a converter in your view binding. I’d recommend looking at the example on the MVVM Light CodePlex page under the section heading “The source code and the slides…”, which links to a public SkyDrive folder containing a zip file. If that ever disappears, the relevant converter code is below.

using System;
using Windows.UI.Xaml.Data;

public class CvsToCollectionGroupsConverter : IValueConverter
{
	// Unwraps a CollectionViewSource's view into its CollectionGroups so the
	// zoomed-out view can bind to the groups without touching the code behind.
	public object Convert(object value, Type targetType, object parameter, string language)
	{
		var cvs = value as ICollectionView;
		if (cvs != null)
		{
			return cvs.CollectionGroups;
		}
		return null;
	}

	// One-way converter; converting back is never needed for this binding.
	public object ConvertBack(object value, Type targetType, object parameter, string language)
	{
		throw new NotImplementedException();
	}
}
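
On the model side, here’s a minimal sketch of a group class that satisfies IGrouping; the class and property names are mine, not the MVVM Light sample’s.

using System.Collections.Generic;
using System.Collections.ObjectModel;
using System.Linq;

// Hypothetical group model: an observable collection of items that also
// carries the key that IGrouping requires.
public class ItemGroup<TKey, TItem> : ObservableCollection<TItem>, IGrouping<TKey, TItem>
{
	public ItemGroup(TKey key, IEnumerable<TItem> items)
		: base(items)
	{
		Key = key;
	}

	// The group's key, which the zoomed-out view typically displays.
	public TKey Key { get; private set; }
}

The view model then exposes a plain ObservableCollection of these groups, the CollectionViewSource binds to it with IsSourceGrouped set to true, and the converter above feeds the resulting CollectionGroups to the zoomed-out view.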

I’ve struggled lately with NuGet at work. Let me preface this by saying that I really like NuGet, and when it works, it’s fantastic.

So here’s the scenario: a small development team, working on multiple applications, all of which have “Enable NuGet Package Restore” turned on and share a few common libraries. I’m working on a Web API project and add the first NuGet reference to one of the shared assemblies, then check in. Another developer then gets latest in a WebForms solution that uses the library I just checked in, and the build throws errors about not being able to find the NuGet reference. This has bitten me a few times now.

I looked at the packages.config file, saw that it looked perfect, and was left scratching my head. If any of the other developers deleted the reference and added it again, things worked just fine. Here’s what went wrong.

At the same level as the solution file sits the packages folder. (This is part of why you typically have to create solutions in their own folders: if you enable package restore on multiple solutions kept in source control in the same directory, they battle for supremacy of the packages and .nuget directories.) Within the packages folder is a file called repositories.config, which controls which projects in a solution get packages restored. When you add the first NuGet package to a project, only the repositories.config in the solution you’re working in gets modified. Any other solutions that reference the newly NuGet-ified project aren’t aware they need to restore the package for it, and since the reference isn’t present on my fellow developers’ machines, the project can’t compile.

The best solution I can find is to locate every solution that references the project in question and manually add an entry for it to that solution’s repositories.config file. Yuck.
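
For reference, repositories.config is just a list of relative paths to each project’s packages.config; the project names below are illustrative.

<?xml version="1.0" encoding="utf-8"?>
<repositories>
  <repository path="..\MyWebApiProject\packages.config" />
  <repository path="..\SharedLibrary\packages.config" />
</repositories>

Adding a repository entry for the shared project to this file in each affected solution is the manual fix.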

I’m in the middle of upgrading a web site to .NET 4.5 while reworking some of the architecture. I found myself wanting to use the same site name in my local IIS so that I can continue to maintain the old code until the migration is done, with hopefully a minimal amount of work to switch the rest of the developers on the team over to the new project when I’m finished. I had done some coding against IIS 7 a while ago, but the cobwebs were thick, so I was struggling. It took a couple of hours of searching, but I finally came across how to do this.

I ended up making a small WinForms app where I can just click a button and switch between the two solutions as needed. I’ll probably send it to the other developers on the team when we do the changeover to the new code, just to make it really easy. First, the form gets a couple of textboxes to input the paths and a couple of buttons to trigger the change. Next, I added a couple of entries to the settings file to hold the default paths that end up in the textboxes.

Generate the button click events, then head over to the code behind. Now add a reference to Microsoft.Web.Administration, which lives in %windir%\System32\inetsrv, and then add a using statement for it in the code-behind file.

        public Form1()
        {
            InitializeComponent();

            // Pre-populate the textboxes with the saved default paths.
            txt2010.Text = Properties.Settings.Default.Path2010;
            txt2012.Text = Properties.Settings.Default.Path2012;
        }

        private void btn2010_Click(object sender, EventArgs e)
        {
            // Persist any edits to the path, then point IIS at it.
            Properties.Settings.Default.Path2010 = txt2010.Text;
            Properties.Settings.Default.Save();

            SwitchSite(Properties.Settings.Default.Path2010);
        }

        private void btn2012_Click(object sender, EventArgs e)
        {
            Properties.Settings.Default.Path2012 = txt2012.Text;
            Properties.Settings.Default.Save();

            SwitchSite(Properties.Settings.Default.Path2012);
        }

        private void SwitchSite(string physicalPath)
        {
            // ServerManager is IDisposable, so wrap it in a using block.
            using (var manager = new ServerManager())
            {
                // Repoint the site's root virtual directory at the given folder.
                manager.Sites["www.example.com"].Applications[0].VirtualDirectories[0].PhysicalPath = physicalPath;
                manager.CommitChanges();
            }
        }

In the constructor, I’m setting the saved defaults on the textboxes. In the click handlers, I’m saving any changes to the settings, pointing the site at the correct path, and committing the change. Really easy, once you find the right documentation!

Here is a quick list of the things that have tripped me up since I’ve been playing with Azure.

  1. Paths to blobs are case sensitive. I have a camera that saves .jpg as .JPG, which had me scratching my head for a while when the images wouldn’t load after uploading to Azure. It turned out I was requesting them with lower-case extensions, so I had to update my upload code to convert the extension to lower case.
  2. SQL Azure doesn’t support full text indexing yet. You have to set up a VM instance running the regular SQL Server if you want this in the Azure cloud.
  3. Always set the blob headers when you upload a file, the content-type header in particular (see the sketch after this list).
  4. You can’t use a CNAME with SSL for blob storage. You have to use the long, odd URL that Azure provides you with when you set up the blob container. So instead of https://blob.example.com, you end up with https://xx000000.xx.msecnd.net.
  5. There is no way to manually expire a file on the Azure CDN. Once you access it through the CDN, it’s there until the refresh algorithm updates it. So you have to be very sure you’re ready before you hit that “Publish” button.
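
For items 1 and 3, here’s a minimal sketch of an upload that lower-cases the blob name and sets the content type up front. It assumes the Azure storage client library of the time (Microsoft.WindowsAzure.Storage), a made-up container name, and a JPEG input.

using System.IO;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

public static class BlobUploader
{
    public static void UploadImage(string connectionString, string filePath)
    {
        var account = CloudStorageAccount.Parse(connectionString);
        var client = account.CreateCloudBlobClient();
        var container = client.GetContainerReference("images"); // hypothetical container

        // Item 1: blob paths are case sensitive, so normalize the name.
        var blobName = Path.GetFileName(filePath).ToLowerInvariant();
        var blob = container.GetBlockBlobReference(blobName);

        // Item 3: set the content type before uploading so browsers and
        // the CDN serve the file correctly.
        blob.Properties.ContentType = "image/jpeg";

        using (var stream = File.OpenRead(filePath))
        {
            blob.UploadFromStream(stream);
        }
    }
}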

I’ve been playing with the HTML5 canvas element lately. There are some very interesting things you can do with images in it, a couple of which I was hoping to put on my website. There are some security restrictions, however, one of which is that you cannot use some of its most powerful features if the image comes from another domain.

One of the things I was doing with the images for my website was moving them to Azure blob storage with the intention of exposing the blob store through a CDN. This is a pretty common decision for any site that needs to scale. My site doesn’t need it, but I wanted to play with the new shiny. However, serving the images from another domain renders the HTML5 canvas element fairly useless for anything except small-scale experiments like mine.

I can understand the security arguments, but I’m sorry, they’re silly. The canvas is crippled, and yet there’s a huge backdoor that renders the restrictions completely pointless: you simply make an AJAX request for the image, convert it to a Base64 string, and use that as the canvas image source. It completely defeats the security, but sacrifices performance to do so. So what was the point of pretending to lock things down? How annoying. I may post some code when I finish working through it.
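
In the meantime, here’s the conversion step as a minimal sketch. In the browser this would be the AJAX request itself; since the rest of the code on this page is C#, the same idea is shown as a hypothetical server-side helper that fetches the image bytes and builds a Base64 data URI the canvas can use as its image source.

using System;
using System.Net;

public static class ImageDataUri
{
    // Downloads the remote image and wraps it in a data URI so the canvas
    // never sees a cross-domain URL. The contentType parameter is assumed
    // to match the image being fetched.
    public static string FromUrl(string imageUrl, string contentType)
    {
        using (var client = new WebClient())
        {
            byte[] bytes = client.DownloadData(imageUrl);
            return "data:" + contentType + ";base64," + Convert.ToBase64String(bytes);
        }
    }
}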

Source: Cleaning remote images for use with HTML5 canvas