Inspired by the truly excellent XAML Power Toys from Karl Shifflett, I’m pleased to introduce MoXAMLPowerToys, which stands for More XAML Power Toys in deference to Karl’s excellent XAMLPowerToys. MoXAMLPowerToys is a Visual Studio add-in designed to enhance your productivity in Visual Studio. I have many plans for this utility, but I would love to hear about any areas that you would like to see added.

In this initial release, you can comment and uncomment XAML code. To use this add-in, simply copy the add-in file to C:\Users\xxx\Documents\Visual Studio 2008\Addins\MoXAMLPowerToys – For Testing.AddIn and edit it so that the Assembly element points at the add-in dll.
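For reference, a .AddIn file follows the standard Visual Studio 2008 automation extensibility schema and looks something along these lines (the Assembly path and FullClassName shown here are illustrative placeholders, not the actual values shipped with MoXAMLPowerToys):

```xml
<Extensibility xmlns="http://schemas.microsoft.com/AutomationExtensibility">
  <HostApplication>
    <Name>Microsoft Visual Studio</Name>
    <Version>9.0</Version>
  </HostApplication>
  <Addin>
    <FriendlyName>MoXAML Power Toys</FriendlyName>
    <Description>More XAML Power Toys for Visual Studio</Description>
    <!-- Edit this element to point at wherever you placed the add-in dll. -->
    <Assembly>C:\AddIns\MoXAMLPowerToys.dll</Assembly>
    <!-- Illustrative name; use the Connect class from the shipped assembly. -->
    <FullClassName>MoXAMLPowerToys.Connect</FullClassName>
    <LoadBehavior>1</LoadBehavior>
    <CommandPreload>1</CommandPreload>
    <CommandLineSafe>0</CommandLineSafe>
  </Addin>
</Extensibility>
```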


Requirements:

  • Visual Studio 2008 with SP1


MoXAMLPowerToys menu.


Future versions of this code will include further productivity enhancements:

  • The ability to automatically assign standard command bindings to tags
  • Convert automatic properties into INotifyPropertyChanged
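To give a flavour of the second item, converting an automatic property by hand looks something like the following. This is a sketch of the transformation such a command would perform; the class and property names are purely for illustration:

```csharp
using System.ComponentModel;

// Before: public string Name { get; set; }
// After: the same property, raising change notifications.
public class Person : INotifyPropertyChanged
{
    private string name;

    public string Name
    {
        get { return name; }
        set
        {
            if (name != value)
            {
                name = value;
                OnPropertyChanged("Name");
            }
        }
    }

    public event PropertyChangedEventHandler PropertyChanged;

    protected virtual void OnPropertyChanged(string propertyName)
    {
        PropertyChangedEventHandler handler = PropertyChanged;
        if (handler != null)
            handler(this, new PropertyChangedEventArgs(propertyName));
    }
}
```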

Managed Kernel Transaction Manager – Pt 2.

In my previous posting on using the Kernel Transaction Manager here, I mentioned that I would do a follow-up explaining how to ensure that your code behaves on systems that don’t have the KTM present. In other words, how do you ensure that your code runs on pre-Vista operating systems? Well, it turns out that this is both very simple and very, very complex at the same time.

Basically, you have a choice: you can try to replicate the KTM on older operating systems, or you can choose to live with the differences in behaviour between the two. If you want the systems to behave exactly the same, then you are going to have to add a lot of code yourself to monitor what’s going on inside the transaction and roll the operations back as appropriate. If you can live with the discrepancy in behaviour, then you can do one of the following two things.

  1. You can rely on an exception to tell you that the API method isn’t available in that particular form and fall back to a simpler method in the catch block.
  2. You can explicitly test the operating system version to see if it is Vista or later. This is the method that I’m going to take here.
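For completeness, option 1 would look roughly like this, assuming a DeleteDirectoryTransacted wrapper around the transacted API (the wrapper name is hypothetical, and this is a sketch rather than production code):

```csharp
public static void DeleteDirectory(string path)
{
    try
    {
        // The P/Invoke layer throws EntryPointNotFoundException when
        // kernel32.dll doesn't export the transacted function (pre-Vista).
        DeleteDirectoryTransacted(path);
    }
    catch (EntryPointNotFoundException)
    {
        // Fall back to the plain, non-transacted delete.
        Directory.Delete(path);
    }
}
```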

Taking yesterday’s example of removing a directory, we can create a method that looks like this:

public static void DeleteDirectory(string path)
{
  if (Environment.OSVersion.Version.Major > 5 && Transaction.Current != null)
  {
    // Vista or later, with an ambient transaction - use the KTM.
    IntPtr txh = IntPtr.Zero;
    IKernelTx tx = (IKernelTx)TransactionInterop.GetDtcTransaction(Transaction.Current);
    tx.GetHandle(out txh);
    RemoveDirectoryTransacted(path, txh);
  }
  else
  {
    // Pre-Vista, or no ambient transaction - fall back to a plain delete.
    Directory.Delete(path);
  }
}

Note that we don’t bother trying to call the transacted version if the version is less than or equal to 5, or if there is no current transaction. There, you’ve now made it so that your code will run on XP. Again, a word of caution: if your logic relies on ACID being enforced in the file system and the KTM isn’t present, you could end up in big trouble indeed. In other words, only rely on ACID if you know that the minimum version that your software runs on will always be Vista.

Managed Kernel Transaction Manager – Pt 1.

Recently I’ve started playing around with the Kernel Transaction Manager (KTM) introduced with Vista. This remarkable (and largely unheralded) piece of technology allows developers to put transaction handling around certain IO-based actions such as creating a directory or creating keys in the registry.

The name causes confusion because it implies that transactions can only be used in kernel mode. This isn’t the case, as the KTM supports both kernel- and user-mode transactions. The name actually means that the transaction engine is built into the kernel. I suppose they didn’t just call it the Transaction Manager because they have shipped so many different transaction managers over the years that this would just end up confusing people.

First of all, the bad news. As the KTM exists solely in the kernel, it isn’t available on operating systems prior to Vista. If you use Vista or Server 2008, then you have all you need to start using it.

So, why do I think that transactional file activity is such good news? Well, it allows you to develop systems that follow the ACID principles for areas other than database activity. Suppose that you want to create a directory and write a file into it, but this should only persist if the operation the file depends on completes successfully. The old way of doing this would be to create the directory, write the file and then (if the operation fails) remove the file and directory. That’s a lot of work for you to keep track of, and it’s potentially very error prone. How much better it would be if you could create a transaction around these operations and only commit them if things work as you expect. I suppose an example is in order here.

First of all, you need to “hook” into the API to call the function. Sorry, but there is no managed-code equivalent of this in the BCL; you have to wrap the API yourself.

[DllImport("kernel32.dll", SetLastError = true,
  CharSet = CharSet.Auto)]
static extern bool RemoveDirectoryTransacted(
  [MarshalAs(UnmanagedType.LPWStr)]string path,
  IntPtr transaction);
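The same P/Invoke pattern applies to the other transacted file functions. For example, CreateDirectoryTransacted and DeleteFileTransacted could be declared like this (untested sketches based on the Win32 signatures; adjust marshalling to taste):

```csharp
[DllImport("kernel32.dll", SetLastError = true, CharSet = CharSet.Auto)]
static extern bool CreateDirectoryTransacted(
    [MarshalAs(UnmanagedType.LPWStr)] string templateDirectory,
    [MarshalAs(UnmanagedType.LPWStr)] string newDirectory,
    IntPtr securityAttributes,
    IntPtr transaction);

[DllImport("kernel32.dll", SetLastError = true, CharSet = CharSet.Auto)]
static extern bool DeleteFileTransacted(
    [MarshalAs(UnmanagedType.LPWStr)] string file,
    IntPtr transaction);
```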

This part of the code is necessary so that you can get access to the kernel transaction. As this is a COM call, you need to import the interface. You can call the interface what you like, the important thing is to use the Guid below, and the method name must be GetHandle.

[ComImport]
[InterfaceType(ComInterfaceType.InterfaceIsIUnknown)]
[Guid("79427A2B-F895-40e0-BE79-B57DC82ED231")]
internal interface IKernelTx
{
  void GetHandle([Out] out IntPtr handle);
}

Then, you need to call this:

static void Main()
{
  string path = @"c:\test";
  if (!Directory.Exists(path))
    Directory.CreateDirectory(path);

  using (TransactionScope tx = new TransactionScope())
  {
    IntPtr txh = IntPtr.Zero;
    IKernelTx ktx = (IKernelTx)TransactionInterop.GetDtcTransaction(Transaction.Current);
    ktx.GetHandle(out txh);
    RemoveDirectoryTransacted(path, txh);
  }
}

There are a couple of bits to notice in this sample. First of all, we are using TransactionScope to create the transaction that we are going to work in. Now, you can’t simply pass this into the KTM method. You need to convert it into a handle that the KTM method can work with (remember that the KTM is unmanaged). Anyway, it’s a simple matter to get the transaction. All you need to do is call TransactionInterop.GetDtcTransaction with the current transaction. This maps to the COM interface I mentioned above, and you can retrieve the transaction handle by calling GetHandle. Once you have this handle, you can pass it into your transacted code.

Now if you run a sample like this, you will notice that the directory is not actually removed. Well, that’s what appears to happen, but it’s not quite true. The directory is removed, but the removal is not committed because we haven’t told the transaction to commit. If you don’t complete the transaction scope, these operations are implicitly rolled back. So, how do you save the changes? Well, simply call Complete() on the TransactionScope, and the transaction will commit when the scope is disposed. In other words, after the RemoveDirectoryTransacted call, add tx.Complete();.

In a future post, I’ll discuss how you can be a good OS citizen and ensure this code doesn’t fail on older operating systems.

Microsoft Synchronisation Framework

Microsoft is soon to be releasing its Synchronisation Framework. To some people, it’s Microsoft’s answer to Google Gears, but I would have to disagree with them. Simply assessing the two technologies as like-for-like misses some fairly fundamental points.

  1. Google Gears is aimed at extending Internet applications onto the desktop. In other words, you can run some fairly sophisticated applications in a browser and have them appear to interact as though they were desktop based. I say appear because they run under some fairly tight restrictions, such as being sandboxed. Plus, there is a certain lack of control for the user: where is their data stored, offline, online or some weird combination of the two? Effectively, you can think of Gears as being local storage for web applications.
  2. Sync Framework is designed to work the other way round. It’s a desktop based technology so it can run from the desktop up to the server.
  3. Sync Framework extends to synchronising items such as folders, emails and databases, or pretty much anything else you can think of.

In this example, you can see how easy it is to synchronise file changes.

public void SyncFiles(SyncId sourceId, SyncId destinationId, string sourceRoot,
  string destRoot, FileSyncScopeFilter filter, FileSyncOptions options)
{
  using (FileSyncProvider source = new FileSyncProvider(sourceId, sourceRoot,
      filter, options))
  using (FileSyncProvider destination = new FileSyncProvider(destinationId, destRoot,
      filter, options))
  {
    destination.AppliedChange += new EventHandler<AppliedChangeEventArgs>(OnApplyChange);
    SyncAgent agent = new SyncAgent();
    agent.LocalProvider = source;
    agent.RemoteProvider = destination;
    agent.Direction = SyncDirection.Upload;
    agent.Synchronize();
  }
}

public void OnApplyChange(object sender, AppliedChangeEventArgs args)
{
  if (args.ChangeType == ChangeType.Create)
    Console.WriteLine("The file {0} has been created at {1}",
      args.OldFilePath, args.NewFilePath);
}

As you can see, synchronising the files is very simple indeed. This level of functionality is available for database content as well. I look forward to its release.
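For comparison, database synchronisation with Sync Services for ADO.NET follows a broadly similar provider pattern. Treat the following as an untested sketch: the connection string, the server provider configuration and the table name are all illustrative, and the exact provider setup lives elsewhere in a real application:

```csharp
// Client side is a SQL Server Compact database; server side is a
// DbServerSyncProvider configured with sync adapters (not shown here).
SyncAgent agent = new SyncAgent();
agent.LocalProvider = new SqlCeClientSyncProvider(clientConnectionString);
agent.RemoteProvider = serverProvider;
agent.Configuration.SyncTables.Add("Customers");
SyncStatistics stats = agent.Synchronize();
```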