Ultimate Coder – The pursuit of Bacon

This is a copy of the post I made on the Intel site here. For the duration of the contest, I am posting a weekly blog digest of my progress with using the Perceptual Computing items. This week's post shows how Huda has evolved from the application as it stood at the end of the fourth week. My fellow competitors are planning to build some amazing applications, so I'd suggest that you head on over to the main Ultimate Coder site and follow along with their progress as well. I'd like to take this opportunity to wish them good luck.

Bring me the Bacon.

Rather than doing a day-by-day update, I've waited until near the end of the week to cover the progress I've been making. It couldn't be more timely given Steve's update this week, as I'd reached a lot of the same conclusions he did regarding gesture performance, and also about the layout of the interface (a point he raised elsewhere). Hopefully, a bit of video footage should showcase where we are with Huda right now.

The first part of this week revolved around finalising the core saving and loading of photographs. As I revealed last week, the filters aren’t applied directly to the image; rather, they are associated with the image in a separate file. The code I showed last week demonstrated that each filter would be responsible for serialising itself, so the actual serialisation graph code should be relatively trivial.

using AForge.Imaging.Filters;
using GalaSoft.MvvmLight.Messaging;
using Huda.Messages;
using Huda.Transformations;
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Runtime.Serialization.Formatters.Binary;
using System.Text;
using System.Threading.Tasks;

namespace Huda.Model
{
    public sealed class PhotoManager
    {
        private Dictionary<string, List<IImageTransform>> savedImageList = new Dictionary<string, List<IImageTransform>>();
        private readonly object SyncLock = new object();
        private string savedirectory;
        private string path;

        public PhotoManager()
        {
        }

        public List<IImageTransform> GetFilters(string originalImage)
        {
            if (!savedImageList.ContainsKey(originalImage))
            {
                lock (SyncLock)
                {
                    if (!savedImageList.ContainsKey(originalImage))
                    {
                        savedImageList.Add(originalImage, new List<IImageTransform>());
                        Save();
                    }
                }
            }
            return savedImageList[originalImage];
        }

        private void AddFilter(string key, IImageTransform filter)
        {
            var filters = GetFilters(key);
            lock (SyncLock)
            {
                filters.Add(filter);
                Save();
            }
        }

        private void Save()
        {
            if (savedImageList == null || savedImageList.Count == 0) return;
            using (FileStream fs = new FileStream(path, FileMode.Create, FileAccess.Write))
            {
                BinaryFormatter formatter = new BinaryFormatter();
                formatter.Serialize(fs, savedImageList);
            }
        }

        public void Load()
        {
            savedirectory = Path.Combine(Environment.GetFolderPath(Environment.SpecialFolder.CommonApplicationData), @"Huda");
            if (!Directory.Exists(savedirectory))
            {
                Directory.CreateDirectory(savedirectory);
            }
            path = Path.Combine(savedirectory, @"Huda.hda");
            Messenger.Default.Register<FilterUpdatedMessage>(this, FilterUpdated);
            if (!File.Exists(path)) return;
            using (FileStream fs = new FileStream(path, FileMode.Open, FileAccess.Read))
            {
                BinaryFormatter formatter = new BinaryFormatter();
                savedImageList = (Dictionary<string, List<IImageTransform>>)formatter.Deserialize(fs);
            }
        }

        private void FilterUpdated(FilterUpdatedMessage filter)
        {
            if (filter == null) throw new ArgumentNullException("filter");
            AddFilter(filter.Key, filter.Filter);
        }
    }
}

While I could have created a custom format to wrap the Dictionary there, it would have been overkill for what I was trying to achieve. The reason for me using a Dictionary in this class is that it’s a great way to see if an item is already present – helping to satisfy our requirement to load the filters that we previously applied to the photo.

This class subscribes to all the messages that Huda will raise with relation to filters, which means that we can easily save the state of the filters as and when they are updated. A key tenet of MVVM is that we shouldn’t be introducing knowledge of one ViewModel in another ViewModel, so we use messaging as an abstraction to pass information around. The big advantage of this is that we are able to hook into the same messages here in the model, so we don’t have to worry about which ViewModel is going to update the model. In MVVM, it’s okay to have the link between the ViewModel and the Model, but as we already have this messaging mechanism in place, it’s straightforward for us to just use it here.
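
To make that concrete, here's roughly what one of those messages looks like and how a ViewModel would raise it. The real FilterUpdatedMessage lives in Huda.Messages and may differ in detail, and the SepiaTransform and selectedImagePath names are purely illustrative:

public class FilterUpdatedMessage
{
    public FilterUpdatedMessage(string key, IImageTransform filter)
    {
        Key = key;
        Filter = filter;
    }

    // The key is the path of the original image; the filter is whatever
    // transformation the user just applied.
    public string Key { get; private set; }
    public IImageTransform Filter { get; private set; }
}

// Somewhere in a ViewModel, after the user picks a filter
// (SepiaTransform and selectedImagePath are illustrative names):
Messenger.Default.Send(new FilterUpdatedMessage(selectedImagePath, new SepiaTransform()));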

Saving and loading the actual photo and filter information is accomplished using a simple BinaryFormatter. As the filters are explicitly serializing themselves, the performance should be acceptable as the formatter doesn’t have to rely on reflection to work out what it needs to save. That’s Pete’s big serializer hint for you this week – it may be easy to let the formatter work out what to serialize, but when your file gets bigger, the load takes longer.
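
If you haven't done explicit serialization before, this is the shape of it. A minimal sketch, assuming a filter with nothing more than a version number to persist – the real Huda transformations will carry their own settings:

using System.Runtime.Serialization;

[Serializable]
public sealed class SepiaTransform : IImageTransform, ISerializable
{
    private int version = 1;

    public SepiaTransform() { }

    // Called by the BinaryFormatter when the .hda file is loaded.
    private SepiaTransform(SerializationInfo info, StreamingContext context)
    {
        version = info.GetInt32("Version");
    }

    // Only the values we explicitly add here end up in the .hda file,
    // so the formatter never has to reflect over the type.
    public void GetObjectData(SerializationInfo info, StreamingContext context)
    {
        info.AddValue("Version", version);
    }

    public System.Windows.Media.Imaging.BitmapSource Transform(System.Windows.Media.Imaging.BitmapSource source)
    {
        // The actual sepia work is shown further down the post; this stub just
        // returns the source untouched.
        return source;
    }
}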

You may wonder why I’m only guarding the Save routine through a lock (note, the lock is actually in the AddFilter method). The Load routine is only called once – at application startup, and it has to finish before the application can continue, so we have this in a thread in the Startup routine – and the splash screen doesn’t disappear until all the information is loaded.
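
For completeness, the startup plumbing amounts to something like this – the names here are illustrative, and the real Huda startup has a little more going on:

// Kick the load off on a background thread; the splash screen stays up
// until the saved filter data is in memory.
var photoManager = new PhotoManager();
Task.Run(() => photoManager.Load())
    .ContinueWith(_ => splashScreen.Close(TimeSpan.FromMilliseconds(300)),
        TaskScheduler.FromCurrentSynchronizationContext());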

Okay, that’s the core of the photo management in place. Now it’s just a matter of maintaining the filter lists, and that’s accomplished through applying the filters.

You may recall from last week's code that some of the filters had to convert from BitmapSource to Bitmap (needed for AForge support), but what wasn't apparent was the fact that we need to convert to a standard image format. Knowing this, I decided to simplify my life by introducing the following helper code:

public static Bitmap To24BppBitmap(this BitmapSource source)
{
    return ToBitmap(source, System.Drawing.Imaging.PixelFormat.Format24bppRgb);
}

public static Bitmap ToBitmap(BitmapSource source, System.Drawing.Imaging.PixelFormat format)
{
    using (Bitmap bmp = ToBitmap(source))
    {
        return AForge.Imaging.Image.Clone(bmp, format);
    }
}
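
The single-argument ToBitmap overload and ImageSourceHelpers.FromBitmap aren't shown above. If you want to follow along, this is one way they could be written – round-tripping through an in-memory bitmap stream – though the actual Huda helpers may differ:

// Needs System.Drawing, System.IO and System.Windows.Media.Imaging.
public static Bitmap ToBitmap(BitmapSource source)
{
    using (MemoryStream stream = new MemoryStream())
    {
        BmpBitmapEncoder encoder = new BmpBitmapEncoder();
        encoder.Frames.Add(BitmapFrame.Create(source));
        encoder.Save(stream);
        stream.Position = 0;
        // Clone so the returned Bitmap no longer depends on the stream staying open.
        using (Bitmap temp = new Bitmap(stream))
        {
            return new Bitmap(temp);
        }
    }
}

public static BitmapSource FromBitmap(Bitmap bitmap)
{
    using (MemoryStream stream = new MemoryStream())
    {
        bitmap.Save(stream, System.Drawing.Imaging.ImageFormat.Bmp);
        stream.Position = 0;
        BitmapImage image = new BitmapImage();
        image.BeginInit();
        image.CacheOption = BitmapCacheOption.OnLoad;   // load fully so the stream can go
        image.StreamSource = stream;
        image.EndInit();
        image.Freeze();
        return image;
    }
}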

This enabled me to simplify my filter calls quite considerably. So, the Sepia transformation code, for instance, became this:

public System.Windows.Media.Imaging.BitmapSource Transform(System.Windows.Media.Imaging.BitmapSource source)
{
    using (Bitmap bitmap = source.To24BppBitmap())
    {
        using (Bitmap bmp = new Sepia().Apply(bitmap))
        {
            return ImageSourceHelpers.FromBitmap(bmp);
        }
    }
}

Another thing that should be apparent in that code is that I'm wrapping Bitmap allocations inside using statements. This is because Huda was leaking memory all over the place: Bitmap objects were being created and never released, and that needed taking care of. Fortunately, it's a very simple fix – wrapping each Bitmap allocation in a using statement ensures it is disposed as soon as it goes out of scope.

I realised after I posted last week's article that you might actually like to see the code I'm using to get the gesture to work with the tree. It's incredibly simple and very similar to the POC sample I showed last week. So, if you want to hook the gesture code in, all you need to do is:


using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Windows;
using System.Windows.Interactivity;
using System.Windows.Input;
using System.Windows.Controls;
using System.Diagnostics;
using System.Timers;
using System.Windows.Media;
using System.Windows.Threading;
using Goldlight.Perceptual.Sdk;

namespace Huda.Behaviors
{
    public class TreeviewGestureItemBehavior : Behavior<TreeView>
    {
        private bool inBounds;
        private Timer selectedTimer = new Timer();
        private TreeViewItem selectedTreeview;
        private TreeViewItem previousTreeView;

        protected override void OnAttached()
        {
            PipelineManager.Instance.Gesture.HandMoved += Gesture_HandMoved;
            selectedTimer.Interval = 2000;
            selectedTimer.AutoReset = true;
            selectedTimer.Elapsed += selectedTimer_Elapsed;
            base.OnAttached();
        }

        protected override void OnDetaching()
        {
            PipelineManager.Instance.Gesture.HandMoved -= Gesture_HandMoved;
            selectedTimer.Stop();
            base.OnDetaching();
        }

        void selectedTimer_Elapsed(object sender, ElapsedEventArgs e)
        {
            selectedTimer.Stop();
            Dispatcher.Invoke(DispatcherPriority.Normal,
                (Action)delegate()
                {
                    if (previousTreeView != null)
                    {
                        previousTreeView.IsSelected = false;
                    }
                    previousTreeView = selectedTreeview;
                    selectedTreeview.IsSelected = true;
                });
        }

        void Gesture_HandMoved(object sender, Goldlight.Perceptual.Sdk.Events.GesturePositionEventArgs e)
        {
            Dispatcher.Invoke((Action)delegate()
            {
                Point pt = new Point(e.X - ((MainWindow)App.Current.MainWindow).GetWidth() / 2, e.Y - ((MainWindow)App.Current.MainWindow).GetHeight() / 2);
                Rect rect = AssociatedObject.RenderTransform.TransformBounds(
                    new Rect(0, 0, AssociatedObject.ActualWidth, AssociatedObject.ActualHeight));
                if (rect.Contains(pt))
                {
                    if (!inBounds)
                    {
                        Debug.WriteLine("Here I am");
                        inBounds = true;
                    }
                    // Now, let's test to see if we are interested in this coordinate.
                    if (selectedRectangle == null || !selectedRectangle.Contains(pt))
                    {
                        GetElement(pt);
                    }
                }
                else
                {
                    if (inBounds)
                    {
                        selectedTimer.Stop();
                        inBounds = false;
                    }
                }
            }, DispatcherPriority.Normal);
        }

        private Rect selectedRectangle;

        private void GetElement(Point pt)
        {
            IInputElement element = AssociatedObject.InputHitTest(pt);
            if (element == null) return;
            TreeViewItem t = FindUpVisualTree<TreeViewItem>((DependencyObject)element);
            if (t != null)
            {
                // Get the bounds of t.
                if (selectedTreeview != t)
                {
                    selectedTimer.Stop();
                    // This is a different item.
                    selectedTreeview = t;
                    selectedRectangle = selectedTreeview.RenderTransform.TransformBounds(
                        new Rect(0, 0, t.ActualWidth, t.ActualHeight));
                    selectedTimer.Start();
                }
            }
            else
            {
                // Stop processing and drop this out.
                selectedTimer.Stop();
                selectedTreeview = null;
            }
        }

        private T FindUpVisualTree<T>(DependencyObject initial) where T : DependencyObject
        {
            DependencyObject current = initial;
            while (current != null && current.GetType() != typeof(T))
            {
                current = VisualTreeHelper.GetParent(current);
            }
            return current as T;
        }
    }
}

Of course, this is only useful if you have the Gesture handling code that I use as well. Being a nice guy, here it is:

using Goldlight.Perceptual.Sdk.Events;
using Goldlight.Windows8.Mvvm;
using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Linq;
using System.Text;
using System.Threading;
using System.Threading.Tasks;

namespace Goldlight.Perceptual.Sdk
{
    public class GesturePipeline : AsyncPipelineBase
    {
        private WeakEvent<GesturePositionEventArgs> gestureRaised = new WeakEvent<GesturePositionEventArgs>();
        private WeakEvent<MultipleGestureEventArgs> multipleGesturesRaised = new WeakEvent<MultipleGestureEventArgs>();
        private WeakEvent<GestureRecognizedEventArgs> gestureRecognised = new WeakEvent<GestureRecognizedEventArgs>();

        public event EventHandler<GesturePositionEventArgs> HandMoved
        {
            add { gestureRaised.Add(value); }
            remove { gestureRaised.Remove(value); }
        }

        public event EventHandler<MultipleGestureEventArgs> FingersMoved
        {
            add { multipleGesturesRaised.Add(value); }
            remove { multipleGesturesRaised.Remove(value); }
        }

        public event EventHandler<GestureRecognizedEventArgs> GestureRecognized
        {
            add { gestureRecognised.Add(value); }
            remove { gestureRecognised.Remove(value); }
        }

        public GesturePipeline()
        {
            EnableGesture();
            searchPattern = GetSearchPattern();
        }

        private int ScreenWidth = 1024;
        private int ScreenHeight = 960;

        public void SetBounds(int width, int height)
        {
            this.ScreenWidth = width;
            this.ScreenHeight = height;
        }

        public override void OnGestureSetup(ref PXCMGesture.ProfileInfo pinfo)
        {
            // Limit how close we have to get.
            pinfo.activationDistance = 70;
            base.OnGestureSetup(ref pinfo);
        }

        public override void OnGesture(ref PXCMGesture.Gesture gesture)
        {
            var handler = gestureRecognised;
            if (handler != null)
            {
                handler.Invoke(new GestureRecognizedEventArgs(gesture.label.ToString()));
            }
            base.OnGesture(ref gesture);
        }

        public override bool OnNewFrame()
        {
            // We only query the gesture if we are connected. If not, we shouldn't
            // attempt to query the gesture.
            try
            {
                if (!IsDisconnected())
                {
                    var gesture = QueryGesture();
                    PXCMGesture.GeoNode[] nodeData = new PXCMGesture.GeoNode[6];
                    PXCMGesture.GeoNode singleNode;
                    searchPattern = PXCMGesture.GeoNode.Label.LABEL_BODY_HAND_PRIMARY;
                    var status = gesture.QueryNodeData(0, GetSearchPattern(), out singleNode);
                    if (status >= pxcmStatus.PXCM_STATUS_NO_ERROR)
                    {
                        var handler = gestureRaised;
                        if (handler != null)
                        {
                            handler.Invoke(new GesturePositionEventArgs(singleNode.positionImage.x - 85,
                                singleNode.positionImage.y - 75,
                                singleNode.body.ToString(),
                                ScreenWidth,
                                ScreenHeight));
                        }
                    }
                }
                Sleep(20);
            }
            catch (Exception ex)
            {
                // Ensure that we log the exception. We don't
                // want to rethrow it.
                Log(ex);
            }
            return true;
        }

        private static readonly object SyncLock = new object();

        private void Sleep(int time)
        {
            lock (SyncLock)
            {
                Monitor.Wait(SyncLock, time);
            }
        }

        private PXCMGesture.GeoNode.Label searchPattern;

        private PXCMGesture.GeoNode.Label GetSearchPattern()
        {
            return PXCMGesture.GeoNode.Label.LABEL_BODY_HAND_PRIMARY |
                PXCMGesture.GeoNode.Label.LABEL_FINGER_INDEX |
                PXCMGesture.GeoNode.Label.LABEL_FINGER_MIDDLE |
                PXCMGesture.GeoNode.Label.LABEL_FINGER_PINKY |
                PXCMGesture.GeoNode.Label.LABEL_FINGER_RING |
                PXCMGesture.GeoNode.Label.LABEL_FINGER_THUMB |
                PXCMGesture.GeoNode.Label.LABEL_HAND_FINGERTIP;
        }
    }
}

In order to use this, you’ll need my AsyncPipelineBase class:

using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Linq;
using System.Text;
using System.Threading;
using System.Threading.Tasks;

namespace Goldlight.Perceptual.Sdk
{
    public abstract class AsyncPipelineBase : UtilMPipeline
    {
        private readonly object SyncLock = new object();

        public async void Run()
        {
            try
            {
                await Task.Run(() =>
                {
                    while (true)
                    {
                        this.LoopFrames();
                        lock (SyncLock)
                        {
                            Monitor.Wait(SyncLock, 1000);
                        }
                    }
                });
            }
            finally
            {
                this.Dispose();
            }
        }
    }
}

If the gesture camera isn't hooked up to the computer, the call to LoopFrames drops out, which means we have to maintain a loop for the lifetime of the application. I simply wrap this in a while loop that sleeps for a second whenever LoopFrames drops out; this seems to be a reasonable timeout before the application tries to pick the camera up again.

Another big change for me this week was dropping the carousel. I know, it seems as though I'm chopping and changing the interface constantly, but that's what happens in development: you try something out, refine it, and drop it if there's a better way to do things. There were three main reasons for dropping the carousel:

  1. Selecting a filter from elsewhere in the carousel also triggered the action associated with that filter. This could have been overcome, but the way it looked would have been compromised.
  2. I put the carousel in place to give me a chance to use some of the pre-canned gestures. Unfortunately, I found using them to be far too cumbersome and “laggy”. I grew impatient waiting for it to cycle through the commands.
  3. Using gestures with this just didn’t feel natural or intuitive. This was the biggest issue as far as I’m concerned.

Fortunately, the modular design of Huda allows me to take the items that were associated with the carousel and just drop them into another container with no effort. Two minutes and I had the filters in place in a different way. The downside is that Huda is becoming less and less like a Perceptual application as the interface favours touch more and more. Hmmmm, okay, I'll revisit that statement. Huda feels less and less like an application that is trying too hard to be Perceptual. Over the last couple of weeks of development, I'll be augmenting the touch interface with complementary gestures. For instance, I'd like to experiment with using a pinch gesture to trigger a 2x zoom. As the core infrastructure is in place, all I would really need to do here is hook the pinch recognition into the zoom command by passing a message. Now you see why I've gone with MVVM and message passing.
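
A sketch of how that pinch-to-zoom hookup could look, using the GestureRecognized event from the GesturePipeline code above. The Label property on the event args, the "PINCH" label and the ZoomMessage type are all assumptions at this point – I haven't written that part yet:

// The "Label" property and gesture name are guesses about my own event args here;
// ZoomMessage doesn't exist yet either.
PipelineManager.Instance.Gesture.GestureRecognized += (sender, e) =>
{
    if (e.Label.Contains("PINCH"))
    {
        // Whichever ViewModel owns the photo view listens for this and applies the zoom.
        Messenger.Default.Send(new ZoomMessage(2.0));
    }
};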

Part of the reason for the change in the interface came about because of something Steve talked about this week regarding "UserLaziness". Quite naturally, the left and right hand sides of the screen relate to the operations that the user can perform to open photographs and to manipulate the lists, but this just didn't feel quite right. Basically, reordering the filters is something that I expect the user will do a lot less frequently than actually adding them, so it seems to me that we should reorder the interface a bit: move the applied filters to the bottom of the screen and put the filter selection at the right hand side, where it hovers more naturally under the right thumb. This also means that the toolbar has become superfluous – again, its functions should live in that same location, minimising the movements the user has to make.

One thing that is easy to overlook is that this is an Ultrabook application first and foremost. To that end, it really should make use of Ultrabook features. As a test, I hooked the accelerometer up to the filters, so shaking the Ultrabook adds, appropriately enough, a blur filter.
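
For the curious, the shake test is wired up along these lines. This assumes the WinRT Windows.Devices.Sensors.Accelerometer is reachable through the Goldlight.Windows8 plumbing, and the BlurTransform and currentImagePath names are illustrative rather than the exact Huda code:

private void HookAccelerometer()
{
    // GetDefault() returns null on machines without a sensor, so guard it.
    Windows.Devices.Sensors.Accelerometer accelerometer =
        Windows.Devices.Sensors.Accelerometer.GetDefault();
    if (accelerometer == null) return;

    // A shake simply raises the same message any other filter selection would.
    accelerometer.Shaken += (sender, args) =>
        Messenger.Default.Send(new FilterUpdatedMessage(currentImagePath, new BlurTransform()));
}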

This week's music:

  • AC/DC – For Those About To Rock
  • AC/DC – Back In Black
  • Bon Jovi – What About Now
  • Eric Johnson – Ah Via Musicom
  • Wolfmother – Cosmic Egg
  • Airbourne – Runnin Wild
  • DeeExpus – The King Of Number 33
  • Dream Theater – A Dramatic Turn Of Events

Status Update

So where are we now? Well, Huda feels like an Ultrabook app to me first and foremost. When I start it up, I don’t really need to use the mouse or keyboard at all – the desktop side of it isn’t as necessary because the Ultrabook really makes it easier for the user to use touch to accomplish pretty much everything. I’m not too happy with the pinch zoom and touch drag though – it does very, very odd things – I’ll sort that out so that it is much closer to what you’d expect with a photo editing package.

One point of design that I’d like to touch upon. Nicole raised an interesting point about not saving the filtered version of the photo into the same location as the original. My intention was always to have a separate Export directory in which to store the images, but I still haven’t decided where that’s going to be. So Nicole – here’s your chance to really shape an application to do what you want – where would you like to see exported images reside?

Finally. While playing around with adding filters, it quickly becomes apparent that it’s possible to add them far quicker than they can be applied and rendered. Now, there are things I can do to help with this, but I am making the tough call here that the most important thing for me to concentrate on right now is adding more gestures, and making sure that it’s as robust and tested as I can possibly make it. If there’s time, and it doesn’t impact on the stability, I will mitigate the filter application. The issue is to do with the fact that Bitmap and BitmapSource conversions happen in a lot of the filters. As the filters are reapplied to the base image every time, this conversion could happen each time – I could cache earlier image conversions and just apply the new filter, but reordering the filters will still incur this overhead. As I say, I’ll assess how best to fit this in.
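
For what it's worth, the mitigation I have in mind is a simple stage cache – keep the BitmapSource produced after each filter, and only recompute from the first filter that changed. A rough sketch, not final Huda code:

private readonly List<BitmapSource> stageCache = new List<BitmapSource>();

private BitmapSource ApplyFilters(BitmapSource original, IList<IImageTransform> filters, int firstChangedIndex)
{
    // Throw away every cached stage at or after the first filter that changed.
    if (stageCache.Count > firstChangedIndex)
    {
        stageCache.RemoveRange(firstChangedIndex, stageCache.Count - firstChangedIndex);
    }

    // Resume from the last unchanged stage rather than the original image.
    BitmapSource current = firstChangedIndex == 0 ? original : stageCache[firstChangedIndex - 1];
    for (int i = firstChangedIndex; i < filters.Count; i++)
    {
        current = filters[i].Transform(current);
        stageCache.Add(current);
    }
    return current;
}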

Well, that’s me for this week. Thanks for reading and if there’s anything you need to know, please let me know.


Ultimate Coder – Week 2: Blog posting

This is a copy of the post I made on the Intel site here. For the duration of the contest, I am posting a weekly blog digest of my progress with using the Perceptual Computing items. The first week's post is really a scene setter where I explain how I got to this point, and details bits and pieces about the app I intend to build. My fellow competitors are planning to build some amazing applications, so I'd suggest that you head on over to the main Ultimate Coder site and follow along with their progress as well. I'd like to take this opportunity to wish them good luck.

Week 1

Well, this is the first week of coding for the Perceptual Computing challenge, and I thought it might be interesting for you to know how I’m approaching developing the application, what I see the challenges as being, and any roadblocks that I hit on the way. I must say, up front, that this is going to be a long post precisely because there’s so much to put in here. I’ll be rambling on about decisions I make, and I’ll even post some code in for you to have a look at if you’re interested.

As I'm not just blogging for developers here, writing these posts is certainly going to be interesting. I don't want to bog you down with technical details that you aren't interested in if you just want to know about my thought processes instead, but I don't want to leave you wondering how I did something if you are interested in it. Please let me know if there's anything that you'd like clarification on, but also, please let me know if the article weighs in too heavily on the technical side.

Day 1.

Well, what a day I've had with Huda. A lot of what I want to do with Huda is sitting in my head, so I thought I'd start out by roughing out a very, very basic layout of what I wanted to put into place. Armed with my trusty copy of Expression Blend, I mocked out a rough interface which allowed me to get a feel for sizing and positioning. What I really wanted to get a feel for was whether Huda would fit into a layout that allows panels to fly backwards and forwards, and yet still lets the user see the underlying photo. I want the "chrome" to be unobtrusive, but stylish.

As you can see, this representation is spartan at best, and if this was the end goal of what I was going to put into Huda, I would hang my head in shame, but as it’s a mockup, it’s good enough for my purposes. I’ve divided the screen into three rough areas at the moment. At the right, we have a list of all the filters that have been applied to the image, in the order they were applied. The user is going to be able to drag things around in this list using a variety of inputs, so the text is going to be large enough to cope with a less accurate touch point than from a mouse alone.

The middle part of the picture represents the pictures that are available for editing in the currently selected folder. When the user clicks on a folder in the left hand panel, this rearranges to show that folder at the top, and all its children – and the pictures will appear in the centre of the screen. The user will click on a picture to open it for editing. I've taken this approach, rather than just using a standard Open File dialog, because I want the user to be able to use non-keyboard/mouse input, and the standard dialogs aren't optimised for things like touch. This does have the advantage of allowing me to really play with the styling and provide a completely unified experience across the different areas of the application.

Well, now that I’ve finished roughing out the first part of the interface, it’s time for me to actually write some code. I’ve decided that the initial codebase is going to be broken down into four projects – I’m using WPF, C#, .NET 4.5 and Visual Studio Ultimate 2012 on Windows 8 Professional for those who care about such things – and it looks like this:

  • Goldlight.Common provides common items such as interfaces that are used in the different projects, and definitions for things like WeakEvents.
  • Goldlight.Perceptual.Sdk is the actual meat of the SDK code.  Initially this will be kept simple, but I will expand and enhance this as we go through the weeks.
  • Goldlight.Windows8 contains the plumbing necessary to use Ultrabook™ features such as flipping the display into tablet mode, and it isolates the UI from having to know about all the plumbing that has to be put in place to use the WinRT libraries.
  • Huda is the actual application, so I’m going to spend most of this week and next week deep in this part, with some time spent in Goldlight.Perceptual.Sdk.

When I start writing a UI, I tend to just rough-block things in as a first draft. So that’s what I did today. I’ve created a basic page and removed the standard Windows chrome. I’m doing this because I want to have fine grained control of the interface when it transitions between desktop and tablet mode. The styling is incredibly easy to apply, so that’s where I started.

A quick note if you’re not familiar with WPF development. When styling WPF applications, it’s generally a good idea to put the styling in something called a ResourceDictionary. If you’re familiar with CSS, this is WPF’s equivalent of a separate stylesheet file. I won’t bore you with what this file actually looks like at this point, but please let me know if you would like more information. Once I’ve fine tuned some of the styling, I’ll make this file available and show how it can be used to transition the interface over – this will play a large part when we convert our application from desktop to tablet mode, so it makes sense to put the plumbing in place right at the start.

My first pass on the UI involved creating a couple of basic usercontrols that animate when the user brings the mouse over them or touches them; giving a visual cue that there’s something of interest in this area. I’ve deliberately created them to be ugly – this is a large part of my WPF workflow – concentrate on putting the basics in place and then refine them. I work almost exclusively with a development pattern called MVVM (Model View ViewModel), which basically allows me to decouple the view from the underlying application logic. This is a standard pattern for WPF zealots like myself, and I find that it really shines in situations like this, where I just quickly want to get some logic in place.

The first usercontrol I put in place is just an empty shell that will hold the filters that the user has added. As I need to get an image on the screen before I can add any filters, I don’t want to spend too much time on this just yet – I just needed to have it in the UI as a placeholder, primarily so that I can see how gesture code will affect the UI.

The second control is more interesting. This control represents the current selected folder and all its children. My first pass of this was just to put a ListBox in place in this control, and to have the control expand and contract as the user interacts with it. The ListBox holds the children of the current folder, so I put a button in place to allow the user to display the images for the top level folder. When I run the application, it quickly becomes apparent to me that this doesn't work as a UI design, so I will revisit this with alternative ideas.

I could have left the application here, happy that I had the beginnings of a user interface in place. Granted, it doesn't do much right now – it displays the child folders associated with My Pictures, and that's about it, but it does work. However, what's the point of my doing this development if I don't bring in gesture and voice control? In the following video, once Huda has started, I'm controlling the interface entirely with gestures and voice recognition (when I say filter, a message box is displayed saying filter – not the most startling achievement, but it's pretty cool how easy it is to do). Because I'm going to issue a voice command for this first demonstration, I decided not to do a voice over – it would just sound weird if I paused halfway through a sentence to say "Filter".
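
For the curious, the "say Filter, get a message box" wiring amounts to something like this. The VoicePipeline is shown further down the post; the voicePipeline field, the VoiceEventArgs cast and its Dictation property are assumptions about plumbing I haven't shown here:

voicePipeline.VoiceRecognized += (sender, e) =>
{
    // The event delivers whole phrases, so a simple contains-check is enough for now.
    var args = e as VoiceEventArgs;
    if (args != null && args.Dictation.IndexOf("filter", StringComparison.OrdinalIgnoreCase) >= 0)
    {
        Application.Current.Dispatcher.Invoke((Action)(() => MessageBox.Show("Filter")));
    }
};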

 As you can see – that interface is ugly, but it’s really useful to get an idea of what works and what doesn’t. As I’m not happy with the folder view, that’s what I’ll work on tidying up in day 2.

Note: I’m not going to publish videos every day, and I’m not going to publish a day by day account of the development. The first couple of days are really important in terms of starting the process off and these are the points where I can really make quick wins, but later on, when I start really being finicky with the styling, you really aren’t going to be interested in knowing that I changed a TextBlock.FontSize from 13.333 to 18.666 – at least I hope you’re not.

The important thing for me, at the end of day 1, is that I have something to see from both sides. I have a basic UI in place; there's a long way to go with it yet, but it's on the screen, there's actual Perceptual work going on behind it, and it was surprisingly easy to get the basics in place. More importantly, my initial experiments have shown that the gestures are quite jerky, and getting any form of fine grained control is going to take quite a bit of ingenuity. Unfortunately, while I can get the voice recognition to work, it appears to be competing for processing time with the gesture code – both of which are running as background tasks.

One of the tasks I'll have to undertake is to profile the application to see where the hold up is – I have suspicions that the weak events might be partly to blame, but I'll need to verify this. Basically, a weak event is a convenience that allows a developer to write code that isn't going to be adversely affected if they forget to release an event. While this is a standard pattern in the WPF world, it does have an overhead, and that might not be best when we need to eke out the last drop of performance. I might have to put the onus on any consumer of this library to remember to unhook any events that they have hooked up.
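
For anyone curious what I mean by a weak event, here's the idea in miniature. This is a sketch of the pattern only – not the actual Goldlight.Windows8.Mvvm implementation – holding a weak reference to the subscriber rather than to the delegate, so the subscriber can be collected even if it never unhooks:

using System.Reflection;   // for MethodInfo

public class WeakEvent
{
    private class Entry
    {
        public WeakReference Target;   // null for static handlers
        public MethodInfo Method;
    }

    private readonly List<Entry> entries = new List<Entry>();

    public void Add(EventHandler handler)
    {
        entries.Add(new Entry
        {
            Target = handler.Target == null ? null : new WeakReference(handler.Target),
            Method = handler.Method
        });
    }

    public void Remove(EventHandler handler)
    {
        entries.RemoveAll(e => e.Method == handler.Method &&
            (e.Target == null ? handler.Target == null : Equals(e.Target.Target, handler.Target)));
    }

    public void Invoke(EventArgs args)
    {
        foreach (var entry in entries.ToArray())
        {
            object target = entry.Target == null ? null : entry.Target.Target;
            if (entry.Target != null && target == null)
            {
                // The subscriber has been garbage collected - quietly drop it.
                entries.Remove(entry);
                continue;
            }
            // The sender isn't tracked in this sketch, so pass null.
            entry.Method.Invoke(target, new object[] { null, args });
        }
    }
}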

Here's the gesture recognition code that I put in place today. I know it's not perfect and there's a lot that needs doing to it to make it production level code, but as it's the end of the first day I'm pretty happy with it:

using Goldlight.Perceptual.Sdk.Events;
using Goldlight.Windows8.Mvvm;
using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Linq;
using System.Text;
using System.Threading;
using System.Threading.Tasks;
namespace Goldlight.Perceptual.Sdk
{
    public class GesturePipeline : AsyncPipelineBase
    {
        private WeakEvent gestureRaised = new WeakEvent();

        public event EventHandler HandMoved
        {
            add { gestureRaised.Add(value); }
            remove { gestureRaised.Remove(value); }
        }

        public GesturePipeline()
        {
            EnableGesture();
        }

        public override void OnGestureSetup(ref PXCMGesture.ProfileInfo pinfo)
        {
            // Limit how close we have to get.
            pinfo.activationDistance = 75;
            base.OnGestureSetup(ref pinfo);
        }

        public override bool OnNewFrame()
        {
            // We only query the gesture if we are connected. If not, we shouldn't
            // attempt to query the gesture.
            try
            {
                if (!IsDisconnected())
                {
                    var gesture = QueryGesture();
                    PXCMGesture.GeoNode nodeData;
                    var status = gesture.QueryNodeData(0, GetSearchPattern(), out nodeData);
                    if (status >= pxcmStatus.PXCM_STATUS_NO_ERROR)
                    {
                        var handler = gestureRaised;
                        if (handler != null)
                        {
                            var node = nodeData;
                            handler.Invoke(new GestureEventArgs(node.positionImage.x,
                                node.positionImage.y, node.body.ToString()));
                        }
                    }
                }
            }
            catch (Exception ex)
            {
                // Error handling to go here...
                Debug.WriteLine(ex.ToString());
            }
            return base.OnNewFrame();
        }

        private PXCMGesture.GeoNode.Label GetSearchPattern()
        {
            return PXCMGesture.GeoNode.Label.LABEL_BODY_HAND_PRIMARY |
                PXCMGesture.GeoNode.Label.LABEL_FINGER_INDEX |
                PXCMGesture.GeoNode.Label.LABEL_FINGER_MIDDLE |
                PXCMGesture.GeoNode.Label.LABEL_FINGER_PINKY |
                PXCMGesture.GeoNode.Label.LABEL_FINGER_RING |
                PXCMGesture.GeoNode.Label.LABEL_FINGER_THUMB |
                PXCMGesture.GeoNode.Label.LABEL_HAND_FINGERTIP;
        }
    } 
}

At this stage, I’d just like to offer my thanks to Grasshopper.iics for the idea of tying the hand gesture to the mouse. As a quick way to demonstrate that the gesture was working, it’s a great idea. As I need to track individual fingers, it’s not a viable long term solution, but as a way to say “oh yes, that is working”, it’s invaluable.

Day 2

I've had a night to think about the folder display, and as I said yesterday, I'm really not happy with the whole button/list approach to the interface. What had started off as an attempt to steer clear of the whole "file system as a logical tree" metaphor just feels too alien to me, and I suspect that I would end up having to rework a lot of the styling to make this appear to be a logical tree. Plus, I really need to hook something up in the UI so that I can select folders and trigger the reload of the selected folder along with its child folders. We'll attend to the styling first.

As I’ve stated that we are going to present this part of the interface as though it’s a tree, it makes sense for us to actually use a tree, so I’m going to rip out the list and button, and replace them with a simple tree. As I’m using MVVM, the only thing I have to update is my UI code (known as the View). My internal logic is going to remain exactly the same – this is why I love MVVM. More importantly, this highlights why I start off with rough blocks. I like the fact that I can quickly get a feel for what’s working before I invest too much time in it. If you’re a developer using a tech stack that you’re comfortable with, and you have a technology that allows you to take this rapid iterative approach, I cannot recommend this quick, rough prototyping enough. It’s saved me from a lot of pain any number of times, and I suspect that it will do the same for you.

The second thing I’m going to do is hook selecting a tree node up to actually doing the refresh. Again, I put most of the plumbing in place for this yesterday – all I need to do today is actually hook the tree selection to the underlying logic. 

Now, I really want to play around with the styling a little bit. I'm going to restyle the folder tree so that it looks a bit more attractive, and I'm going to change the filter control and the folder view so that the user can drag them around the interface – including using touch, as I really feel this helps to make the Ultrabook™ stand out from other form factors. Having played around with the code, I've now got this in place:

I’ve added a lot of plumbing code to support the touch drag here. I’m leaving this part of the code “open” and unfinished right now because I want to add support into this for dragging through gestures. By doing this, I won’t have to touch the UI elements code to make them work, they should just work because they are responding to commands from this code.

Day 3+

I've really been pushing to get the application to the point where the user can select a photo from my own folder browser and picture selector combo, but the first thing I thought I would address was the voice control. When I really sat down with the code I'd put in place, I realised that I was doing more than I really needed to. Basically, in my architecture, I was creating a set of commands that the application would use as a sort of command and control option, and while that seemed to me to be a logical choice when I put it in, sober reflection pointed out to me that I was overcomplicating things. Basically, the only thing that needs to know about the commands is the application itself, so why was I supplying this to the perceptual part? If I let the Perceptual SDK just let me know about all the voice data it receives, the different parts of Huda can cherry pick as they see fit. Two minutes of tidy up code, and it's responding nicely.

As a quick aside here. The voice recognition doesn’t send you a stream of words as you’re reading out. It waits until you’ve paused and it sends you a phrase. This means that you have to be a little bit patient; you can’t expect to say “Filter” in Huda and for the filters to pop up a millisecond later because the voice recognition portion is waiting to see if you’ve finished that portion.

Fortunately, this means that my voice code is currently insanely simple:

using Goldlight.Perceptual.Sdk.Events;
using Goldlight.Windows8.Mvvm;
using System;
namespace Goldlight.Perceptual.Sdk
{
    /// <summary>
    /// Manages the whole voice pipeline.
    /// </summary>
    public class VoicePipeline : AsyncPipelineBase
    {
        private WeakEvent voiceRecognized = new WeakEvent();

        /// <summary>
        /// Event raised when the voice data has been recognized.
        /// </summary>
        public event EventHandler VoiceRecognized
        {
            add { voiceRecognized.Add(value); }
            remove { voiceRecognized.Remove(value); }
        }

        /// <summary>
        /// Instantiates a new instance of <see cref="VoicePipeline"/>.
        /// </summary>
        public VoicePipeline() : base()
        {
            EnableVoiceRecognition();
        }

        public override void OnRecognized(ref PXCMVoiceRecognition.Recognition data)
        {
            var handler = voiceRecognized;

            if (handler != null)
            {
                handler.Invoke(new VoiceEventArgs(data.dictation));
            }
            base.OnRecognized(ref data);
        }
    } 
}

The call to EnableVoiceRecognition lets the SDK know that this piece of functionality is interested in handling voice recognition (cunningly enough – I love descriptive method names). I could have put this functionality into the same class as I'm using for the gesture recognition, but there are a number of reasons I've chosen not to, the top two being:

  • I develop in an Object Oriented language, so I’d be violating best practices by “mixing concerns” here. This basically means that a class should be responsible for doing one thing, and one thing only. If it has to do more than one thing, then you need more than one class.
  • I want to be able to pick and mix what I use and where I use it in my main application. If I have more than one piece of functionality in one class then I have to start putting unnecessarily complicated logic in there to separate the different parts out into areas that I want to use.

The OnRecognized piece of code lets me pick up the phrases as they are recognized, and I just forward those phrases on to Huda. As Huda is going to have to decide what to do when it gets a command, I’m going to let it see all of them and just choose the ones it wants to deal with. This is an incredibly trivial operation. 

“Wow Pete, that’s a lot of text to say not a lot” I hear you say. Well, I would if you were actually talking to me. It could be the voices in my head supplying your dialogue here. The bottom line here is that Huda now has the ability to recognise commands much more readily than it did at the start of the week, and it recognizes them while I’m waving my arms about moving the cursor around the screen. That’s exciting. Not dangerous exciting. Just exciting in the way that it means that I don’t have to sacrifice part of my desired feature set in the first week. So far so good on the voice recognition front.

By the end of the week, Huda is now in the position where it displays images for the user to select, based off whether or not there are pictures in a particular folder. This is real progress, and I’m happy that we can use this to push forwards in week 2. Better still though, the user can select one of those pictures and it opens up in the window in the background.

I’m not quite happy with the look of the back button in the folders – it’s still too disconnected, so I’ve changed it. Next week, I’ll add folder representations so that it’s apparent what these actually are, but as that’s just a minor template change, I’m going to leave it for now. Here’s a sample of Huda in action, opening up folders and choosing a picture to view.

Keeping my head together

So, what do I do to keep my attention on the project? How do I keep focussed? Music and a lot of Cola. So, for your delectation, this week's playlist included:

  • AC/DC – For those about to rock (one of my favourites)
  • David Lee Roth – Eat ’em and Smile/Skyscraper
  • Andrea Bocelli – Romanza
  • Black Veil Brides – Set the world on fire
  • Deep Purple – Purpendicular
  • Herb Ellis and Joe Pass – Two for the road
  • The Angels – Two minute warning
  • Chickenfoot – Chickenfoot III

Each week, I’ll let you know what I’ve been listening to, and let’s see if you can judge what part of the application goes with what album. Who knows, there may be a correlation on something or other in there – someone may even end up getting a grant out of this.

Final thoughts for week 1

This has been a busy week. Huda has reached a stage where I can start adding the real meat of the application – the filters. This is going to be part of my push next week; getting the filter management and photo saving into place. The photo is there for the user to see, but that’s nowhere near enough so we’ll be looking at bringing the different parts together, and really making that UI pop out. By the end of next week, we should have all the filters in place – including screens for things like saturation filters. This is where we’ll start to see the benefits of the Ultrabook because we’ll offer touch optimised and keyboard/mouse/touch optimised screens depending on how the Ultrabook is being used.

Right now, the Perceptual features aren’t too refined, and I’ve not even begun to scratch the surface of what I want to do with the Ultrabook. Sure, I can drag things around and select them with touch, but that’s not really utilising the features in a way that I’d like. Next week, I’m incorporating some user interface elements that morph depending on whether you are using the Ultrabook in a desktop mode, or as a tablet. For example, I’ll be adding some colour adjustment filters where you can adjust values using text boxes in desktop mode, but I’ll be using another mechanism for adjusting these values when it’s purely tablet (again, I don’t want to spoil the surprise here, but I think there is a pretty cool alternative way of doing this).

The big challenge this week has been putting together this blog entry. This is the area where the solo contestants have the biggest disadvantage – time blogging is time we aren’t coding, so there’s a fine line we have to tread here.

One thing I haven't articulated is why I'm using WPF over WinRT/Metro for the application development. As I've hinted, I've a long history with WPF, and I'm a huge fan of it for developing applications. On the surface (no pun intended), WinRT XAML apps would appear to be a no brainer as a choice, but there are things that I can do quickly in WPF that will take me longer to achieve with WinRT XAML, simply because WPF is feature rich and XAML support in Windows 8 has a way to go to match this. That's not to say that I won't port this support across to WinRT at some point, but as speed is of the essence here, I want to be able to use what I know well, and I have a huge backlog of code that I can draw on as and when I need it.

I'd like to thank the judges, and my fellow contestants, for their support, ideas and comments this week. No matter what happens at the end of this contest, I can't help but think that we are all winners. Some of the ideas that the other teams have are so far out there that I can't help wanting to incorporate more. Whether or not I get the time to add extras is up for grabs, but right now I think that I want to try and bring gaze into the mix as an input, possibly to help people with accessibility issues – an area that I haven't really explored with the gesture SDK yet.

Ultimate Coder: Going Perceptual – Week 1 Blog Posting.

This is a copy of the post I made on the Intel site here. For the duration of the contest, I am posting a weekly blog digest of my progress with using the Perceptual Computing items. The first week's post is really a scene setter where I explain how I got to this point, and details bits and pieces about the app I intend to build. My fellow competitors are planning to build some amazing applications, so I'd suggest that you head on over to the main Ultimate Coder site and follow along with their progress as well. I'd like to take this opportunity to wish them good luck.

A couple of months ago, I was lucky enough to be asked if I would like to participate in an upcoming Intel® challenge, known at the time as Ultimate Coder 2. Having followed the original Ultimate Coder competition, I was highly chuffed to even be considered. I had to submit a proposal for an application that would work on a convertible Ultrabook™ and would make use of something called Perceptual Computing. Fortunately for me, I'd been inspired a couple of days earlier to write an application and describe how it was developed on CodeProject – my regular hangout for writing about things that interest me and that haven't really been covered much by others. This seemed to me to be too good an opportunity to resist; I did some research on what Perceptual Computing was, and decided I'd write the application to incorporate features that I thought would be a good match. As a bonus, I'd get to write about this and, as I like giving code away, I'd publish the source to the actual Perceptual Computing side as a CodeProject article at the end of the competition.

Okay, at this point, you’re probably wondering what the application is going to be. It’s a photo editing application, called Huda, and I bet your eyes just glazed over at that point because there are a bazillion and one photo editing applications out there and I said this was going to be original. Right now you’re probably wondering if I’ve ever heard of Photoshop® or Paint Shop Pro®, and you possibly think I’ve lost my mind. Bear with me though, I did say it would be different and I do like to try and surprise people. 

A slight sidebar here. I apologise in advance if my assault on the conventions of the English language becomes too much for you. Over the years I've developed a chatty style of writing, and I will slip backwards and forwards between the first and third person as needed to illustrate a point – when I get into the meat of the code that you need to write to use the frameworks, I will use the third person.

So what's so different about Huda? Why do I think it's worth writing articles about? In traditional photo editing applications, when you change the image and save it, the original image is lost (I call this a destructive edit) – Fireworks did offer something of this ability, but only if you work in .png format. In Huda, the original image isn't lost because the edits aren't applied to it – instead, the edits are effectively kept as a series of commands that can be replayed against the image, which gives the user the chance to come back to a photo months after they last edited it and do things like insert filters between others, or possibly rearrange and delete filters. The bottom line is, whatever edit you want to apply, you can still open your original image. Huda will, however, provide the ability to export the edited image so that everyone can appreciate your editing genius.
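
In code terms, the replay idea boils down to something like this – a sketch using the IImageTransform interface from the code earlier on this page, not the final Huda rendering pipeline:

public BitmapSource RenderPreview(string originalPath, IEnumerable<IImageTransform> filters)
{
    // The original file (a full path) is read fresh each time; nothing is ever written back to it.
    BitmapSource current = new BitmapImage(new Uri(originalPath));

    // Each stored edit is replayed in order, so inserting, removing or reordering
    // filters just changes the list - the source image is untouched.
    foreach (IImageTransform filter in filters)
    {
        current = filter.Transform(current);
    }
    return current;
}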

At this stage, you should be able to see where I’m going with Huda (BTW it’s pronounced Hooda), but what really excited me was the ability to offer alternative editing capabilities to users. This, to me, has the potential to really open up the whole photo editing experience for people, and to make it accessible beyond the traditional mouse/keyboard/digitizer inputs. After all, we now have the type of hardware available to us that we used to laugh at in Hollywood films, so let’s develop the types of applications that we used to laugh at. In fact, I’ve turned to Hollywood for ideas because users have been exposed to these ideas already and this should help to make it a less daunting experience for users.

Why is this learning curve so important? Well, to answer this, we really need to understand what I think Perceptual Computing will bring to Huda. You might be thinking that Perceptual Computing is a buzz phrase, or marketing gibberish, but I really believe that it is the next big thing for users. We have seen the first wave of devices that can do this with the Wii and then the XBox/Kinect combination, and people really responded to this, but these stopped short of what we can achieve with the next generation of devices and technologies. I’ll talk about some of the features that I will be fitting into Huda over the next few weeks and we should see why I’m so excited about the potential and, more importantly, what I think the challenges will be.

Touch computing. Touch is an important feature that people are used to already, and while this isn't being touted in the Perceptual Computing SDK, I do feel that it will play a vital part in the experience for the user. As an example, when the user wants to crop an image, they'll just touch the screen where they want to crop to – more on this in a minute, because this ties into another of the features we'll use. Now this is all well and good, but we can do more – perhaps we can drag those edits we were talking about around to reorder them. But wait, didn't we say we want our application to be more Hollywoody? Well, how about we drag the whole interface around? Why do we have to be constrained for it to look like a traditional desktop application? Let's throw away the rulebook here and have some fun.

Gestures. Well, touch is one level of input, but gestures take us to a whole new level. Whatever you can do with touch, you really should be able to do with gesture, so Huda will mimic touch with gestures, but that’s not enough. Touch is 2D, and gestures are 3D, so we really should be able to use that to our advantage. As an example of what I’ll be doing with this – you’ll reach towards the screen to zoom in, and pull back to zoom out. The big challenge with gestures will be to provide visual cues and prompts to help the user, and to cope with the fact that gestures are a bit less accurate. Gestures are the area that really excite me – I really want to get that whole Minority Report feel and have the user drag the interface through the air. Add some cool glow effects to represent the finger tips and you’re well on the way to creating a compelling user experience.

Voice. Voice systems aren’t new. They’ve been around for quite a while now, but their potential has remained largely unrealised. Who can forget Scotty, in Star Trek, picking up a mouse and talking to it? Well, voice recognition should play a large part in any Perceptual system. In the crop example, I talked about using touch, or gestures, to mark the cropping points; well, at this point your hands are otherwise occupied, so how do you actually perform the crop? With a Perceptual system, you merely need to say “Crop” and the image will be cropped to the crop points. In Huda, we’ll have the ability to add a photographic filter merely by issuing a command like “Add Sepia”. In playing round with the voice code, I have found that while it’s incredibly easy to use this, the trick is to really make the commands intuitive and memorable. There are two ways an application can use voice; either dictation or command mode. Huda is making use of command mode because that’s a good fit. Interestingly enough, my accent causes problems with the recognition code, so I’ll have to make sure that it can cope with different accents. If I’d been speaking with a posh accent, I might have missed this.

A feature that I’ll be evaluating for usefulness is the use of facial recognition. An idea that’s bubbling around in my mind is having facial recognition provide things like different UI configurations and personalising the most recently used photos depending on who’s using the application. The UI will be fluid, in any case, because it’s going to cope with running as a standard desktop, and then work in tablet mode – one of the many features that makes Ultrabooks™ cool.

So, how much of Huda is currently built? Well, in order to keep a level playing field, I only started writing Huda on the Friday at the start of the competition. Intel were kind enough to supply a Lenovo® Yoga 13 and a Gesture Camera to play around with, and I've spent the last couple of weeks getting up to speed with the Perceptual SDK. Huda is being written in WPF because this is a framework that I'm very comfortable in and I believe that there's still a place for desktop applications, plus it's really easy to develop different types of user interfaces, which is going to be really important for the application. My aim here is to show you how much you can accomplish in a short space of time, and to provide you with the same functionality at the end as I have available. This, after all, is what I like doing best. I want you to learn from my code and experiences, and really push this forward to the next level. Huda isn't the end point. Huda is the starting point for something much, much bigger.

Final thoughts. Gesture applications shouldn't be hard to use, and the experience of using them should be easily discoverable. I want the application to let the user know what's right, and to be intuitive enough to use without having to watch a 20 minute getting started video. It should be familiar and new at the same time. Hopefully, by the end of the challenge, we'll be in a much better position to create compelling Perceptual applications, and I'd like to thank Intel® for giving me the chance to try and help people with this journey. And to repay that favour, I'm making the offer that you will get access to all the perceptual library code I write.

Of Mice and Men and Computed Observables. Oh my.

I have recently been playing around with Knockout.js, and have been hugely impressed with just how clever it is, and how intuitive. One of the really great features that Knockout provides is the ability to have a computed observable object. That’s a fancy way of saying that it has functionality to update when objects that it is observing update. Borrowing an example from the Knockout website, this simply translates into code like this:

function AppViewModel() {
    this.firstName = ko.observable('Bob');
    this.lastName = ko.observable('Smith');
 
    this.fullName = ko.computed(function() {
        return this.firstName() + " " + this.lastName();
    }, this);
}

So, whenever firstName or lastName change, the fullName automatically changes as well.

Wouldn’t it be great if we could do this in WPF as well? I would love it if MVVM allowed you to update one property and have all dependent properties update with it – currently, if you have a dependent property then you have to update that manually.
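
If you haven't hit this before, here's the kind of manual bookkeeping I mean – a hypothetical PersonViewModel, nothing to do with Knockout or Huda:

public class PersonViewModel : INotifyPropertyChanged
{
    private string firstName;
    private string lastName;

    public string FirstName
    {
        get { return firstName; }
        set
        {
            firstName = value;
            OnPropertyChanged("FirstName");
            OnPropertyChanged("FullName");   // easy to forget as the ViewModel grows
        }
    }

    public string LastName
    {
        get { return lastName; }
        set
        {
            lastName = value;
            OnPropertyChanged("LastName");
            OnPropertyChanged("FullName");
        }
    }

    // The dependent property - it only updates in the UI if every setter above
    // remembers to raise change notification for it.
    public string FullName
    {
        get { return firstName + " " + lastName; }
    }

    public event PropertyChangedEventHandler PropertyChanged;

    private void OnPropertyChanged(string name)
    {
        var handler = PropertyChanged;
        if (handler != null) handler(this, new PropertyChangedEventArgs(name));
    }
}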

Well, Dan Vaughan has an amazing implementation of this functionality over on his site, and it’s quite astounding code – he really does have the brain the size of a planet. Being a simpler person though, I had to wonder if there was a simpler way that didn’t involve rewriting a lot of old ViewModels, and the word that kept coming to my mind is MarkupExtension. For those of you who don’t know what a MarkupExtension is, it’s a way to let XAML know that it has something to do that’s a little bit special – you’ve probably already used some even without knowing it – if you’ve used StaticResource, DynamicResource or Binding then you’ve used a MarkupExtension.

So, having thought it through from a few different angles, it seemed as though a MarkupExtension will do the job – and surprise surprise, it does. Before I go into the code, though, I’m going to share some code with you that you will definitely find handy if you write your own extensions that rely on the DataContext. Basically, because of the different places and ways you can declare a DataContext, I wanted some code that I could use to cope with the DataContext not being present when we fired off our extension.

So, let’s break down that extension first. Unsurprisingly, we’re going to make this an abstract extension, as it’s not much use on its own – it’s a great way to support other extensions, but it does nothing useful that we’d want to hook into in the XAML:

public abstract class DataContextBaseExtension : MarkupExtension

As you’ll notice, we are inheriting from MarkupExtension – this is vital if we want to be able to use this class as an extension. Now, by convention, extensions always end in Extension, but we don’t have to include that suffix when we use them in the XAML. Even though we don’t directly use this class in XAML, it’s good to name it this way so that we preserve the intent.

Next, we are going to declare some properties that we will find useful later on.

/// <summary>
/// Gets or sets the target object from the service provider
/// </summary>
protected object TargetObject { get; set; }

/// <summary>
/// Gets or sets the target property from the service provider
/// </summary>
protected object TargetProperty { get; set; }

/// <summary>
/// Gets or sets the target Dependency Object from the service provider
/// </summary>
protected DependencyObject TargetObjectDependencyObject { get; set; }

/// <summary>
/// Gets or sets the target Dependency Property from the service provider;
/// </summary>
protected DependencyProperty TargetObjectDependencyProperty { get; set; }

/// <summary>
/// Gets or sets the DataContext that this extension is hooking into.
/// </summary>
protected object DataContext { get; private set; }

Don’t worry about what they do yet – we’ll cover them more as we examine the code. Assigning these properties is taken care of in the overridden MarkupExtension.ProvideValue implementation:

/// <summary>
/// By sealing this method, we guarantee that the developer will always
/// have to use this implementation.
/// </summary>
public sealed override object ProvideValue(IServiceProvider serviceProvider)
{
    IProvideValueTarget target = serviceProvider.GetService(typeof(IProvideValueTarget)) as IProvideValueTarget;

    if (target != null)
    {
        TargetObject = target.TargetObject;
        TargetProperty = target.TargetProperty;

        // Convert to DP and DO if possible...
        TargetObjectDependencyProperty = TargetProperty as DependencyProperty;
        TargetObjectDependencyObject = TargetObject as DependencyObject;

        if (!FindDataContext())
        {
            SubscribeToDataContextChangedEvent();
        }
    }

    return OnProvideValue(serviceProvider);
}

As you can see, assigning these values is straightforward, but we start to see the meat of this class now. By sealing this method, we ensure that it cannot be overridden (giving us a stable base for this class – a derived class can’t accidentally bypass this implementation), which means we have to provide a way for derived classes to do any processing that they need as well. This is why we finish by returning the result of an abstract method that derived classes can hook into to do any further processing:

protected abstract object OnProvideValue(IServiceProvider serviceProvider);

Now, this method is only fired once, which means the DataContext could be set long after this code has finished. Fortunately, we can get round this by hooking into the DataContextChanged event.

private void SubscribeToDataContextChangedEvent()
{
    DependencyPropertyDescriptor.FromProperty(FrameworkElement.DataContextProperty, 
        TargetObjectDependencyObject.GetType())
        .AddValueChanged(TargetObjectDependencyObject, DataContextChanged);
}

But how do we actually find the DataContext? Well, this is straightforward as well:

private void DataContextChanged(object sender, EventArgs e)
{
    if (FindDataContext())
    {
        UnsubscribeFromDataContextChangedEvent();
    }
}

private bool FindDataContext()
{
    var dc = TargetObjectDependencyObject.GetValue(FrameworkElement.DataContextProperty) ??
        TargetObjectDependencyObject.GetValue(FrameworkContentElement.DataContextProperty);
    if (dc != null)
    {
        DataContext = dc;

        OnDataContextFound();
    }

    return dc != null;
}

private void UnsubscribeFromDataContextChangedEvent()
{
    DependencyPropertyDescriptor.FromProperty(FrameworkElement.DataContextProperty, 
         TargetObjectDependencyObject.GetType())
        .RemoveValueChanged(TargetObjectDependencyObject, DataContextChanged);
}

The only thing of any real note in there is that, once the DataContext is found in the event handler, we disconnect from the DataContextChanged event and call an abstract method to let derived classes know that the DataContext has been found.

protected abstract void OnDataContextFound();

Right, that’s our handy DataContextBaseExtension class out of the way (hold onto it – you might find it useful at some point in the future). Now let’s move onto the extension that does the markup mojo. This class adheres to the convention of ending in Extension:

public class ComputedObservableExtension : DataContextBaseExtension

Now, we want to let the user supply the parameter by name, or have it picked up automatically if the name is left out. You’ve seen this behaviour when you’ve done {Binding Name} or {Binding Path=Name} – in both cases it’s the Path property being set, and the markup extension machinery is clever enough to work this out. To do this, we need to supply two constructors, and use something called a ConstructorArgument to tell the extension that a particular property is being defaulted.

public ComputedObservableExtension()
    : this(string.Empty)
{

}

public ComputedObservableExtension(string path)
    : base()
{
    this.Path = path;
}

[ConstructorArgument("path")]
public string Path
{
    get;
    set;
}

We are going to use Path to refer to the computed property. We also need something to tell the extension which properties we are going to observe change notifications on:

public string ObservableProperties
{
    get;
    set;
}

I’m not being fancy with that list. In this implementation, you simply need to supply a comma-separated list of property names to this property. Now we override the OnProvideValue method to do a little bit of processing when the extension starts up.

protected override object OnProvideValue(IServiceProvider serviceProvider)
{
    if (TargetProperty != null)
    {
        targetProperty = TargetProperty as PropertyInfo;
    }

    if (!string.IsNullOrWhiteSpace(ObservableProperties))
    {
        string[] observables = ObservableProperties
            .Replace(" ", string.Empty)
            .Split(new char[] { ',' }, StringSplitOptions.RemoveEmptyEntries);

        if (observables.Count() > 0)
        {
            observedProperties.AddRange(observables);

            // And update the property...
            UpdateProperty(ComputedValue());
        }
    }

    return string.Empty;
}

Basically, we start off by caching the TargetProperty as a PropertyInfo. We’ll make heavy use of this later on, so it removes a bit of reflection burden if we already have the PropertyInfo at this point. Next, we split the list of observed properties up into a list that we will use later on. The UpdateProperty method is used to actually update the computed property on the screen, so let’s take a look at that next.

private void UpdateProperty(object value)
{
    if (TargetObjectDependencyObject != null)
    {
        if (TargetObjectDependencyProperty != null)
        {
            Action update = () => TargetObjectDependencyObject
                .SetValue(TargetObjectDependencyProperty, value);

            if (TargetObjectDependencyObject.CheckAccess())
            {
                update();
            }
            else
            {
                TargetObjectDependencyObject.Dispatcher.Invoke(update);
            }
        }
        else
        {
            if (targetProperty != null)
            {
                targetProperty.SetValue(TargetObject, value, null);
            }
        }
    }
}

As you can see, this code simply sets the value – either using the DependencyProperty and DependencyObject that we first obtained in the base class, or by updating the standard property if the DP isn’t present.

The ComputedValue method is responsible for retrieving the value from the underlying DataContext.

private string ComputedValue()
{
    if (DataContext == null) return string.Empty;
    if (string.IsNullOrWhiteSpace(Path)) throw new ArgumentException("Path must be set");

    object value = GetPropertyInfoForPath().GetValue(DataContext, null);

    if (value == null)
        return string.Empty;
    return value.ToString();
}

private PropertyInfo propInfo;

private PropertyInfo GetPropertyInfoForPath()
{
    if (propInfo != null) return propInfo;

    propInfo = DataContext.GetType().GetProperty(Path);
    if (propInfo == null) throw new InvalidOperationException(string.Format("{0} is not a valid property", Path));

    return propInfo;
}

We seem to be missing a bit though – what happens when we find the DataContext? Well, when we have a DataContext, we use the fact that it implements INotifyPropertyChanged to our advantage.

protected override void OnDataContextFound()
{
    if (DataContext != null)
    {
        propChanged = DataContext as INotifyPropertyChanged;
        if (propChanged == null) 
            throw new InvalidOperationException("The data context item must support INotifyPropertyChanged");

        UpdateProperty(ComputedValue());

        if (TargetObject != null)
        {
            ((FrameworkElement)TargetObject).Unloaded += ComputedObservableExtension_Unloaded;
        }
        propChanged.PropertyChanged += propChanged_PropertyChanged;
    }
}

void ComputedObservableExtension_Unloaded(object sender, RoutedEventArgs e)
{
    if (TargetObject != null)
    {
        ((FrameworkElement)TargetObject).Unloaded -= ComputedObservableExtension_Unloaded;
    }
    propChanged.PropertyChanged -= propChanged_PropertyChanged;
}

void propChanged_PropertyChanged(object sender, PropertyChangedEventArgs e)
{
    if (observedProperties.Count == 0 || string.IsNullOrWhiteSpace(e.PropertyName) || 
      observedProperties.Contains(e.PropertyName))
    {
        UpdateProperty(ComputedValue());
    }
}

And that’s pretty much it – that’s all we need to give the XAML the markup that we wanted.
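
For completeness, the snippets above also rely on a few private fields in ComputedObservableExtension that I haven’t called out explicitly – the cached PropertyInfo for the target, the list of observed property names and the INotifyPropertyChanged reference. Their declarations follow straight from how they’re used (you’ll need System.Reflection, System.Collections.Generic and System.ComponentModel in scope):

// Fields referred to by the snippets above (propInfo was shown earlier).
private PropertyInfo targetProperty;
private readonly List<string> observedProperties = new List<string>();
private INotifyPropertyChanged propChanged;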

But, what does our MVVM code look like now? Well, let’s start with a familiar POCO VM:

public class PersonViewModel : INotifyPropertyChanged
{
    public event PropertyChangedEventHandler PropertyChanged;

    private string firstName;
    private string surname;
    private int age;

    public string Firstname
    {
        get { return firstName; }
        set
        {
            if (firstName == value) return;
            firstName = value;
            Notify("Firstname");
        }
    }

    public string Surname
    {
        get { return surname; }
        set
        {
            if (surname == value) return;
            surname = value;
            Notify("Surname");
        }
    }

    public int Age
    {
        get { return age; }
        set
        {
            if (age == value) return;
            age = value;
            Notify("Age");
        }
    }

    public string Details
    {
        get
        {
            return firstName + " " + surname + " is " + age.ToString();
        }
    }

    private void Notify(string prop)
    {
        PropertyChangedEventHandler handler = PropertyChanged;
        if (handler == null) return;

        handler(this, new PropertyChangedEventArgs(prop));
    }
}

Now, let’s attach it to our main window.

/// <summary>
/// Interaction logic for MainWindow.xaml
/// </summary>
public partial class MainWindow : Window
{
    private PersonViewModel person;
    public MainWindow()
    {
        InitializeComponent();
        person = new PersonViewModel { Age = 21, Firstname = "Bob", Surname = "Smith" };
        DataContext = person;
    }
}

As you can see – we’ve put no magic in here. It’s just a simple VM. So, what does the XAML look like?

<Window x:Class="ComputedObservableExtensionExample.MainWindow"
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        xmlns:co="clr-namespace:ComputedObservableExtensionExample"
        Title="Computed ObservableExample" Height="350" Width="525">
    <Window.Resources>
        <Style TargetType="{x:Type TextBlock}">
            <Setter Property="FontSize" Value="13.333" />
            <Setter Property="HorizontalAlignment" Value="Left" />
            <Setter Property="VerticalAlignment" Value="Center" />
        </Style>
        <Style TargetType="{x:Type TextBox}">
            <Setter Property="FontSize" Value="13.333" />
            <Setter Property="HorizontalAlignment" Value="Stretch" />
            <Setter Property="VerticalAlignment" Value="Center" />
            <Setter Property="VerticalContentAlignment" Value="Center" />
            <Setter Property="Margin" Value="2" />
        </Style>
    </Window.Resources>
    <Grid>
        <Grid.ColumnDefinitions>
            <ColumnDefinition Width="100"/>
            <ColumnDefinition />
        </Grid.ColumnDefinitions>

        <Grid.RowDefinitions>
            <RowDefinition Height="30" />
            <RowDefinition Height="30" />
            <RowDefinition Height="30" />
            <RowDefinition Height="30" />
        </Grid.RowDefinitions>
        
        <TextBlock Text="Firstname" Grid.Row="0" Grid.Column="0" />
        <TextBlock Text="Surname" Grid.Row="1" Grid.Column="0" />
        <TextBlock Text="Age" Grid.Row="2" Grid.Column="0" />

        <TextBox Text="{Binding Firstname, UpdateSourceTrigger=PropertyChanged}" Grid.Row="0" Grid.Column="1" />
        <TextBox Text="{Binding Surname, UpdateSourceTrigger=PropertyChanged}" Grid.Row="1" Grid.Column="1" />
        <TextBox Text="{Binding Age, UpdateSourceTrigger=PropertyChanged}" Grid.Row="2" Grid.Column="1" />

        <TextBlock Text="{co:ComputedObservable Details, 
            ObservableProperties='Firstname,Surname,Age'}" Grid.Row="3" Grid.Column="0" Grid.ColumnSpan="2" />
    </Grid>
</Window>

And that’s it – the TextBlock with the ComputedObservable extension has been told that the Path is the Details property, and the properties that it’s observing are Firstname, Surname and Age – so changing any of those properties results in the computed property being updated as well.
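
To see it working from the code behind, something like this hypothetical click handler (wired to a button you’d add to the XAML yourself) is all it takes – the Age setter raises PropertyChanged, and the extension pushes the recomputed Details value straight into the TextBlock:

// Hypothetical handler in MainWindow.xaml.cs - "person" is the field we
// assigned to the DataContext in the constructor above.
private void Birthday_Click(object sender, RoutedEventArgs e)
{
    person.Age++;
}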

Any comments, enhancements or observations, please feel free to let me know. This is a first pass at this extension, and I’ve no doubt that I will be revisiting and enhancing it as I use it more and more. I will, of course, come back here and update this post as I do so.

Note – this project is a VS2012 solution, but it does not make use of any .NET 4.5 specific features, so you should be able to use the code in a VS2010 app.

Source is available here (Example.zip).

Keeping it focused

Well, it’s been a while since I’ve blogged, as I’ve been a busy little bee working on CodeStash. I feel guilty about this, so I’d like to take the chance to pass on the solution to a problem that was posted on Code Project today. The scenario goes like this:

You have a ViewModel with some text that you want to be updated, and you do all the usual binding work with some TextBox elements. The thing is, you want to update your ViewModel properties when the TextBox loses focus. OK, so that’s easily achieved – in fact, it’s the default behaviour of the TextBox. However, you have a button with IsDefault set on it, so pressing Enter in the TextBox triggers the button click – but as the TextBox hasn’t lost focus, the property you are binding to doesn’t get updated. This is, of course, a problem. Fortunately, there’s an easy little trick that you can deploy to update the property, and it’s easily achieved using an attached behavior.

All you need to do is associate the behaviour with the Button you have set the IsDefault property on and, Bob’s your mother’s brother, the Text property updates. So, what does this behavior look like?

using System.Windows.Interactivity;
using System.Windows.Controls;
using System.Windows;
using System.Windows.Input;
using System.Windows.Data;
/// <summary>
/// Associate this behaviour with the button that you mark as IsDefault
/// to trigger the ViewModel update when the user clicks enter in a textbox
/// and the property doesn't update because the update source is set to
/// lost focus.
/// </summary>
public class DefaultButtonUpdateTextBoxBindingBehavior : Behavior<Button>
{
    /// <summary>
    /// Hook into the button click event.
    /// </summary>
    protected override void OnAttached()
    {
        AssociatedObject.Click += AssociatedObject_Click;
        base.OnAttached();
    }

    /// <summary>
    /// Unhook the button click event.
    /// </summary>
    protected override void OnDetaching()
    {
        AssociatedObject.Click -= AssociatedObject_Click;
        base.OnDetaching();
    }

    /// <summary>
    /// The click event handler.
    /// </summary>
    void AssociatedObject_Click(object sender, System.Windows.RoutedEventArgs e)
    {
        // Get the element with the keyboard focus
        FrameworkElement el = Keyboard.FocusedElement as FrameworkElement;
        if (el != null && el is TextBox)
        {
            // Get the binding expression associated with the text property
            // for this element.
            BindingExpression expression = el.GetBindingExpression(TextBox.TextProperty);
            if (expression != null)
            {
                // Now, trigger the update.
                expression.UpdateSource();
            }
        }
    }
}

As you can see, there’s not that much code needed. Basically, we get the element that has focus, and retrieve the binding expression associated with it. Once we have the binding expression, we trigger the update.
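
For completeness, if you prefer wiring things up from the code behind rather than in XAML, here’s a small sketch of attaching the behaviour that way – EditView and OkButton are assumed names for your view and the button you’ve marked with IsDefault:

using System.Windows;
using System.Windows.Interactivity;

public partial class EditView : Window
{
    public EditView()
    {
        InitializeComponent();

        // OkButton is assumed to be the button marked IsDefault="True" in the XAML.
        Interaction.GetBehaviors(OkButton)
            .Add(new DefaultButtonUpdateTextBoxBindingBehavior());
    }
}

The more usual route is the i:Interaction.Behaviors syntax you’ll see in the behaviours further down this page.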

I’ve copied this behaviour into CodeStash – the handy repository that I will be putting future snippets in for your delectation.

Adding to MoXAML.

In my last post, I talked about the enhancements and new architecture I have put into place for the new version of MoXAML Power Toys. I also mentioned that I’d talk about adding a new command. Well, in this post I’m going to cover how I coded the Scrubber command that’s available in the new version.

One of the first things I did was break the Scrubber options out from the actual Scrubber command. By doing this, the user no longer needs to set the options every time they run Scrubber. So, I created a traditional model for the options which could be used by the Scrubber commands and the Scrubber options dialog. This model looks like this:

using System.ComponentModel;
using System.IO;
using System.IO.IsolatedStorage;
using System.Linq;

namespace MoXAML.Scrubber.Model
{
    public class ScrubberOptionsModel : INotifyPropertyChanged
    {
        private const string FILENAME = "MoXAMLSettings.dat";

        private int _attributeCountTolerance = 3;
        private bool _reorderAttributes = true;
        private bool _reducePrecision = true;
        private int _precision = 3;
        private bool _removeCommonDefaults = true;
        private bool _forceLineMinimum = true;
        private int _spaceCount = 2;
        private bool _convertTabsToSpaces = true;

        public ScrubberOptionsModel()
        {
            LoadModel();
        }

        public bool ConvertTabsToSpaces
        {
            get
            {
                return _convertTabsToSpaces;
            }
            set
            {
                if (_convertTabsToSpaces == value) return;
                _convertTabsToSpaces = value;
                OnChanged("ConvertTabsToSpaces");
            }
        }

        public int SpaceCount
        {
            get
            {
                return _spaceCount;
            }
            set
            {
                if (_spaceCount == value) return;
                _spaceCount = value;
                OnChanged("SpaceCount");
            }
        }

        public bool ForceLineMinimum
        {
            get
            {
                return _forceLineMinimum;
            }
            set
            {
                if (_forceLineMinimum == value) return;
                _forceLineMinimum = value;
                OnChanged("ForceLineMinimum");
            }
        }

        public bool RemoveCommonDefaults
        {
            get
            {
                return _removeCommonDefaults;
            }
            set
            {
                if (_removeCommonDefaults == value) return;
                _removeCommonDefaults = value;
                OnChanged("RemoveCommonDefaults");
            }
        }

        public int Precision
        {
            get
            {
                return _precision;
            }
            set
            {
                if (_precision == value) return;
                _precision = value;
                OnChanged("Precision");
            }
        }

        public bool ReducePrecision
        {
            get
            {
                return _reducePrecision;
            }
            set
            {
                if (_reducePrecision == value) return;
                _reducePrecision = value;
                OnChanged("ReducePrecision");
            }
        }

        public bool ReorderAttributes
        {
            get
            {
                return _reorderAttributes;
            }
            set
            {
                if (_reorderAttributes == value) return;
                _reorderAttributes = value;
                OnChanged("ReorderAttributes");
            }
        }

        public int AttributeCountTolerance
        {
            get { return _attributeCountTolerance; }
            set
            {
                if (_attributeCountTolerance == value) return;
                _attributeCountTolerance = value;
                OnChanged("AttributeCountTolerance");
            }
        }

        public event PropertyChangedEventHandler PropertyChanged;

        private void OnChanged(string propertyName)
        {
            var handler = PropertyChanged;
            if (handler == null) return;
            handler(this, new PropertyChangedEventArgs(propertyName));
        }

        private void LoadModel()
        {
            using (IsolatedStorageFile file = IsolatedStorageFile.GetUserStoreForAssembly())
            {
                if (file.GetFileNames(FILENAME).Count() == 0) return;

                using (IsolatedStorageFileStream stream = new IsolatedStorageFileStream(FILENAME, System.IO.FileMode.Open, file))
                {
                    using (StreamReader sr = new StreamReader(stream))
                    {
                        _attributeCountTolerance = ConvertInt(sr.ReadLine());
                        _reorderAttributes = ConvertBool(sr.ReadLine());
                        _reducePrecision = ConvertBool(sr.ReadLine());
                        _precision = ConvertInt(sr.ReadLine());
                        _removeCommonDefaults = ConvertBool(sr.ReadLine());
                        _forceLineMinimum = ConvertBool(sr.ReadLine());
                        _spaceCount = ConvertInt(sr.ReadLine());
                        _convertTabsToSpaces = ConvertBool(sr.ReadLine());
                    }
                }
            }
        }

        public void SaveModel()
        {
            using (IsolatedStorageFile file = IsolatedStorageFile.GetUserStoreForAssembly())
            {
                using (IsolatedStorageFileStream stream = new IsolatedStorageFileStream(FILENAME, System.IO.FileMode.Create, file))
                {
                    using (StreamWriter sr = new StreamWriter(stream))
                    {
                        sr.WriteLine(_attributeCountTolerance);
                        sr.WriteLine(_reorderAttributes);
                        sr.WriteLine(_reducePrecision);
                        sr.WriteLine(_precision);
                        sr.WriteLine(_removeCommonDefaults);
                        sr.WriteLine(_forceLineMinimum);
                        sr.WriteLine(_spaceCount);
                        sr.WriteLine(_convertTabsToSpaces);
                    }
                }
            }
        }

        private int ConvertInt(string value)
        {
            int retVal = 0;
            if (int.TryParse(value, out retVal))
            {
                return retVal;
            }
            return 0;
        }

        private bool ConvertBool(string value)
        {
            bool retVal = false;
            if (bool.TryParse(value, out retVal))
            {
                return retVal;
            }
            return false;
        }
    }
}

As you can see, there’s nothing remarkable about it. It’s a straightforward model implementation, and there’s nothing special about using this inside MoXAML (the point I’m making here is that you can mix and match your MoXAML implementation with standard .NET code).

OK Pete, let’s have a look at actually hooking this into MoXAML. Well, as I mentioned in the MoXAML page, I wanted to have two versions of Scrubber, one for the current file and one for the project files. In order to do this, it made sense to pull the core Scrubber functionality into a base class which the actual commands would hook into:

using MoXAML.Infrastructure;
using MoXAML.Scrubber.Model;
using System.IO;
using System.Xml;
using System.Collections.Generic;
using System;
namespace MoXAML.Scrubber
{
    public partial class ScrubberCommandBase : CommandBase
    {
        private ScrubberOptionsModel _model;
        public ScrubberCommandBase() : base()
        {
            _model = new ScrubberOptionsModel();
        }

        protected void ParseFile(string file)
        {
            string text = File.ReadAllText(file);
            text = Perform(text);
            File.WriteAllText(file, text);
        }

        private string Perform(string text)
        {
            text = Indent(text);
            return ReducePrecision(text);
        }

        private string IndentString
        {
            get
            {
                if (_model.ConvertTabsToSpaces)
                {
                    string spaces = string.Empty;
                    spaces = spaces.PadRight(_model.SpaceCount, ' ');

                    return spaces;
                }
                else
                {
                    return "\t";
                }
            }
        }

        private string ReducePrecision(string s)
        {
            string old = s;

            if (_model.ReducePrecision)
            {
                int begin = 0;
                int end = 0;

                while (true)
                {
                    begin = old.IndexOf('.', begin);
                    if (begin == -1) break;

                    // get past the period
                    begin++;

                    for (int i = 0; i < _model.Precision; i++)
                    {
                        if (old[begin] >= '0' && old[begin] <= '9') begin++;
                    }

                    end = begin;

                    while (end < old.Length && old[end] >= '0' && old[end] <= '9') end++;

                    old = old.Substring(0, begin) + old.Substring(end, old.Length - end);

                    begin++;
                }
            }

            return old;
        }

        public string Indent(string s)
        {
            string result;

            s = s.Replace("&", "¬¬");
            using (MemoryStream ms = new MemoryStream(s.Length))
            {
                using (StreamWriter sw = new StreamWriter(ms))
                {
                    sw.Write(s);
                    sw.Flush();

                    ms.Seek(0, SeekOrigin.Begin);

                    using (StreamReader reader = new StreamReader(ms))
                    {
                        XmlReaderSettings settings = new XmlReaderSettings();
                        settings.CheckCharacters = false;
                        settings.ConformanceLevel = ConformanceLevel.Auto;
                        //XmlReader xmlReader = XmlReader.Create(reader.BaseStream, settings);
                        XmlTextReader xmlReader = new XmlTextReader(reader.BaseStream);
                        xmlReader.Normalization = false;
                        xmlReader.Read();
                        xmlReader.Normalization = false;

                        string str = "";

                        while (!xmlReader.EOF)
                        {
                            string xml;
                            int num;
                            int num6;
                            int num7;
                            int num8;

                            switch (xmlReader.NodeType)
                            {
                                case XmlNodeType.Element:
                                    xml = "";
                                    num = 0;
                                    goto Element;

                                case XmlNodeType.Text:
                                    {
                                        string str4 = xmlReader.Value.Replace("&", "&amp;").Replace("<", "&lt;").Replace(">", "&gt;").Replace("\"", "&quot;");
                                        str = str + str4;
                                        xmlReader.Read();
                                        continue;
                                    }
                                case XmlNodeType.ProcessingInstruction:
                                    xml = "";
                                    num7 = 0;
                                    goto ProcessingInstruction;

                                case XmlNodeType.Comment:
                                    xml = "";
                                    num8 = 0;
                                    goto Comment;

                                case XmlNodeType.Whitespace:
                                    {
                                        xmlReader.Read();
                                        continue;
                                    }
                                case XmlNodeType.EndElement:
                                    xml = "";
                                    num6 = 0;
                                    goto EndElement;

                                default:
                                    goto Other;
                            }

                        Label_00C0:
                            xml = xml + IndentString;
                            num++;

                        Element:
                            if (num < xmlReader.Depth)
                            {
                                goto Label_00C0;
                            }

                            string elementName = xmlReader.Name;

                            string str5 = str;
                            str = str5 + "\r\n" + xml + "<" + xmlReader.Name;
                            bool isEmptyElement = xmlReader.IsEmptyElement;

                            if (xmlReader.HasAttributes)
                            {
                                // construct an array of the attributes that we reorder later on
                                List<AttributeValuePair> attributes = new List<AttributeValuePair>(xmlReader.AttributeCount);

                                for (int k = 0; k < xmlReader.AttributeCount; k++)
                                {
                                    xmlReader.MoveToAttribute(k);

                                    string value = xmlReader.Value;

                                    if (_model.RemoveCommonDefaults)
                                    {
                                        if (!AttributeValuePair.IsCommonDefault(elementName, xmlReader.Name, value))
                                        {
                                            attributes.Add(new AttributeValuePair(elementName, xmlReader.Name, value));
                                        }
                                    }
                                    else
                                    {
                                        attributes.Add(new AttributeValuePair(elementName, xmlReader.Name, value));
                                    }
                                }

                                if (_model.ReorderAttributes)
                                {
                                    attributes.Sort();
                                }

                                xml = "";
                                string str3 = "";
                                int depth = xmlReader.Depth;

                                //str3 = str3 + IndentString;

                                for (int j = 0; j < depth; j++)
                                {
                                    xml = xml + IndentString;
                                }

                                foreach (AttributeValuePair a in attributes)
                                {
                                    string str7 = str;

                                    if (attributes.Count > _model.AttributeCountTolerance && !AttributeValuePair.ForceNoLineBreaks(elementName))
                                    {
                                        // break up attributes into different lines
                                        str = str7 + "\r\n" + xml + str3 + a.Name + "=\"" + a.Value + "\"";
                                    }
                                    else
                                    {
                                        // attributes on one line
                                        str = str7 + " " + a.Name + "=\"" + a.Value + "\"";
                                    }
                                }

                            }
                            if (isEmptyElement)
                            {
                                str = str + "/";
                            }
                            str = str + "&gt;";
                            xmlReader.Read();
                            continue;
                        Label_02F4:
                            xml = xml + IndentString;
                            num6++;
                        EndElement:
                            if (num6 < xmlReader.Depth)
                            {
                                goto Label_02F4;
                            }
                            string str8 = str;
                            str = str8 + "\r\n" + xml + "</" + xmlReader.Name + ">";
                            xmlReader.Read();
                            continue;
                        Label_037A:
                            xml = xml + "    ";
                            num7++;
                        ProcessingInstruction:
                            if (num7 < xmlReader.Depth)
                            {
                                goto Label_037A;
                            }
                            string str9 = str;
                            str = str9 + "\r\n" + xml + "<?Mapping " + xmlReader.Value + " ?>";
                            xmlReader.Read();
                            continue;

                        Comment:

                            if (num8 < xmlReader.Depth)
                            {
                                xml = xml + IndentString;
                                num8++;
                            }
                            str = str + "\r\n" + xml + "<!--" + xmlReader.Value + "-->";

                            xmlReader.Read();
                            continue;

                        Other:
                            xmlReader.Read();
                        }

                        xmlReader.Close();

                        result = str;
                    }
                }
            }
            return result.Replace("¬¬", "&");

        }

        private class AttributeValuePair : IComparable
        {
            public string Name = "";
            public string Value = "";
            public AttributeType AttributeType = AttributeType.Other;

            public AttributeValuePair(string elementname, string name, string value)
            {
                Name = name;
                Value = value;

                // compute the AttributeType
                if (name.StartsWith("xmlns"))
                {
                    AttributeType = AttributeType.Namespace;

                }
                else
                {
                    switch (name)
                    {
                        case "Key":
                        case "x:Key":
                            AttributeType = AttributeType.Key;
                            break;

                        case "Name":
                        case "x:Name":
                            AttributeType = AttributeType.Name;
                            break;

                        case "x:Class":
                            AttributeType = AttributeType.Class;
                            break;

                        case "Canvas.Top":
                        case "Canvas.Left":
                        case "Canvas.Bottom":
                        case "Canvas.Right":
                        case "Grid.Row":
                        case "Grid.RowSpan":
                        case "Grid.Column":
                        case "Grid.ColumnSpan":
                            AttributeType = AttributeType.AttachedLayout;
                            break;

                        case "Width":
                        case "Height":
                        case "MaxWidth":
                        case "MinWidth":
                        case "MinHeight":
                        case "MaxHeight":
                            AttributeType = AttributeType.CoreLayout;
                            break;

                        case "Margin":
                        case "VerticalAlignment":
                        case "HorizontalAlignment":
                        case "Panel.ZIndex":
                            AttributeType = AttributeType.StandardLayout;
                            break;

                        case "mc:Ignorable":
                        case "d:IsDataSource":
                        case "d:LayoutOverrides":
                        case "d:IsStaticText":

                            AttributeType = AttributeType.BlendGoo;
                            break;

                        default:
                            AttributeType = AttributeType.Other;
                            break;
                    }
                }
            }

            #region IComparable Members

            public int CompareTo(object obj)
            {
                AttributeValuePair other = obj as AttributeValuePair;

                if (other != null)
                {
                    if (this.AttributeType == other.AttributeType)
                    {
                        // some common special cases where we want things to be out of the normal order

                        if (this.Name.Equals("StartPoint") && other.Name.Equals("EndPoint")) return -1;
                        if (this.Name.Equals("EndPoint") && other.Name.Equals("StartPoint")) return 1;

                        if (this.Name.Equals("Width") && other.Name.Equals("Height")) return -1;
                        if (this.Name.Equals("Height") && other.Name.Equals("Width")) return 1;

                        if (this.Name.Equals("Offset") && other.Name.Equals("Color")) return -1;
                        if (this.Name.Equals("Color") && other.Name.Equals("Offset")) return 1;

                        if (this.Name.Equals("TargetName") && other.Name.Equals("Property")) return -1;
                        if (this.Name.Equals("Property") && other.Name.Equals("TargetName")) return 1;

                        return Name.CompareTo(other.Name);
                    }
                    else
                    {
                        return this.AttributeType.CompareTo(other.AttributeType);
                    }
                }

                return 0;
            }

            public static bool IsCommonDefault(string elementname, string name, string value)
            {

                if (
                    (name == "HorizontalAlignment" && value == "Stretch") ||
                    (name == "VerticalAlignment" && value == "Stretch") ||
                    (name == "Margin" && value == "0") ||
                    (name == "Margin" && value == "0,0,0,0") ||
                    (name == "Opacity" && value == "1") ||
                    (name == "FontWeight" && value == "{x:Null}") ||
                    (name == "Background" && value == "{x:Null}") ||
                    (name == "Stroke" && value == "{x:Null}") ||
                    (name == "Fill" && value == "{x:Null}") ||
                    (name == "Visibility" && value == "Visible") ||
                    (name == "Grid.RowSpan" && value == "1") ||
                    (name == "Grid.ColumnSpan" && value == "1") ||
                    (name == "BasedOn" && value == "{x:Null}") ||
                    (elementname != "ColumnDefinition" && elementname != "RowDefinition" && name == "Width" && value == "Auto") ||
                    (elementname != "ColumnDefinition" && elementname != "RowDefinition" && name == "Height" && value == "Auto")

                    )
                {
                    return true;
                }

                return false;
            }

            public static bool ForceNoLineBreaks(string elementname)
            {
                if (
                    (elementname == "RadialGradientBrush") ||
                    (elementname == "GradientStop") ||
                    (elementname == "LinearGradientBrush") ||
                    (elementname == "ScaleTransfom") ||
                    (elementname == "SkewTransform") ||
                    (elementname == "RotateTransform") ||
                    (elementname == "TranslateTransform") ||
                    (elementname == "Trigger") ||
                    (elementname == "Setter")
                    )
                {
                    return true;
                }
                else
                {
                    return false;
                }

            }

            #endregion
        }

        // note that these are declared in priority order for easy sorting
        private enum AttributeType
        {
            Key = 10,
            Name = 20,
            Class = 30,
            Namespace = 40,
            CoreLayout = 50,
            AttachedLayout = 60,
            StandardLayout = 70,
            Other = 1000,
            BlendGoo = 2000
        }
    }
}

I appreciate that this is a long listing, but don’t worry about what’s going on inside. To hook into the MoXAML plugin architecture, only one line in there is really important:

public partial class ScrubberCommandBase : CommandBase

By inheriting from CommandBase, MoXAML picks the command up, because CommandBase implements ICommandBase. The cunning thing about MoXAML is that it uses MEF, which allows you to mark an interface so that any implementation of it is automatically picked up. Well, we have most of the infrastructure in place – all we need to do now is write the actual command. You’ll be surprised how little that takes:

using MoXAML.Infrastructure;

namespace MoXAML.Scrubber
{
    public class ScrubFileCommand : ScrubberCommandBase
    {
        public ScrubFileCommand()
            : base()
        {
            CommandName = "ScrubFile";
            Caption = "Scrubber";
            ParentCommandBar.Add(CommandBarType.XamlContextMenu);
        }

        public override void Execute()
        {
            base.Execute();
            ParseFile(Application.ActiveDocument.FullName);
        }
    }
}

Let’s break it down. In the constructor, we’re giving the command a unique name that MoXAML uses to track whether or not the command was previously installed. The Caption is what will actually appear in the menu, and the line ParentCommandBar.Add is used to add the command to the appropriate menu.

Finally, we actually need our command to do something, and that’s where the Execute method comes in – MoXAML uses this method to execute the command, so it’s the place to hook our command logic to. In this case, we’re hooking into the ParseFile method in the class we inherit from.
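
Incidentally, if you haven’t used MEF before, the interface-marking trick I mentioned above looks something like this. This is just a sketch of the general pattern – MoXAML’s real ICommandBase has more to it than a single method, and its composition code lives in the infrastructure layer:

using System.Collections.Generic;
using System.ComponentModel.Composition;
using System.ComponentModel.Composition.Hosting;
using System.Reflection;

// Marking the interface with InheritedExport means every class that
// implements it is exported automatically - the commands themselves
// don't need any attributes at all.
[InheritedExport]
public interface ICommandBase
{
    void Execute();
}

public class CommandLoader
{
    // MEF fills this with every ICommandBase implementation it discovers.
    [ImportMany]
    public IEnumerable<ICommandBase> Commands { get; set; }

    public void Compose()
    {
        var catalog = new AssemblyCatalog(Assembly.GetExecutingAssembly());
        var container = new CompositionContainer(catalog);
        container.ComposeParts(this);
    }
}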

Losing your identity

Recently I’ve had time to revisit the question of identity columns (or sequences if you like). A client had come up with a screen that they really wanted us to incorporate into their application, and the design of it had been done by some of their business analysts. One of the fields that was present on the screen was a unique identifier. Now, being a bit of a nosy so-and-so, I wanted to know where the unique identifier came from and was told that it was just an identity column padded out to 9 digits with the letter C in front of it (apparently C stood for client). This led me to an interesting discussion with the analysts:

Me: Why are we taking up valuable screen real estate with this field?
Analyst: It’s on there so the user knows the id of the customer.
Me: Fair enough. Can they search on this field?
Analyst: No.
Me: Does the client know their id value?
Analyst: No
Me: So, what purpose does this field have?
Analyst (in a sneering tone): It’s there to ensure referential integrity, and to give us a unique value to update the client on. Don’t you know anything about relational design?

Now, at this point, you might imagine that I was less than pleased with the design, but why was I so put off by this field? First of all, when you are designing a screen, you have to ask yourself what the user will be doing with it. How will they interact with it? What do you need to put on there to let the user do their job? By putting an identifier on the screen that had no purpose other than to hold the value they were going to update the record against, the analysts had made a classic UI design mistake. This, by the way, is why you need to have a user experience expert on your project, and why users, not just analysts, should have input into the UI.

By putting this field onto the screen, the analyst had given it an importance that it didn’t have. It’s a distraction for the user; always try to put on the screen the information that they need to do their job, and give them an easy flow through to discover additional information if they need it. With newer technologies like Silverlight and WPF, it’s incredibly easy to design attractive screens that show and hide information in visually appealing ways, so it’s a shame not to take advantage of these features while you can.

Don’t get me wrong, there can be a case for putting identifiers on a screen. If the user can search on the identifier, or the client could reasonably be expected to know the identifier, then it’s perfectly valid to have these on the screen. Nine times out of ten though, if the value is an auto-generated identity or sequence and its sole purpose is to enable referential integrity, then you don’t need to display it.

Please remember, when you have a piece of UI design in front of you, question everything. Ask why things are taking up valuable screen real estate. Ask if there are other options, such as flyouts, that could be used to present additional information in non-intrusive ways. Most importantly of all, ask the users what they need to see – they are the experts after all.

Keeping it regular

Last week I posted an example of using a regular expression to control the input of numbers into a TextBox. A couple of my fellow disciples commented that they’d just use a regular expression Behavior or DP to control the input of the text. Now, there are a couple of reasons that I’d use the dedicated NumericTextBoxBehavior.

The first reason is that it’s a simple control for those that aren’t comfortable writing regular expressions. After all, why should you write a complex regular expression when I can write one for you?

The second reason is that the numeric control is internationalised from the get-go. I’ve already taken care of sorting out the whole internationalised decimal place issue, so you don’t have to worry about it with your regular expression.
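
To give you an idea of what I mean by the internationalised decimal place issue, here’s a small sketch (not the exact Goldlight code) of building a pattern around the current culture’s decimal separator rather than hard-coding a full stop:

using System.Globalization;
using System.Text.RegularExpressions;

public static class CultureAwareNumberPattern
{
    // Builds a simple decimal pattern using whatever separator the current
    // culture uses (e.g. "." for en-GB, "," for de-DE).
    public static string Build()
    {
        string separator = Regex.Escape(
            CultureInfo.CurrentCulture.NumberFormat.NumberDecimalSeparator);

        return "^[-+]?[0-9]*(" + separator + "[0-9]*)?$";
    }
}

Under a German culture, new Regex(CultureAwareNumberPattern.Build()).IsMatch("123,45") returns true while "123.45" is rejected – exactly the sort of thing you’d have to remember to handle yourself in a hand-rolled mask.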

Saying that, the regular expression behavior is a cracking idea, and one I could kick myself for not thinking of earlier. So, in order to add regular expression functionality in your TextBox, all you need do is add the following code:

namespace Goldlight.Base.Behaviors
{
  using System.Linq;
  using System.Windows.Controls;
  using System.Windows.Interactivity;
  using System.Windows;
  using System.Windows.Input;
  using System.Text.RegularExpressions;

  /// <summary>
  /// Apply this behavior to a TextBox to ensure that input matches a regular
  /// expression.
  /// </summary>
  /// <remarks>
  /// In the view, this behavior is attached in the following way:
  /// <code>
  /// <TextBox Text="{Binding Price}">
  ///   <i:Interaction.Behaviors>
  ///   <gl:RegularExpressionTextBoxBehavior 
  ///    Mask="^([\(]{1}[0-9]{3}[\)]{1}[ ]{1}[0-9]{3}[\-]{1}[0-9]{4})$" />
  ///   </i:Interaction.Behaviors>
  /// </TextBox>
  /// </code>
  /// <para>
  /// Add references to System.Windows.Interactivity to the view to use
  /// this behavior.
  /// </para>
  /// </remarks>
  public class RegularExpressionTextBoxBehavior : Behavior<TextBox>
  {
    /// <summary>
    /// Gets or sets the regular expression mask.
    /// </summary>
    public string Mask { get; set; }

    #region Overrides
    protected override void OnAttached()
    {
      base.OnAttached();

      AssociatedObject.PreviewTextInput += AssociatedObject_PreviewTextInput;
#if !SILVERLIGHT
      DataObject.AddPastingHandler(AssociatedObject, OnClipboardPaste);
#endif
    }

    protected override void OnDetaching()
    {
      base.OnDetaching();
      AssociatedObject.PreviewTextInput -= AssociatedObject_PreviewTextInput;
#if !SILVERLIGHT
      DataObject.RemovePastingHandler(AssociatedObject, OnClipboardPaste);
#endif
    }
    #endregion

#if !SILVERLIGHT
    /// <summary>
    /// Handle paste operations into the textbox to ensure that the behavior
    /// is consistent with directly typing into the TextBox.
    /// </summary>
    /// <param name="sender">The TextBox sender.</param>
    /// <param name="dopea">Paste event arguments.</param>
    /// <remarks>This operation is only available in WPF.</remarks>
    private void OnClipboardPaste(object sender, DataObjectPastingEventArgs dopea)
    {
      string text = dopea.SourceDataObject.GetData(dopea.FormatToApply).ToString();

      if (!string.IsNullOrWhiteSpace(text) && !Validate(text))
        dopea.CancelCommand();
    }
#endif

    /// <summary>
    /// Preview the text input.
    /// </summary>
    /// <param name="sender">The TextBox sender.</param>
    /// <param name="e">The composition event arguments.</param>
    void AssociatedObject_PreviewTextInput(object sender, TextCompositionEventArgs e)
    {
      e.Handled = !Validate(e.Text);
    }

    /// <summary>
    /// Validate the contents of the textbox with the new content to see if it is
    /// valid.
    /// </summary>
    /// <param name="value">The text to validate.</param>
    /// <returns>True if this is valid, false otherwise.</returns>
    protected bool Validate(string value)
    {
      TextBox textBox = AssociatedObject;

      string pre = string.Empty;
      string post = string.Empty;

      if (!string.IsNullOrWhiteSpace(textBox.Text))
      {
        pre = textBox.Text.Substring(0, textBox.SelectionStart);
        post = textBox.Text.Substring(textBox.SelectionStart + textBox.SelectionLength, 
          textBox.Text.Length - (textBox.SelectionStart + textBox.SelectionLength));
      }
      else
      {
        pre = textBox.Text.Substring(0, textBox.CaretIndex);
        post = textBox.Text.Substring(textBox.CaretIndex, 
          textBox.Text.Length - textBox.CaretIndex);
      }
      string test = string.Concat(pre, value, post);

      string pattern = Mask;

      if (string.IsNullOrWhiteSpace(pattern))
        return true;

      return new Regex(pattern).IsMatch(test);
    }
  }
}

As you can see, it’s similar in code to the other behaviour. The only real difference is that it has a Mask string which is used to supply the regular expression text.
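
If you’re wondering what sort of Mask value you might plug in, here’s a quick illustrative example of my own (not one shipped with Goldlight). One thing to bear in mind is that the behaviour validates the whole text on every keystroke, so the pattern needs to accept partially typed input as well as the finished value:

using System;
using System.Text.RegularExpressions;

public static class MaskExample
{
    // Digits only, up to six characters - the {0,6} quantifier means that
    // partially typed input (including an empty textbox) still matches.
    public const string DigitsOnly = "^[0-9]{0,6}$";

    public static void Main()
    {
        Console.WriteLine(Regex.IsMatch("123", DigitsOnly));     // True  - partial input is fine
        Console.WriteLine(Regex.IsMatch("123456", DigitsOnly));  // True
        Console.WriteLine(Regex.IsMatch("12a45", DigitsOnly));   // False - non-digit rejected
        Console.WriteLine(Regex.IsMatch("1234567", DigitsOnly)); // False - too long
    }
}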

Getting control of your numbers

[Edit]Dominik posed a problem whereby this didn’t work when the update source was set to PropertyChanged, and I promised I would get around to fixing this post. Today I finally had the time to sit down and figure out what was wrong, so here is the edited article, which contains the fix for Dominik.[/Edit]

Today on Code Project, one of the regulars asked how to set up a textbox so that it only accepted a currency amount. He was concerned that there didn’t seem to be a simple mechanism to limit the input of data so that only the relevant numeric amount was accepted. Well, this is a feature I recently added into Goldlight, so I thought I’d post it here, along with an explanation of how it works.

Basically, and this will come as no surprise to you, it’s an Attached Behavior that you associate with the TextBox. There are many numeric-only behaviors out there, so this one goes a little bit further. First of all, if you want, you can limit input to integers by setting AllowDecimal to false. If you want to limit it to a set number of decimal places, set DecimalLimit to the number of decimal places. If you don’t want to allow the user to enter negative numbers, set AllowNegatives to false. It’s that simple, so the solution to the problem would be to add the behaviour to the TextBox like this:

<TextBox Text="{Binding Price}">
  <i:Interaction.Behaviors>
    <gl:NumericTextBoxBehavior AllowNegatives="False" />
  </i:Interaction.Behaviors>
</TextBox>

The full code to do this is shown below:


namespace Goldlight.Extensions.Behaviors
{
  using System.Windows.Controls;
  using System.Windows.Interactivity;
  using System.Windows.Input;
  using System.Text.RegularExpressions;
  using System.Windows;
  using System.Globalization;

  /// <summary>
  /// Apply this behavior to a TextBox to ensure that it only accepts numeric values.
  /// The property <see cref="NumericTextBoxBehavior.AllowDecimal"/> controls whether or not
  /// the input is an integer or not.
  /// <para>
  /// A common requirement is to constrain the number count that appears after the decimal place.
  /// Setting <see cref="NumericTextBoxBehavior.DecimalLimit"/> specifies how many numbers appear here.
  /// If this value is 0, no limit is applied.
  /// </para>
  /// </summary>
  /// <remarks>
  /// In the view, this behavior is attached in the following way:
  /// <code>
  /// <TextBox Text="{Binding Price}">
  ///   <i:Interaction.Behaviors>
  ///     <gl:NumericTextBoxBehavior AllowDecimal="False" />
  ///   </i:Interaction.Behaviors>
  /// </TextBox>
  /// </code>
  /// <para>
  /// Add references to System.Windows.Interactivity to the view to use
  /// this behavior.
  /// </para>
  /// </remarks>
  public partial class NumericTextBoxBehavior : Behavior<TextBox>
  {
    private bool _allowDecimal = true;
    private int _decimalLimit = 0;
    private bool _allowNegative = true;
    private string _pattern = string.Empty;

    /// <summary>
    /// Initialize a new instance of <see cref="NumericTextBoxBehavior"/>.
    /// </summary>
    public NumericTextBoxBehavior()
    {
      AllowDecimal = true;
      AllowNegatives = true;
      DecimalLimit = 0;
    }

    /// <summary>
    /// Get or set whether the input allows decimal characters.
    /// </summary>
    public bool AllowDecimal
    {
      get
      {
        return _allowDecimal;
      }
      set
      {
        if (_allowDecimal == value) return;
        _allowDecimal = value;
        SetText();
      }
    }
    /// <summary>
    /// Get or set the maximum number of values to appear after
    /// the decimal.
    /// </summary>
    /// <remarks>
    /// If DecimalLimit is 0, then no limit is applied.
    /// </remarks>
    public int DecimalLimit
    {
      get
      {
        return _decimalLimit;
      }
      set
      {
        if (_decimalLimit == value) return;
        _decimalLimit = value;
        SetText();
      }
    }
    /// <summary>
    /// Get or set whether negative numbers are allowed.
    /// </summary>
    public bool AllowNegatives
    {
      get
      {
        return _allowNegative;
      }
      set
      {
        if (_allowNegative == value) return;
        _allowNegative = value;
        SetText();
      }
    }

    #region Overrides
    protected override void OnAttached()
    {
      base.OnAttached();

      AssociatedObject.PreviewTextInput += new TextCompositionEventHandler(AssociatedObject_PreviewTextInput);
#if !SILVERLIGHT
      DataObject.AddPastingHandler(AssociatedObject, OnClipboardPaste);
#endif
    }

    protected override void OnDetaching()
    {
      base.OnDetaching();
      AssociatedObject.PreviewTextInput -= new TextCompositionEventHandler(AssociatedObject_PreviewTextInput);
#if !SILVERLIGHT
      DataObject.RemovePastingHandler(AssociatedObject, OnClipboardPaste);
#endif
    }
    #endregion

    #region Private methods
    private void SetText()
    {
      _pattern = string.Empty;
      GetRegularExpressionText();
    }

#if !SILVERLIGHT
    /// <summary>
    /// Handle paste operations into the textbox to ensure that the behavior
    /// is consistent with directly typing into the TextBox.
    /// </summary>
    /// <param name="sender">The TextBox sender.</param>
    /// <param name="dopea">Paste event arguments.</param>
    /// <remarks>This operation is only available in WPF.</remarks>
    private void OnClipboardPaste(object sender, DataObjectPastingEventArgs dopea)
    {
      string text = dopea.SourceDataObject.GetData(dopea.FormatToApply).ToString();

      if (!string.IsNullOrWhiteSpace(text) && !Validate(text))
        dopea.CancelCommand();
    }
#endif

    /// <summary>
    /// Preview the text input.
    /// </summary>
    /// <param name="sender">The TextBox sender.</param>
    /// <param name="e">The composition event arguments.</param>
    void AssociatedObject_PreviewTextInput(object sender, TextCompositionEventArgs e)
    {
      e.Handled = !Validate(e.Text);
    }

    /// <summary>
    /// Validate the contents of the textbox with the new content to see if it is
    /// valid.
    /// </summary>
    /// <param name="value">The text to validate.</param>
    /// <returns>True if this is valid, false otherwise.</returns>
    protected bool Validate(string value)
    {
      TextBox textBox = AssociatedObject;

      string pre = string.Empty;
      string post = string.Empty;

      if (!string.IsNullOrWhiteSpace(textBox.Text))
      {
        int selStart = textBox.SelectionStart;
        if (selStart > textBox.Text.Length)
            selStart--;
        pre = textBox.Text.Substring(0, selStart);
        post = textBox.Text.Substring(selStart + textBox.SelectionLength, textBox.Text.Length - (selStart + textBox.SelectionLength));
      }
      else
      {
        pre = textBox.Text.Substring(0, textBox.CaretIndex);
        post = textBox.Text.Substring(textBox.CaretIndex, textBox.Text.Length - textBox.CaretIndex);
      }
      string test = string.Concat(pre, value, post);

      string pattern = GetRegularExpressionText();

      return new Regex(pattern).IsMatch(test);
    }

    private string GetRegularExpressionText()
    {
      if (!string.IsNullOrWhiteSpace(_pattern))
      {
        return _pattern;
      }
      _pattern = GetPatternText();
      return _pattern;
    }

    private string GetPatternText()
    {
      string pattern = string.Empty;
      string signPattern = "[{0}+]";

      // If the developer has chosen to allow negative numbers, the pattern will be [-+].
      // If the developer chooses not to allow negatives, the pattern is [+].
      if (AllowNegatives)
      {
        signPattern = string.Format(signPattern, "-");
      }
      else
      {
        signPattern = string.Format(signPattern, string.Empty);
      }

      // If the developer doesn't allow decimals, return the pattern.
      if (!AllowDecimal)
      {
        return string.Format(@"^({0}?)(\d*)$", signPattern);
      }

      // If the developer has chosen to apply a decimal limit, the pattern matches
      // at most DecimalLimit digits after the decimal separator.
      if (DecimalLimit > 0)
      {
        pattern = string.Format(@"^({2}?)(\d*)([{0}]?)(\d{{0,{1}}})$",
          NumberFormatInfo.CurrentInfo.CurrencyDecimalSeparator,
          DecimalLimit,
          signPattern);
      }
      else
      {
        pattern = string.Format(@"^({1}?)(\d*)([{0}]?)(\d*)$", NumberFormatInfo.CurrentInfo.CurrencyDecimalSeparator, signPattern);
      }

      return pattern;
    }
    #endregion
  }
}
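
To make the pattern generation a little more concrete, here’s a rough sketch (again mine, not something in the library) showing the kind of regular expression GetPatternText produces when AllowNegatives is false and DecimalLimit is 2, assuming a culture whose currency decimal separator is '.':

using System;
using System.Text.RegularExpressions;

public static class PatternSketch
{
  public static void Main()
  {
    // With AllowNegatives = false and DecimalLimit = 2 (and a '.' separator),
    // the generated pattern works out as:
    string pattern = @"^([+]?)(\d*)([.]?)(\d{0,2})$";

    Console.WriteLine(Regex.IsMatch("12.34", pattern));  // True
    Console.WriteLine(Regex.IsMatch("12.", pattern));    // True - partially typed values still pass
    Console.WriteLine(Regex.IsMatch("12.345", pattern)); // False - three decimal places
    Console.WriteLine(Regex.IsMatch("-12.34", pattern)); // False - negatives not allowed
  }
}

Because every group in the pattern is optional, partially typed values such as "12." still validate, so the user is never blocked halfway through entering a number.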

The clever thing is that this behavior doesn’t allow the user to paste an invalid value either – the paste operation is subject to the same rules as typing the value directly in the first place.

Anyway, I hope this behavior is as much use to you as it is to me.

Scratching that old itch.

First of all, I must apologise that it’s been so long since I last blogged. It’s been an insanely busy time for me (and I don’t mean that I’ve been coding with my underpants on my head). As you may be aware, I’m a big fan of Blend behaviours, so I thought that I’d take the time to revisit an old favourite of mine. To that end, I present all the code you’ll need to create a watermarked textbox.

namespace Goldlight.Extensions.Behaviors
{
  using System.Windows.Interactivity;
  using System.Windows.Controls;
  using System.Windows.Media;
  using System.Windows;

  public class WatermarkTextBoxBehavior : Behavior<TextBox>
  {
    protected override void OnAttached()
    {
      base.OnAttached();
      // Hook the focus events so the watermark can be swapped in and out,
      // and show the watermark straight away when the behavior is attached.
      AssociatedObject.LostFocus += new RoutedEventHandler(LostFocus);
      AssociatedObject.GotFocus += new RoutedEventHandler(GotFocus);
      SetWatermark();
    }

    protected override void OnDetaching()
    {
      base.OnDetaching();
      AssociatedObject.LostFocus -= new RoutedEventHandler(LostFocus);
      AssociatedObject.GotFocus -= new RoutedEventHandler(GotFocus);
    }
    /// <summary>
    /// Get or set the brush to use as the foreground.
    /// </summary>
    public Brush WatermarkForeground { get; set; }
    /// <summary>
    /// Get or set the brush to use as the background.
    /// </summary>
    public Brush WatermarkBackground { get; set; }
    /// <summary>
    /// Get or set the text to apply as the watermark.
    /// </summary>
    public string WatermarkText { get; set; }

    /// <summary>
    /// Reset the colours of the textbox.
    /// </summary>
    private void SetStandard()
    {
      AssociatedObject.ClearValue(TextBox.ForegroundProperty);
      AssociatedObject.ClearValue(TextBox.BackgroundProperty);

      if (AssociatedObject.Text == WatermarkText)
      {
        AssociatedObject.Text = string.Empty;
      }
    }
    /// <summary>
    /// Set the watermark colours for the textbox.
    /// </summary>
    private void SetWatermark()
    {
      if (WatermarkForeground != null)
      {
        AssociatedObject.Foreground = WatermarkForeground;
      }
      if (WatermarkBackground != null)
      {
        AssociatedObject.Background = WatermarkBackground;
      }
      AssociatedObject.Text = WatermarkText;
    }
    void GotFocus(object sender, RoutedEventArgs e)
    {
      SetStandard();
    }
    void LostFocus(object sender, RoutedEventArgs e)
    {
      CheckText(AssociatedObject.Text);
    }
    private void CheckText(string value)
    {
      if (string.IsNullOrWhiteSpace(value))
      {
        SetWatermark();
      }
      else
      {
        SetStandard();
      }
    }
  }
}
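
For completeness, here’s a quick usage sketch (my own illustration rather than code from the post) attaching the behaviour from code-behind. The prompt text and brush are arbitrary choices, and Interaction.GetBehaviors again comes from System.Windows.Interactivity:

using System.Windows.Controls;
using System.Windows.Interactivity;
using System.Windows.Media;
using Goldlight.Extensions.Behaviors;

public static class SearchBoxSetup
{
  public static void Attach(TextBox searchBox)
  {
    // Show a grey prompt whenever the box is empty and unfocused.
    Interaction.GetBehaviors(searchBox).Add(new WatermarkTextBoxBehavior
    {
      WatermarkText = "Type to search...",
      WatermarkForeground = Brushes.Gray
    });
  }
}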

The behaviour code itself is pretty straightforward. When the textbox receives focus, if it contains just the watermark text, that text is cleared out and the original foreground and background brushes are restored. When the textbox loses focus, if it’s empty, the watermark text is displayed and the watermark foreground and background brushes are set. Now, for the clever bit: because we’re updating the textbox text directly, we aren’t going to be updating any underlying binding