FluentRealSense – the first steps to a simpler RealSense

Those who know me are aware that I have a long-term association (nay, let's call it what it is: a love affair) with the RealSense platforms from Intel. If you've been following developments in that space for any length of time, you'll know that things have moved on a lot: the cameras now ship in all sorts of devices and are no longer limited to the Windows platform. This shift of emphasis means that Intel have moved away from the old monolithic RealSense SDK in favour of a new open source API (available here: https://github.com/IntelRealSense/librealsense).

I have spent some time working with this API as it has evolved and have started "wrapping" it to make it a little friendlier to use (in my opinion) for people who don't want to get bogged down in the nitty gritty of how to get devices, and so on. This work has been in C++, the language the API is available in, and over a series of posts I am going to cover how this library evolved. I'll show you the code I've written at each step so that you can also see how I have gone about relearning modern C++ (bearing in mind it has been 18 years since I last worked with it in anger), so you will definitely find that early iterations of the code are naive, at best. My aim with this library was to make it as fluent as possible, hence its name: FluentRealSense.

An upfront note. Early iterations of this codebase have been developed entirely on Windows, using Visual Studio 2017. I chose this particular environment because a) Visual Studio is, to my mind, the finest IDE around, and b) 2017 supports porting your C++ code directly over to Linux. This lets me combine the rapid turnaround time I'm used to in my dev environment with easy deployment to my intended targets later on. Thank you, Microsoft, for keeping this old developer happy.

Let’s start with the first iteration. The first thing I wanted the code to do was to provide me with the ability to iterate over all the cameras on my system and print out some diagnostic information. This meant that my code would be broken down initially into three classes:
information: This class reads information from the API about a single camera and returns it as a string.
camera: This is a single device. Initially, it’s just going to instantiate and provide access to the information class.
cameras: The entry point for consuming applications, this class is responsible for finding out about and managing access to each camera; this class is also enumerable so that the calling code can quickly iterate over each camera.

Let’s start off by looking at the information class. First we’re going to add some includes and namespaces:

#pragma once
#include "hpp/rs_context.hpp"

using namespace std;
using namespace rs2;

class information
{
public:
  information() {}
  ~information() {}
};

That’s all very straightforward and nothing you won’t have seen before. Let’s start fleshing this out.

The first thing we’re going to need is a RealSense device to get information from – this is going to be passed in so let’s add a member to store it and replace our constructor with this:

explicit information(const device device) : device_(device) {}

// ...

private:
  device device_;

At this stage, our code looks like this:

#pragma once
#include "hpp/rs_context.hpp"

using namespace std;
using namespace rs2;

class information
{
public:
explicit information(const device device) : device_(device) {}
~information() {}

private:
device device_;
};

I have to say, at this point, I love the improvements that modern C++ brings to initialising members. This is a wonderful little innovation.
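To illustrate what I mean, here's a minimal sketch of my own (not from the library): modern C++ lets you give a member a default value right where you declare it, alongside the traditional constructor initialiser list.

// A minimal sketch (not FluentRealSense code) showing C++11 default member
// initialisers alongside a constructor initialiser list.
#include <string>
#include <utility>

class widget
{
public:
  explicit widget(std::string name) : name_(std::move(name)) {}

private:
  std::string name_;       // set via the constructor's initialiser list
  int retries_ = 3;        // default member initialiser (C++11)
  bool enabled_{ true };   // brace form works too
};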

Now, in order to get the information out of the API, I’m going to add a handy little private helper method. This makes use of the fact that the API exposes this information via an rs2_camera_info enumeration:

const char* get_info(const rs2_camera_info info) const
{
  if (!device_.supports(info))
  {
    return "Not supported";
  }
  return device_.get_info(info);
}

This is the first point at which our code adds any value. Whenever we call get_info against the underlying device, we have to check that that particular type of information can actually be retrieved from that device. It's very easy to write code that assumes every camera supports exactly the same set of properties, and if that assumption is wrong the underlying call will throw an exception. To get around this, our code checks whether the device supports that particular rs2_camera_info type and returns "Not supported" if it doesn't. It is so easy to forget that get_info should always be paired with supports, so this helper method removes the need to remember that.
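For contrast, here's a sketch of my own showing what the unguarded approach looks like; as far as I can tell, librealsense signals an unsupported field by throwing rs2::error, which would force a try/catch on every call site:

// Hypothetical helper (not part of the library) showing the unguarded call:
// rs2::device::get_info throws rs2::error for a field the device doesn't support.
const char* serial_or_default(const rs2::device& dev)
{
  try
  {
    return dev.get_info(RS2_CAMERA_INFO_SERIAL_NUMBER);
  }
  catch (const rs2::error&)
  {
    return "Not supported";
  }
}

The supports-first helper above gives us the same safety without scattering try/catch blocks all over the place.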

So, how do we use this? Well, with some public methods like these:

const char* name() const
{
  return get_info(RS2_CAMERA_INFO_NAME);
}

const char* serial_number() const
{
  return get_info(RS2_CAMERA_INFO_SERIAL_NUMBER);
}

const char* port() const
{
  return get_info(RS2_CAMERA_INFO_PHYSICAL_PORT);
}

const char* firmware_version() const
{
  return get_info(RS2_CAMERA_INFO_FIRMWARE_VERSION);
}

const char* debug_opCode() const
{
  return get_info(RS2_CAMERA_INFO_DEBUG_OP_CODE);
}

const char* advanced_mode() const
{
  return get_info(RS2_CAMERA_INFO_ADVANCED_MODE);
}

const char* product_id() const
{
  return get_info(RS2_CAMERA_INFO_PRODUCT_ID);
}

const char* camera_locked() const
{
  return get_info(RS2_CAMERA_INFO_CAMERA_LOCKED);
}

string dump_diagnostic() const
{
  string text = "\nDevice Name: ";
  text += name();
  text += "\n Serial number: ";
  text += serial_number();
  text += "\n Port: ";
  text += port();
  text += "\n Firmware version: ";
  text += firmware_version();
  text += "\n Debug op code: ";
  text += debug_opCode();
  text += "\n Advanced Mode: ";
  text += advanced_mode();
  text += "\n Product id: ";
  text += product_id();
  text += "\n Camera locked: ";
  text += camera_locked();

  return text;
}

We now have our complete information class. It looks like this:

#pragma once
#include "hpp/rs_context.hpp"

using namespace std;
using namespace rs2;

class information
{
public:
explicit information(const device device) : device_(device) {}
~information()
{
}

const char* name() const
{
  return get_info(RS2_CAMERA_INFO_NAME);
}

const char* serial_number() const
{
  return get_info(RS2_CAMERA_INFO_SERIAL_NUMBER);
}

const char* port() const
{
  return get_info(RS2_CAMERA_INFO_PHYSICAL_PORT);
}

const char* firmware_version() const
{
  return get_info(RS2_CAMERA_INFO_FIRMWARE_VERSION);
}

const char* debug_opCode() const
{
  return get_info(RS2_CAMERA_INFO_DEBUG_OP_CODE);
}

const char* advanced_mode() const
{
  return get_info(RS2_CAMERA_INFO_ADVANCED_MODE);
}

const char* product_id() const
{
  return get_info(RS2_CAMERA_INFO_PRODUCT_ID);
}

const char* camera_locked() const
{
  return get_info(RS2_CAMERA_INFO_CAMERA_LOCKED);
}

string dump_diagnostic() const
{
  string text = "\nDevice Name: ";
  text += name();
  text += "\n Serial number: ";
  text += serial_number();
  text += "\n Port: ";
  text += port();
  text += "\n Firmware version: ";
  text += firmware_version();
  text += "\n Debug op code: ";
  text += debug_opCode();
  text += "\n Advanced Mode: ";
  text += advanced_mode();
  text += "\n Product id: ";
  text += product_id();
  text += "\n Camera locked: ";
  text += camera_locked();
  return text;
}

private:
  const char* get_info(const rs2_camera_info info) const
  {
    if (!device_.supports(info))
    {
      return "Not supported";
    }
    return device_.get_info(info);
  }
  device device_;
};

The next thing we have to do is write a class that represents a single RealSense camera. There’s not much to this class, at the moment, so let’s look at it in its entirety.

#pragma once
#include "hpp/rs_context.hpp"
#include "information.h"

using namespace std;
using namespace rs2;

class camera
{
public:
  explicit camera(const device dev) : information_(make_shared<information>(dev)) {}

  ~camera()
  {
  }

  shared_ptr<information> get_information() const
  {
    return information_;
  }

private:
  shared_ptr<information> information_;
};

I did say this class was pretty light at the moment. The class accepts a single RealSense device which is used to instantiate the information class. We provide one method which is used to get the instance of the information class. That’s it so far.
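As a quick sketch of how this wrapper might be consumed (a hypothetical snippet of mine, assuming an rs2::device named dev – we'll see where that comes from in a moment):

// Hypothetical usage of the camera wrapper:
camera cam(dev);
cout << cam.get_information()->dump_diagnostic() << endl;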

Finally, we come to the entry point of our code: the cameras class. This class lets us enumerate all of the cameras on our system and access the functionality inside. As usual, we'll start off with the definition:

#pragma once
#include <vector>
#include <memory>
#include "camera.h"
#include "hpp/rs_context.hpp"

using namespace std;
using namespace rs2;

class cameras
{
public:
  cameras() {}
  ~cameras() {}
};

As you will remember, I said that I wanted the cameras to be enumerable so I need to do some upfront declarations:

using cameras_t = vector<shared_ptr<camera>>;
using iterator = cameras_t::iterator;
using const_iterator = cameras_t::const_iterator;

With these in place, I can now start to add the ability to enumerate over the camera instances. Before I do that, though, it’s time to introduce something new. In the preceding code, we saw that the RealSense camera was represented as a device that we passed into the relevant constructors. The question is, how did we get that device in the first place? Well, that’s down to the API providing a context that allows us to access these devices. So, let’s add a member to store a vector of camera instances and then build in the method to get the list of devices.

cameras() : cameras_(make_shared<cameras_t>())
{
  context context;
  // Iterate over the devices;
  auto devices = context.query_devices();
  for (const auto dev : devices)
  {
    const auto cam = make_shared<camera>(dev);
    cameras_->push_back(cam);
  }
}

private:
  shared_ptr<cameras_t> cameras_;

There’s nothing complicated in that code. We get the devices from the context using query_devices and iterate over each one, adding a new camera instance to our vector. We have reached the point where we can now add the ability to enumerate over our vector. All the scaffolding is in place so let’s add that capability.

size_t capacity() const
{
  // Note: vector::capacity() is the storage the vector has allocated, not the
  // number of cameras it holds; use size() if you want the camera count.
  return cameras_->capacity();
}

iterator begin() { return cameras_->begin(); }
iterator end() { return cameras_->end(); }

const_iterator begin() const { return cameras_->begin(); }
const_iterator end() const { return cameras_->end(); }
const_iterator cbegin() const { return cameras_->cbegin(); }
const_iterator cend() const { return cameras_->cend(); }

That’s it. We now have the ability to build and iterate over the devices on our system. Let’s see how we would go about using that. To test it, I created a little console application that I used to call my code. It’s as easy as this:

#include "stdafx.h"
#include "cameras.h"
#include <iostream>

int main()
{
  const auto devices = std::make_shared<cameras>();
  for (auto& dev : *devices)
  {
    std::cout << dev->get_information()->dump_diagnostic();
  }
  return 0;
}

This is what it looks like on my system (running a web camera and a separate RealSense camera).

[Screenshot: diagnostic dump for the two devices on my system]

Sensing the future with WPF

This post is a look into a new library that I’m writing that’s intended to make life easier for WPF developers working with Intel RealSense devices. As many of you may know, I’ve been involved with the RealSense platform for a couple of years now (back from when it was called the Perceptual Computing SDK). When I develop samples with it, I tend to use WPF as my default development experience, and the idea of hooking up the Natural User Interface capabilities of RealSense devices with the NUI power of WPF in an easy to use package is just too good to resist. On top of this, I still strongly believe in WPF and it will take a lot to remove me from developing desktop applications with it because it is just so powerful.

To this end, I have started developing a library called RealSenseLight that will enable WPF developers to easily leverage the power of RealSense without having to worry about the implementation details. While it's primarily aimed at WPF developers, the functionality will be usable from other C# (Windows desktop) applications, so hooking into console applications will certainly be possible.

One of the many decisions I've taken is to allow features to be configured via a fluent interface, so it's possible to do things like this:

RealSenseApplication.Uses(new EmotionDetectionConfiguration())
  .Uses(SpeechRecognition.DefaultConfiguration().ChangePitch(0.8))
  .Start();

ViewModels will be able to hook into RealSense using convenient interfaces that abstract the underlying implementations. There’s no need to call Enable… to enable a RealSense capability. The simple fact of integrating a concrete implementation means that the feature is automatically available. The following example demonstrates what an IoC resolved implementation looks like:

public class EmotionViewModel : ViewModelBase
{
  private readonly IEmotion _emotion;
  public EmotionViewModel(IEmotion emotion)
  {
    _emotion = emotion;
    _emotion.OnUserHappy(user => System.Diagnostics.Debug.WriteLine("{0} is happy", user.DetectedUser));
  }
}

The library will provide the ability to pause and resume individual RealSense capabilities, and to identify and choose from the RealSense-compatible devices present. Device selection does require you to identify, up front, which aspects you're interested in, because the library uses these to evaluate which devices meet those capabilities.

I’m still fleshing out what the whole interface will look like, so all of the features haven’t been determined yet, but I will keep posting my designs and a link to the repo once I have it in a state where it’s ready for an initial commit.

Getting a RealSense of my status

Long time readers will have realised that I have been spending a lot of time with the technology that was formerly known as Perceptual Computing (PerC). You may also know that this technology is now known as RealSense and that it will be rolling out to a device near you soon. What you might not know is that I'm currently writing a course on this technology for Pluralsight. As part of writing this course, I've been creating a few little wrapper utilities that will make your life easier when developing apps with the SDK.

In this post, I’m going to show you a handy little method for working with API methods. Pretty much every RealSense API method returns a status code to indicate whether or not it was successful. Now, it can get pretty tedious writing code that looks like this:

pxcmStatus status = Session.CreateImpl<PXCMVoiceRecognition>
  (PXCMVoiceRecognition.CUID, out _voiceRecognition);
if (status < pxcmStatus.PXCM_STATUS_NO_ERROR)
{
  throw new InvalidStatusException("Could not create session", status);
}
status = _voiceRecognition.QueryProfile(out pInfo);
if (status < pxcmStatus.PXCM_STATUS_NO_ERROR)
{
  throw new InvalidStatusException("Could not query profile", status);
}

As you can imagine, the more calls you make, the more status checks you have to do. Well, I like to log information about what I’m attempting to do and what I have successfully managed to do, so this simple method really helps to write information about the methods being invoked, and to throw an exception if things go wrong.

public void PipelineInvoke(Func<pxcmStatus> pipelineMethod, string loggingInfo = "")
{
  if (!string.IsNullOrWhiteSpace(loggingInfo))
  {
    Debug.WriteLine("Start " + loggingInfo);
  }
  pxcmStatus status = pipelineMethod();
  if (status < pxcmStatus.PXCM_STATUS_NO_ERROR)
  {
    throw new InvalidStatusException(loggingInfo, status);
  }
  if (!string.IsNullOrWhiteSpace(loggingInfo))
  {
    Debug.WriteLine("Finished " + loggingInfo);
  }
}

This makes it easier to work with the API and gives us code that looks like this:

PipelineInvoke(() => 
  Session.CreateImpl<PXCMVoiceRecognition>(PXCMVoiceRecognition.CUID, 
  out _voiceRecognition), "creating voice recognition module");

And this is what InvalidStatusException looks like:

public class InvalidStatusException : Exception
{
  public InvalidStatusException(string alertMessage, pxcmStatus status)
    : base(alertMessage)
  {
    Status = status;
  }
  public pxcmStatus Status { get; private set; }
}

Over the course of the next couple of months, I’ll share more posts with you showing the little tricks and techniques that I use to make working with the RealSense SDK a real joy.

Haswell – Intel SDP Unit (Software Developer Preview) – a death in the family

Okay, that’s possibly a bit too melodramatic a title, but that’s almost what it felt like when my Ultrabook decided to pack in and shuffle off to the great gig in the sky. This post will contain no screenshots for reasons that will soon become abundantly clear.

Some context first: I've been using the Ultrabook pretty much continuously since I got it. It was my default, go-to, day-to-day development box. Yup, that's a lot of different ways of saying that I was using the Ultrabook very heavily. So heavily, in fact, that I was using it as the workhorse for developing my Synxthasia project for Intel; a Theremin(ish) type of music application which translated gestures in 3D space into sound and visuals – it even took your picture at intervals and worked that into the visuals; I was particularly proud of the fact that it let you alter the "shape" of the sound to simulate a wah effect just by using your mouth. The SDP really is an excellent device for development tasks like this.

So, about a week before I was due to submit this to Intel, with the app heavily in the polishing stages, the Ultrabook died on me – taking with it 8 days of code that hadn't been committed back to source control (I have no excuses really, as I should have been committing as I went along). Remember how I said it had the habit of giving the low battery warning and then switching off? Well, this is exactly what happened. Anyhoo, I plugged it in and started it back up – all was fine and dandy there, but I noticed that Windows was reporting that there was no battery present. Well, I couldn't leave the Ultrabook permanently plugged in and I needed to go to a client site, so I unplugged the unit and set off. When I tried to power it back on later, it refused to start – all that happened was the power button LED flashed blue and that was it.

I got in touch with contacts in Intel and they provided some first line support options which included opening the unit and disconnecting the batteries. Unfortunately, this hasn’t worked – the unit still has all the vitality of the Norwegian Blue. Intel has offered to replace the device but as I have been away from the physical unit for a while, I’ve been unable to take them up on their kind offer. Once I get back home, I will avail myself of their help because the device itself really is that good.

What has become clear to me over the last year or so, is just how much Intel cares about the feedback it gets. I cannot stress enough how they have listened to reviewers and developers like myself, and how they have sought to incorporate that feedback into their products. As a developer, it’s a genuine pleasure for me to deal with them as they really do seem to understand what my pain points are, and they provide a heck of a lot of features to help me get past those problems. Why do I say this? Well, I was lucky enough to have an SDP from last year to use and while it was a good little device, there were one or two niggles that really got to me as I used it – the biggest problem, of course, being the keyboard. The first SDP just didn’t have a great keyboard. The second issue was that the first SDP also felt flimsy – I always felt that opening the screen was a little fragile and that I could do damage if I opened it too vigorously. Well, not this time round – the updated version of the SDP has a fantastic keyboard, and feels as robust as any laptop/Ultrabook that I’ve used, and all this in a slimmer device. You see, that’s what I mean by Intel listening. I spend a lot of time at the keyboard and I need to feel that it’s responsive – and this unit certainly is. The thing is, Intel aren’t selling these units – they are giving them away for developers to try out for free. It would be perfectly understandable if they cut corners to save costs, but it’s patently apparent that they haven’t – they really have tried to give us a commercial level device. Yes, I’ve been unlucky that the Haswell died on me, but given the care from Intel this hasn’t tarnished my opinion of it.

So, does the death of the device put me off the Haswell? Of course it doesn’t. For the period I have been using it, it has been a superb workhorse. It combines good looks with performance with great battery life. It’s been absolutely outstanding and I wouldn’t hesitate to recommend it. Needless to say, I’m delighted with the help and support that I’ve had from Intel – it certainly hasn’t been any fault of theirs that I’ve been unable to return the device for them to replace. I’d also like to thank Rick Puckett at Intel for his help and patience, as well as Carrie Davis-Sydor and Iman Saad at DeveloperMedia for helping hook me up with the unit in the first place and their help when the Haswell shuffled off the mortal coil.

Haswell – Intel SDP Unit (Software Developer Preview) – The Keyboard fights back

Well, since my initial review of the Ultrabook, I’ve been using it for pretty much all my computing needs. From writing C++ and C# code, through standard surfing and the likes, through to heavier duty 3D work, and the Haswell just keeps on going.

First of all, let me just say that the keyboard on this generation of Intel SDP is a whole lot better than the one that shipped with last year's SDP. This one actually feels solid and responsive, and it features a nice little backlight. Intel really has upped the game for what are effectively demo units here. My only, admittedly minor, niggle is that the keyboard is designed for the US market; when I change it to the UK settings, the \ key is missing.

Writing code using this Ultrabook is a pleasant experience. The screen resolution is good, and the screen itself is bright and very pleasant to use. Unfortunately, the 4GB of memory means that there's sometimes a slowdown when compiling a C++ application from scratch using Visual Studio 2012. Once you're past the first compilation, though, the compile process is responsive enough, so this isn't a huge issue. As I'm a heavy user of Visual Studio and WPF, I'm pleased to see that the responsiveness of the WPF designer window is nice and snappy – there's no hold-up in the system making me wish I was using a different machine. Even the dodgy upper-case menus in Visual Studio don't distract me while I'm looking at this glorious screen.

[Screenshot: Visual Studio running on the Ultrabook]

One thing I've tried to do is keep the settings of the machine as default as possible (except for the keyboard layout – that's a compromise too far). This has allowed me to test the claims that the Haswell gives me long battery life. Well, it's true – using it for a mixture of DirectX development, 3D work and WPF development has given me just over 7 hours on a single charge. It's not quite the potential 12 hours, but my day-to-day usage is pretty atypical, so I wouldn't be unduly worried about the battery life if I were looking to use this on a really long journey. The reason I mention the defaults is that Windows 8 suddenly tells me that I've got about 5% battery life left and then shuts down, leaving me no time to get the unit plugged in.

Now Pete – surely it can't be all sweetness and light, can it? Where are the niggles that must exist here, or are you playing nicey-nice with Intel? Well, there is one thing that really gets me annoyed, and that's the Trackpad. If I click on the left-hand side of the Trackpad (anywhere on the left), I can trigger a click – exactly what I'd expect. Clicking or tapping on the right-hand side doesn't trigger anything. This has caused me quite a bit of frustration. It's my issue to deal with, though; the Trackpad is a multi-touch unit, so it's something I'm going to have to work on.

You may wonder what applications I typically have running on the Ultrabook, to see how this compares to what you would use. Well, on a day-to-day basis, I typically have up to 4 instances of Visual Studio 2012 running simultaneously. I also have a couple of instances of Expression Blend running, as well as Chrome with a minimum of 10 to 12 fairly heavy web pages open (if you're really interested, these pages are Facebook, Code Project – several instances covering different forums and articles – GMail, Twitter, The Daily Telegraph and nufcblog.org). I also usually have Word, Excel, Photoshop, Huda and Cinema 4D running, along with Intel's Perceptual Computing SDK and Creative Gesture camera. As you can see, that's a lot of heavy-duty processing going on there, and this machine copes with it admirably.

Okay, to whet your appetite – I’ve been using the Ultrabook, off the charger, just editing documents and web pages for the last two hours, and I still have 91% battery life left. That’s absolutely incredible.


Haswell – Intel SDP Unit (Software Developer Preview) – 1st impressions

The reveal

Bonk, bonk bonk bonk bong….

That’s the sound that greeted me when I opened the box from Intel™ featuring a developer preview Haswell Ultrabook™, and there’s no geek in the world who wouldn’t want to repeat that sound, so after opening and closing the case for ten minutes or so, I decided to actually get the Ultrabook™ out of the box.

Okay, this is the Ultrabook™ packaging (outside of the cardboard it was shipped in). Little did I know the simple joy I was to receive upon opening this box.

I’ve opened up the packaging, and this is the sight that greets me. I’m keen to start the unit, but I want to see what other goodies the box has inside.

Is that a carry case for the Ultrabook™? Nope, it’s just packing material to keep the unit from getting scratched.

Ahh, that’s more like it. Cables and a USB drive (containing the Windows reinstallation media). The reflection in this image is from the plastic covering that protects this layer.

Okay, the plastic is off and I can see the quick start guide as well as the power supply and USB drive. That PSU is tiny. Consider me seriously impressed.

What have we here? Ah, Mini HDMI to Full, to VGA and a USB to Ethernet. Thoughtful of them to supply this.

Well, there’s the keyboard. Nice.

After starting the machine up, it’s a simple matter to configure Windows™ 8. Windows really suits a machine like this.

Day to day

Well, the first thing to do with a machine like this is to take it out for a drive. In my case, this means installing Visual Studio Ultimate 2012. As you might expect, I have several computers, so I have plenty of reference points for how long this actually takes to install. On my old i7 with a spinning hard drive, it took up to an hour. The question, then, is how long it would take to install on this machine with the same feature set. Start to finish, the whole process took 8 minutes. Yup, you read that right: 8 minutes, start to finish. I couldn't ask for a better initial impression than that.

The good

This unit is fast. Oh, it's so very fast. Okay, the boot-up time is a tad slower than my i7 Lenovo™ Yoga 13 (by less than half a second), but once you actually get going, the machine is very responsive. While it's a fairly entry-level i5, it is more than capable of coping with my day-to-day development tasks. Opening up a XAML page in Visual Studio 2012 with both the designer and code view showing, the code view updated Intellisense as I typed and the designer refreshed itself snappily – those of you who develop XAML-based apps know how slow this can be – it was an absolute joy to behold.

The unit is running Windows 8 Pro (Build 9200), and Windows works well on it. The screen is responsive to the touch, and Windows feels snappy and responsive.

The machine is light. Maybe it's not MacBook Air light, but it's heading in the right direction, and it's thin. If you were only going to use this on your desktop, this wouldn't matter, but if, like me, you travel a lot, you'll really appreciate both.

That battery life. Oh boy – 6 hours spent doing a lot of fairly intensive development, and the battery still has plenty of life to give. This really has hit the sweet spot with regards to portability. I can easily see myself whiling away journeys using this.

The niggles

This is going to sound churlish considering I received this unit as a "favour", but I do wish it had more than 4GB of RAM and a bigger drive than the 128GB SSD it ships with.

The shift keys are on the large side, but I have no doubt I will soon get used to them.

There’s a lot of real estate lost around the screen, but I’ve no doubt that commercial units will take care of this, using all the available space.

It’s hard to put my finger on what the exact problem is, but the keys don’t feel quite right to me. There’s a bit of play to them that doesn’t make them feel that stable. Obviously, as this is not a commercial Ultrabook™, I’m not unduly worried – I would expect a commercial Ultrabook™ to have a more solid feel to the keys.

So, where are we at?

I've been asked to write these reviews as unbiased as I possibly can be. This review covers my initial impressions only; my next will look at using this machine on a daily basis. As I do a lot of Ultrabook™-related work as part of my daily development routine, this will be my primary device, so I'll let you know how it compares.

Disclosure of Material Connection:

I received one or more of the products or services mentioned above for free in hope that I would mention it on my blog. Regardless, I only recommend products or services I use personally and believe my readers will enjoy. I am disclosing this in accordance with the Federal Trade Commission’s 16 CFR, Part 255: “Guides Concerning the Use of Endorsements and Testimonials in Advertising.”

Altering my perception

My apologies for not posting for a while; it’s been a pretty crazy couple of months and it’s just about to get a whole lot crazier. For those who aren’t aware, Intel® have started running coder challenges where they get together people who are incredibly talented and very, very certifiable and issue them with a challenge. Last year they ran something called the Ultimate Coder which looked to find, well, the ultimate coder for creating showcase Ultrabook™ applications. This competition proved so successful, and sparked such interest from developers that Intel® are doing it again, only crazier.

So, Ultimate Coder 2 is about to kick off and, like The Wrath of Khan, it proves that sequels can be even better than the original. The challenge this time is to create applications that make use of the next generation of Ultrabook™ features, to cope with going "full tablet", and, as if that wasn't enough, the contestants are being challenged to create perceptual applications. Right now, I bet two questions are going through your mind: first of all, why are you writing about this, Pete, and secondly, what's perceptual computing?

The answer to the first question lies in the fact that Intel® have very kindly agreed to accept me as a charity case developer in the second Ultimate Coder challenge (see, I can do humble – most of the time I just choose not to). The second part is a whole lot more fun – suppose you want to create applications that respond to touch, gestures, voice, waving your hands in the air, moving your hands in and out to signify zoom in and out, or basically just about the wildest UI fantasies you’ve seen coming out of Hollywood over the last 30 years – that’s perceptual computing.

So, we've got Lenovo Yoga 13 Ultrabooks™ to develop the apps on, and we've got the Perceptual Computing camera and SDK to show off. We've also got 7 weeks to create our applications.

It wouldn’t be a Pete post without some source code though, so here’s a little taster of how to write voice recognition code in C# with the SDK.

public class VoicePipeline : UtilMPipeline
{
  private List<string> cmds = new List<string>();

  public event EventHandler<VoiceEventArgs> VoiceRecognized;

  public VoicePipeline() : base()
  {
    EnableVoiceRecognition();

    cmds.Add("Filter");
    cmds.Add("Save");
    cmds.Add("Load");
    SetVoiceCommands(cmds.ToArray());
  }

  public override void OnRecognized(ref PXCMVoiceRecognition.Recognition data)
  {
    var handler = VoiceRecognized;
    if (data.label >= 0 && handler != null)
    {
      handler(this, new VoiceEventArgs(cmds[data.label]));
    }
    base.OnRecognized(ref data);
  }

  public async void Run()
  {
    await Task.Run(() => { this.LoopFrames(); });
    this.Dispose();
  }
}

As the contest progresses, I will be posting both here on my blog, and a weekly report on the status of my application on the Intel® site. It’s going to be one wild ride.