Four Perspectives on Delivering ‘Return on Experience’ Follow-up Notes

And now, as promised, the link-laced follow-up to this week’s “Four Perspectives on Delivering ‘Return on Experience.’”

Our UX Gurus on the panel were:

  • Susan Greenfield
  • Ernie Taylor
  • Daniel Cox
  • Bill Baldasti

and in addition to their insights on Wednesday night, they’ve kindly helped me compile these links.
(If you want to contact any member of the panel, they’re first-initial last-name at, or ping me.)


The panel began by reflecting on the masochistic teapot made famous by Donald Norman on the cover of his book The Psychology of Everyday Things, to remind us that in the software industry, what we create for our clients often becomes an everyday thing.

Are we making things that are functional but masochistic like this teapot?

What’s “Return on Experience”?

The panel then weighed in on Deborah Adler’s redesign of the Target Rx medicine bottles, which was bravely showcased by Microsoft as a UX case study from another industry during the second day keynote at Mix09.

It was a story arc that highlighted the many elements of ‘return on experience’ – everything from safety and customer satisfaction, through brand awareness and driving revenue.


Then we reflected on the co-existence of the Development and Design lifecycles. There were varying opinions on where each person on the panel feels squeezed for time and resources in the cycle.

Ernie’s much more thorough PM Gantt chart (very much not shown here) was a sobering dose of reality. We considered techniques for determining the point at which adding more time and resources stops adding value for the client.

New Tools, New Processes

I did a Sketchflow demo in which we created an interactive prototype. It had the “right level of fidelity,” and the panel remarked that the “sketchy” look helps manage client expectations.

At a high level – there was love. Sketchflow should change our software development lifecycle.

But some easy things were hard. We integrated sample data, and Susan quite fairly called me on it when I talked about a designer “databinding” to “sample data.” (If Blend wants databinding to be the designer’s job, then the designer says “but it’s not my job!”) We looked at editing a data template (for a ListBox full of items), and everyone agreed this experience is currently way too hard without grokking a number of Blend- and XAML-specific concepts.
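To give a sense of what “editing a data template” involves, here’s a minimal sketch of the kind of XAML a designer is confronted with (the property names are illustrative, not from the actual demo):

```xml
<!-- A ListBox whose items are rendered by a DataTemplate.
     Editing this means understanding ItemsSource, DataContext,
     and {Binding} syntax all at once. -->
<ListBox ItemsSource="{Binding Customers}">
    <ListBox.ItemTemplate>
        <DataTemplate>
            <StackPanel Orientation="Horizontal">
                <TextBlock Text="{Binding Name}" FontWeight="Bold" />
                <TextBlock Text="{Binding City}" Margin="8,0,0,0" />
            </StackPanel>
        </DataTemplate>
    </ListBox.ItemTemplate>
</ListBox>
```

Not rocket science for a XAML developer, but a lot of layered concepts for someone whose job title is “designer.”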

Especially valuable is Sketchflow’s ability to solicit feedback from clients via standalone prototypes. Ernie remarked that it was when he saw Sketchflow run “live” as a standalone prototype that he saw how valuable it could be. Integrated client feedback was a big win. We also saw how it can generate Word doc summaries, and all eyes lit up.

We remarked on its incredible potential, which it’s not quite living up to just yet. Earlier in the presentation, we’d hit upon the theme that a good user experience should never make the user “feel stupid” – yet Sketchflow can unfortunately make some of its own target audience feel exactly that way at first.

For a v1, though – wow – we all saw the value, and deeply, desperately want it to be awesome. Ernie said he’d go back to his team the next day and tell them to start using it.

Roles and Expectations

After the break, we talked about roles and expectations. Given the changing tools and processes, we wondered what should be expected of different roles.

We noted how “designer” is a “suitcase word” that carries many different meanings. Susan saw all these “people” in the Venn diagram and just wanted it to be clear that in real life, it’s often all a single, multi-faceted “person.”

(Design) Surface

Most of the panel are, or have been, involved in Infusion’s Surface projects, so we took a moment to talk about design and user experience as they relate to that platform.

Susan remarked that Surface development demands UX design skills “to the extreme.”

The Surface design challenges include: attracting the attention of casual users, encouraging users past the novelty of simultaneous multi-user interaction, and embracing the lack of an “up” direction. The experience is “hyper-real,” so the affordances of every design element on a multi-user touch-table application need careful consideration.

What can we learn from games?

We had Dan Wilcox from the games industry, so we also asked him what we can learn from the gaming world if we’re trying to build line-of-business apps instead.

Dan agreed that a significant challenge is showing users what they can interact with, and how. That “affordances” thing again. He talked about how the games industry has improved in its ability to guide people through 3D landscapes, and perhaps similar cues could influence navigation through user interfaces. He gave examples of where games are blurring the boundaries between user interface and game world.

The Future of User Experience

Then we talked about the future, because that’s always fun.

But the twist here was: what kind of UX considerations will come into play as we design for new kinds of interactivity?

We ran out of time because we wanted to run down the street to see the Surface app before Rogers closed, but now you have time to explore, and add your own thoughts below…

Four Perspectives on Delivering ‘Return on Experience’

Metro Toronto .NET Users Group
Meeting, 16 Sept, 6:00 PM, Bloor East, Toronto

I’m looking forward to the conversation at this Metro Toronto .NET Users Group meeting:

Four Perspectives on Delivering
‘Return on Experience’

We’ve heard a lot recently, from Microsoft and others, about the importance of user experience (UX) and delivering ‘return on experience’ to clients. Tools like Sketchflow for prototyping, Expression Blend for visual design, and frameworks like Silverlight and WPF, are designed to change the way we deliver software projects that incorporate rich and intuitive user experiences.

The reality, of course, is that there are many stakeholders with different perspectives on this process. This evening, let’s talk about how things really work during project delivery “in the wild.”

We’ll discuss the process of enhancing user experience from four perspectives: a designer, a developer team lead, a client, and an account manager. (Not personas, but thoughts from real people who have performed or are performing these roles.) Their perspectives will begin a conversation about the tools and processes, challenges and rewards of delivering ‘return on experience.’

(September 16th, Manulife at 200 Bloor East, Toronto, 6:00PM)

[Update, 17 Sept – I really enjoyed last night – and a huge thanks to all 4 members of the panel (Susan Greenfield, Ernie Taylor, Daniel Cox, Bill Baldasti) and everyone who came out. I will post slides and follow-up either later today or early tomorrow!]

WPF Commanding – When do Commands re-evaluate their CanExecute method?

I had been merrily using WPF’s built-in support for the Command Pattern for ages (see the Commanding Overview in the MSDN docs, and Jeff Druyt’s article on implementing the command pattern in WPF)… when it suddenly occurred to me that I had no idea what triggers WPF to determine whether or not a command can be executed.

Let me explain by reduction to an absurd example:

Say I have a command that can only execute when

DateTime.Now.Second % 2 == 0.

I construct this command by home-brewing a static RoutedCommand instance:

public static class Commands
{
    public static RoutedCommand MyCommand { get { return m_MyCommand; } }

    private static RoutedCommand m_MyCommand = new RoutedCommand(
        "Execute My Command",
        typeof(Commands),
        new InputGestureCollection()
        {
            new KeyGesture(Key.C, ModifierKeys.Alt)
        });
}
And then I add a CommandBinding for that command to my Window, and assign the command to a Button:

<Window x:Class="TestCommands.Window1" ...>
    <Window.CommandBindings>
        <CommandBinding Command="{x:Static local:Commands.MyCommand}"
                        CanExecute="MyCommandCanExecute"
                        Executed="MyCommandExecuted" />
    </Window.CommandBindings>

    <Button Width="200" Height="200"
            Command="{x:Static local:Commands.MyCommand}"
            Content="{Binding Path=IsEnabled,
                      RelativeSource={RelativeSource Self}}" />
</Window>

By nature of WPF’s awesomeness and WPF Commanding in general, the above Button’s IsEnabled property should automatically be set to true or false based on whether or not the command can or can’t be executed.
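For context, here’s a rough sketch of what the Button is doing for you under the hood. This is an illustration of the contract, not the actual ButtonBase source: a command source queries CanExecute up front, then re-queries whenever the command raises CanExecuteChanged.

```csharp
// Simplified illustration of how a command source keeps IsEnabled
// in sync with its command; WPF's ButtonBase does this for you.
RoutedCommand cmd = Commands.MyCommand;
myButton.IsEnabled = cmd.CanExecute(null, myButton);
cmd.CanExecuteChanged += (s, e) =>
{
    myButton.IsEnabled = cmd.CanExecute(null, myButton);
};
```

For a RoutedCommand, that CanExecuteChanged event is itself wired to the CommandManager, which matters in a moment.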

Speaking of which, let’s set up my Command’s absurd logic in the code-behind by implementing its Execute and CanExecute event handlers:

private void MyCommandCanExecute(object sender, CanExecuteRoutedEventArgs e)
{
    DateTime now = DateTime.Now;

    if (CanExecuteOutput != null)
    {
        CanExecuteOutput.Text = "MyCommand CanExecute determined at " +
            now.ToLongTimeString() + " (and " + now.Millisecond + "ms)";
    }

    e.CanExecute = DateTime.Now.Second % 2 == 0;
}

private void MyCommandExecuted(object sender, ExecutedRoutedEventArgs e)
{
    TextOutput.Text = "MyCommand executed at " + DateTime.Now.ToLongTimeString();
}

So, my example is absurd but I bet you see my point by now: WPF is meant to automatically set the IsEnabled Property on that button to true or false, based on the results of the CanExecute method. But in this case, the results of CanExecute are a function only of time, and thus change repeatedly and independently of “obvious” application events. So… how does the Commanding system know when to query CanExecute and consequently enable/disable the button once a second?

In this case, without further intervention, it doesn’t. It seems that when events are raised on the Window (a mouse button click, etc.), CanExecute is re-evaluated. (I don’t know the details and wish I did.)  But, without further programmatic or user intervention, the button will not automatically change its IsEnabled state once a second.
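If you’re curious exactly when those re-evaluations happen, one way to watch is to subscribe to CommandManager.RequerySuggested and log each occurrence. A diagnostic sketch (the field and method names are mine); note that this event holds its handlers via weak references, so you must keep a strong reference to the handler or it can be silently garbage-collected:

```csharp
// Diagnostic sketch: log whenever WPF suggests a requery.
// RequerySuggested references handlers weakly, so store the
// delegate in a field to keep it alive.
private EventHandler m_RequeryLogger;

private void HookRequeryLogging()
{
    m_RequeryLogger = (sender, args) =>
        System.Diagnostics.Debug.WriteLine(
            "RequerySuggested at " + DateTime.Now.ToLongTimeString());
    CommandManager.RequerySuggested += m_RequeryLogger;
}
```

Click around the window with this hooked up and the pattern of requeries becomes visible in the Output window.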

This led me back to the MSDN docs, where I discovered CommandManager’s aptly-named RequerySuggested event and its companion InvalidateRequerySuggested method, which raises it. To coerce – er, suggest – that WPF should query CanExecute, I set up the following DispatcherTimer:

m_DispatcherTimer = new DispatcherTimer()
{
    Interval = TimeSpan.FromSeconds(0.25),
    IsEnabled = true
};

m_DispatcherTimer.Tick += delegate
{
    CommandManager.InvalidateRequerySuggested();
};
Now, the IsEnabled property of the Button blinks on and off as the Command’s ability to be executed changes with the passing seconds.
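One alternative worth noting: InvalidateRequerySuggested forces every command in the application to re-evaluate. If you implement ICommand yourself (the pattern often called a “RelayCommand”), you can raise CanExecuteChanged for just the one command instead. A minimal sketch (the class and method names are mine, not from the sample):

```csharp
// Hand-rolled ICommand whose CanExecuteChanged you raise directly,
// scoping the requery to this command alone.
public class TimedCommand : ICommand
{
    public event EventHandler CanExecuteChanged;

    public bool CanExecute(object parameter)
    {
        return DateTime.Now.Second % 2 == 0;
    }

    public void Execute(object parameter)
    {
        // ... do the work ...
    }

    // Call this from your own timer; only controls bound to this
    // command re-query, rather than every command in the app.
    public void RaiseCanExecuteChanged()
    {
        EventHandler handler = CanExecuteChanged;
        if (handler != null) handler(this, EventArgs.Empty);
    }
}
```

The tradeoff is that you give up RoutedCommand’s routing and input-gesture support, so it’s a judgment call per command.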

Only then did I discover there’s an MSDN Docs sample called “Disable Command Source Via Dispatcher Timer Sample” which is remarkably similar.

There you have it. Now go forth and command WPF’s Commanding. I’m sure you can all execute on that request <g>

P.S. Code for this sample is here.

P.P.S. What are folks using for pasting XAML and C# code into their blogs? This entry is looking a little rough…

NLarge v1.1.2 (once more, with feeling, multimon and text annotation support)

Barry‘s teaching a course this week and noticed that NLarge didn’t support multimon – or rather, it always zoomed in on the primary monitor. So NLarge got another update. Good thing I don’t sleep!


  • Added Multimonitor support – zooms in on the monitor currently containing the mouse pointer.
  • Added Text support – annotate zoomed-in images with text by using the ‘T’ key.

[Update 5 Apr:]

  • v1.1.3 – Fixed a bug in the multimonitor support that caused strange behavior after Hibernate/Restore

Download the update here on Codeplex.