Prototyping in the age of Design Thinking

The advent of design thinking is really shaking up the whole concept of prototyping solutions. I'm not going to go into too much detail on the ideology of design thinking; there are literally hundreds of articles available on the subject. But, just to set the scene, if I were to sum up design thinking, I would say that it's about chasing the best possible solution towards its full potential through a series of tangible iterations; ideating, building and testing with each step you take. It's about team collaboration, innovation through action and empathy with your users and stakeholders.

On the surface, you may be forgiven for thinking that all of those fancy words don't sound too dissimilar from your old way of doing things, just with a little more marketing sparkle.

Back to this idea of design thinking. Previously, the design phase often followed the strategic analysis, business analysis and user experience discovery phases along a well-trodden linear project path. It was mostly just a case of managing a few client approval iterations on the visual interpretation of the brand experience design.

This new paradigm has snatched our entire group of rugged T-shaped individuals out of their safe little cubicles sitting in a line, and sat them around a single table as one team, working together in one iterative design phase (which brings its own issues). The primary goal now is to create, build and then test real assets with ourselves, our stakeholders and our customers, in order to ensure we have the right solutions to the right problems before we move into full product development.

WHAT ABOUT PROTOTYPE FIDELITY?

Let's start at the beginning and have a look at what exactly this thing called a prototype really is. I would describe a prototype as a tangible asset developed to a defined level of completeness so that we can observe people interacting with it. Or, in English: we build a new toy so we can watch how the kids play with it, then we make some design tweaks based on how much fun the kids had, and we start the loop again.

Traditionally, the prototype has always fallen into two categories of fidelity, or completeness: high and low. In reality, however, there have always been three different categories of fidelity: low, medium and high.

What's that? Nobody ever mentioned any medium fidelity before? Well, read on, while we take a moment and examine all three.

To begin, we have our trusty low fidelity prototypes. No big surprises here: the usual suspects of paper prototypes, lolly pop stick models, storyboards, flip charts, index card stories, whiteboard sketches and image click-throughs (slideshows). They come in many shapes and sizes and use technology like ... well, paper, but also simple digital technology, like PowerPoint slideshows, or frameworks such as Bootstrap and WordPress. This level of fidelity is mostly quick and cheap, which is excellent for fast iterations and for early concepts or projects in a state of high flux.

Next, we have medium fidelity prototypes. This supposed new kid on the block is probably where you previously thought high fidelity was sitting. The simple truth is that it never was. Medium fidelity prototypes have only a limited interactivity model. They might have buttons, forms and fields; they might even click through simple journeys based on calls to action and simple user-driven events; but they are not designed to mimic real solutions. They are mostly vanilla. The technologies used here are some of the new prototyping toys on the block, software like Flinto, Proto.io or Axure, where you 'fake' code using an abstracted graphical user interface, creating a series of event-based actions to power the user's navigation through simple journey screens and changeable asset incarnations. Prototyping to this fidelity costs more than the low version, takes more time and will usually require specific and significant software pseudo-coding skills, as well as licensing costs.
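To make that "fake code" idea concrete, here is a minimal sketch, in plain JavaScript rather than any particular tool's syntax, of the kind of event-to-screen mapping these tools generate under the hood. All screen and event names here are invented for illustration.

```javascript
// A click-through prototype boiled down to its essence: screens are static
// states, and named user events simply map to the next screen. No real
// logic, no real data -- exactly the "vanilla" interactivity of a medium
// fidelity prototype. Screen/event names are hypothetical.
function createPrototype(screens, startScreen) {
  let current = startScreen;
  return {
    currentScreen: () => current,
    // Fire a named event (e.g. a button tap); move to the mapped screen,
    // or stay put if the event isn't wired up on this screen.
    trigger(event) {
      const next = screens[current] && screens[current][event];
      if (next) current = next;
      return current;
    },
  };
}

// Example journey: home -> product -> checkout
const proto = createPrototype(
  {
    home: { tapProduct: "product" },
    product: { tapBuy: "checkout", tapBack: "home" },
    checkout: { tapBack: "product" },
  },
  "home"
);

proto.trigger("tapProduct"); // now on "product"
proto.trigger("tapBuy");     // now on "checkout"
```

The point of the sketch is what's missing: there is no data, no latency, no error state, only a map from taps to screens, which is why this level of fidelity stays vanilla however many screens you wire together.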

I have to admit, I'm not a huge fan of this level. Many consultants will attempt to pass these prototypes off as high fidelity, which is, imho, wrong, and here are a few reasons why. The models created include unrealistic transitions and interactions and they create false expectations of flow and performance. The look and feel can never accurately simulate the real solution, and as the interfaces become more and more complex, so too does the pseudo-code behind them, with the result that the model quickly becomes unwieldy and cannot be maintained without significant pocket excavation.

Medium fidelity can be useful, if kept simple, as a bridging mechanism between designers and stakeholders to promote a shared understanding of early experience design. I've also seen some of these models used as annotated wireframes to assist with the design-to-development transition. I should also mention that if you happen to be related to Mr. Moneybags, then it is possible to create your entire organisational design pattern library in these tools and thus ensure new designs conform. But boy oh boy will that cost you, and it will not help when it comes to the lack of reality or managing complex prototypes. If you're thinking of doing that, my advice would be to use real development resources to create real showcase models.

And, finally, we come to high fidelity prototypes. This level of prototyping reflects a state of realism which, when tested with a user, will provide an environment in which they can give an honest response to the product. Let's revisit that sentence; it's worth it. A responsive environment which is realistic enough to allow us to capture honest responses through real-life interaction.

And when I say real, I am talking about transitions, native mobile considerations, building in network lag and download/upload time, video play options, accessing real APIs, reacting to a lack of connectivity, zooming, motion, scrolling, gestures, heavy data computations, real visual design and design assets. It's no good creating a high fidelity experience if it's not as close to real as possible. You might just as well stick to paper prototyping, as your results will be as unreliable when it comes to production as if you had created a low/medium fidelity prototype.
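As a small illustration of one item on that list, building in network lag, here is a sketch of how a prototype might add realistic artificial delay to its fake network calls, so test users feel real-world latency instead of the instant responses a prototyping tool gives by default. The bandwidth and latency figures are illustrative assumptions, not measurements.

```javascript
// Estimate a plausible delay for a simulated network call:
// transfer time = payload size / throughput, plus a fixed round-trip latency.
// Default profile (1.5 Mbps, 80 ms latency) is an assumed mid-range mobile
// connection, chosen purely for illustration.
function simulatedDelayMs(payloadBytes, { bandwidthKbps = 1500, latencyMs = 80 } = {}) {
  const bytesPerMs = (bandwidthKbps * 1024) / 8 / 1000; // kilobits/s -> bytes/ms
  return Math.round(latencyMs + payloadBytes / bytesPerMs);
}

// Wrap any fake data source so it resolves only after the simulated delay.
// `fetchFake` is a stand-in for whatever canned-data function the prototype uses.
function withNetworkLag(fetchFake, payloadBytes, profile) {
  return (...args) =>
    new Promise((resolve) =>
      setTimeout(() => resolve(fetchFake(...args)), simulatedDelayMs(payloadBytes, profile))
    );
}
```

A download needn't actually download anything; a timeout of the right length is enough to make a spinner, a progress bar or a connectivity-loss state feel honest to the test user.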

Imagine Keanu Reeves trying to learn Kung-Fu as a cartoon rabbit with three paw punch combinations.

However, at the end of the day, you must remember that this is a prototype; all of the above custom-built functionality is only for the application sections that need to be fully tested.

I'm going to make a very quick detour around this term responsive, commonly referred to as responsive web design. When I mention it, I am not describing what probably 99.94% of the rest of the digital world think I am. In truth, this deserves its own full article, but to summarise: responsive design, for me, encompasses not only a visual response of fitting content to devices and their screen breakpoints. More importantly, it also means looking at the content, the interactions and the journeys we provide for each digital touchpoint. Users will have different expectations and needs for a service depending on how, when and with which device they are experiencing it. And to discover these habits, we have to talk to them. Nuff said! [mic drop] Back to the prototypes.

In my experience, true high fidelity prototyping is very rarely found in organisations, mainly because it comes with a significant extra cost in time, skills and money, which, when compared to just using design resources, makes what would appear to be a compelling budgetary argument against hifi. And let's not forget you will also need to find, retain and have access to a skilled individual to do the building. Prototyping is a distinct skill set; it is not just any old developer who happens to have some free time.

USER TESTING WITH PROTOTYPES - OR "LET THEM EAT CAKE"

Where were we? Ah, yes, hifi and the high costs. But consider this statement: if you do not prototype to high fidelity, then user testing any interactive asset, e.g. a multi-touch control, cannot produce realistic user responses, as it will rely on the user's imagination to fill in the interaction experience blanks.

User testing is supposed to be about unearthing honest reactions and the feelings users experience when using a product, not imaginary ones.

Should all user testing be hi-fi? In an ideal world, yes; in the real world, no. As mentioned, because of time and money, hifi is only a realistic fidelity for user testing a product mature enough not to be in extreme flux; it is primarily there to provide final stage acceptance verification of products before we start a large, expensive production cycle.

Why don't we look at a couple of examples to see if we can really nail the idea in place? First off, imagine a user testing scenario for a mobile app where, as part of the journey, we present a wonderful shiny dial on the mobile touchscreen. Test users are instructed to interact with this dial. The expected result is that the app will react to their manipulation by providing different system options. Now, even if you are using pixel-perfect visual design, if that dial does not spin and rotate, make small smooth clicky sounds and allow your users to touch and manipulate it, then you are asking the now rather disappointed users to bridge this journey concept gap with their imagination.
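For the curious, the core of a prototyped dial like this is a small bit of geometry: convert a finger position into a rotation angle around the dial's centre, then snap that angle to the nearest selectable option. The sketch below shows just that maths; the centre point, option count and coordinate convention are all hypothetical choices for illustration.

```javascript
// Angle of a touch point relative to the dial's centre, normalised to
// 0..360 degrees. (On a real screen, y grows downwards, so this angle
// runs clockwise -- fine for a prototype as long as you're consistent.)
function touchAngleDeg(cx, cy, touchX, touchY) {
  const deg = (Math.atan2(touchY - cy, touchX - cx) * 180) / Math.PI;
  return (deg + 360) % 360;
}

// Snap a rotation angle to the nearest of N equally spaced options,
// returning the option index. This is the "clicky" part of the dial.
function snapToOption(angleDeg, optionCount) {
  const slice = 360 / optionCount;
  return Math.round(angleDeg / slice) % optionCount;
}
```

Wire those two functions up to real touch events, a transition and a click sound, and the user is manipulating something that behaves like a dial; leave them out, and the user is imagining one.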

What did we learn? Was the drag experience too quick? Did the dial move too fast? Was the friction too high? Did the dial not connect to the options they wanted? Were the onscreen instructions not sufficiently detailed to help the users with the dial? Was the interaction and the resulting system actions intuitive and enjoyable? And that's only for a dial.

Without high fidelity, none of these interaction questions can ever be asked. More importantly, if the interaction event in question is considered critical to the user journey, then unless it is tested at the highest fidelity, there is no way to determine whether the user's interaction might result in catastrophic failure.

Another example? OK. This time, let's take our test subject and tell him he's sitting in a fine dining establishment. Let's give him a large glass sundae dish and a long silver spoon. He's already licking his lips in anticipation. Now place an unopened Tesco value choc ice into this glass, add squirty cream and mush it all together. Top your creation off with a glace cherry and let's start the script. Describe for him the taste, aroma and texture of a slice of the richest black forest gateaux ever made; fresh juicy Morello cherries in a thick intoxicating Kirsch sauce, rich deep chocolate steam wafting up from the light-as-air sponge, still slightly warm and moist from the oven, mixed with the crunch of wafer-thin Belgian milk chocolate smothered in thick Bavarian cream and a smooth chocolate fondant that could only have been made by angels. Describe how the flavours merge and then separate again and ... hang on, your test user has just left your imaginary shop in search of a slice of real cake. The choc ice has melted and the squirty cream has drowned the cherry.

If you're still reading this, and not out there somewhere hunting cake, let's sum up high fidelity prototyping. It can be expensive [but not much more than medium fidelity, and sometimes a lot less], it requires a high level of development skill, you will need a prototyper, and it's not going to be as fast as lo-fi. This level of complexity also introduces a risk of software failure that would never occur at lower fidelity levels, possibly requiring a qualified testing resource, or just a real prototyper. But, and it's a big but, here's the kicker:

Is the cost of going into production without user testing to high fidelity worth the risk of later needing a software update?

Wait! That sounded far too trivial. Let's clarify. You have deployed a product and suddenly realise you have failed to achieve your business goals. You need to find out why and fix it. So you start to user test with the product (by definition, this is now high fidelity), identifying a series of previously unknown blocks and fail points; you then redefine the application journeys, redesign, and start another full development and deploy cycle, hopefully with the same team, to prevent steep learning curves from hampering your dev cycle. But if not, at least you have all of that amazing documentation to help your new team out [ironic giggle]. Not only are your time-to-market timelines out by the time required for this new phase but, if you think about it, they are also out by the time taken by the original development cycle. Even if you have been slightly sensible in deploying first to a pilot programme, you are still going to incur a significantly greater cost by having to restart the process. And finally, did you notice in the above little story that you didn't prototype the new version? Meaning you might very well have gotten the new solution as wrong as the first time around, and so the fail cycle starts all over again.

So, no, not really all that trivial.

And let's also not forget that if we are talking about apps here, then you typically only ever get one shot at acquisition/conversion for a customer, you will rarely get them to try you again. Come to think of it, the same can be said of customer confidence.

THE DARWIN ALTERNATIVE - BUT, BEWARE THE GROUNDHOG

It's called evolutionary prototyping, and it's a planned project methodology that maintains the current product phase and level of completeness within the evolving prototype, almost like designing along an MVP product road map (ball - skateboard - scooter - bicycle - moped - car). One benefit is that the development team can build in a truly agile way. Another is that complex prototype code is (mostly) never wasted. User testing can also be incorporated into the agile project sprint model. The downside is that the full development team is usually too expensive to place into an iterative groundhog day loop. One area where this model works quite well is with very small teams creating new concepts or startups, allowing products to be created, tested and brought to market fast. But you are going to need some serious no-jibber-jabber, A-team grade T-shaped people on your team.

TO SUMMARISE

Overall, I don't personally believe there are any set-in-stone ways of deciding which fidelity to use, or when and how to use them. My personal preference would be to use a low fidelity approach for proofs of concept, and low/medium for internal feature brainstorming or as a communications bridge between design and development [annotated wireframes]. I would then use high fidelity in final product user testing, as well as for the power sell - i.e. when you want to wow and amaze stakeholders in order to push acceptance in product presentations, providing them with a functioning, tangible asset full of smooth transitions, micro-interactions and real people following real journeys generating real data.

I want to make this completely clear: I am not proposing that all prototypes should be built to high fidelity. Prototypes need to be developed only to the fidelity required for their defined purpose, within the agreed-upon time and cost constraints for the project or presentation. There are a multitude of cases where low and medium fidelity are not only viable but would be the preferred medium, especially within a fast-moving, iterative design thinking phase where you want to be able to add, remove and amend options quickly. Another way of looking at the choice: if you expect the concept to undergo massive fundamental change, then don't invest a lot of time in a changing presentation model.

This article is about prototyping inside design thinking, and so far I haven't really mentioned the "how" that's supposed to work. And the answer is ... drum roll ... that there is no answer. I could provide case studies, but a single rule does not, and should not, exist. When you are starting on the design phase, it's important to build swiftly, build lots and fail heroically, and for this, low fidelity is the only way to go. You are adding and discarding new ideas so fast it would be silly to try and build anything more than a sketch or paper prototype. Gradually, though, as the ideas start to firm up, I might be tempted to build small digital mini-models of the more interactive elements. Be sensible: if an idea is firming up and redrawing it five times a day is starting to take too much time, digitise it. And of course, if the product is highly interactive, I would bring all of these mini-models together into a testable format for user testing before production starts.

As a final thought, I will make one firm prediction:

The days of taking a faulty design into full product development are coming to an end.

Just a few asides from above:

*1: Managing the design thinking team: One of the biggest challenges for many organisations will be to get this team of strategy, UX, BA, IxD, visual and product design individuals to work together effectively as a collaborative unit. That's a lot of ego simmering in the same pot. The situation probably calls for a special new design scrum master type role to be created; let's call him ... the groundhog!

*2: I mentioned above that prototyping is a distinct skill. It is not classic development; rather, it is a marriage of concept design with extreme programming, requiring many years of experience across user interface work (HTML/CSS/JS/AS3) as well as networking and server-side coding (.NET/PHP), data manipulation (SQL/JS/PHP) and data storage (SQLite/MySQL/XML/JSON); motion and game design are a definite plus (APIs/frameworks). Because hifi prototypes are used with real people, your prototyper will also need a level testing head, as quite often it's just not feasible to include testing/QA resources.

If we include native mobile prototypes in this mix, then our prototyping resource will also need skills with a hybrid mobile deployment framework; I use Adobe AIR, but there are many others. As for the coding required, it's fast and it's dirty, and it will require a thick skin when it comes to the hack and slash of features; that alone will eliminate the vast majority of classic developers. I also predict that the prototyper role described above will be needed more and more in organisations as we move forwards.


 

Great, practical roadmap for prototyping - love the post.


We've been there, continue to be there, and I don't see us (or anyone else doing these tasks) getting out of there soon. In addition to time and skills, I'd include technology and especially budget as roadblocks to well-executed hifi. This is because it's impossible to easily get extremely good details with a UX tool, and hard coding is too time-consuming too early in the process. Not to mention changes will eat right through budgets and squash your perfectly transitioned work. So our process is usually lofi for testing, with clearly defined and agreeable research goals established long before we put concepts into users' hands. Good researchers are imperative if you're planning to make critical decisions about what to include or not include in the build, and they can make up for rudimentary execution.


Excellent - I love the biters. You've made a really good point: where do you stop? As you've correctly pointed out, the prototype could become as complicated as the real thing. So, to answer: the complicated hifi parts of the prototype are the sections of the solution identified, through earlier research, as having the most interaction and/or potential pain points. Anything else is visual design. The associated test script is a part of the process; you're allowed to skip the boring bits. You can also narrow the scope by using a single browser, or a single mobile/tablet with defined screen sizes. A lag is just a timeout; a download needn't actually download. If you're coding with the same tech as the solution team, any complex work you complete can form a real starting point for the full-scope dev. A pinch of common sense is all I ask. Remember the alternative is low/med fidelity, so do as much as you can within the allowed scope.


I'll bite. I love the idea of getting "as real as possible" but - with this definition - what IS the distinction between "high fidelity" and the real thing? Where are you actually avoiding the full development cost if you are including: "realistic transitions, native mobile considerations, building in network lag and download/upload time, video play options, Accessing real API's, reacting to a lack of connectivity, zooming, scrolling, gestures, heavy data computations."


Having created a couple of medium fidelity prototypes in Axure, I agree: it is just as complex as developing and maintaining a high fidelity prototype. In fact, I've switched to Balsamiq because no one in their right mind thinks that a seemingly hand-drawn image is the end product. However, there is a place for medium fidelity. Visual designers comp and redline screens for developers anyway. We might as well link the flow together in InVision or another tool of that nature to test common scenarios before throwing the design over the fence to developers. It's easier for a user to validate that you put the right content in the right place at the right time when they see what the end product looks like.


More articles by Michael Keating
