2014 - The year of the workflow

In the last couple of years there have been extensive changes in the way we use the web, but our existing design workflows have struggled to keep pace. I see 2014 being the year of the workflow, as our processes rapidly evolve to match the new ways we access the web.

We may already be designing responsive sites that work across a range of devices, including phones, tablets and smart TVs, but traditionally the tools we used, and the context in which we used them, were not designed with the responsive nature of the web in mind. I will be looking at how I see my workflow, and the tools I use, evolving to meet these new challenges.

Image: a design workflow. Photo credit: Emma b

Less is more - Wireframing

Wireframing continues to move away from the traditional approach, in which detailed templates and annotations are presented in static documents. That process is time consuming and doesn’t reflect the flexibility of today’s “access anywhere” web. This year, I expect to be doing a lot more lo-fi sketching of wireframes, further front-loading the design process and allowing for rapid ideation and iteration. There is no document setup, no pixel grids and no time-costly applications to get to grips with.

In-browser prototyping

As more frameworks and tools that cater for responsive design become available, rapid prototyping in the browser will become the norm, whether that’s writing code manually (HTML/CSS/JavaScript), using a framework (Bootstrap/Foundation) or using responsive design tools (Macaw). The primary benefits of HTML prototypes over wireframes include:

  • They are relatively quick to implement and update, allowing for fast design iterations.
  • They are a communication tool both within the design team and with the client.
  • They provide equivalent functionality to what we are actually producing, and extra context within the browser including responsiveness and real HTML elements. This allows for design solutions to be tested and iterated, producing appropriately informed designs.

Letting go of pixels

Traditionally, wireframes attempted to be pixel-perfect representations of how we wanted a final page layout to appear to the user. Web access is no longer constrained to a desktop computer with a couple of different screen resolutions; there are numerous devices and scenarios through which the web can be accessed. Providing wireframes for all of these potential scenarios just isn’t feasible, so finding new ways to communicate page layouts, structure and journeys will be essential. In response, focus will continue to shift from wireframes to other deliverables that better communicate elements of the experience, including tasks, content, hierarchy and style.

Taking a modular approach

I’ve found myself moving away from thinking about pages and templates, and instead thinking about designs more in terms of experiences, journeys, tasks and the functionality required to complete them. This approach lends itself to a modular mindset that allows for flexibility and reusability, making it easier to produce a consistent experience that matches the user’s goals.

With a deluge of internet-connected devices, the ways in which the web is accessed will only continue to grow. Our workflows need to evolve in order to meet the new challenges and opportunities this will present; allowing time for real ideation and iteration will be essential. Moving away from the idea of the page towards experiences, tasks and modules will be part of this, and spending large amounts of time in digital wireframing tools should become a thing of the past.

User testing biases

User testing is a great tool for understanding how users interact with a system, and is one of the cornerstones of a user-centred design process. In this article we will review some of the biases that can impact your findings, breaking user testing down into its component parts and analysing each in turn. By being aware of these biases we can identify when they occur and look to limit their effects.

Image: user testing. Photo credit: Emma b

The environment

For this article we will break the environment down into two parts: the physical environment and the social environment.

First, let’s review the physical environment. The testing environment is one variable that can have an effect on user behaviour. To minimise these effects, look to test in an environment similar to the one in which the user would typically be using the system.

The second aspect is the social setting. Ordinarily, testing will involve a facilitator sitting in a room with the user and presenting them with various tasks; there may also be people observing remotely or via a two-way mirror. One bias that research has identified is that when participants know they are being observed, they are likely to be much more vigilant, spending more time reading instructions and completing tasks than they would otherwise. This is known as the Hawthorne effect. There has been much discussion around the causality of this effect and whether it is purely the act of observation that causes the change in behaviour. From personal experience, I have observed users persevering with tasks much longer in a formal user testing scenario than when approached in a more informal way. Whatever the reason for the change in behaviour, it is important to understand that user behaviour in testing will not always exactly mirror behaviour in the real situation.

The participant

The selection process, and getting the right participants, is an important aspect of getting real results from testing. If you haven’t recruited participants that truly represent your real end users, then you can introduce a selection bias. This bias will be accentuated the more unique the user base and/or the more novel the task being tested. For example, there would be little benefit in testing an ultrasound system with users who have no medical training. Although general usability issues are likely to be experienced by most users, specific behaviours can differ greatly depending on many factors, such as users’ previous experience, skills and age.

The facilitator

When running a test, there will be at least one facilitator in the environment with the participants. The facilitator may also have defined the testing tasks and, in some cases, had a hand in designing the system being tested. The facilitator is likely to have preconceived ideas of what they think is right or wrong with the system. This can be beneficial, as they have an understanding of which areas to focus on, but it can also lead to them being biased towards behaviour that matches their assumptions. This issue can be exacerbated when facilitators test their own work, as they are likely to have a stronger bond with the system they are testing. As a facilitator you should try to stay objective, keeping an open mind to the user’s behaviour. Remember, participants don’t just pick up on what the facilitator says; they also recognise other cues, such as the stressing of certain words, the phrasing of questions and even body language.

The tasks

The tasks chosen for the testing probably have the biggest impact on user behaviour, as the user will respond to and try to complete the given tasks. Selecting the right tasks and scenarios is therefore essential to identifying usability issues. Let’s move on to look at the various biases that can affect the outcome of the tasks and scenarios you set your users:

Recency and primacy

It has been found that participants emphasise issues depending on when they happened: participants are more likely to give emphasis to events that occurred most recently, and also to events that happened at the beginning of a test. Remember, first impressions count, but so do last impressions! In psychological experiments, a common countermeasure for these biases is to vary the order of the tasks you give each participant. However, this may not always be possible in user testing, due to the linear and interlinked nature of certain tasks. For example, it wouldn’t make sense to test the confirmation page of a checkout process before first going through the checkout process.
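As a rough sketch of what varying task order can look like in practice, the snippet below uses a simple Latin-square rotation so that, across participants, each task appears in each position equally often. The task names are illustrative, not from any real study:

```python
def counterbalanced_orders(tasks):
    """Rotate the task list so that, across n participants,
    each task appears in each position exactly once
    (a simple Latin-square rotation)."""
    n = len(tasks)
    return [[tasks[(i + j) % n] for j in range(n)] for i in range(n)]

# Hypothetical tasks for a checkout-style test.
orders = counterbalanced_orders(["search", "filter", "checkout"])
for participant, order in enumerate(orders, start=1):
    print(participant, order)
# Participant 1 sees search → filter → checkout,
# participant 2 sees filter → checkout → search, and so on.
```

Full counterbalancing of every permutation grows factorially, so a rotation like this is a pragmatic compromise; and as noted above, it only applies where tasks are genuinely independent of one another.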

Keyword and information priming, phrasing

These are biases that can be triggered by the tasks themselves, or by the facilitator. One example is providing the user with supplementary information: this can occur when a task gives the user information they wouldn’t necessarily have available were they completing the task on their own. For instance, when a user is asked to find a call to action on screen, if the task includes a specific term then they may simply scan the page until they find that term; but would they have been looking for it without prompting? Be aware of how users respond to the information they are given, and try not to provide extra information the user wouldn’t otherwise have. The phrasing of questions can introduce further variables; try to phrase questions neutrally, so as not to lead the user towards a specific answer.

Task selection bias

If you ask a user to complete a task, they will assume there is a way to complete it, and will therefore invest more effort into finding a solution than they might otherwise. One way to potentially combat this would be to add tasks that cannot be completed, although you would have to consider the ethical implications of such an approach, and it would likely introduce further behavioural changes later in the test.

A word on incentives

Participants are usually given a cash incentive for taking part in testing. The exact amount will depend on many factors, including the duration of the testing and the availability and expertise of the participants. Offer too little and it can be hard to recruit, or participants may not show up. At the other end of the spectrum, there have been some reports that offering too high an incentive can lead to participants trying too hard to earn it.


User testing is an invaluable tool for gaining insight into user behaviour. By being aware of the biases that can affect the validity of your results, you can look for ways to minimise their effects, and take them into consideration when analysing your data.
