Tuesday, May 12, 2009

Making forms work

All forms are designed for a purpose, and that purpose is generally to collect data that, hopefully, will provide useful information for the organisation. If that data is incorrect or incomplete in any way, then the form hasn't worked. Likewise, if you are providing data to someone else, such as on a customer's bank statement or a report about an incident, then that data needs to be understood by the recipient. If the recipient of the form doesn't understand the content or misinterprets it, then the form hasn't worked. Of course, that may be a problem with the recipient's knowledge, but nevertheless the form still hasn't worked as intended.
It is possible to guarantee successful forms. This doesn’t mean that all forms will be 100% accurate, but the proportion of forms containing one or more errors can be brought down to as low as 5%.

To achieve success, at least two things are necessary and a third is recommended.

Best practice: forms should be designed according to ‘best practice’.
Go back thirty years and our knowledge of what made a good form was severely limited. Today, however, there is a large body of research and we know how to design good public-use forms.

This is a big subject and can't be covered in a short post here, but you will find a lot of information on it in my book Forms For People and in Caroline Jarrett’s book Forms That Work. There are also a number of free papers on form design on our company’s web site and on Rob's Perspective.
Usability testing: the next step is usability testing, which I’ve covered in more detail below as well as in Forms For People.
Traditional methods of ‘testing’ include opinion surveys, pilot studies, readability scores and focus groups. But for the most part they don’t TEST forms; they only provide opinions or inaccurate recollections. They often concentrate on treating people as machines and ignore the mind.

One of the most useless techniques is readability scores, such as the Flesch Reading Ease Scale. We have an excellent paper on this, downloadable from our company’s web site.
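
To see why a readability score says nothing about whether a form works, here is the standard Flesch Reading Ease formula as a short Python sketch (the syllable counter is a rough heuristic of my own, purely for illustration). The score depends only on sentence length and syllable counts, so a short, ‘simple’ question that people consistently misread can still score very well.

    import re

    def count_syllables(word):
        # Rough heuristic: count groups of consecutive vowels (illustration only).
        return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

    def flesch_reading_ease(text):
        # Standard Flesch formula: it looks only at sentence length and syllable
        # counts, never at whether readers actually understand the question.
        sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
        words = re.findall(r"[A-Za-z']+", text)
        syllables = sum(count_syllables(w) for w in words)
        return (206.835
                - 1.015 * (len(words) / len(sentences))
                - 84.6 * (syllables / len(words)))

    print(round(flesch_reading_ease("Do you live alone? Give your last address."), 1))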

Another useless method is focus groups. Many people place a lot of ‘faith’ in focus groups, but they provide little useful information for forms usability. Again, I have a lot more to say about this in Forms For People.

Modern research methods show the form in action and show us WHY people make mistakes. Most of the methods mentioned above don’t TEST forms to find out whether or not they are actually working. They concentrate on treating people as machines but ignore their minds and the complexities of their social interactions.

To produce quality forms we need a different approach—one that lets us see the forms in action and work out in advance whether the form is going to work. We need a method that gives us empirical evidence about form fillers’ behaviour—why they make mistakes, why they don’t carry out what was expected of them and the problems they face.

For our purposes, behaviour includes:
  • The way in which the person carries out the task
  • Physical things such as turning pages or moving through the document
  • Facial expression and other mannerisms that might indicate problems, frustration, lack of understanding and confusion
  • What the person says
  • Most important of all: finding out as much as possible about how the person understands the document. What is the cause of any misunderstanding? Do they give answers to form questions that the processors correctly understand? Do they carry out instructions or do what is expected with the information given?
Observational studies are a method whereby you can find out why people are going wrong—where you can highlight specific user problems and fine-tune the design to get rid of them.

Using structured observational studies we watch users filling in or using the forms and, with appropriate questions, we can learn why they make mistakes. We learn about their real requirements, what they really need and want, and we collect information about their behaviour when using the form. The aim is to study the document in action in an environment as close as possible to the real world. We don’t just want to know what people think of the form or how they think we should ask the questions. We want to know about their behaviour—what really happens when they fill out the form.
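
As an illustration only (the record structure and example data below are my own, not taken from the post), a structured record for each test session might capture the behaviour categories listed above, so that observations can be compared across participants and across rounds of testing:

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Observation:
        """One noted event while the participant works through the form (illustrative)."""
        question_id: str           # which question or section of the form
        behaviour: str             # e.g. "skipped field", "re-read instruction", "sighed"
        quote: str = ""            # what the person said, verbatim where possible
        misunderstanding: str = "" # how their reading differed from what was intended

    @dataclass
    class Session:
        """A single participant's session in one round of testing."""
        participant_id: str
        round_number: int
        task_completed: bool
        observations: List[Observation] = field(default_factory=list)

    # Hypothetical example: one problem seen during round 1.
    session = Session("P03", round_number=1, task_completed=False)
    session.observations.append(
        Observation("Q7_income", "left the field blank and flipped back to page 1",
                    quote="Does this mean before or after tax?",
                    misunderstanding="read 'income' as take-home pay"))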

One of the most valuable aspects of observational studies is that you can actually SEE the form improving through the testing stages. They also provide a great amount of fine detail and yet they are relatively inexpensive.

While each round of testing uses only a few people—perhaps 6 to 10—over the course of the study these can add up to a large group.
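
As an aside, the usual justification for such small rounds (this is the standard usability-engineering argument, not a claim made in the post) is that if each participant independently reveals a given problem with probability p, then n participants reveal it at least once with probability 1 - (1 - p)^n, so a handful of people per round uncovers most of the serious problems and later rounds catch the rest. A minimal sketch:

    # Standard problem-discovery model (Nielsen & Landauer), not from the post:
    # probability that at least one of n participants reveals a problem that
    # each participant has probability p of revealing.
    def proportion_found(p, n):
        return 1 - (1 - p) ** n

    # With p = 0.31 (an often-quoted average, assumed here for illustration):
    for n in (5, 10, 20):
        print(n, f"{proportion_found(0.31, n):.0%}")
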
Error analysis: In most cases we also recommend error analysis of existing forms to determine where the problems are.
Error analysis involves examining a hundred or more completed forms and determining where errors occur. It won't necessarily show WHY they are occurring, and it won't show you all errors, but at least you will find out how many errors are detectable and where the form needs to be improved in the first instance. It also provides you with a useful benchmark for further evaluation after redesign.

In most cases errors will fall into the following categories.
  • Missing data
  • Data entered that wasn't required
  • Mistakes—data entered that is incorrect
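
As a rough sketch of what that tallying can look like (the question names and data below are hypothetical, not from the post), counting errors per question across a batch of completed forms gives both the benchmark figure and a ranking of where to start the redesign:

    from collections import Counter

    # Each completed form is represented as {question_id: error_type}, where
    # error_type is "missing", "not_required", "mistake" or None (no error).
    # The data here is purely hypothetical.
    completed_forms = [
        {"Q1_name": None, "Q7_income": "missing", "Q9_signature": "mistake"},
        {"Q1_name": None, "Q7_income": "missing", "Q9_signature": None},
        {"Q1_name": "not_required", "Q7_income": None, "Q9_signature": None},
    ]

    errors_per_question = Counter()
    forms_with_errors = 0
    for form in completed_forms:
        errors = [q for q, e in form.items() if e]
        if errors:
            forms_with_errors += 1
        errors_per_question.update(errors)

    # Benchmark: proportion of forms containing one or more detectable errors.
    print(f"{forms_with_errors / len(completed_forms):.0%} of forms had errors")
    # Ranking: which questions cause the most trouble.
    print(errors_per_question.most_common())
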
Conclusion

It is possible to have good forms and to collect accurate information from form fillers. This in turn leads to much more accurate information for the organisation.
------------------------------------------------

1 comment:

  1. Love the idea that we too often treat people as machines not humans with a mind (plus a fallible brain working to do a great deal with limited resources).
