Science fiction author Isaac Asimov once wrote the “Three Laws of Robotics” into his Robot series of stories. These laws were permanently hard-coded into every robot as a final failsafe to prevent catastrophe and protect humanity. I got to wondering, what are our final failsafes? What would our three laws be as interaction designers?
A user’s perception of an interface is inextricably connected to its form, content, and behavior. Just as industrial and graphic designers focus on form, interaction designers hold behavior as the foremost element to consider. When designing to influence a user’s experience, our three laws must be primarily concerned with how an interface behaves and what effect it has on user behavior. They must be basic, unalterable fundamentals upon which other interaction design principles can be built. Lucky for me, the three laws I would choose have already been alluded to by the master himself, Jef Raskin, the brain behind the original Macintosh project. He drops all three bombshells on a single page of his book The Humane Interface, an interaction design text of near-biblical status. Look closely and you will see that most popular, modern interfaces, in products from Microsoft and Apple alike, use concepts and techniques put forth within the pages of Raskin’s landmark book.
Playing off of Asimov’s three laws, Raskin says that “[t]he first law of interface design should be: ***A computer shall not harm your work or, through inaction, allow your work to come to harm.***” Let us all remember that a computer is a tool you use to accomplish something; simply using a piece of software is rarely, if ever, the end goal in and of itself. Thus we can all agree that the single most aggravating experience you can have with a computer is losing work. The only thing worse than having to redo work you have already done is losing data that you cannot reproduce exactly, like creative work.
Applications should maintain the integrity of your data as you entered it and do as much as possible to prevent users from losing work. Maybe you think this is only an engineer’s concern. Developers must make sure their software has safeguards and redundancy to prevent data loss (and has as few bugs as possible), right? In fact, data protection and effort preservation are also interface design tasks: a designer must anticipate, or discover through user research, how a user is likely to lose work or duplicate her efforts. This means including robust undo functionality and shielding destructive actions to prevent inadvertent data loss. A great example of this in action is GitHub’s repository deletion dialog. Unlike a typical confirmation dialog, it forces you to type the name of the repository to continue, a clever technique that pulls the user’s locus of attention to the repository name during the deletion process. The safeguard protects the user against her own habituated workflows.
This first law also applies to the preservation of efforts related to the content the user is working with, as well as the content itself. For example, it can take significant effort to make a selection like the subset of items you would like to perform an action on (e.g. files). Thus, consider preserving selections across work sessions, and including them in the list of actions that can be undone. Similarly, if an interface allows a user to customize or rearrange elements, that arrangement or customization should be preserved.
Observance of this first “law” is why features like Apple’s Time Machine and autosave, and Dropbox’s revision history are so great. They are acknowledgments of the fact that humans make mistakes, and that even though a user may have initiated the destruction of work or data, it may not have been their intent.
Raskin goes on to note that a good second law might be “***A computer shall not waste your time or require you to do more work than is strictly necessary.***” Too often, users are burdened with tasks because it was simpler to let a person perform the action manually than to code a system to do it automatically. In these cases, when the technology allows, the computer should do the work. An example is forcing a user to select a credit card type, when that information can be inferred from the number.
Also count the time and mental effort required to learn a new interface or a system’s data model toward the balance of total work required. It may be less work for a user to perform a single action inefficiently than to learn a new method first and only then perform it faster. A user likely interacts with only a small set of interfaces regularly; for the rest, it may be worth trading the speed of performing an action for an interface that is easier to intuit.
Spotting a situation where you are forcing the user to adapt their mental model can be tricky. It usually stems from design decisions that impose a structure on a user’s content (often a technical requirement) instead of letting them decide how to organize their information, or using a structure with which they are already familiar. Great interfaces bring information in the system to the user in the way they are most likely to want or understand it. Take, for instance, the way most banks expect you to save money: you set money aside in one or more separate savings accounts, or you keep a single account and track what not to spend yourself. The banking service Simple chooses a better alternative. Instead of forcing a user to think of saving money within the structure of how it is implemented (i.e. in bank accounts), Simple allows you to set money aside for any number of “goals,” and that amount, along with pending transactions, is subtracted from your “Safe-to-Spend” balance.
What I consider to be Raskin’s third law is really what the entire rest of his book is about. He says that an interface should be humane; it should be “responsive to human needs and considerate of human frailties.” This is really the core of the entire discipline of user-centered design, and from where most other interaction design principles are derived. Good interaction design is always about respecting the limitations of the human mind and body. It entails being sensitive to both our visceral, physiological responses, and our cultural values.
One example of making an interface humane is designing around the fact that people have a single locus of attention. Take, for instance, the light on the caps lock key of your keyboard. On its own, this light is not a good safeguard against slipping into caps lock mode by mistake, because the user’s locus of attention is generally not on the key when they press it. Password inputs on the Mac handle this nicely by providing a visual indicator that caps lock is active within the input field itself, where the user is actually looking.
An example of being “responsive to human needs” is staying aware of what a user cares about when performing an action or going through a workflow (hint: it is what they are trying to accomplish and not your app). See how Amazon automatically shows you whether the lens you are viewing will work with your recently purchased camera.
This last “law” is where the meat of the interaction design discipline is. There is a great deal more to know about designing interfaces that are humane in today’s world, from Gestalt principles of perception and the graphic design principles they inform, to the relevant bits of cultural psychology. Our work is rarely final: as time passes, the technology landscape and our cultural context slowly change, so we truck along, continually evolving our designs and design processes.
These three laws, however, are a fundamental set of guidelines that I find myself returning to repeatedly as touchstones of a successful interface. They are useful to keep in mind as you make decisions about how an interface should look and act, regardless of the aesthetic style you end up with.
1. A computer shall not harm your work or, through inaction, allow your work to come to harm.
2. A computer shall not waste your time or require you to do more work than is strictly necessary.
3. An interface should be humane; it should be responsive to human needs and considerate of human frailties.