Darkness

It’s pitch black outside. The air is cold and wet, yet carries a lingering sweet smell. Sporadic beams of light dance in the night, casting an eerie glow on the landscape. Giggles, whispers, and the occasional scream ring out, a reminder that others, too, are walking the night.

Through the eyes of one of these figures, a house comes into view. The figure changes course and heads in the direction of the home. The home is unlit and looks unoccupied. In one hand, the figure holds a large sack; the other wields a blunt sword.

As the figure makes his way up the porch to the door, the hand with the sword points forward. The hand is not human. It’s about twice as big as a man’s hand. Black, coarse fur covers its skin while jagged, sharp claws extend from the aged fingers.

The creature now stands directly in front of the door. Its purpose is clear. It wants only one thing, and that thing remains inside the house. With a burst of energy, the hand with the sword rises…lunges, and slams into the house.

A chime echoes. The front door opens. The man who opens it smiles happily while looking down, hardly frightened by the four-foot-tall, hairy monster screaming “Trick or Treat!”

That sounds like a great idea

Take a moment to catch your breath and slow your heartbeat. That was an intense introduction! Despite my enjoyment of Halloween, it’s not my focus for the remainder of this chapter. The doorbell that rings so frequently on that annual night, however, is.

The doorbell is an outstanding example of an effective interactive control. If a ten-year-old dressed as a monster with oversized latex hands can use it effortlessly in the dark, it must be good!

Let’s examine why the doorbell is an effective control using Donald Norman’s Interface Model.

Make it Visible

A control needs to be visible when an action or state change requires its presence. The doorbell is an example of an “always present” control. There aren’t many other controls on the door (doorknob, lock, knocker), so the doorbell is easy to locate. In some cases, the doorbell is illuminated, making it quite visible when lighting conditions are poor.

Cultural norms and prior experience have shaped our mental models, so we expect the doorbell to be placed in a specific location: at eye and reach level, to the side of the door. Our scary monster was thus able to quickly locate the doorbell from prior knowledge, as well as see its illuminated glow.

Having an object visible doesn’t have to mean it can be seen; it can also mean the object can be detected. Consider someone who is visually impaired. They still have the prior knowledge that the doorbell is located to the side of the door, at eye and reach level. Its shape is uniquely tactile, making it easy to detect when fingers or hands make contact.

On mobile devices, this is a very important principle. We often don’t have the opportunity to look at the display for a button on the screen, but we can feel the different hardware keys. Consider people playing video games: their attention is on the TV, not the device controller, yet they are easily able to push the correct buttons during game play.

Mapping

Mapping describes the relationship between two objects and how well we understand their connection. This relates to our mental model of the control and its expected outcome. When we see the doorbell, we have learned that when pushed, it will sound a chime that can be heard inside the house or building, notifying the person inside that someone is waiting at the door.

Our ten-year-old scary monster mapped the push and chime of the doorbell to two outcomes: the people inside will be notified that a trick-or-treater is there, and he will receive a handful of free candy.

So mapping relies heavily on context. If the boy pushes the doorbell on a night other than Halloween, the outcome will differ. The chime still indicates that a person is waiting outside, but the likelihood that someone will answer, much less hand out candy, is far lower.

On a mobile device, controls that follow our cultural standards are going to be well understood. For example, let’s relate volume to a control. In the context of a phone call, pushing the volume control is expected to either raise or lower the volume level. However, if the context changes, e.g., on the idle screen, that button may provide additional functionality, bringing up a modal pop-up to control screen brightness and volume levels. Just like call volume, pushing up will still increase and pushing down will still decrease those levels.
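To make the idea concrete, here is a minimal Kotlin sketch for Android. It is only an illustration: the `inCall` flag and the two helper functions are hypothetical stand-ins for real telephony and UI code. The point is that the up/down mapping stays constant while the key’s effect changes with context.

```kotlin
import android.app.Activity
import android.view.KeyEvent

class VolumeMappingActivity : Activity() {

    // Hypothetical flag; a real app would query telephony or audio state.
    private var inCall = false

    override fun onKeyDown(keyCode: Int, event: KeyEvent?): Boolean {
        return when (keyCode) {
            KeyEvent.KEYCODE_VOLUME_UP, KeyEvent.KEYCODE_VOLUME_DOWN -> {
                val up = keyCode == KeyEvent.KEYCODE_VOLUME_UP
                if (inCall) {
                    adjustCallVolume(up)   // in a call: the key maps directly to call volume
                } else {
                    showLevelsPopup(up)    // idle: same up/down mapping, shown in a pop-up
                }
                true
            }
            else -> super.onKeyDown(keyCode, event)
        }
    }

    // Hypothetical helpers standing in for real volume/brightness UI.
    private fun adjustCallVolume(up: Boolean) { /* raise or lower call volume */ }
    private fun showLevelsPopup(up: Boolean) { /* show modal pop-up, adjust levels */ }
}
```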

But you must adhere to common mapping principles related to your users’ understanding of control-display compatibility. On the iPhone, in order to take a screenshot, you must hold the power button at the same time as the home button. This type of interaction is very confusing, impossible to discover unless you read the manual (or otherwise look it up, or are told), and hard to remember.

On-screen and kinesthetic gestures can be problematic, too, if the action isn’t related to the reaction the user expects. Use natural body movements that mimic the way the device should act; do not use arbitrary or uncommon gestures.
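As one hedged illustration of a natural mapping, the Kotlin sketch below uses Android’s standard `GestureDetector.SimpleOnGestureListener` to make scrolled content follow the finger directly, so the gesture mimics pushing a physical page; the wiring to a real view is assumed, not shown in full.

```kotlin
import android.view.GestureDetector
import android.view.MotionEvent
import android.widget.ScrollView

// Content follows the finger: a drag moves the page the way a physical
// sheet would move, and a fling carries momentum in the same direction.
class NaturalScrollListener(private val scrollView: ScrollView) :
    GestureDetector.SimpleOnGestureListener() {

    override fun onScroll(
        e1: MotionEvent?, e2: MotionEvent,
        distanceX: Float, distanceY: Float
    ): Boolean {
        scrollView.scrollBy(0, distanceY.toInt()) // drag up, content moves up
        return true
    }

    override fun onFling(
        e1: MotionEvent?, e2: MotionEvent,
        velocityX: Float, velocityY: Float
    ): Boolean {
        scrollView.fling(-velocityY.toInt())      // momentum continues the motion
        return true
    }
}

// Assumed wiring: construct GestureDetector(context, NaturalScrollListener(view))
// and forward the view's touch events to detector.onTouchEvent(event).
```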

Affordances

Affordances describe how an object’s function can be understood from its properties. The doorbell extends outward, can be round or rectangular, and has a touch-target size large enough for a finger to push. Its characteristics afford contact and pushing.

On mobile devices, physical keys that extend outward or are recessed inward afford pushing, rotating, or sliding. Keys that are grouped in close proximity afford a common relationship, often polar functionality. For example, four-way keys afford directional scrolling while the center key affords selection.
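In code, that grouped affordance reduces to a simple mapping. This Kotlin fragment is a sketch only; `moveFocus` and `selectFocused` are hypothetical stand-ins for real focus and selection logic.

```kotlin
import android.view.KeyEvent

// Hypothetical helpers standing in for real focus/selection behavior.
fun moveFocus(dx: Int, dy: Int) { /* shift the focus indicator by (dx, dy) */ }
fun selectFocused() { /* activate the currently focused item */ }

// The five-way cluster: four directional keys afford movement,
// the center key affords selection.
fun handleNavKey(keyCode: Int): Boolean = when (keyCode) {
    KeyEvent.KEYCODE_DPAD_UP     -> { moveFocus(0, -1); true }
    KeyEvent.KEYCODE_DPAD_DOWN   -> { moveFocus(0, 1); true }
    KeyEvent.KEYCODE_DPAD_LEFT   -> { moveFocus(-1, 0); true }
    KeyEvent.KEYCODE_DPAD_RIGHT  -> { moveFocus(1, 0); true }
    KeyEvent.KEYCODE_DPAD_CENTER -> { selectFocused(); true }
    else -> false
}
```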

Provide Constraints

Restrictions on behavior can be natural or cultural. They can be positive or negative, and they can prevent undesired results such as loss of data or unnecessary state changes.

Our doorbell from above could only be pushed, not pulled. The distance the doorbell could be pushed down was restricted by the mechanics of the device. It could not be toggled directionally, only pressed along the “Z” axis.

Despite the small surface and touch size, the button could still be pressed down by a finger, an entire hand, or any object larger than the button’s surface. In this context, that lack of constraint was beneficial: a user doesn’t have to be perfectly accurate with a fingertip to operate the device. Our scary monster, with huge latex hands holding a sword, was still able to push the button down, allowing for a quick interaction.

On mobile devices, however, it’s often necessary to define constraints on general interactive controls.

Refer to the General Touch Interaction Guidelines found earlier in the Input & Output Section.
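As one small, hedged example of a software constraint (not drawn from those guidelines), the Kotlin sketch below disables a control while a destructive action is in flight, preventing the repeated taps that could cause data loss or unnecessary state changes; the `deleteAll` callback is a hypothetical stand-in.

```kotlin
import android.widget.Button

// Constrain the control while work is in flight so repeated taps
// cannot trigger duplicate, destructive operations.
fun bindDeleteButton(button: Button, deleteAll: (onDone: () -> Unit) -> Unit) {
    button.setOnClickListener {
        button.isEnabled = false      // constraint applied: no further input accepted
        deleteAll {
            button.isEnabled = true   // constraint lifted once it is safe again
        }
    }
}
```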

Use Feedback

Feedback describes the immediately perceived result of an interaction. It confirms that the action took place and presents us with more information. Without feedback, the user may believe the action never took place, leading to frustration and repeated input attempts. The pushed doorbell provides immediate audio feedback that can be heard outside as well as inside the house.

On mobile devices, when we click, select an object, or move the device, we expect an immediate response where applicable. With general interactive controls, feedback is experienced in multiple ways: a single object or an entire image may change shape, size, orientation, color, or position. Devices that use accelerometers provide immediate feedback showing page flips, rotations, expansions, and slides.
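A minimal Kotlin sketch of the principle: confirm every press at once, with both a visible state change and a short haptic tick, so the user never wonders whether the tap registered. The animation values here are illustrative, not prescriptive.

```kotlin
import android.view.HapticFeedbackConstants
import android.view.View

// Pair a tactile tick with a brief visual dip in opacity on every press.
fun confirmPress(view: View) {
    view.setOnClickListener { v ->
        v.performHapticFeedback(HapticFeedbackConstants.VIRTUAL_KEY) // tactile confirmation
        v.animate().alpha(0.5f).setDuration(50L).withEndAction {
            v.animate().alpha(1f).setDuration(100L).start()          // restore resting state
        }.start()
    }
}
```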

Gestural Interactive Controls

A growing number of devices today use gestural interactive controls as a primary means of interaction. We can expect smartphones, tablets, and game systems to have some level of these controls.

Gestural interfaces have a unique set of guidelines that other interactive controls need not follow.

Dan Saffer, author of Designing Gestural Interfaces (Saffer, 2009), points out five reasons to use interactive gestures.

  1. More natural interactions. People naturally interact with and manipulate physical objects.

  2. Less cumbersome or visible hardware. Mobile devices are everywhere: in our pockets and hands, on tables, storefront walls, and kiosks. Gestural controls don’t rely on large physical components such as keyboards and mice to manipulate the device.

  3. More flexibility. Sensors that can detect our body movements remove the hand-eye coordination normally required on small mobile screens.

  4. More nuance. Many human gestures are subtle, emotional forms of communication, like winking, smiling, or rolling our eyes. These nuanced gestures have yet to be fully explored in today’s devices, leaving an area of opportunity in user experience.

  5. More fun. Today's gesture-based games encourage full-body movement. Not only is this fun, it creates a fully engaging social context.

Patterns for General Interactive Controls

The patterns within this chapter describe how General Interactive Controls can be used to initiate various forms of interaction on mobile devices. The following patterns will be discussed.

Directional Entry – To select and otherwise interact with items on the screen, a regular, predictable method of input must be made available. All mobile interactive devices use lists and other paradigms that require indicating position within the viewport.

Press-and-hold – This mode switch selection function can be used to initiate an alternative interaction.

Focus & Cursors – The position of input behaviors must be clearly communicated to the user. Within the screen, inputs may often occur at any number of locations, and especially for text entry the current insertion point must be clearly communicated at all times.

Other Hardware Keys – Functions on the device, and in the interface, are controlled by a series of keys arrayed around the periphery of the device. Users must be able to understand, learn and control their behavior.

Accesskeys – Provide one-click access to functions and features of the handset, application or site for any device with a hardware keyboard or keypad.

Dialer – Numeric entry for the dialer application or mode to access the voice network varies from other entry methods, and has developed common methods of operation that users are accustomed to.

On-screen Gestures – Instead of physical buttons and other input devices mapped to interactions, these allow the user to directly interact with on-screen objects and controls.

Kinesthetic Gestures – Instead of physical buttons and other input devices mapped to interactions, these allow the user to directly interact with on-screen objects and controls using body movement.

Remote Gestures – A handheld remote device, or the user alone, is the best, only, or most immediate method to communicate with another, nearby device with a display.