Components are sections or subsections of a designed space. They take up a significant portion of the screen, may be as large as the viewport (or, depending on your point of view, larger), and may appear modally in front of other displayed information.

Components are concerned with displaying a range of information types: images, ordered data, expandable lists, and notifications. They also allow the user to interact with the system in some significant, primary manner. Combining them with the small, reusable, interactive or display widgets (see the Widgets section) gives a practically unlimited number of design options.

The components that will be discussed in the following chapters are:

Displaying Information

How information is displayed needs to reflect the user’s mental model and mimic the way people organize and process knowledge. Mobile user interfaces that ignore these principles will most likely leave the user lost, confused, frustrated, and unwilling to continue. To prevent this, this chapter presents research-based frameworks, tactical examples, and descriptive mobile patterns. It will discuss the following topics:

To implement the correct component in a mobile user interface, it’s essential to be familiar with a model of information processing. The human mind is like a leaky bucket: it holds plenty of information, but easily lets information slip away and spill out. If we understand how the mind works, and its limits, we can create visual information displays that limit information loss and mental load during decision making. Our perceptual model is complex, and there are many theories explaining its structure; a complete description of human cognitive structure is beyond the scope of this book. This section will, however, explain the major processes in general terms:

Sensation: Getting Information Into Our Heads

Sensation refers to the capture and transformation of the information required for the process of perception to begin (Bailey, 1996). Each of our sensors (eyes, ears, nose, skin, mouth) collects information, or stimuli, in its own way, but all of them transform stimulus energy into a form the brain can process.

All of these senses respond selectively to certain types of stimuli. There are four types of stimuli our bodies can sense: electromagnetic, mechanical, thermal, and chemical.

Each type of stimulus is collected through different senses. Electromagnetic stimuli are collected through vision. Mechanical stimuli are collected through hearing, touch, pain, and the vestibular and kinesthetic senses. Thermal stimuli are sensed as cold and warmth, and chemical stimuli through taste and smell (Ellingstad, 1972).

Our Sensory Limits

Our sensory processing has limits. For example, we can only see wavelengths between 400 and 700 nanometers. Our thermal sensors respond only to infrared wavelengths. Our skin temperature is about 91.4 degrees F, and stimuli at this temperature do not cause a noticeable thermal sensation. However, below 60 degrees F the skin will transmit a cold feeling, and above 105 degrees F it will transmit a hot feeling. The rest of this section details patterns involving information display. Let’s first examine how we collect visual stimuli for processing.

Visualization: Collecting Visual Stimuli

How the Eye Works

The eye is the organ responsible for vision. Many people use the analogy that the eye works like a camera: both have a lens, an aperture (the pupil), and film (the retina). However, the similarity stops there, because the image projected on the back of the retina does not resemble our perception of it. Before anything is perceived, the eye first collects, filters, and focuses light.

Our eyes can experience only a narrow band of radiation in the electromagnetic spectrum, from approximately 400 nanometers (where we perceive violet) to about 700 nanometers (where we perceive red). The focused beam of light is projected onto the back of the retina, where it contacts photoreceptors known as rods and cones. These receptors are light sensitive and vary in number: there are about 100 million rods and only 6 million cones. The cones are used for seeing in bright light and are color sensitive; the rods are sensitive to dim lighting and are not color sensitive. These receptors convert light into electrochemical signals, which travel along the optic nerve to the brain for processing. At any moment, the eye’s sensitivity to a stimulus depends on many factors, including the size of the stimulus, its brightness and contrast, and the part of the retina that is stimulated.

Visual Acuity and the Visual Field

Visual acuity is the ability to see details and detect differences between stimuli and spaces. At the center of the retina lies the fovea, which is tightly packed with cones (approximately 200,000) and is where our vision is most focused. The fovea covers the central 1 to 2 degrees of the visual field, but only the central half degree gives us our sharpest vision. The farther objects extend beyond the foveal range, the more out of focus they become, and the less able we are to pick up certain colors. We can still detect these images peripherally, but with less clarity. Even color perception is affected by our visual field: blue can be detected about 60 degrees from our fixed focal point, while yellow, red, and green are detected within progressively narrower visual fields.

Visual acuity depends on many factors, including the size of the stimulus, the brightness and contrast of the stimulus, the region of the retina stimulated, and the physiological and psychological condition of the individual (Bailey, 1996). Among stimulus factors, luminance -- the amount of light reaching the eye from the stimulus -- and glare have particularly strong effects on acuity.

Size of the Stimulus: Visual Angle

The size of the stimulus is measured by its visual angle: the angle formed at the eye by the viewed object. It can be calculated with the following formula: visual angle (in minutes of arc) = (3438 × L)/D, where L is the length of the object perpendicular to the line of sight and D is the distance from the front of the eye to the object. Visual angle is typically measured in degrees of arc, where one degree = 60' (minutes of arc) and one minute of arc = 60" (seconds of arc). With an understanding of visual angle, we can determine the appropriate size of visual elements, including character size, viewed at specific distances. According to the Human Factors Society (1988), the following visual angles are recommended for reading tasks:

When reading speed is important, the visual angle should be no less than 16 minutes of arc and no greater than 24 minutes of arc. When reading speed is not important, the visual angle can be as small as 10 minutes of arc. Characters should never subtend less than 10 minutes of arc or more than 45 minutes of arc.

So, let’s assume I’m designing text that is to be read quickly on a mobile device at a viewing distance of 30 centimeters (11.8 inches). Rearranging the formula gives: length = (16 minutes of arc × 30 cm)/3438 ≈ 0.14 cm. The smallest acceptable character height would then be about 0.14 cm, or roughly 4 typographic points; because a font’s nominal point size measures the full em box rather than the rendered character height, the font size you specify will need to be somewhat larger.
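To make the arithmetic concrete, here is a minimal Python sketch of this sizing rule. It assumes the 3438 small-angle approximation from the formula above; the function name is illustrative, not from any standard library.

```python
# Minimal sketch of the visual-angle sizing rule described above.
# Assumes the 3438 small-angle approximation for minutes of arc.

def min_character_height(distance_cm: float, arc_minutes: float) -> float:
    """Smallest acceptable character height (in cm) for a given
    viewing distance and visual angle in minutes of arc."""
    return arc_minutes * distance_cm / 3438

# Reading quickly on a phone held 30 cm away (16 minutes of arc):
print(round(min_character_height(30, 16), 2))  # 0.14 cm

# Absolute floor when reading speed does not matter (10 minutes of arc):
print(round(min_character_height(30, 10), 2))  # 0.09 cm
```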

Other factors also need to be addressed when designing character size for mobile displays:

Brightness, Luminance, and Contrast

Brightness refers to our subjective perception of how bright an object is.

Luminance is the measure of light an object emits or reflects from its surface. It is measured in units such as candela per square meter (cd/m²), footlambert (ftL), millilambert (mL), and nit (nt). Riggs (1971) notes that in starlight (a luminance of 0.0003 cd/m²) we can see the white pages of a book but not the writing on them. The recommended luminance standard for measuring acuity is 85 cd/m² (Olzak and Thomas, 1996). For text contrast, the International Standards Organization (ISO 9241, part 3) recommends a minimum luminance ratio of 3:1 between text and background, though a ratio of 10:1 is preferred (Ware, 2000).

Remember that luminance and brightness are not the same. For example, if you lay a piece of black paper out in full sunlight on a bright day, you may measure a value of 1,000 cd/m². If you view a white piece of paper under office lighting, you will probably measure a value of only 50 cd/m². Thus, a black object outside on a bright day may reflect 20 times more light than white paper in the office (Ware, 2000).
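As a concrete illustration of the ISO 9241 recommendation above, here is a minimal Python sketch that compares two luminance readings. The function name and the sample values are assumptions made for this example, not part of any standard.

```python
# Minimal sketch of a text-contrast check against the ISO 9241
# ratios mentioned above. Sample values are illustrative.

def luminance_ratio(a_cd_m2: float, b_cd_m2: float) -> float:
    """Ratio of the brighter surface's luminance to the darker's."""
    hi, lo = max(a_cd_m2, b_cd_m2), min(a_cd_m2, b_cd_m2)
    return hi / lo

ratio = luminance_ratio(10.0, 85.0)   # dark text on an 85 cd/m2 background
print(f"{ratio:.1f}:1")                      # 8.5:1
print("meets 3:1 minimum:", ratio >= 3)      # True
print("meets 10:1 preferred:", ratio >= 10)  # False
```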

Visual Perception

After our senses collect visual information, our brain begins to perceive and store it. Perception involves taking the information delivered by our senses and integrating it with prior knowledge stored in memory. This process allows us to relate new experiences to old ones. During perception, our minds look for familiar patterns; recognizing patterns is essential to object perception. Once we have identified an object, it is much easier to identify the same object on a subsequent appearance anywhere in the visual field (Biederman and Cooper, 1992).

Gestalt Design Principles

The Gestalt School of Psychology was founded in 1912 to study how humans perceive form. The Gestalt principles its members developed can help designers create visual displays based on the way our minds perceive objects. These principles, as they apply to mobile interaction design, are:

Articulating Graphics

Now that we understand that visual object perception is based on identifying patterns, we must design visual displays that mimic the way the mind perceives information. Stephen Kosslyn states, “We cannot exploit multimedia technology to manage information overload unless we know how to use it properly. Visual displays must be articulate graphics to succeed. Like effective speeches, they must transmit clear, compelling, and memorable messages, but in the infinitely rich language of our visual sense” (Kosslyn, 1990).

Display Elements are Organized Automatically

This follows the Gestalt principles: objects that are close together, collinear, or similar in appearance tend to be perceived as groups. So in information displays such as maps, indicators, landmarks, and other objects that are clustered together will appear to be grouped and to share a relationship. This can cause confusion when, for example, the viewer needs to locate their exact position.

Perceptual Organization is Influenced by Knowledge

When we look at objects in a pattern for the first time, the organization may not be fully understood or remembered. But if we see the pattern again over time, we tend to chunk it and store it in memory. Think of a chessboard with its pieces in play. A viewer who has never seen the game before will perceive the board as a collection of many separate objects. An experienced chess player, however, will immediately identify the pieces and the relationships they have with each other and with the board. So when designing visual displays, it’s essential to know the mental model of your users so they may quickly identify and relate to the information displayed.

Images are Transformed Incrementally

When we see an object move and transform its shape in incremental steps, we have an easier time understanding that the two states belong to one related, identical object. However, if we see only the object’s beginning state and end state, our minds are forced to expend much more processing effort to understand the transformation. This takes more time and increases errors and confusion. So when designing carousel lists, make sure the viewer can see the incremental movement, as in the sketch below.
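As a rough sketch of what “incremental” means in practice, the following Python fragment computes the intermediate offsets for one carousel slide so the viewer sees continuous motion rather than a jump. The frame count and pixel values are illustrative assumptions.

```python
# Minimal sketch: break one carousel transition into small steps so
# the motion reads as a single moving object, not two unrelated states.

def slide_offsets(start: float, end: float, frames: int = 12) -> list[float]:
    """Evenly spaced intermediate offsets from start to end."""
    return [start + (end - start) * i / frames for i in range(frames + 1)]

# Slide a card from x=0 to x=320 pixels over 12 frames:
for x in slide_offsets(0, 320):
    print(f"{x:.0f}", end=" ")  # 0 27 53 ... 293 320
```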

Different Visual Dimensions are Processed by Separate Channels

Object attributes such as color, size, shape, and position are processed by separate channels in the mind. The brain processes individual visual dimensions in parallel, but must work through combinations of dimensions in sequence. For example, in a bulleted list where every bullet is a black circle, we can immediately identify all of them. But if you add a bullet that is black and the same size, yet diamond shaped, our minds have to work harder to perceive that it is different.

Color is Not Perceived as a Continuum

Designers often use a color scale to represent a range such as temperature: red is hot, blue is cold, and the temperatures in between are mapped to the visible spectrum. The problem is that our brains do not perceive color as a linear dimension this way; we judge color largely by the intensity and amount of light. A better way of showing the temperature difference is to use varying intensity and saturation.

If a perceptually orderable sequence is required, a black to white, red to green, yellow to blue, or saturation (dull to vivid) sequence can be used (Ware, 2000).

When a high level of detail is to be displayed, the color sequence should be based mostly on luminance, to take advantage of that channel’s capacity to convey high spatial frequencies. When there is little detail, a chromatic sequence or a saturation sequence can be used (Rogowitz and Treinish, 1996).

In many cases, the best color sequence should vary through a range of colors, but with each successive hue chosen to have higher luminance than the previous one (Ware, 1988).
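A minimal Python sketch of such a sequence follows, using only the standard library. It treats HLS lightness as a rough stand-in for luminance, which is an approximation (true luminance depends on the display and on each hue’s contribution); the hue endpoints are illustrative choices.

```python
# Minimal sketch of a perceptually orderable ramp: hue varies, but each
# successive step has strictly higher lightness than the last.
import colorsys

def ordered_ramp(steps: int = 8) -> list[tuple[int, int, int]]:
    """RGB ramp from dark blue toward light yellow, with
    monotonically increasing lightness."""
    ramp = []
    for i in range(steps):
        t = i / (steps - 1)
        hue = 0.66 - 0.50 * t         # blue (0.66) toward yellow (0.16)
        lightness = 0.25 + 0.60 * t   # always increasing
        r, g, b = colorsys.hls_to_rgb(hue, lightness, 0.9)
        ramp.append((round(r * 255), round(g * 255), round(b * 255)))
    return ramp

for rgb in ordered_ramp():
    print(rgb)
```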

Perceiving the World: Norman's Interface Model

Magic tricks work to confuse us because they take advantage of our cognitive processing capabilities. Donald Norman tells us there are two fundamental principles of designing for people: provide a good conceptual model, and make things visible (Norman, 1988).

A conceptual model, known more today as a mental model, is a mental representation -- built from our prior experiences, interactions, and knowledge -- of how something works. It’s our representation of how we perceive the world.

The second principle, "make things visible," is based on the idea that after we have collected, filtered, and stored information, we must be able to retrieve it to solve problems and carry out tasks. Norman breaks this principle down into smaller principles: mapping, affordances, feedback, and constraints.

Mapping

Describes the relationship between two objects and how well we understand their connection. On mobile devices, we’re talking about display-control compatibility. Controls that follow our cultural standards are going to be well understood. For example, let’s relate volume to a control: if we want to increase the volume, we expect to push the volume button up. If we want to read more of a paragraph, we can scroll down, click on a link, or tap on an arrow. Problems occur when designs create an unfamiliar relationship between two objects. On the iPhone, in order to take a screenshot, you must hold the power button at the same time as the home button. Interactions like this are very confusing, impossible to discover unless you read the manual (or otherwise look them up or are told), and hard to remember.

Affordances

Used to describe the idea that an object’s function can be understood from its properties. For example, a handle on a door affords gripping and pulling. The properties of the door handle -- its height relative to our arm’s reach, a cylindrical shape that fits within our closed grasp -- make this very clear. If an object is designed well and clearly communicates its affordance, we don’t need additional information attached to the design to indicate its use. On mobile devices, we understand that physical buttons afford pushing and on-screen buttons afford touching, selecting, and clicking. If we cannot recognize an object as a button, we will ignore it, assume it is decoration (and therefore not functional), or not understand how to interact with it. "Affordanceless" interfaces, such as those where gestures are required but have no indicators at all, are a real concern in this area.

Feedback

Describes the immediately perceived result of an interaction. Feedback confirms that an action took place and presents us with more information. In a car, when you step on the accelerator, the action has an immediate result: you experience the car moving faster. On mobile devices, when we click or select an object, we expect an immediate response. Feedback can be experienced in multiple ways: a button may change shape, size, orientation, color, or position -- or, very often, some combination of these. A notification or message may appear, or a new page might open. Feedback can also appeal to other senses through haptics (vibration) or sound. Be sure to design actions that result in immediate feedback; this limits confusion and aggravation while making the user’s experience more satisfying.

Constraints

Restrictions on behavior can be both natural and cultural, and they can be both positive and negative. Remember the toy with cut-out spaces resembling geometric shapes? A child takes a yellow plastic shape and fits it through the matching space in the red sphere. The cylinder fits into the circle cutout, the cube fits the square but will not fit the triangle slot, and so on. The size and shape of the objects are constraints on making the correct fit. Those are examples of natural constraints (though still learned). Cultural constraints are applied to socially acceptable behaviors: for example, it’s not socially acceptable to steal from someone, or to throw your friend’s phone out the window to get their attention. When designing mobile interfaces, use constraints to reduce or prevent user error. When you accidentally press delete instead of save, you should be presented with a confirmation message that requires your action before anything is lost. When designing to reveal information, use the size of the viewport as a constraint, or make unimportant buttons inactive. Norman’s model of design is a framework to refer back to when using patterns to reveal more information. For a more complete understanding of his model, refer to his book, The Design of Everyday Things.

Memory through Embodied, Situated, and Distributed Cognition