New Copy

Components, as described here, are sections or subsections of a designed interactive space. They take up a significant portion of the screen, and may be as large as the viewport (or larger); when smaller, they may appear in front of other displayed information.

Components are concerned with displaying a range of information types -- images, ordered data, expandable lists, and notifications. They also allow the user to interact with the system in some significant, primary manner. Combining them with the small, reusable, interactive or display Widgets gives the designer an almost unlimited number of options.

The components that will be discussed here are subdivided into the following chapters:

Types of Components

Display of Information

How information is displayed needs to reflect the user’s mental model and mimic the way they organize and process knowledge. Mobile user interfaces that ignore these principles will most likely leave the user lost, confused, frustrated, and unwilling to continue. Patterns found in this chapter, such as Vertical List and Infinite List, provide solutions for displaying large amounts of information in a structured, contained manner that does not add to the user’s mental load. These patterns also provide solutions for arranging content where screen real estate is scarce and every pixel counts. Throughout, the chapter draws on research-based frameworks, tactical examples, and descriptive mobile patterns.

Control & Confirmation

Humans make mistakes. As designers, we can create effective interfaces that prevent costly human error and the resulting loss of user-entered data. When costly error is possible, we can add modal constraints and decision points as preventative measures. Patterns found in this chapter, such as Confirmation and Exit Guard, provide solutions that help prevent the loss of important user-entered data. The context of the user’s goals and current tasks should be considered when incorporating confirmation controls: overusing these constraints and decision points in low-risk situations will frustrate users by increasing their processing time and mental load, and by delaying or stopping their task.

Revealing More Information

When we design to reveal more information, we need to be conscious of the limitations of devices, networks, and our own human abilities. Screen size limits the amount of information that can be displayed at one time. A device’s OS limits the types of interactions available. Hardware constrains the viewport size, as well as processing speed and loading times. Our memory limits cause us to filter, store, and process only relevant information over time. Patterns such as Windowshade and Pop-up can be used to work around these limitations. They provide ways to reveal additional information when the user wants it, without saturating the screen with content.

Getting Started

You now have a general sense of what components are and how they relate to information display. The component chapters will provide specific information on theory and tactics, and illustrate examples of appropriate design patterns. And always remember to read the antipatterns, to make sure you don’t misuse or overuse a pattern.


Old Copy

Components, as described here, are sections or subsections of a designed interactive space. They take up a significant portion of the screen, and may be as large as the viewport (or larger); when smaller, they may appear in front of other displayed information.

Components are concerned with displaying a range of information types -- images, ordered data, expandable lists, and notifications. They also allow the user to interact with the system in some significant, primary manner. Combining them with the small, reusable, interactive or display Widgets gives the designer an almost unlimited number of options.

The components that will be discussed here are subdivided into the following chapters:

Types of Components

Display of Information

How information is displayed needs to reflect the user’s mental model and mimic the way they organize and process knowledge. If information is displayed on mobile user interfaces that ignore these principles, you will most likely cause the user to become lost, confused, frustrated, and unwilling to continue. To prevent this, this chapter will explain research-based frameworks, tactical examples, and descriptive mobile patterns to use. This chapter will discuss the following topics:

Control & Confirmation

Humans make mistakes. As designers, we can create effective interfaces that prevent costly human error resulting in the loss of user-entered data. When costly human error is possible, we can create modal constraints and decision points as preventative measures. The context of the user’s goals and current tasks should be considered when incorporating confirmation controls. Overuse of these constraints and decision points in low-risk situations will cause user frustration by increasing processing time and mental load, and delaying or stopping the task. To prevent that, this chapter will explain research-based frameworks, tactical examples, and descriptive mobile patterns to use. This chapter will discuss the following topics:

Revealing More Information

When we design to reveal more information, we need to be conscious of the limitations of the devices and networks, and of our own human limits. Screen size will limit the amount of information that can be displayed at a time. A device’s OS will limit the types of interactions available. Hardware constrains the viewport size, as well as processing speed and loading times. Our memory limits cause us to filter, store, and process only relevant information over a duration of time.

It is essential to design interactive displays that reflect a user’s mental model while making sure the control used to reveal more information is visible. If information is displayed on mobile user interfaces that ignore these principles, users will encounter performance errors, dissatisfaction, and frustration. To prevent this, this chapter will explain research-based frameworks, tactical examples, and descriptive mobile patterns to use. This chapter will discuss the following topics:

Helpful Knowledge for this Section

Before you dive right into each pattern chapter, we’d like to provide you with some extra knowledge in these section introductions. This extra knowledge comes from multidisciplinary areas such as human factors, engineering, psychology, art, or whatever else we feel is relevant.

This section will provide background knowledge for you in the following areas:

Understand These Human Factors

To choose and implement the most appropriate component in a mobile user interface, it’s essential to familiarize yourself with human factors. Our minds are like a leaky bucket: they hold plenty of information, but can easily let information slip away and spill out. If we understand how visual information is collected and processed, we can create effective visual interactive displays that resemble the way the mind works.

This can help limit cognitive load and the risk of information loss during decision-making. Our perceptual model is complex, and there are many theories explaining its structure that are beyond the scope of this book. A general description of visual sensation and perception follows.

Sensation: Getting Information Into Our Heads

Sensation refers to the capture and transformation of information required for the process of perception to begin (Bailey, 1996). Each of our sensors (eyes, ears, nose, skin, mouth) collects information, or stimuli, in its own way, but all transform the stimulus energy into a form the brain can process.

Collecting Visual Stimuli: How the Eye Works

The eye is the organ responsible for vision. Many people use the analogy that the eye works like a camera: both have a lens, an aperture (the pupil), and a sensor (the retina). However, the manner in which sensing and processing occur is very different, and the designer should understand this at least a little in order to create displays that are easy to see and understand.

The eye collects, filters, and focuses light. Light enters through the cornea and is refracted through the pupil. The amount of light entering the lens is controlled by the iris. The lens focuses the beam of light and projects it onto the back of the retina, where it contacts the photoreceptors known as rods and cones. These receptors are light-sensitive and vary in relative density; there are about 100 million rods and only 6 million cones. The cones are used for seeing in bright light; three kinds of cones, each with its own pigment filter, allow perception of color. The rods are sensitive to dim lighting and are not color-sensitive. These receptors convert light into electrochemical signals, which travel along the optic nerve to the brain for processing. Color deficits, commonly known as colorblindness -- affecting fully 10% of the male population, though almost no women -- are a result of reduced pigmentation in a cone, or the loss of a whole type of cone.

Our eyes can only experience a narrow band of radiation in the electromagnetic spectrum, from approximately 400 nanometers (where we perceive violet) to about 700 nanometers (where we perceive red).

The eye’s sensitivity to a stimulus at any moment depends on many factors, including the size of the stimulus, its brightness and contrast, and the part of the retina that is stimulated.

Visual Acuity and the Visual Field

Visual acuity is the ability to see details and detect differences between stimuli and spaces. At the center of the retina lies the fovea. The fovea is tightly packed with cones only (approximately 200,000), and it is here that our vision is most focused. The fovea covers the central 1 to 2 degrees of the visual field, and the central 1/2 degree is where we have our sharpest vision. The farther objects fall outside the foveal range, the lower the resolution and color fidelity; we can still detect items peripherally, but with less clarity. Color perception also varies by location: blue can be detected about 60 degrees from the fixed focal point, while yellow, red, and green are only perceptible within a narrower visual field.

Visual acuity depends on many factors, including the size of the stimulus, the brightness and contrast of the stimulus, the region of the retina stimulated, and the physiological and psychological condition of the individual (Bailey, 1996).

Size of the Stimulus: Visual Angle

The actual size of an object is basically unimportant to how easy it is to perceive. What matters is the "visual angle": the object’s size relative to your eye, which takes into account both its size and its distance from the viewer. Various technical fields speak of angular resolution for the same reason, as true resolution is unimportant.

The visual angle can be calculated using the following formula:

Visual angle (minutes of arc) = (3438 × L) / D, where L is the length of the object perpendicular to the line of sight and D is the distance from the front of the eye to the object.

Visual angle is typically measured in units much smaller than degrees, such as minutes or seconds of arc (60 minutes in a degree, 60 seconds in a minute). Other specialized units, such as milliradians, may also be encountered, or the angle may simply be given in degrees with an annoyingly large number of decimal places.

With an understanding of visual angle, we can determine the appropriate size of visual elements, including character size viewed at specific distances. According to the Human Factors Society (1988), the following visual angles are recommended for reading tasks:

So, let’s assume you are designing text that is to be read quickly on a mobile device, with a viewing distance of 30 centimeters (11.8 inches). The equation would look like this:

Length = (16 minutes of arc × 30 cm) / 3438 ≈ 0.14 cm

The smallest acceptable character height would be about 0.14 cm, or about 4 points. Remember, all this exists in the real world; you will have to take into account real-world sizes, never pixels, when designing for perception.
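If you want to automate this arithmetic, the following short sketch (in Python; the constant and function names are our own, not from any cited source) implements the visual angle formula above, including the worked example:

ARC_MINUTES_PER_RADIAN = 3438  # approximate number of minutes of arc in one radian

def min_length_cm(angle_arcmin, distance_cm):
    # Smallest object length (cm) that subtends the given visual angle
    # at the given viewing distance, per the formula above.
    return angle_arcmin * distance_cm / ARC_MINUTES_PER_RADIAN

def visual_angle_arcmin(length_cm, distance_cm):
    # Visual angle (minutes of arc) subtended by an object of the given
    # length held perpendicular to the line of sight.
    return ARC_MINUTES_PER_RADIAN * length_cm / distance_cm

print(round(min_length_cm(16, 30), 2))  # prints 0.14, matching the example above

Both functions rely on the small-angle approximation built into the 3438 constant, which is more than accurate enough at reading distances.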

Visual Perception

After our senses collect visual information, our brain begins to perceive and store it. Perception involves taking the information delivered by our senses and integrating it with prior knowledge stored in memory. This process allows us to relate new experiences to old ones. During perception, our minds look for familiar patterns; recognizing patterns is essential for object perception. Once we have identified an object, it is much easier to identify the same object on a subsequent appearance anywhere in the visual field (Biederman and Cooper, 1992).

The Gestalt School of Psychology was founded in 1912 to study how humans perceive form. The Gestalt principles they developed can help designers create visual displays based on the way our minds perceive objects. These principles, as they apply to mobile interactive design, are:

Articulating Graphics

Now that we understand that visual object perception is based on identifying patterns, we must be able to design visual displays that mimic the way our mind perceives information. Stephen Kosslyn states, “We cannot exploit multimedia technology to manage information overload unless we know how to use it properly. Visual displays must be articulate graphics to succeed. Like effective speeches, they must transmit clear, compelling, and memorable messages, but in the infinitely rich language of our visual sense” (Kosslyn, 1990).

Display Elements are Organized Automatically

This follows Gestalt principles. Objects that are close together, collinear, or similar in appearance tend to be perceived as groups. So when designing information displays such as maps, indicators, landmarks, and other objects that are clustered together will appear to be grouped and to share a relationship. This can cause confusion when the viewer needs to locate their exact position.

Perceptual Organization is Influenced by Knowledge

When looking at objects in a pattern for the first time, the organization may not be fully understood or remembered. However, if the pattern is seen again over time, we tend to chunk it and store it in memory. Think of a chessboard with its pieces laid out. A viewer who has never seen the game before will perceive the board as simply holding many objects. An experienced chess player, however, will immediately identify the pieces and the relationships they have with each other and with the board. So when designing visual displays, it’s essential to know the mental model of your users so they can quickly identify and relate to the information displayed.

Images are Transformed Incrementally

When we see an object move and transform its shape in incremental steps, we have an easier time understanding that the two states are related or identical. However, if we only see the object’s beginning state and end state, our minds are forced to use much more mental processing to understand the transformation. This takes more time and also increases errors and confusion. When designing a list where items move, such as a Carousel, make sure the viewer can see the incremental movement.

Different Visual Dimensions are Processed by Separate Channels

Object attributes such as color, size, shape, and position are processed by our minds using separate channels. The brain processes many individual visual dimensions in parallel, but can only deal with combinations of dimensions in sequence. For example, when designing a bullet list in which every bullet is a black circle, we can immediately identify all of them. However, if you add a bullet that is black and the same size, but diamond-shaped, our minds have to work harder to perceive that it is different.

Color is Not Perceived as a Continuum

Designers often use color scales to represent a range: for example, red is hot and blue is cold, and temperatures in between are represented by the visual spectrum between those colors. The problem is that our brains do not perceive color as the linear dimension it physically is. We perceive color based on the intensity and amount of light. A better way of showing this temperature difference would be to use varying intensity and saturation.

If a perceptually orderable sequence is required, a black to white, red to green, yellow to blue, or saturation (dull to vivid) sequence can be used (Ware, 2000).

When a high level of detail is to be displayed, the color sequence should be based mostly on luminance, to take advantage of the capacity of this channel to convey high spatial frequencies. When there is little detail, a chromatic sequence or a saturation sequence can be used (Rogowitz and Treinish, 1996).

In many cases, the best color sequence should vary through a range of colors, but with each successive hue chosen to have higher luminance than the previous one (Ware, 1988).
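As a rough illustration of that last guideline, here is a small sketch (in Python, using only the standard library’s colorsys module; the function name and the specific hue and lightness values are our own choices, not from the cited research) that steps through a range of hues while making each successive step lighter than the last. HLS lightness is only a stand-in for true perceptual luminance, so treat this as a starting point rather than a finished palette.

import colorsys

def luminance_ordered_ramp(steps=8, start_hue=0.66, end_hue=0.0):
    # Sweep hue from blue (0.66) toward red (0.0) while monotonically
    # increasing lightness, so the sequence reads as an ordered scale
    # as well as a change in hue.
    colors = []
    for i in range(steps):
        t = i / (steps - 1)
        hue = start_hue + (end_hue - start_hue) * t
        lightness = 0.25 + 0.60 * t      # each step lighter than the last
        saturation = 0.90
        colors.append(colorsys.hls_to_rgb(hue, lightness, saturation))
    return colors

# Print the ramp as hex values, darkest and coolest first.
for r, g, b in luminance_ordered_ramp():
    print("#{:02x}{:02x}{:02x}".format(int(r * 255), int(g * 255), int(b * 255)))

A ramp like this keeps the cold-to-hot metaphor while remaining readable as an ordered scale, in the spirit of Ware’s guideline above.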

Getting Started

You now have a general sense of what components are, as well as a physiological visual perception framework to reference. The component chapters will provide specific information on theory and tactics, and illustrate examples of appropriate design patterns. And always remember to read the antipatterns, to make sure you don’t misuse or overuse a pattern.