Components are a section or subsection of a designed space. They take up a significant portion of the screen, may be as large as the viewport (or, depending on your point of view, larger), and may appear modally in front of other displayed information.

Components are concerned with displaying a range of information types -- images, ordered data, expandable lists, and notifications. They also allow the user to interact with the system in some significant, primary manner. Combining them with the small, reusable, interactive or display Widgets (see that section) gives an unlimited number of options for design.

The components that will be discussed here are subdivided into the following chapters:
 * '''[[Display of Information]]'''
 * '''[[Revealing More Information]]'''
 * '''[[Control & Confirmation]]'''

== Types of Components ==

=== Display of Information ===
How information is displayed needs to reflect the user’s mental model and mimic the way they organize and process knowledge. If information is displayed on mobile user interfaces in ways that ignore these principles, you will most likely cause the user to become lost, confused, frustrated, and unwilling to continue. To prevent this, this chapter will explain research-based frameworks, tactical examples, and descriptive mobile patterns to use. This chapter will discuss the following topics:
 * Types of visual information.
 * How information is classified.
 * Organizing through an Information Architecture.
 * Information Design and Ordered Data.
 * Patterns for Displaying Information: ''[[Vertical List]], [[Infinite List]], [[Thumbnail List]], [[Fisheye List]], [[Grid]], [[Film Strip]], [[Slideshow]], [[Infinite Area]], and [[Select List]]''.

=== Revealing More Information ===
When designing to reveal more information, we need to be conscious of the limitations of devices and networks, as well as our own human limits. Screen size will limit the amount of information that can be displayed at a time. A device’s OS will limit processing and loading times. Our memory limits cause us to filter, store, and process only relevant information over a duration of time.

It is essential to design interactive displays that reflect a user’s mental model while making sure the control that is used to reveal more information is visible. If information is displayed on mobile user interfaces in ways that ignore these principles, users will encounter performance errors, dissatisfaction, and frustration. To prevent this, this chapter will explain research-based frameworks, tactical examples, and descriptive mobile patterns to use. This chapter will discuss the following topics:
 * Donald Norman’s Interaction Model.
 * Designing for Information.
 * Patterns for Revealing More Information: ''[[Windowshade]], [[Pop-Up]], [[Hierarchical List]], [[Returned Results]]''.

=== Control & Confirmation ===
Humans make mistakes. As designers, we can create effective interfaces that prevent costly human error resulting in the loss of entered data. When costly human error is possible, we can create modal constraints and decision points as preventative measures. The context of the user’s goals and current tasks should be considered when incorporating confirmation controls. Overuse of these constraints and decision points in low-risk situations will cause user frustration by increasing processing time and mental load, and by delaying or stopping the task. To prevent that, this chapter will explain research-based frameworks, tactical examples, and descriptive mobile patterns to use; a brief sketch of using confirmation only for costly actions follows the topic list below. This chapter will discuss the following topics:

 * Understanding Our Users - Relating to human error and mental load.
 * Control and Confirmation - When to use and why.
 * Patterns for Control & Confirmation: ''[[Confirmation]], [[Sign On]], [[Exit Guard]], [[Cancel Protection]], and [[Timeout]]''.
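
As a minimal sketch of using confirmation only when an action is costly (the function, action names, and confirmation mechanism here are illustrative assumptions, not patterns from this book):

{{{#!python
def perform(action, destructive, confirm):
    """Ask for confirmation only when the action could cost the user data;
    low-risk actions run without an extra decision point."""
    if destructive and not confirm("This will discard your changes. Continue?"):
        return False  # user cancelled; nothing is lost
    action()
    return True

# Deleting a draft asks first; saving one does not interrupt the user.
perform(lambda: print("draft deleted"), destructive=True,
        confirm=lambda msg: input(msg + " [y/N] ").strip().lower() == "y")
perform(lambda: print("draft saved"), destructive=False, confirm=None)
}}}
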
== Understand These Human Factors ==
To best implement the correct component in a mobile user interface, it’s essential to familiarize yourself with certain human factors. Our human mind is like a leaky bucket. It holds plenty of information, but can easily let information slip away and spill out. If we understand how we collect and process visual information, we can create effective visual interactive displays that resemble the way our mind works. This will help limit our cognitive load and information loss during decision-making processes. Our perception model is complex, and the many theories explaining its structure are beyond the scope of this book. A general description of visual sensation and perception is given below.

== Sensation: Getting Information Into Our Heads ==
Sensation is a process referring to the capture and transformation of information required for the process of perception to begin (Bailey, 1996). Each of our sensors (eyes, ears, nose, skin, mouth) collects information, or stimuli, uniquely, but all transform the stimulus energy into a form the brain can process.

=== Collecting Visual Stimuli: How the Eye Works ===
The eye is an organ responsible for vision. Many people use the analogy that our eye works like a camera. Both eye and camera have a lens, an aperture (pupil), and a sensor (retina). However, the manner in which sensing and processing occurs is very different. This should be understood at least a little by the designer in order to create displays that are easy to see and understand.

The eye collects, filters, and focuses light. Light enters through the cornea and passes through the pupil; the amount of light entering the lens is controlled by the iris. The lens focuses the beam of light and projects it onto the back of our retina, where it contacts the photoreceptors known as rods and cones. These receptors are light sensitive and vary in relative density; there are about 100 million rods and only 6 million cones. The cones are used for seeing when there is bright light; three kinds of cones, each with their own pigment filter, allow perception of color. The rods are sensitive to dim lighting and are not color sensitive. These receptors convert light into electro-chemical signals which travel along the optic nerve to the brain for processing. Color deficits, commonly known as colorblindness -- affecting fully 10% of the male population, though almost no women -- are a result of reduced pigmentation in a cone, or loss of a whole type of cone.

Our eyes can only experience a narrow band of radiation in the electromagnetic spectrum, from approximately 400 nanometers (where we perceive violet) to about 700 nanometers (where we perceive red).

The eye is sensitive to stimuli in many ways at any moment, including the size of the stimulus, its brightness and contrast, and the part of the retina that is stimulated.

=== Visual Acuity and the Visual Field ===
Visual acuity is the ability to see details and detect differences between stimuli and spaces. Inside our eye, at the center of our retina, lies the fovea. The fovea is tightly packed only with cones (approximately 200,000), and it is here that our vision is most focused. The fovea covers the central 1 to 2 degrees of our visual field, and the central 1/2 degree is where we have our sharpest vision. The farther objects extend beyond our foveal range, the lower the resolution and color fidelity. We can still detect items peripherally, but with less clarity. Types of color perception vary by location as well; blue can be detected about 60 degrees from our fixed focal point, while yellow, red, and green are only perceptible within a narrower visual field.

Visual acuity depends on many factors, including the size of the stimulus, the brightness and contrast of the stimulus, the region of the retina stimulated, and the physiological and psychological condition of the individual (Bailey, 1996).

=== Size of the Stimulus: Visual Angle ===
The actual size of an object is largely unimportant to how easily it is perceived. What matters is the "visual angle," or the size relative to your eye, which takes into account both the size of the object and its distance from the viewer. Discussions in various technical fields therefore refer to angular resolution, since true resolution is unimportant.

The visual angle can be calculated using the following formula:
Visual Angle (minutes of arc) = 3438 × (length of the object perpendicular to the line of sight) / (distance from the front of the eye to the object)

Visual angle is typically measured in units much smaller than degrees, such as minutes or seconds of arc (60 minutes in a degree, 60 seconds in a minute). Other specialized units, such as milliradians, may also be encountered, or the angle may simply be given in degrees with an annoyingly large number of decimal places.

With an understanding of visual angle, we can determine the appropriate size of visual elements, including character size viewed at specific distances. According to the Human Factors Society (1988), the following visual angles are recommended for reading tasks:
 * When reading speed is important, the visual angle should be no less than 16 minutes of arc and no greater than 24 minutes of arc.
 * When reading speed is not important, the visual angle can be as small as 10 minutes of arc.
 * Characters should never be less than 10 minutes of arc or greater than 45 minutes of arc.

So, let’s assume I’m designing text that is to be read quickly on a mobile device, with a viewing distance of 30 centimeters (11.8 inches). The equation would look like this:

Length = (16 minutes of arc × 30 cm) / 3438. The smallest acceptable character height would then be about 0.14 cm, or roughly 10 points.
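
To make the arithmetic above concrete, here is a minimal Python sketch of the same calculation (the function and variable names are illustrative, not from the book):

{{{#!python
def min_character_height_cm(visual_angle_arcmin, viewing_distance_cm):
    """Solve the visual-angle formula for object length:
    visual angle (minutes of arc) = 3438 * length / distance,
    so length = visual_angle * distance / 3438."""
    return visual_angle_arcmin * viewing_distance_cm / 3438.0

# Text meant to be read quickly, viewed at 30 cm:
print(round(min_character_height_cm(16, 30), 2))  # -> 0.14 (cm)
}}}

Translating that physical height into a nominal font size depends on the typeface, since a glyph’s visible height is only a fraction of its nominal point size.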


== Visual Perception ==

After our senses collect visual information, our brain begins to perceive and store the information. Perception involves taking information delivered from our senses and integrating it with our prior knowledge stored in memory. This process allows us to relate new experiences to old ones. During this process, our minds look to identify familiar patterns. Recognizing patterns is essential for object perception. Once we have identified an object, it is much easier to identify the same object on a subsequent appearance anywhere in the visual field (Biederman and Cooper, 1992).

'''Gestalt Design Principles'''

The Gestalt School of Psychology was founded in 1912 to study how humans perceive form. The Gestalt principles they developed can help designers create visual displays based on the way our minds perceive objects. These principles, as they apply to mobile interactive design, are:

 * '''Proximity''' - Objects that are close together are perceived as being related and grouped together. When designing graphical displays, having descriptive text close to an image will cause the viewer to relate the two objects together. This can be very effective when dual coding graphical icons.

 * '''Similarity''' - Objects sharing attributes are perceived to be related, and will be grouped by the user. Navigation tabs that are similar in size, shape, and color, will be perceived as a related group by the viewer.

 * '''Continuity''' - Smooth, continuous objects imply they are connected. When designing links with nodes or arrows pointing to another object, viewers will have an easier time establishing a connected relationship if the lines are smooth and continuous rather than jagged.

 * '''Symmetry''' - Symmetrical relationships between objects imply they are related. Objects that are reflected symmetrically across an axis are perceived as forming a visual whole. This can work against you more easily than for you: if a visual design grid is too strict, unrelated items may be perceived as related, adding confusion.

 * '''Closure''' - A closed entity is perceived as an object. We have a tendency to close contours that have gaps in them. We also perceive closed contours as having two distinct portions: an inside and outside. When designing list patterns, like the grid pattern described in this chapter, use closure principles to contain either an image or label.

 * '''Relative Size''' - Smaller components within a pattern are perceived as objects. When designing lists, entities like bullets, arrows, and nodes inside a group of information will be viewed as individual objects that our eyes are drawn to. Therefore, make sure these objects are relevant to the information they relate to. Another example of relative size is a pie with a missing piece: the missing piece will stand out and be perceived as an object.

 * '''Figure and Ground''' - A figure is an object that appears to be in the foreground. The ground is the space or shape that lies behind the figure. When an object uses multiple gestalt principles, figure and ground occurs.

== Articulating Graphics ==

Now that we understand that visual object perception is based on identifying patterns, we must be able to design visual displays that mimic the way our mind perceives information. Stephen Kosslyn states: “We cannot exploit multimedia technology to manage information overload unless we know how to use it properly. Visual displays must be articulate graphics to succeed. Like effective speeches, they must transmit clear, compelling, and memorable messages, but in the infinitely rich language of our visual sense” (Kosslyn, 1990).

'''Display Elements are Organized Automatically'''

This follows gestalt principles. Objects that are close by, collinear, or similar looking tend to be perceived as groups. So when designing information displays such as maps, indicators, landmarks, and other objects that are clustered together will appear to be grouped and to share a relationship. This may cause confusion when the viewer needs to locate their exact position.

'''Perceptual Organization is Influenced by Knowledge'''

When looking at objects in a pattern for the first time, the organization may not be fully understood or remembered. However, if this pattern is seen again over time, we tend to chunk it and store it in our memory. Think of a chessboard with its pieces in play. A viewer who has never seen the game before will perceive the board as having many separate objects. An experienced chess player, however, will immediately identify the pieces and the relationships they have with each other and the board. So when designing visual displays, it’s essential to know the mental model of your user so they may quickly identify and relate to the information displayed.

'''Images are Transformed Incrementally'''

When we see an object move and transform its shape in incremental steps, we have an easier time understanding that the two objects are related or identical. However, if we only see the object’s beginning state and end state, our minds are forced to use a lot of mental processing and load to understand the transformation. This can take much more time and also increase errors or confusion. So when designing carousel lists, make sure the viewer can see the incremental movement.
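
The same idea can be sketched in code. Assuming a hypothetical render callback that redraws the carousel at a given offset (an illustration only, not an API from any particular platform), the transition is broken into small, visible steps rather than a single jump:

{{{#!python
def animate_scroll(start_x, end_x, frames, render):
    """Move from start_x to end_x in small, visible increments so the
    viewer can follow the transformation, instead of jumping straight
    to the end state."""
    for i in range(1, frames + 1):
        t = i / frames                        # progress, 0.0 to 1.0
        x = start_x + (end_x - start_x) * t   # simple linear interpolation
        render(x)                             # redraw the list at this offset

# Example: slide one 200-pixel-wide card to the left over 12 frames.
animate_scroll(0, -200, 12, lambda x: print("offset:", x))
}}}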

'''Different Visual Dimensions are Processed by Separate Channels'''

Object attributes such as color, size, shape, and position are processed by our minds using separate processing channels. The brain processes individual visual dimensions in parallel, but combinations of dimensions must be dealt with in sequence. For example, when designing a bullet list where every bullet is a black circle, we can immediately identify all of them. However, if you add a bullet that is black and the same size but diamond-shaped, our minds have to work harder to perceive it as being different.

'''Color is Not Perceived as a Continuum'''

Designers often use a color scale to represent a range such as temperature: red is hot, blue is cold, and temperatures in between are represented by the visible spectrum. The problem is that our brains do not perceive color as a linear dimension this way. We perceive color based on the intensity and amount of light. So a better way of showing this temperature difference would be to use varying intensity and saturation.

If a perceptually orderable sequence is required, a black to white, red to green, yellow to blue, or saturation (dull to vivid) sequence can be used (Ware, 2000).

When a high level of detail is to be displayed, the color sequence should be based mostly on luminance to take advantage of the capacity of this channel to convey high spatial frequencies. When there is little detail, a chromatic sequence or a saturation sequence can be used (Rogowitz and Treinish, 1996).

In many cases, the best color sequence should vary through a range of colors, but with each successive hue chosen to have higher luminance than the previous one (Ware, 1988).
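
As a rough sketch of building such a sequence (the hue range, lightness range, and step count are arbitrary illustrative choices, and HLS lightness is only an approximation of perceived luminance):

{{{#!python
import colorsys

def luminance_ordered_ramp(steps=8):
    """Build a color ramp that varies in hue while rising steadily in
    lightness, so the sequence stays perceptually orderable."""
    ramp = []
    for i in range(steps):
        t = i / (steps - 1)
        hue = 0.66 - 0.66 * t        # from blue (H=0.66) toward red (H=0.0)
        lightness = 0.25 + 0.6 * t   # each successive color is lighter
        r, g, b = colorsys.hls_to_rgb(hue, lightness, 0.9)
        ramp.append('#%02x%02x%02x' % (round(r * 255), round(g * 255), round(b * 255)))
    return ramp

print(luminance_ordered_ramp())
}}}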

== Getting Started ==
You now have a general sense of what components are, as well as a physiological visual-perception framework to reference. The component chapters will provide specific information on theory and tactics, and will illustrate examples of appropriate design patterns. And always remember to read the antipatterns, to make sure you don't misuse or over-use a pattern.
