== Understanding How Visual Information is Perceived ==
Our visual perception system is complex. The human mind is like a leaky bucket: it holds plenty of information, but easily lets information slip away and spill out. If we understand how our minds work, and their limits, we can create visual information displays that limit information loss and mental load during decision making.
Information processing involves five major processes:
 1. Sensation
 2. Perceptual processing
 3. Memory: sensory memory, short-term (working) memory, and long-term memory
 4. Intellection
 5. Movement control

This chapter provides a brief overview of sensation and perceptual processing, and shows how understanding them gives you a framework for designing better visual displays.

Sensation is the capture and transformation of information required for the process of perception to begin (Bailey, 1996). Each of our sensors (eyes, ears, nose, skin, mouth) collects information, or stimuli, in its own way, but all of them transform stimulus energy into a form the brain can process.

All of these senses respond selectively to certain types of stimuli. There are four types of stimuli our bodies can sense: electromagnetic, mechanical, thermal, and chemical. Each is collected through different senses. Electromagnetic stimuli are collected through vision; mechanical stimuli through hearing, touch, pain, and the vestibular and kinesthetic senses; thermal stimuli as cold and warmth; and chemical stimuli as taste and smell (Ellingstad, 1972).

Our sensory processing has limits. For example, we can only see wavelengths between 400 and 700 nanometers, and our thermal sensors respond only to infrared wavelengths. Our skin temperature is about 91.4 degrees F, and stimuli at this temperature do not cause a noticeable thermal sensation. Below 60 degrees F, however, the skin transmits a cold feeling, and above 105 degrees F it transmits a hot feeling.

Our sense of touch (pressure) is experienced when an object contacts our skin. Within certain locations, the skin can identify where the object is, its size and shape, and its movement. (**talk in detail about this in the chapter on Navigation and Gesturing?**) For more information on sensory limits, refer to Chapter 3: Sensing and Responding (Bailey, 1996).

This chapter details patterns for displaying information, so it is worth discussing in greater detail the sense of vision, how it works, and its limits.

Many people use the analogy that the eye works like a camera: both have a lens, an aperture (the pupil), and film (the retina). The similarity stops there, however, because the image projected onto the retina does not resemble our perception of it.


== How Does the Eye Work? ==
The eye is an organ responsible for vision. It first collects, filters, and focuses light. Our eyes can only experience a narrow band of radiation in the electromagnetic spectrum.

This range runs from approximately 400 nanometers (where we perceive violet) to about 700 nanometers (where we perceive red). The focused beam of light is projected onto the retina at the back of the eye, where it contacts light-sensitive photoreceptors known as rods and cones. Cones are used for seeing in bright light and are color sensitive; rods are sensitive to dim lighting and are not color sensitive. These receptors convert light into electro-chemical signals, which travel along the optic nerve to the brain for processing.

The eye is sensitive to stimuli in many ways at any moment, including the size of stimulus, its brightness and contrast, and the part of the retina that is stimulated.

As a designer, it's important to understand how these stimuli can affect and influence our design decisions. The size of a stimulus is measured by its visual angle, the angle formed at the eye by the viewed object. It can be calculated with the following formula: Visual angle (minutes of arc) = 3438 × (length of the object perpendicular to the line of sight) / (distance from the front of the eye to the object). Visual angle is typically measured in degrees of arc, where one degree = 60' (minutes of arc), and one minute of arc = 60" (seconds of arc).

With an understanding of visual angle, we can determine the appropriate size of visual elements, including character size, viewed at specific distances. The Human Factors Society (1988) recommends the following visual angles for reading tasks: when reading speed is important, the visual angle should be no less than 16 minutes of arc and no greater than 24 minutes of arc; when reading speed is not important, the visual angle can be as small as 10 minutes of arc; and characters should never be less than 10 minutes of arc or greater than 45 minutes of arc. Suppose we are designing text to be read quickly on a mobile device at a viewing distance of 30 centimeters (11.8 inches). The equation becomes: length = 16 minutes of arc × 30 / 3438, so the smallest acceptable character height is about 0.14 cm, or about 10 points. Two other factors must be addressed when sizing characters for mobile: the viewing distance changes all the time, and glare and wobble affect legibility. These are addressed further in another chapter.
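
The calculation above is easy to wrap in a pair of helper functions. The following is a minimal Python sketch; the function names are our own for illustration, and both measurements must use the same unit (centimeters here).

{{{#!python
def visual_angle_arcmin(object_size, viewing_distance):
    """Visual angle, in minutes of arc, subtended by an object of a given
    size (perpendicular to the line of sight) at a given viewing distance."""
    return 3438.0 * object_size / viewing_distance

def min_character_height(arcmin, viewing_distance):
    """Invert the formula: the smallest character height that subtends
    the given visual angle at the given viewing distance."""
    return arcmin * viewing_distance / 3438.0

# Worked example from the text: fast reading at 30 cm requires 16 arc minutes.
height_cm = min_character_height(16, 30)     # ~0.14 cm
check = visual_angle_arcmin(height_cm, 30)   # back-checks to ~16 arc minutes
print(round(height_cm, 2), round(check, 1))
}}}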

Take a moment and look around. Are you inside? Then you might see books, a pile of mail, your computer, and a television. Or maybe you're outside, carrying your mobile device and checking your appointments. The world we live in is full of ubiquitous information: information that is visual, audible, and tactile, meant to inform, entertain, instruct, and warn. Because we are constantly bombarded with this information in our daily lives, we must quickly collect, filter, and process which of it is important for the task at hand.

Consider a busy intersection you are trying to cross. You are surrounded by the sights and sounds of pedestrians conversing, cars and trucks honking, birds flying, signage on billboards, and thousands of other types of stimuli. Our minds have an amazing ability to focus on the task at hand, filter the surrounding "noise," process, store, and allow us to act on only the relevant information.

When the crosswalk signal changes to "Walk," we identify the sign, interpret its meaning, decide on an action to move forward, and carry it out by walking until we've crossed the street and achieved our goal.

Understanding how we process and filter visual information, or data, will help us design effective displays of information on mobile devices. Let’s first explore the types of information.

== Types of Visual Information ==

All humans have more or less the same visual processing system. Without a standardized way of explaining and notating our perceptions, however, our communication of this information becomes arbitrary and ineffective when designing mobile interactions.

Bertin (1977) organized visual information into two forms: data values and data forms. Ware (2000) introduces a more modern way of dividing data into entities and relationships.

Entities are the objects that can be visualized, such as people, buildings, and signs. Relationships (sometimes called "relations") define the structures and patterns that entities share with each other. Relationships can be structural and physical, conceptual, causal, or temporal.

These entities and relationships can be further described using attributes. These are properties of both the entity and the relationship, and cannot be considered independently. Some examples of attributes are:

  • Color.
  • Duration.
  • Texture.
  • Weight, or thickness of a line.
  • Type size.

For each of these we mean the attribute as it applies to a specific item. Not texture in general, or the texture of paper, but the texture of a specific type of paper (or even, a specific sheet of paper).
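
To make the distinction concrete, here is a minimal Python sketch of entities, relationships, and attributes. The class and field names are our own illustration, not part of Bertin's or Ware's notation; attributes are attached to a specific entity or relationship, never considered on their own.

{{{#!python
from dataclasses import dataclass, field

@dataclass
class Entity:
    """Something that can be visualized: a person, a building, a sign."""
    name: str
    attributes: dict = field(default_factory=dict)   # e.g. {"color": "red"}

@dataclass
class Relationship:
    """A structural, conceptual, causal, or temporal link between entities."""
    kind: str
    source: Entity
    target: Entity
    attributes: dict = field(default_factory=dict)   # e.g. {"weight": "2 px"}

signal = Entity("crosswalk signal", {"color": "white", "type size": "large"})
walker = Entity("pedestrian")
walk_cue = Relationship("causal", signal, walker, {"duration": "30 s"})
}}}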

== Classifying Information ==

In addition to creating descriptions of our perceptions, we have also standardized ways of classifying and organizing them. Common classification schemes include the following (a short sketch after the list shows how a few of them translate into ordering and grouping logic):

  • Nominal - using labels and names to categorize data.

  • Ordinal - using numbers to order items in a sequence.

  • Ratio - expressing a fixed relationship between one value and another, using zero as a reference point.

  • Interval - measuring the gap between two data values.

  • Alphabetical - using the order of the alphabet to organize nominal data.

  • Geographical - using location, such as city, state, or country, to organize data.

  • Topical - organizing data by topic or subject.

  • Task - organizing data around processes, tasks, functions, and goals.

  • Audience - organizing data by user type, such as interests, demographics, knowledge and experience levels, needs, and goals.

  • Social - organizing data collaboratively with users who share the same interests, such as tagging, adding to a wiki, or creating and following Twitter feeds.

  • Metaphor - organizing data around a mental model already familiar to the user, such as organizing computer files with folders, trash, and a recycle bin.
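
As a rough illustration of how a few of these schemes become the ordering and grouping logic behind a list view, here is a small Python sketch. The contact data and field names are invented purely for the example.

{{{#!python
# A tiny, made-up contact list used only to illustrate the schemes above.
contacts = [
    {"name": "Avery", "city": "Austin", "calls": 12},
    {"name": "Blake", "city": "Boston", "calls": 3},
    {"name": "Casey", "city": "Austin", "calls": 7},
]

# Alphabetical: nominal labels ordered by the alphabet.
alphabetical = sorted(contacts, key=lambda c: c["name"])

# Ordinal: ranked by call count, most-contacted first.
by_frequency = sorted(contacts, key=lambda c: c["calls"], reverse=True)

# Geographical: grouped by city.
by_city = {}
for c in contacts:
    by_city.setdefault(c["city"], []).append(c["name"])
}}}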

WRAP THIS UP WITH A UNIFYING, TACTICAL THOUGHT...

== Organizing Information with Information Architectures ==

Now that we are able to describe the data that we perceive, we must understand how this information should be structured, organized, labeled, and identified on mobile user interfaces.

One of the most common organizational structures humans have used through time is the hierarchy. A hierarchy organizes information through divisions and parent-child relationships. When using hierarchies to organize information, Peter Morville (2006) suggests several rules to consider:

  • Categories should be mutually exclusive, to limit ambiguity.
  • Consider the balance between breadth and depth. When determining the number of categories (breadth), consider the user's ability to visually scan the page as well as the amount of screen real estate. When considering depth, limit the scope to two or three levels down.
  • Recognize the danger of providing users with too many options.
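
As a rough sketch of how the depth guideline might be checked in code, here is a minimal Python tree whose depth can be audited. The class names and the three-level cap are illustrative assumptions, not something Morville prescribes.

{{{#!python
from dataclasses import dataclass, field

MAX_DEPTH = 3  # illustrative cap, following the two-to-three-level guideline

@dataclass
class Category:
    """One node in a navigation hierarchy; children are its subdivisions."""
    label: str
    children: list = field(default_factory=list)

    def add(self, child):
        self.children.append(child)
        return child

def depth(node):
    """Depth of the deepest branch under a node; a leaf counts as depth 1."""
    if not node.children:
        return 1
    return 1 + max(depth(child) for child in node.children)

root = Category("Settings")
display = root.add(Category("Display"))
display.add(Category("Brightness"))
assert depth(root) <= MAX_DEPTH  # flag hierarchies that grow too deep
}}}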

FACETING>>> ???

== Visual Perception ==

After our senses collect visual information, our brain begins to perceive and store it. Perception takes the information delivered by our senses and integrates it with prior knowledge stored in memory, allowing us to relate new experiences to old ones. During this process, our minds look for familiar patterns; recognizing patterns is essential to object perception. Once we have identified an object, it is much easier to identify the same object when it appears again anywhere in the visual field (Biederman and Cooper, 1992).

The Gestalt School of Psychology, founded in 1912, studied how humans perceive form and developed the Gestalt Laws. These principles can help designers create visual displays that match the way our minds perceive objects. The Gestalt principles are:

  • Proximity - objects that are close together are perceived as related and grouped. When designing graphical displays, placing descriptive text close to an image causes the viewer to relate the two; this can be very effective when dual coding graphical icons.
  • Similarity - similar-looking objects are perceived as related and grouped. Navigation tabs that are similar in size, shape, and color will be perceived by the viewer as a related group.
  • Continuity - objects that are smooth and continuous tend to be perceived as connected. When designing links with nodes or arrows pointing to another object, viewers will establish the connection more easily if the lines are smooth and continuous rather than jagged.
  • Symmetry - objects with a symmetrical relationship are perceived as related. Objects reflected symmetrically across an axis are perceived as forming a visual whole.
  • Closure - a closed entity is perceived as an object. We tend to close contours that have gaps in them, and we perceive closed contours as having an inside and an outside. When designing list patterns, like the grid pattern described in this chapter, use closure to contain an image or label.
  • Relative Size - smaller components within a pattern are perceived as objects. When designing lists, elements like bullets, arrows, and nodes inside a group of information will be viewed as individual objects that draw the eye, so make sure they are relevant to the information they accompany. Another example of relative size is a pie with a missing piece: the missing piece stands out and is perceived as an object.
  • Figure and Ground - a figure is an object that appears to be in the foreground; the ground is the space or shape that lies behind it. Figure and ground emerges when an object combines multiple Gestalt principles.

Now that we understand that visual object perception is based on identifying patterns, we must design visual displays that work with the way our minds perceive information. Stephen Kosslyn states: "We cannot exploit multimedia technology to manage information overload unless we know how to use it properly. Visual displays must be articulate graphics to succeed. Like effective speeches, they must transmit clear, compelling, and memorable messages, but in the infinitely rich language of our visual sense" (Kosslyn, 1990).

In his article, Kosslyn identifies and describes five key principles for articulate graphics:

=== Display Elements are Organized Automatically ===

This follows the Gestalt principles: objects that are nearby, collinear, or similar-looking tend to be perceived as groups. When designing information displays such as maps, indicators, landmarks, and other objects that are clustered together will appear grouped and related, which can cause confusion when the viewer needs to locate their exact position.

=== Perceptual Organization is Influenced by Knowledge ===

When we look at objects in a pattern for the first time, the organization may not be fully understood or remembered. If the pattern is seen again over time, however, we tend to chunk it and store it in memory. Think of a chessboard with its pieces laid out. A viewer who has never seen the game before will perceive the board as having many separate objects, while an experienced chess player will immediately identify the pieces and the relationships they have with each other and with the board. So when designing visual displays, it is essential to know the mental model of your users so they can quickly identify and relate to the information displayed.

=== Images are Transformed Incrementally ===

When we see an object move and transform its shape in incremental steps, we more easily understand that the two states belong to the same (or a related) object. If we only see the beginning state and the end state, our minds must do much more processing to understand the transformation, which takes more time and increases errors and confusion. So when designing carousel lists, make sure the viewer can see the incremental movement.
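
A minimal sketch of the idea, assuming simple linear interpolation between a start and an end position; a real carousel would normally use the platform's animation APIs rather than hand-rolled frames.

{{{#!python
def interpolate(start, end, steps):
    """Yield evenly spaced intermediate positions between start and end,
    so the viewer sees the movement rather than a jump between two states."""
    for i in range(1, steps + 1):
        t = i / steps
        yield start + (end - start) * t

# e.g. sliding a carousel item 240 px to the left over 8 frames
frames = list(interpolate(0, -240, 8))
print(frames)  # -30.0, -60.0, ... -240.0
}}}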

=== Different Visual Dimensions are Processed by Separate Channels ===

Object attributes such as color, size, shape, and position are processed by the mind through separate channels. The brain processes individual visual dimensions in parallel, but combinations of dimensions must be dealt with in sequence. For example, in a bullet list where every bullet is a black circle, we can immediately identify all of them; if we add a bullet that is black and the same size but diamond-shaped, our minds have to work harder to perceive it as different.

=== Color is Not Perceived as a Continuum ===

Designers often use a color scale to represent a range such as temperature: red is hot, blue is cold, and the temperatures in between are mapped across the visible spectrum. The problem is that our brains do not perceive hue as a linear dimension; we judge color largely by intensity and the amount of light. A better way to show such a range is to vary intensity and saturation.
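
A minimal sketch of the alternative, assuming a single-hue ramp whose lightness and saturation vary with the data value; the specific hue and ranges are arbitrary choices for illustration.

{{{#!python
import colorsys

def value_to_color(value, vmin, vmax):
    """Map a value onto a single-hue ramp by varying lightness and
    saturation rather than stepping through the hue spectrum."""
    t = (value - vmin) / (vmax - vmin)     # normalize to 0..1
    hue = 0.0                              # fixed hue (red); only intensity varies
    lightness = 0.9 - 0.6 * t              # higher values get darker, more intense
    saturation = 0.3 + 0.7 * t
    r, g, b = colorsys.hls_to_rgb(hue, lightness, saturation)
    return "#%02x%02x%02x" % (int(r * 255), int(g * 255), int(b * 255))

# e.g. five swatches for temperatures between 0 and 40 degrees
swatches = [value_to_color(t, 0, 40) for t in (0, 10, 20, 30, 40)]
print(swatches)
}}}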

TACTICS FOR CHOOSING??? -- MAYBE EVEN MY SCALE OF INFO DESIGN ORGANIZING. SIZE, SHAPE, COLOR... Note: Hey, Eric, there are several notes in the patterns that say things like choose circular vs. dead-end lists based on the type of data. 1) Be sure to define circular vs. dead-end a bit 2) Focus your discussion of IA theory on principles of deciding things about lists, how to choose which one, which variant, etc. Like, when is it okay to have a non-circular list... --ssh

== Patterns for Displaying Information ==

A valid way of thinking about the entire topic of interactive design is that it is about displaying information. This chapter in particular is concerned with components whose sole task is presenting ordered sets of information, so that users may understand and act upon them.

These patterns have been developed and TESTED??? based on how the human mind processes patterns, objects, and information:
