Sensation: Getting Information Into Our Heads
Your mind is like a leaky bucket. It holds plenty of information, but can easily let information slip away and spill out. If you can understand how visual information is processed and collected, you can create effective visual interactive displays that resemble the way the mind works.
This can help limit cognitive load and the risk of information loss during decision-making. Human perception is complex, and the many theories explaining its structure are beyond the scope of this book. A general description of visual sensation and perception follows.
Sensation is a process referring to the capture and transformation of information required for the process of perception to begin (Bailey 1996). Each of our sensors (eyes, ears, nose, skin, mouth) collects information, or stimuli, uniquely but all will transform the stimulus energy into a form the brain can process.
Collecting Visual Stimuli: How the Eye Works
The eye is the organ responsible for vision. Many people use the analogy that the eye works like a camera: both have a lens, an aperture (the pupil), and a sensor (the retina). However, the manner in which sensing and processing occur is very different, and designers should understand this at least a little in order to create displays that are easy to see and understand.
The eye collects, filters, and focuses light. Light enters through the cornea and is refracted through the pupil. The amount of light entering the lens is controlled by the iris. The lens focuses the beam of light and then projects it onto the back part of our retina where it contacts the photoreceptors known as rods and cones. These receptors are light sensitive and vary in relative density; there are about 100 million rods and only 6 million cones. The cones are used for seeing when there is bright light; three kinds of cones, each with their own pigment filter, allow perception of color. The rods are sensitive to dim lighting and are not color sensitive. These receptors convert light into electro-chemical signals which travel along the optic nerve to the brain for processing. Color deficits, commonly known as colorblindness -- affecting fully 10% of the male population, though almost no women -- are a result of reduced pigmentation in a cone, or loss of a whole type of cone.
Our eyes can only experience a narrow band of radiation in the electromagnetic spectrum, from approximately 400 nanometers (perceived as violet) to about 700 nanometers (perceived as red).
The eye is sensitive to stimuli in many ways at any moment, including the size of stimulus, its brightness and contrast, and the part of the retina that is stimulated.
Visual Acuity and the Visual Field
Visual acuity is the ability to see details and detect differences between stimuli and spaces. At the center of the retina lies the fovea. The fovea is tightly packed exclusively with cones (approximately 200,000), and it is here that our vision is most focused. The fovea covers the central 1 to 2 degrees of vision, and the central 1/2 degree is where we have our sharpest vision. The farther objects fall outside our foveal range, the lower their resolution and color fidelity. We can still detect items peripherally, but with less clarity. Color perception also varies by location in the visual field: blue can be detected about 60 degrees from the fixed focal point, while yellow, red, and green are only perceptible within a narrower visual field.
Factors affecting visual acuity depend on many things including: the size of the stimulus, the brightness and contrast of the stimulus, the region of the retina stimulated, and the physiological and psychological condition of the individual (Bailey, 1996).
Size of the Stimulus: Visual Angle
The actual size of an object matters little to how easily it can be perceived. What matters is its "visual angle," or its size relative to the eye, which accounts for both physical size and distance from the viewer. Discussions in various technical fields therefore often refer to angular resolution, since absolute resolution is unimportant.
The visual angle can be calculated using the following formula:
Visual Angle (minutes of arc) = (3438 × L) / D

where L is the length of the object perpendicular to the line of sight, and D is the distance from the front of the eye to the object.
Visual angle is typically measured in units much smaller than degrees, such as minutes or seconds of arc (60 minutes in a degree, 60 seconds in a minute). Other specialized units may also be encountered, such as milliradians, or values may simply be given in degrees with annoyingly large numbers of decimal places.
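As a quick reference, these unit relationships can be sketched in a few lines of Python (the function names here are illustrative, not from any standard library):

```python
import math

# 1 degree = 60 minutes of arc; 1 minute of arc = 60 seconds of arc.
def deg_to_arcmin(degrees):
    return degrees * 60

def arcmin_to_arcsec(arcmin):
    return arcmin * 60

def arcmin_to_milliradians(arcmin):
    # Convert minutes of arc to radians, then scale to milliradians.
    return math.radians(arcmin / 60) * 1000

print(deg_to_arcmin(1))                      # 60
print(arcmin_to_arcsec(1))                   # 60
print(round(arcmin_to_milliradians(16), 2))  # 4.65
```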
With an understanding of visual angle, we can determine the appropriate size of visual elements, including character sizes viewed at specific distances. According to the Human Factors Society (1988), the following visual angles are recommended for reading tasks:
- When reading speed is important, the minimum visual angle should not be less than 16 minutes of arc and not greater than 24 minutes of arc.
- When reading speed is not important, the visual angle can be as small as 10 minutes of arc.
- Characters should never be less than 10 minutes of arc or greater than 45 minutes of arc.
So, let’s assume you are designing text that is to be read quickly on a mobile device, with a viewing distance of 30 centimeters (11.8 inches). The equation would look like this:
Length = (16 minutes of arc × 30 cm) / 3438
The smallest acceptable character height would be 0.14 cm, or about 10 points. Remember, all this exists in the real world; you will have to take into account real-world sizes, never pixels, when designing for perception.
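The arithmetic above can be sketched as a small helper, using the 3438 constant from the formula (approximately the number of arc-minutes per radian); the function names are illustrative:

```python
ARCMIN_PER_RADIAN = 3438  # small-angle approximation: 60 * 180 / pi

def visual_angle_arcmin(object_size_cm, distance_cm):
    """Visual angle (in minutes of arc) subtended by an object."""
    return ARCMIN_PER_RADIAN * object_size_cm / distance_cm

def min_char_height_cm(target_arcmin, distance_cm):
    """Smallest character height (cm) that meets a target visual angle."""
    return target_arcmin * distance_cm / ARCMIN_PER_RADIAN

# Fast-reading minimum (16 minutes of arc) at a 30 cm viewing distance:
print(round(min_char_height_cm(16, 30), 2))  # 0.14 (cm)
```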
After our senses collect visual information, the brain begins to perceive and store it. Perception takes information delivered by our senses and integrates it with prior knowledge stored in memory. This process allows us to relate new experiences to old ones. During perception, our minds look for familiar patterns. Recognizing patterns is essential to object perception. Once we have identified an object, it is much easier to identify the same object on a subsequent appearance anywhere in the visual field (Biederman and Cooper, 1992).
The Gestalt School of Psychology was founded in 1912 to study how humans perceive form. The Gestalt principles they developed can help designers create visual displays based on the way our minds perceive objects. These principles, as they apply to mobile interactive design, are:
Proximity - Objects that are close together are perceived as being related and grouped together. When designing graphical displays, having descriptive text close to an image will cause the viewer to relate the two objects together. This can be very effective when dual coding graphical icons.
Similarity - Objects sharing attributes are perceived to be related, and will be grouped by the user. Navigation tabs that are similar in size, shape, and color, will be perceived as a related group by the viewer.
Continuity - Smooth, continuous objects imply they are connected. When designing links with nodes or arrows pointing to another object, viewers will have an easier time establishing a connected relationship if the lines are smooth and continuous rather than jagged.
Symmetry - Symmetrical relationships between objects imply relationships. Objects that are reflected symmetrically across an axis are perceived as forming a visual whole. This can work against you more easily than for you: if a visual design grid is too strict, unrelated items may be perceived as related, adding confusion.
Closure - A closed entity is perceived as an object. We have a tendency to close contours that have gaps in them. We also perceive closed contours as having two distinct portions: an inside and outside. When designing list patterns, like the grid pattern described in this chapter, use closure principles to contain either an image or label.
Relative Size - Smaller components within a pattern are perceived as objects. When designing lists, entities like bullets, arrows, or nodes inside a group of information will be viewed as individual objects that draw the eye. Therefore, make sure these objects are relevant to the information they relate to. Another example of relative size is a pie with a missing piece: the missing piece will stand out and be perceived as an object.
Figure and Ground - A figure is an object that appears to be in the foreground; the ground is the space or shape that lies behind the figure. When an object combines multiple gestalt principles, it separates clearly as figure from its ground.
Visual Information Processing
Processing of visually collected information begins early in our visual perception process. In a parallel bottom-up, top-down process, neural activity rides two information-driven waves concurrently. The first wave occurs within the bottom-up process: information collected by the retinal image passes to the back of the brain along the optic nerve in a series of steps that begin pattern recognition.
Step 1. Features in our visual field, like size, orientation, color, and direction are processed by specific neurons. Millions of these features are processed and used to construct patterns.
Step 2. Patterns are formed from processed features depending on our attention demands. Here, visual space is divided up by color and texture. Feature chains become connected and form contours. Many of these cognitive pattern recognitions are described through Gestalt Principles.
Step 3. Objects most relevant to our current task are formed after the pattern-processing stages filter them. These visual objects are stored in our working memory, which is limited in capacity: it holds only about three visual objects in attention at one time. These visual objects are linked to various other kinds of information that we have previously stored.
While the first, bottom-up wave is processing patterns, the second, top-down wave is determining which information is relevant to us at that moment and is driven by a goal. In addition, we associate actions with what we perceive, priming our behaviors. Through this series of associated visual and nonvisual information and action priming, we can perceive the complex world around us. For more information about how we perceive visual information, read Visual Thinking for Design.
Now that we understand that visual object perception is based on identifying patterns, we must design visual displays that mimic the way our minds perceive information. Stephen Kosslyn states: “We cannot exploit multimedia technology to manage information overload unless we know how to use it properly. Visual displays must be articulate graphics to succeed. Like effective speeches, they must transmit clear, compelling, and memorable messages, but in the infinitely rich language of our visual sense” (Kosslyn, 1990).
Display Elements are Organized Automatically
This follows Gestalt principles: objects that are close by, collinear, or similar-looking tend to be perceived as groups. When designing information displays, such as maps, indicators, landmarks, and objects that are clustered together will appear to be grouped and to share a relationship. This may cause confusion when the viewer needs to locate their exact position.
Perceptual Organization is Influenced by Knowledge
When looking at objects in a pattern for the first time, the organization may not be fully understood or remembered. However, if the pattern is seen again over time, we tend to chunk it and store it in memory. Think of a chessboard with its pieces laid out. A viewer who has never seen the game before will perceive the board as having many objects. An experienced chess player, however, will immediately identify the objects and the relationships they have with each other and with the board. So when designing visual displays, it's essential to know the mental model of your users so they may quickly identify and relate to the information displayed.
Images Are Transformed Incrementally
When we see an object move and transform its shape in incremental steps, we have an easier time understanding that the two states are related or identical. However, if we only see the object’s beginning state and end state, our minds are forced to use significant mental processing to understand the transformation. This takes much more time and also increases errors and confusion. When designing a list where items move, such as a carousel, make sure the viewer can see the incremental movement.
Different Visual Dimensions are Processed by Separate Channels
Object attributes such as color, size, shape, and position are processed by our minds in separate channels. The brain processes many individual visual dimensions in parallel at once, but can only deal with multiple dimensions in sequence. For example, in a bulleted list whose bullets are all black circles, we can immediately identify all of them. However, if you add a bullet that is black and the same size but diamond-shaped, our minds have to work harder to perceive it as different.
Color is Not Perceived as a Continuum
Designers often use color scales to represent a range: red is hot, blue is cold, and temperatures in between are represented by the visual spectrum between them. The problem is that our brains do not perceive color as the linear dimension it physically is. We perceive color based on the intensity and amount of light. A better way of showing temperature differences is to vary intensity and saturation.
If a perceptually orderable sequence is required, a black to white, red to green, yellow to blue, or saturation (dull to vivid) sequence can be used (Ware, 2000).
When a high level of detail is to be displayed, the color sequence should be based mostly on luminance to take advantage of the capacity of this channel to convey high spatial frequencies. When there is little detail, a chromatic sequence or a saturation sequence can be used (Rogowitz and Treinish, 1996).
In many cases, the best color sequence should vary through a range of colors, but with each successive hue chosen to have higher luminance than the previous one (Ware, 1988).
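Ware's suggestion can be sketched as follows. This is a rough illustration, assuming Python's standard colorsys module and a Rec. 709 luma approximation; the hue count, lightness, and saturation values are arbitrary choices, not from any cited source:

```python
import colorsys

def luma(rgb):
    """Approximate perceived luminance (Rec. 709 weights), rgb in 0..1."""
    r, g, b = rgb
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def luminance_ordered_ramp(steps=8, lightness=0.5, saturation=0.9):
    """Pick hues around the color wheel, then order them so each
    successive color has higher luminance than the previous one."""
    hues = [i / steps for i in range(steps)]
    colors = [colorsys.hls_to_rgb(h, lightness, saturation) for h in hues]
    return sorted(colors, key=luma)

ramp = luminance_ordered_ramp()
lums = [luma(c) for c in ramp]
print(all(a <= b for a, b in zip(lums, lums[1:])))  # True: ordering is monotonic
```

Because the sequence varies in both hue and luminance, it stays readable even for viewers with reduced color discrimination, since the luminance channel alone still carries the ordering.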
A Brief Look Into Visual Perception
Legibility refers to the ease with which elements (e.g., letters, numbers, symbols) can be immediately detected, discriminated, and identified from each other.