
The term “widget” can mean a number of things, even within related internet technologies. Even the savvy user may be confused by the lack of common terminology and the lack of any inherent meaning. The term may apply to bits of code, applets, engines, and GUI elements.

However, the scope of this book, and this part, is solely concerned with mobile GUI widgets. These widgets are display elements such as buttons, links, icons, indicators, tabs, and tooltips.

The widgets discussed in this section serve to:

  • Display a small amount of directly related information.
  • Provide an alternative view of the same information, in an organic manner.
  • Provide access to related controls or settings.
  • Display information about the current state of the device.
  • Provide quick access to indexed information.

The widgets discussed here are divided into the following chapters:

Types of Widgets

Lateral Access

Whether your information architecture is organized hierarchically or laterally, its presentation and access are affected by the potentially small mobile display. One option to consider is using lateral-access widgets to help the user quickly navigate through and select content. This chapter explains research-based frameworks, tactical examples, and descriptive mobile patterns to use.

This chapter will discuss the following patterns:

Tabs, Peel Away, Simulated 3D Effects, Pagination, and Location Within.

Drilldown

Using a hierarchically structured information architecture allows content to be laid out from general to specific, relying on parent-child relationships. This drilldown, top-down approach is effective in providing users additional related content and commands within multiple information tiers.

This chapter will discuss the following patterns:

Link, Button, Indicator, Icon, Stack of Items, and Annotation.

Labels & Indicators

In some situations, small labels, indicators, and other additional pieces of information may be required to describe content. Mobile users each have unique goals: some require instant additional information without clicking, while others need additional visual cues to help them locate information quickly. In any case, information labels must be presented appropriately, with consideration for valuable screen real estate, cultural norms, and standards.

This chapter will discuss the following patterns:

Ordered Data, Tooltip, Avatar, Wait Indicator, and Reload, Synch, Stop.

Information Controls

Finding specific items within a long list or other large page or data array can be challenging. Without appropriate controls to locate specific information quickly, the user experience becomes quite frustrating. This chapter discusses how widgets can be used to locate and reveal information appropriately.

This chapter will discuss the following patterns:

Zoom & Scale, Location Jump, Search Within, Icon, and Sort & Filter.

Helpful Knowledge for this Section

Before you dive into each pattern chapter, we like to provide some extra knowledge in the section introductions. This extra knowledge spans multi-disciplinary areas such as human factors, engineering, psychology, and art. Due to the broad characteristics of widgets, we find it helpful for you to become knowledgeable in the following relevant areas:

  • Fitts' Law
  • Wayfinding
  • Color Conspicuity

Fitts’ Law

Paul M. Fitts (1912-1965) was a psychologist at both Ohio State University and the University of Michigan. In 1954, he created a mathematical formula describing how long it takes a user to select an object, whether on screen or by physically touching it, based on the target's size and its distance from the selector's starting point. Fitts' Law is widely used today by UX designers, human factors specialists, and engineers when designing graphical user interfaces and comparing the performance of various input devices.

Fitts' Law finds that:

  • The time required to move to a target is a function of the target's size and its distance.
  • The farther a target object is from the initial starting position, the longer a successful selection takes.
  • That time increases further when the target size is too small.
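The findings above can be sketched numerically. This is a minimal illustration using the Shannon formulation of Fitts' Law, MT = a + b * log2(D/W + 1); the constants a and b are device-dependent and the values below are illustrative assumptions, not measured data:

```python
import math

def fitts_mt(distance, width, a=0.2, b=0.1):
    """Predicted movement time (seconds) via the Shannon formulation
    of Fitts' Law: MT = a + b * log2(D/W + 1).
    a and b are empirically fitted, device-dependent constants;
    the defaults here are purely illustrative."""
    index_of_difficulty = math.log2(distance / width + 1)
    return a + b * index_of_difficulty

# A small, distant target takes longer to hit than a large, nearby one:
slow = fitts_mt(distance=80, width=4)   # tiny icon far from the finger
fast = fitts_mt(distance=20, width=12)  # large button close by
```

Comparing `slow` and `fast` shows why shrinking a touch target or pushing it to the far corner of the screen slows the user down.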

In mobile devices, we know that screen display size is limited and its space is valuable. In addition, mobile users require quick access to the content they are looking for. Applying Fitts' Law within these constraints can improve the user experience:

  • Buttons and selectable controls should be an appropriate size, because small ones are relatively difficult to click. Using the screen bezel overflow can provide a cheat for placing smaller selectable buttons: the user can place part of their finger off the screen while still activating the target. Refer to the General Touch Interaction Guidelines in the Input & Output section introduction for more information.

  • Pop-ups and tooltips can usually be opened or activated faster than pull-down menus, since the user avoids travel.
  • Reduce the number of clicks to access content by providing surface level sorting and filtering controls to access indexed information quickly.

Wayfinding Across Content

Whether interacting with a PC, a kiosk, or a mobile device, your users can easily get lost when navigating content. To reduce the frustration of being lost, visual, haptic, and even auditory cues can be used to guide the user to the place he needs to be. A navigation system must provide cues that answer the following user questions:

  1. Where is my current state or position within the environment? Where am I on this page?
  2. Where is my destination? Where do I have to go to achieve my end goal?
  3. How do I get to my destination? How am I going to navigate across content to achieve my end goal?
  4. How do I know when I have arrived?
  5. How do I plan my way back? Are there alternate routes I can take?

Kevin Lynch, an urban planner and author of The Image of the City (1960), determined that we rely on certain objects, as cues, to help us identify our position within an environment. Let’s examine how these objects, as they relate to widgets, can be used to improve navigation.

  • Paths: The channels along which a person moves; examples are streets, walkways, transit lines, and canals. On mobile devices, paths are the routes users take to access their desired content, and can follow both lateral and hierarchical organization structures. Help the user define routes by clearly labeling, color coding, and grouping related content. Use Location Within widgets to show the user’s current position along the path, and provide alternate paths to the same information.

  • Edges: Linear elements that define boundaries between two regions, such as walls, buildings, and shorelines. On mobile devices, edges can include the perimeter of the viewport, fixed menus, scroll bars, and annunciator rows. Use edges to appropriately contain navigation.

  • Nodes: Focal points, like distinct street intersections. On mobile devices, these may be graphics, labels, and indicators that describe small pieces of content.

  • Districts: Areas within boundaries that share common features, such as neighborhoods, downtowns, and parks.

  • Landmarks: Highly noticeable objects that serve as reference points.

Conspicuity with Color

Conspicuity, while involving legibility, also implies other display characteristics. It describes how well an object can be detected, and how well it captures a user’s attention amongst noise or other competing information.

Color can be used to classify, label, and emphasize information displayed on a screen. When using color for these purposes, you need to understand that we have limits in our processing abilities that affect signal detection.

Opponent Processing Theory

In 1892, the German psychologist Ewald Hering theorized that there are six elementary colors, arranged perceptually as opponent pairs along three axes: black-white, red-green, and yellow-blue. Each color is either positive (excitatory) or negative (inhibitory). Opponent colors are never perceived at the same time, because the visual system cannot be simultaneously excited and inhibited.

Our modern color theory stems from this. Today, we know that input from the cones is processed into three distinct channels:

  • The luminance channel (black-white) combines the outputs of the long- and middle-wavelength-sensitive cones.
  • The red-green channel is based on the difference of long and middle-wavelength cone signals.
  • The yellow-blue channel is based on the difference between the short-wavelength cones and the sum of the other two (Ware, 2000).
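The three channels above can be illustrated with a toy calculation. The combination weights below are illustrative assumptions, not physiological measurements; the sketch only shows the sum-and-difference structure of the opponent channels:

```python
def opponent_channels(L, M, S):
    """Combine long (L), middle (M), and short (S) wavelength cone
    responses into the three opponent channels described by
    opponent-process theory. Unit weights are an illustrative
    simplification, not measured physiology."""
    luminance = L + M             # black-white: sum of L and M cones
    red_green = L - M             # red-green: L minus M difference
    yellow_blue = (L + M) - S     # yellow-blue: L+M sum versus S
    return luminance, red_green, yellow_blue

# A stimulus that excites mostly long-wavelength cones pushes the
# red-green channel positive, i.e. toward "red":
lum, rg, yb = opponent_channels(L=0.8, M=0.2, S=0.1)
```

Note how a single cone signal contributes to several channels at once, which is why purely chromatic contrast (no luminance difference) is weak for fine detail, as discussed in the guidelines below.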

Color for Labeling

Color for labeling, or more technically nominal information coding, is used because color can be an effective way to make objects easy to remember and to classify visually.

Perceptual factors to be considered in choosing a set of color labels:

  • Unique hues: Based on the Opponent Theory, they are: Red, Green, Yellow, Blue, Black, and White.

  • Contrast with background: Our eyes are edge detectors. When objects must sit in front of a variety of backgrounds, it may be beneficial to add a thin white or black border around the color-coded object. Consider why alert street signs have such borders, too.

  • Color blindness: About 7% of males and only 0.5% of females are color blind in some way. The most common form is red-green color blindness.

  • Number: We are limited in the number of color codes we can rapidly perceive. Studies recommend using between five and ten codes.

  • Field Size: Object size affects how you should color-code. Avoid coding small objects (under half a degree of visual angle) with colors that differ only in the yellow-blue direction, to prevent small-field color blindness.

  • Conventions: Color conventions are culturally defined and accepted. When using color-naming conventions, be cautious of cultural differences. Some common conventions are:

    • red = hot, danger
    • blue = cold
    • green = life, environment, go
    • In China, red = life and good fortune, while green = death.
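The half-degree threshold mentioned under Field Size can be checked with simple geometry. This sketch computes the visual angle an object subtends; the 350 mm viewing distance is an illustrative assumption about how far users typically hold a phone:

```python
import math

def visual_angle_degrees(object_size_mm, viewing_distance_mm):
    """Visual angle subtended by an object, in degrees:
    angle = 2 * atan(size / (2 * distance))."""
    return math.degrees(
        2 * math.atan(object_size_mm / (2 * viewing_distance_mm)))

# A 3 mm icon viewed at 350 mm (an assumed phone-holding distance)
# subtends roughly half a degree, so coding it purely in the
# yellow-blue direction would risk small-field color blindness:
angle = visual_angle_degrees(3, 350)
```

When a color-coded element falls near or below that threshold, make it larger, add a border, or switch to a luminance or red-green difference.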

Color Conspicuity Guidelines For Mobile Devices

  • Use colors with high contrast between the text and the background. Optimal legibility requires black text on a white background; white text on a black background is effective as well.
  • For text contrast, the International Standards Organization (ISO 9241, part 3) recommends a minimum luminance ratio of 3:1 between text and background, though a ratio of 10:1 is preferred (Ware, 2000).
  • For text on a background, purely chromatic differences are not suitable for displaying fine detail. You must have considerable luminance contrast in addition to color contrast.
  • When large areas of color-coding are needed, like with map regions, use colors with low saturation.
  • Small objects that are color-coded should use high-saturation.
  • The majority of colorblind people cannot distinguish colors that differ in the red-green direction.
  • Recommended colors for color-coding: 1. Red, 2. Green, 3. Yellow, 4. Blue, 5. Black, 6. White, 7. Pink, 8. Cyan, 9. Gray, 10. Orange, 11. Brown, 12. Purple (Ware, 2000).
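The luminance-ratio guideline above can be checked in a few lines. This is a sketch only: the small flare term is an assumption borrowed from WCAG-style contrast formulas (the ISO text cited here simply specifies a ratio), and it keeps the ratio finite for pure black:

```python
def luminance_ratio(l_a, l_b, flare=0.05):
    """Ratio of the brighter to the darker relative luminance
    (each in the range 0.0-1.0). The flare term is an assumption
    modeled on WCAG-style formulas to avoid division by zero."""
    hi, lo = max(l_a, l_b), min(l_a, l_b)
    return (hi + flare) / (lo + flare)

# Black text (relative luminance ~0.0) on white (~1.0) gives a 21:1
# ratio, comfortably above the recommended 3:1 minimum:
ratio = luminance_ratio(1.0, 0.0)
```

A quick pass of such a check over a mobile color palette will flag text/background pairs that fall below the 3:1 floor before they reach a small, sunlit screen.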

Widget (last edited 2011-12-13 05:27:37 by shoobe01)