The term “widget” can mean a number of things, even within related internet technologies. Even the savvy user may be confused by the lack of common terminology and the absence of any inherent meaning: the term may apply to bits of code, applets, engines, and GUI elements. This book, and this part in particular, is concerned solely with mobile widgets. These widgets are uniquely designed to accommodate differences in display size and device technology, as well as differing user needs, tasks, and browsing goals.
Mobile widgets are always-on, internet-connected, auto-updated, lightweight applications that reside within an existing OS framework and appear as miniaturized (i.e., never full-screen) display elements within an existing or enclosing GUI. Mobile widgets are highly reusable, employed repeatedly across the device’s OS.
The widgets that will be discussed here are subdivided into the following chapters:
Chapter 6, Lateral Access
Chapter 7, Drilldown
Chapter 8, Labels & Indicators
Chapter 9, Information Controls
Types of Widgets
Lateral Access
Whether your information architecture is organized hierarchically or laterally, its presentation and access are affected by the potentially small mobile display. One option to consider is to use lateral access widgets to help the user quickly navigate through and select this content. This chapter will explain research-based frameworks, tactical examples, and descriptive mobile patterns, and will discuss the following patterns:
Drilldown
Using a hierarchically structured information architecture allows content to be laid out from general to specific, relying on parent-child relationships. This drilldown, top-down approach is effective in providing users additional related content and commands within multiple information tiers. This chapter will explain research-based frameworks, tactical examples, and descriptive mobile patterns, and will discuss the following patterns:
Labels & Indicators
In some situations, small labels, indicators, and other additional pieces of information are required to describe content. Mobile users each have unique goals. Some require instant additional information without clicking. Others may need extra visual cues to help them quickly locate information. In any case, information labels must be presented appropriately, with consideration for valuable screen real estate, cultural norms, and standards. This chapter will discuss the following patterns:
Information Controls
Finding specific items within a long list, large page, or other data array can be challenging. Without appropriate controls for locating specific information quickly, the user experience will be frustrating. This chapter will discuss how the following widgets can be used to locate and reveal information appropriately:
Other Widget Types
GUI Widget - Dating back to early X Window System implementations in the 1980s, the term widget refers to any GUI element, especially in windowing systems (widget, get it?).
Programmers and interaction designers for desktop computer systems use widgets in the design of the interface for their systems. Widgets are things like buttons, pulldown menus and scroll bars. This definition naturally tends to be extended to similar elements in any GUI, including mobile phones, and can even refer to elements of pure code, without a direct interface element.
Web Widgets - Bits of embedded code that generate dynamic content within a web page are also called widgets. The first web widgets were visit counters (followed shortly by third-party advertising), but they have since been extended to countless other uses.
While many widgets are built for and by specific systems or pages, tens of thousands of general widgets are available for use on most any page. These are also known as modules, snippets, and plug-ins, depending on the system being used.
Desktop Widgets - Any applet, easily accessible as a discrete UI element from the desktop, is a widget. Widget-like items have been available since at least the mid-1990s; consider items such as Apple's Desk Accessories and Microsoft's Active Desktop.
Widget engines are systems to enable installation of small, generally internet-enabled applets that can be used directly from the desktop, or immediately accessed (usually with one keystroke) from anywhere in the GUI.
Mobile device users have needs that differ from those of desktop computer users, though their needs for widgets are not much different from those for any other application, or for the device itself. Considering those needs, widget engines should all abide by a small number of core principles:
- The widget information should be immediately available without clicks. The likelihood of distraction means the glanceable nature of widgets is their key feature. That means the selected widgets have to be on the device desktop, home or idle screen and always be running.
- This means widgets can bend some of the rules of mobile design. Unlike full applications, they can be smaller than the screen, and they can use smaller type than usual if the space demands it, for example.
- The information in the widget should be contextual. Allow (and perhaps require) the customer to provide information so they see the right weather and time, but if possible use location, time-of-day, contacts, activity and other context information to present the most relevant content.
- The widgets displayed, and their position, should be easy to customize. Since lots of people cannot be bothered to customize, try to use information gathered from the user to offer a personalized display for them.
- Widgets must be beautiful. Preferably genuinely visually appealing, but also easy to use, engaging, and enjoyable. If you don't believe in the value of visual appeal, consider how many mobile users personalize their devices with wallpapers or screensavers.
Widget Information Architecture Approach
Far too many mobile widgets are like the links on a bad web portal: icons, banners, or a word or two that link to a start page from which you can launch an app. The good ones use the stepped portlet approach to information design:
- A tiny bit of focused information, with a link to...
- ...more breadth and depth about that fragment of information, and links to...
- ...the entire category of information.
Take as an example a weather widget. The small box on the home page can be as small as an icon conveying the primary info for the day: cloudy, hot, rainy. Click, and you go straight to the “today's weather” page, short enough that the critical info can be viewed without scrolling. That page offers additional links (maybe even in a softkey or option menu) to the weather home page or other pages.
Paul M. Fitts (1912-1965) was a psychologist at both Ohio State University and the University of Michigan. In 1954, he created a mathematical model predicting how long it takes a user to select an object, whether with an on-screen pointer or by physically touching it, based on the target's size and its distance from the selector's starting point.
Fitts' Law is widely used today by UX designers, human factors specialists, and engineers when designing graphical user interfaces and comparing the performance of various input devices.
Fitts' Law finds that:
- The farther a target object is from the initial starting position, the longer it takes to make a successful selection.
- That time increases further when the target is too small.
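The two findings above combine into Fitts' original 1954 formulation, MT = a + b * log2(2D/W), where D is the distance to the target and W is its width. A minimal sketch follows; the constants a and b are purely illustrative assumptions here, since in practice they are fit empirically for each device and input method:

```python
import math

def fitts_movement_time(distance, width, a=0.1, b=0.15):
    """Predicted time (seconds) to acquire a target of the given
    width at the given distance, per Fitts' 1954 model.

    a and b are illustrative constants, not empirically fitted values.
    """
    # The "index of difficulty" grows with distance and shrinks with width.
    index_of_difficulty = math.log2(2 * distance / width)
    return a + b * index_of_difficulty

# A distant, small target takes longer to acquire than a near, large one:
far_small = fitts_movement_time(distance=200, width=10)
near_large = fitts_movement_time(distance=50, width=40)
```

Note how doubling the target width has the same effect on predicted time as halving the distance, which is why enlarging touch targets is such an effective mobile optimization.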
In mobile devices, we know that screen display size is limited and its space is valuable. In addition, mobile users require quick access to the content they are looking for. Applying Fitts' Law alongside these constraints can improve the user experience.
- Buttons and selectable controls should be an appropriate size, because small targets are relatively difficult to select.
- Pop-ups and tooltips can usually be opened or activated faster than pull-down menus, since the user avoids pointer travel.
- Reduce the number of clicks to access content by providing surface level sorting and filtering controls to access indexed information quickly.
Wayfinding Across Content
Whether interacting on a PC, a kiosk, or a mobile device, your users can easily get lost when navigating content. To reduce the frustration of being lost, visual, haptic, and even auditory cues can be used to help guide users to where they need to be. A navigation system must provide cues that answer the following user questions:
- Where is my current state or position within the environment? Where am I on this page?
- Where is my destination? Where do I have to go to achieve my end goal?
- How do I get to my destination? How am I going to navigate across content to achieve my end goal?
- How do I know when I have arrived?
- How do I plan my way back? Are there alternate routes I can take?
Kevin Lynch, an urban planner and author of The Image of the City (1960), determined that we rely on certain objects, as cues, to help us identify our position within an environment. Let’s examine how these objects, as they relate to widgets, can be used to improve navigation.
Paths: The channels along which a person moves; examples are streets, walkways, transit lines, and canals. On mobile devices, paths are the routes users take to access their desired content. These paths can follow both lateral and hierarchical organization structures. Help the user define routes by clearly labeling, color coding, and grouping related content. Use location within widgets to indicate the user’s current position along the path. Provide alternate paths to access the same information.
Edges: Linear elements that define boundaries between two regions, such as walls, buildings, and shorelines. On mobile devices, edges can include the perimeter of the viewport, fixed menus, scroll bars, and annunciator rows. Use edges to appropriately contain navigation.
Nodes: Focal points, like distinct street intersections. On mobile devices, these may take the form of graphics, labels, and indicators that describe small pieces of content.
Districts: Areas within boundaries that share common features, such as neighborhoods, downtowns, and parks.
Landmarks: Highly noticeable objects that serve as reference points.
Widgets on the Idle Screen
Most widgets should appear directly on the handset's idle screen (aka: standby screen, start screen or home deck). Users should never be required to launch the widget engine as a separate application. While these free-standing apps may have some value, they do not meet the requirements of a readily-usable (glanceable) mobile widget.
- Try to make everything fit on one screen, without scrolling. If scrolling is required, don't require side-scroll, ever.
- Consider what is available on a phone without a widget engine. The clock will need a home, or perhaps a widget of its own, for example. Can items in the device status bar (battery, signal) be presented better as widgets?
- Adding, removing and moving widgets should be easily accomplished. Your users may not have a desktop computer, so some or all of this control should be available from the handset.
- New widgets should be able to be downloaded directly with the handset. Note that while as few as 1% of your users may actually customize, the ability to customize gives the rest an irreplaceable sense of empowerment.
- Wallpapers, themes and other custom features users have come to expect from their phones, should continue to be offered, and must work with the widget engine. Preferably, the widgets themselves respond to these changes (such as absorbing facets of the theme as a style change).
- Primary handset functions should still be readily available. This may require dedicated spaces to communicate what takes the bulk of the screen in typical phones today.
Conspicuity with Color
Conspicuity, while related to legibility, also involves other display characteristics. It describes how well an object can be detected, and how well it captures a user’s attention amidst noise or other competing information.
Color can be used to classify, label, and emphasize information displayed on a screen. When using color for these purposes, understand that our processing abilities have limits that affect signal detection.
Opponent Processing Theory
In 1892, the German physiologist Ewald Hering theorized that there are six elementary colors, arranged perceptually as opponent pairs along three axes: black-white, red-green, and yellow-blue. Each color is either positive (excitatory) or negative (inhibitory). These opponent colors are never perceived at the same time, because the visual system cannot be simultaneously excited and inhibited.
Our modern color theory stems from this. Today, we know that the input from the cones is processed into three distinct channels. The luminance channel (black-white) is based on input from all of the cones. We also have two chromatic channels: the red-green channel is based on the difference between the long- and middle-wavelength cone signals, and the yellow-blue channel is based on the difference between the short-wavelength cones and the sum of the other two (Ware, 2000).
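The three channels described above can be sketched as simple combinations of the three cone signals. This is a deliberately simplified illustration of the opponent-channel structure, not a calibrated color-appearance model; real models weight the cone inputs unequally:

```python
def opponent_channels(l_cone, m_cone, s_cone):
    """Toy opponent-channel transform of long-, middle-, and
    short-wavelength cone responses (unweighted, for illustration)."""
    # Luminance (black-white): driven by input from all the cones.
    luminance = l_cone + m_cone + s_cone
    # Red-green: difference of long- and middle-wavelength signals.
    red_green = l_cone - m_cone
    # Yellow-blue: sum of L and M signals minus the S signal.
    yellow_blue = (l_cone + m_cone) - s_cone
    return luminance, red_green, yellow_blue

# Equal cone stimulation drives the two chromatic channels toward
# neutral, leaving mostly luminance (an achromatic percept):
channels = opponent_channels(1.0, 1.0, 1.0)
```

The signed outputs capture Hering's observation: a stimulus can push the red-green channel toward red or toward green, but never both at once.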
In 1986, Post and Green ran an experiment to test how effectively subjects could name 210 colors on a computer screen. The results worth noting are:
- The pure monitor red was actually named orange most of the time.
- A true red color required the addition of a blue monitor primary.
- Eight colors out of the 210 were consistently named, suggesting that only a small number of colors should be used to label categories.
Color for Labeling
Color for labeling, or more technically, nominal information coding, is used because color can be an effective way to make objects easy to remember and visually classify.
Perceptual factors to be considered in choosing a set of color labels:
Unique hues: Based on the Opponent Theory, they are: Red, Green, Yellow, Blue, Black, and White.
Contrast with background: Our eyes are edge detectors. When objects must appear in front of a variety of backgrounds, it may be beneficial to give the color-coded object a thin white or black border. Consider why warning street signs have such borders, too.
Color blindness: About 7% of males and about 0.5% of females are color blind in some way; the most common form is red-green color blindness.
Number: We are limited in the number of color codes we can rapidly perceive. Studies recommend using between five and ten.
Field size: Object size affects how you should color code. Avoid coding small objects (less than half a degree of visual angle) with colors that differ only in the yellow-blue direction, to prevent small-field color blindness.
Conventions: Color conventions are culturally defined and accepted. When using color-naming conventions, be cautious of cultural differences. Some common conventions are:
- red = hot, danger
- blue = cold
- green = life, environment, go
- In China, red = life and good fortune, while green = death.
Color Conspicuity Guidelines For Mobile Devices
- Use colors with high contrast between text and background. Optimal legibility requires black text on a white background; white text on a black background is effective as well.
- For text contrast, the International Organization for Standardization (ISO 9241, part 3) recommends a minimum luminance ratio of 3:1 between text and background, though a ratio of 10:1 is preferred (Ware, 2000).
- For text on a background, purely chromatic differences are not suitable for displaying any kind of fine detail. You must have considerable luminance contrast in addition to color contrast.
- When large areas of color-coding are needed, like with map regions, use colors with low saturation.
- Small objects that are color-coded should use high-saturation.
- The majority of colorblind people cannot distinguish colors that differ in the red-green direction.
- Recommended colors for color-coding: 1. Red, 2. Green, 3. Yellow, 4. Blue, 5. Black, 6. White, 7. Pink, 8. Cyan, 9. Gray, 10. Orange, 11. Brown, 12. Purple (Ware, 2000).
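The 3:1 and 10:1 figures in the guidelines above are luminance ratios. As a rough way to check a color pair against them, the sketch below computes a contrast ratio using the WCAG 2.x relative-luminance and contrast-ratio formulas, which come from the web accessibility guidelines rather than from ISO 9241 or Ware:

```python
def srgb_to_linear(channel):
    """Convert one 8-bit sRGB channel (0-255) to linear light."""
    c = channel / 255.0
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(r, g, b):
    """Relative luminance of an sRGB color, per WCAG 2.x."""
    rl, gl, bl = (srgb_to_linear(c) for c in (r, g, b))
    return 0.2126 * rl + 0.7152 * gl + 0.0722 * bl

def contrast_ratio(fg, bg):
    """Luminance contrast ratio between two (r, g, b) colors, 1:1 to 21:1."""
    l1, l2 = relative_luminance(*fg), relative_luminance(*bg)
    lighter, darker = max(l1, l2), min(l1, l2)
    return (lighter + 0.05) / (darker + 0.05)

# Black text on a white background hits the maximum ratio of 21:1,
# comfortably above the preferred 10:1:
ratio = contrast_ratio((0, 0, 0), (255, 255, 255))
```

A mid-gray on white, by contrast, can easily fall below even the 3:1 minimum, which is why purely "subtle" text colors hurt legibility on small mobile displays.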