For the newest version of this presentation, always go to: 4ourth.com/tppt
For the latest video version, see: 4ourth.com/tvid
Summary in text and all the linked articles, research and references are at: 4ourth.com/Touch
We are finally starting to think about how touchscreen devices really work: to design properly sized targets, to treat touch as different from mouse selection, and to create common gesture libraries.
But despite this we still forget the user. Fingers and thumbs take up space, and cover the screen. Corners of screens have different accuracy than the center. It's time to re-evaluate what we think we know.
Steven reviews his ongoing research into how people actually interact with mobile devices, presents some new ideas on how we can design to avoid errors and take advantage of this new knowledge, and leaves you with 10 (relatively) simple steps to improve your touchscreen designs tomorrow.
Fingers, Thumbs and People
Designing for the way your users really hold and touch their phones and tablets. @shoobe01 #UXPA2014
We are outnumbered.
Many more mobiles than people.
80% growth in use.
Users prefer mobile.
Design for mobile.
What we used to know: 44 px
But now we know…
1,333 · 19 · 120,626,225 · 651
Touch is not about:
• Finger size
• Thumb reach
• No-go corners
• Pinpoint accuracy
• iPhones only
Where do I start?
10 design guidelines for fingers, touch and people
1. Your users are not like you.
1. Your users are not like you.
• Design for every user.
• Accept that users change.
• Plan for every device.
2. Users prefer to touch the center of the screen.
2. Users prefer to touch the center of the screen.
• Place key actions in the middle.
• Secondary actions along the top and bottom.
3. Users prefer to view the center of the screen.
3. Users prefer to view the center of the screen.
• Place key content in the middle.
• Allow users to scroll content to comfortable viewing positions.
4. Fingers get in the way.
4. Fingers get in the way.
• Make room for fingers around targets.
• Put your content or functions where they won’t be covered.
• Leave room for gesture and scroll.
5. Different devices are used in different ways.
Minimum text size by device class:
2.5”: 4 pt · 3.5”: 6 pt · 5”: 7 pt · 7–10”: 8 pt · In stand: 10 pt
5. Different devices are used in different ways.
• Support all input types.
• Predict use by device class.
• Account for distance by adjusting sizes.
6. Touch is imprecise.
6. Touch is imprecise.
• Make touch targets as large as possible.
• Tap entire containers.
• Design in lists and large boxes.
7. Touch is inconsistent.
7. Touch is inconsistent.
• Design by zones.
• Don’t force edge selection.
• Very large spacing along the top and bottom.
8. People only click what they see.
8. People only click what they see.
• Attract the eye.
• Afford action.
• Be readable.
• Inspire confidence.
9. Don’t forget cases and bezels.
9. Don’t forget cases and bezels.
• Provide room for edge taps and off-screen gestures.
10. Work at human scales.
Nominal vs. actual pixel densities:
MDPI: 160 ppi nominal; really 180 ppi (112%)
XHDPI: 320 ppi nominal; really 267 ppi (83%)
XXHDPI: 480 ppi nominal; really 445 ppi (92%)
10. Work at human scales.
• Paper is your friend.
• Test and demonstrate on real devices.
• Pixels are a lie. Plan accordingly.
Steven Hoober · firstname.lastname@example.org · +1 816 210 0455 · @shoobe01 · shoobe01 on: www.4ourth.com
Read more on design for touch, mobile and people: 4ourth.com/wrtg
Appendix: Touch technology, additional data, and other stuff
• Orientation: 60% landscape, 40% portrait, but… which device did you mean?
• 84% touch with the right hand.
• Age, sex, region? No perceptible changes, but…
Proximity · Accelerometer · Gyrosensor · Light color · Gesture · Cover sensor · Light level · Capacitive touchscreen
Programming Touch Events
Contact me for consulting, design, to follow up on this deck, or just to talk:
Steven Hoober · email@example.com · +1 816 210 0455 · @shoobe01 · shoobe01 on: www.4ourth.com
As a mostly-mobile guy, I have to remind myself that not everyone knows how big and important it is. So first, a brief overview of why this presentation matters…
There are more mobile devices than humans. Yes, over 7 billion devices in use.
Computer sales are plummeting. PC sales dropped roughly 10% in 2013. Mobiles continue to grow, and for several years now have outsold desktops and laptops.
If you heard that iPad sales are flattening, remember that’s just one device by one maker. There will be more tablets sold in 2014 than desktops and laptops combined.
Even with all this scale in place, mobile use rates continues to grow, rapidly. Mobile traffic grew 80% in 2013.
Which I believe. Depending on the survey, as many as two-thirds of people in the US only have a mobile internet device, or prefer to use their mobile over a desktop or laptop to access the internet, even when one is available in front of them or in the next room. You won’t be surprised that the rates in places like Kenya, where connectivity is generally mobile, are over 90%.
Almost half of ALL the data transferred over the internet (in the US) this most recent Christmas Day came from mobile devices.
So, design for mobile, adaptively, as you design your solutions on every platform.
And that means most of the time we’re going to design for touch. Which should be a snap. I mean, touch is so natural. [CLICK] Anyone can design a touch-based system without risk of users hitting the wrong target or anything.
Oh, you have problems? Everyone does. Because touch is still fairly new. We are still developing patterns of interaction. And we don’t really, in general, understand how touchscreens even work.
More of these at DamnYouAutocorrect.com
What we used to “know” about touch was…
[CLICK] …what Apple told us, the 44 pixel target.
But that was based on some convenience of that platform’s design, and pixel sizes. It’s not based on the real world.
Because the OS makers also don’t really know. We’ve all stumbled into this, and so unless you work for Apple or Google, you need to work /around/ their concept of touch.
Now we are starting to know how to design for people. And for the many devices that people use, not just iPhones and iPads. We know how to design for hands, fingers and thumbs.
(Image is cover page from http://www.amazon.com/Fingers-Thumb-Bright-Early-Board/dp/0679890483)
We know this from
— 1,333 original observations on how people hold and touch their phones
— At least 19 serious, academic studies (by others) which I referenced and analyzed
— Including one with some 91,731 users and over 120 million touch events.
— 651 new observations done in coordination with the eLearning Guild, on how people also use phablets and tablets in offices, classrooms and the home
— And I am currently doing some additional research to get info on gesture and context, with analysis complete on 31 videos of people touching their phones and tablets. That data is in here, but look for a research report with that information in a month or two.
We know that this diagram is wrong (and you can tell anyone who repeats it).
-- We know touch accuracy has nothing to do with finger or thumb size.
-- We know it has no direct relationship to reach.
-- There are not “no go” areas in the corner of the screen to avoid or put dangerous controls, just areas of more and less accuracy, which we can easily account for in design.
-- No one, and no design solution, will yield pinpoint accuracy so you can use tiny targets.
-- And I don’t know what the next big device will be, but it won’t be whatever one thing you are designing for today.
So what do you need to know?
Well, there’s a lot of information. And I encourage you to read more about this if you design for touch interfaces all the time. You need to internalize this knowledge.
But I’ll try to make it easy on everyone.
Just understand these 10 user behaviors, and the accompanying guidelines, to make your designs work for touch and people in the real world.
It’s easy to make assumptions, and confuse empathy with your own point of view. Your users are not like you, or your friends.
And neither are you. We are bad at observing ourselves as well.
And there’s no one way per user, anyway. Because users change the way they work with their phones, regularly shifting their grip.
To reach other areas with another finger, to type with two thumbs…
To cradle the device for more reach…
(Video from Luke Wroblewski, who gathered it on a plane sometime in 2013.)
And the more I watch people, the more I am amazed at how variable their interactions are.
How they are comfortable changing their hand position, how they touch the screen in different ways to do different things with their devices, as they change tasks and context.
(Video from set of user interviews I did in spring 2014 on teen use of mobile devices. A teenager with her Galaxy Tab.)
Much of the data I have gathered allows us to chart these use patterns…
(See http://www.uxmatters.com/mt/archives/2013/02/how-do-users-really-hold-mobile-devices.php for more information on grasping methods.)
… and note for example that 75% of users only touch the screen with one thumb. But that can be misleading. [CLICK]
-- Because less than HALF hold the phone with one hand also, and that’s for phones. Much less for phablets and tablets of course. [CLICK]
-- 36% “cradle” the device, using a second hand for reach or stability. [CLICK]
-- And fully 10% hold in one hand, and tap with a finger, giving a totally different interaction.
Users are, in general, comfortable shifting their grip to get to whatever part of the screen you make them touch.
Don’t make assumptions about one type of user, or assume what is a popular device today will be important tomorrow. You will end up disregarding all others.
Design for every user, accept that users change their way of touching and holding, and plan for your design to work on any device.
Users, in general, for every portable touchscreen device, prefer to touch the center of the screen.
I recently confirmed this, and this is the actual data from a study I performed. These are the actual tap positions when users selected items from a full-screen scrolling list. They naturally moved the content to the position they could tap, then I recorded the position. This also reflects other data on tap accuracy and preference.
[CLICK] And when you account for content position and different devices, you find that most taps are in about the center half of the page.
If you wonder about tablets, they are surprisingly similar so that data is embedded into this data viz.
So, you might think that when you copy the UI for something like this, the key controls are the actions and input at the top and bottom of the viewport.
[CLICK] but in fact the primary content and interactive area is in the middle of the page. All these content-centric tools are already based around the user’s primary behavior of viewing and tapping the center of the viewport. The other functions are secondary options.
Even though it seems to be subconscious, or maybe learned, users prefer to touch the center of the page, and will do so when given a choice.
Place key actions in the middle half to 2/3rds of the screen, and place options, and secondary paths along the top and bottom of the screen.
Conveniently, this extends to viewing as well.
Follow the existing, reliable mobile pattern of list views, or grid views, and put your main content and interaction in the middle of the page.
Make sure menu bars, tabs, and status displays and action items on the top or bottom are secondary.
If you have content that scrolls, or takes up the whole page (and, of course, you all do), you need to make sure the bottom of your scrolling articles and forms are padded, so users can bring the last line of text, or that last field towards the middle of the page.
Otherwise, they will still try, waste time, and end up that little bit more dissatisfied. And avoid this [CLICK] trendy way of showing pages that fit to the viewport and snap to the page, so you can’t really scroll. People don’t read like that.
So even if you aren’t interacting with it, make sure key content is in the middle of the page.
Sometimes, this means providing extra room or other provisions to let users scroll longer content to the middle of the viewport.
I said earlier, briefly, finger size doesn’t matter. And it’s true, but only for touch target size, and touch accuracy.
But fingers are opaque. They get in the way.
This is anecdotal, but I have seen similar results on real projects.
When I updated to the new Twitter, I kept hitting [CLICK] the Add-person icon. Not just because I focused on the middle first, not just because that icon has got this very inviting “plus”…
But mostly because the compose area down here [CLICK] is obscured by my thumb as I naturally scroll through the content.
I simply missed it while glancing around at the actual tweets in the middle of the page.
So where ARE our fingers on the screen? Well, it depends on what we do and how we grasp. It is hard to say not to place items below, or to the side. You need to simply provide room.
Room to make sure that you can see the target, see the label, and see the clicked state when the target is selected.
[CLICK] When you can tap a whole row, especially one multiple lines long like this, that works great. [CLICK] Little icons like the retweet are too small, so the user can’t accurately target them, can’t confirm which one he selected, and can’t see that the selected state changed on tap, so cannot be sure he tapped it at all.
In some contexts, we know even better what users do. For example, this is where people scroll. Interesting… but why those three distinct areas?
Because they have to do with what content is being displayed. Here, the content is a list with very small amounts of information, so there are large blank areas in the middle of the screen. The users prefer to touch… the center. Get used to that coming up over and over. All other things being equal, people want to touch and look at the center of the screen.
Yes, there are outliers and I included all data, but most users are gesturing in the middle.
And here, where there were fairly long pieces of content occupying much of the screen width, users did most of their scrolling to the far right. Even left handed users were more inclined to avoid touching the content so reached across the screen.
Users are not always confident scrolling in areas where there are items, or content they want to see or worry they will interact with. When the page is simply full of content and there’s no room, they will choose to scroll to the right side or near the bottom.
Yes, this also varies a bit based on device size. On tablets, your content might be shorter so there’s more room. (I normalized the data to the handset so you could compare it more directly here).
You might think that users would stick to the edges on tablets, because they are bigger and surely people can’t reach the center. But that’s wrong. They are always inclined to tap the center of the screen, so when room is available to confidently scroll without covering content, they will move their finger, thumb or stylus over there, even if it’s a reach or requires repositioning their finger or their device.
Make sure people can see content around their fingers and thumbs.
Make sure selectable items are large enough to clearly indicate when a tap is successful. Try to place functions and the content that changes so users can see the results, or to invite them to perform actions you think are important.
And think about what users are clicking or scrolling on. White space may be really important to give them confidence to gesture.
Stop saying “fragmentation” as though it is bad. Respect user choice. These devices are different, because people’s needs are different, and this is reflected in the way the devices are used.
We all think of our typical smartphone being held about here, around 12 inches or 30 cm from the eyes, and when walking around.
But phablets (and a third of smartphones sold now have screens over 5”) are used a little more when sitting down…
And tablets are used almost 2/3rd of the time in a stand [CLICK] or set down on tables.
Large tablets, like the 10” iPads, are used about ¼ of the time with physical keyboards, with the screen at almost arm’s reach away, [CLICK] And almost 10% [CLICK] with pen styluses.
Yeah, that’s a pen hiding under the case.
In general, as devices get larger, they are used less and less held in the hand.
Distance from the eye can be surmised by device class.
And the smaller the device is, the more it is used on the move.
On the move doesn’t mean in busses or on trains, but can just mean when you walk around the house or office. Instead of finding time to stop and use that tablet on the table, or sit and type on a computer at your desk.
This is critical partly because we don’t perceive anything based on its absolute size, but on the resolution it presents at our eyeballs. The relationship between size and distance is called angular resolution, which we can calculate…
(For more on this, and the math from the next slide, start with http://4ourth.com/wiki/Human%20Factors%20%26%20Physiology)
This is actually the simple version of this formula. To get the 3438 number requires knowing the size of the sensors in your eyeball, and so forth.
Don’t take a picture of this formula. I’ve done the math for you.
Visual Angle (minutes of arc) = (3438) * (length of the object perpendicular to the line of sight) / (distance from the front of the eye to the object)
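If you prefer to sanity-check it in code, the formula drops straight in. This is a sketch: the function names are mine, and it assumes the object size and viewing distance are in the same units (inches, mm, whatever).

```javascript
// Visual angle in minutes of arc, per the formula above:
// angle = 3438 * size / distance (size and distance in the same units).
function visualAngleMinutes(size, distance) {
  return (3438 * size) / distance;
}

// Inverse: how large must an object be to subtend a given angle
// at a given viewing distance?
function sizeForAngle(minutes, distance) {
  return (minutes * distance) / 3438;
}

// Example: a 10 pt character is roughly 10/72 of an inch tall.
// At a 30 inch "tablet in a stand" distance it subtends about
// 16 minutes of arc; at a 12 inch handset distance, about 40.
console.log(visualAngleMinutes(10 / 72, 30).toFixed(1)); // "15.9"
console.log(visualAngleMinutes(10 / 72, 12).toFixed(1)); // "39.8"
```

The same character is two and a half times "smaller" at the eye when the device moves to stand distance, which is exactly why the minimum sizes in the table scale up by device class.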
The larger devices get, the further away from the eye they are used. [CLICK] Small handsets are held very close to the eye, larger ones and phablets further away, and tablets at approximately desktop distance since so many are in stands, with keyboards.
[CLICK] Minimum text sizes vary from 4 point for small handsets, to 10 points for devices set on tables or in stands. Yes, this really depends on the actual context, but we can make very good guesses based on device class.
These are MINIMUMS. At least 30% larger for almost all actual uses like body copy. Even larger for more readability, for active environments, and for older populations. The smallest sizes are okay for things like labels under icons, though.
Icons and other elements follow these same scale rules, and can roughly follow about these actual sizes. They have the same concerns of readability as text.
Support all input types, especially if you are building responsive websites, or expect to make an app for tablet and handset.
If you can, get data on how your users work in their actual environment. But for most users, the patterns I outlined are pretty safe, and you can predict size and use by device class.
Account for distance by adjusting the sizes of readable items like type, icons, text fields, checkboxes and buttons.
People are never going to be able to precisely click your target. There’s always inaccuracy. But you can account for it in design.
We’re talking here entirely about capacitive touchscreens. There are others, but we don’t care about them today. Ask me if you design for resistive touch.
Capacitive touch uses the electrical conductivity of your finger to work. In part, this means, that what is always sensed is the centroid (or geometric center) of the contact patch…
… or the part of your finger that gets flattened against the screen.
And nothing else. The phone can’t (generally) sense how big your contact patch is, so can’t tell how hard you pressed, or anything else. All it gets is a point that it assigns to be the touched coordinates.
But that point is never, ever perfectly aligned. [CLICK] There is no such thing as perfect accuracy, so the user misses. Accuracy is relative, and we define it with the circular error probable, or CEP, which is just a mathematical representation of how much you miss a target by.
Here, I use the R95 measure [CLICK] or the Radius containing 95% of taps. When everything is imprecise, we stop calling these errors, and refer to tolerances instead. We need to plan for imprecision and problems as part of the process.
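If you have raw tap data of your own, R95 is easy to estimate. A minimal sketch, with one caveat: the function name is mine, and taking the 95th-percentile miss radius is just one simple estimator, not the only statistical definition of CEP.

```javascript
// R95: the radius of the circle, centered on the target, containing
// 95% of observed taps. Each offset is a [dx, dy] miss from the
// target center, in any consistent unit (e.g. mm or px).
function r95(offsets) {
  const radii = offsets
    .map(([dx, dy]) => Math.hypot(dx, dy)) // miss distance per tap
    .sort((a, b) => a - b);                // ascending radii
  const idx = Math.ceil(0.95 * radii.length) - 1;
  return radii[idx];
}
```

Run it separately per screen zone and per device class, and the accuracy differences behind guideline 7 fall out of your own data.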
Be sure to provide the largest practical touch targets.
[CLICK] Don’t just code the word or icon as a link…
[CLICK] But like these guys do, make the natural boundaries in your design (boxes, buttons, whole rows) the selectable or linked area as well. Tap anywhere nearby and you hit the target.
Look around and you’ll see this is a known best practice [CLICK]. The Google drawer menu isn’t as small as it appears, with just the little arrow or menu icon [CLICK]. A default implementation also opens it on selecting the branding, so is much easier to tap than it appears.
Lots of hybrid apps don’t notice this, and code it wrong.
Remember that real users work with your touch interface in the real world.
Make touch targets as large as possible, using entire containers such as entire rows, boxes and buttons, not just the icon or word.
Don’t design in the little details, or retrofit touch design. Make your design touch centric at the grid and template level to provide enough room, and the right kind of interactivity.
Touch isn’t just inaccurate, but it’s inconsistently inaccurate.
And what is most interesting is that the largest variable is not environmental conditions, familiarity with touchscreens or anything. It’s the position on the screen they are trying to tap.
This is a chart of the accuracy of touch on various devices, aggregated over very large numbers of individuals. Black is more accurate. So now, we know HOW accurate people are, and how it varies by section of the screen.
We know that people are more accurate at the middle of the screen. And we mean pretty much any screen, any way they hold their phone or tablet. They also subconsciously know this — or it may be tied to their preference for reading in the middle — so are more confident at the center, but will slow down to tap corner or edge targets.
(From Henze, Niels, Enrico Rukzio, and Susanne Boll. “100,000,000 Taps: Analysis and Improvement of Touch Performance in the Large.” Proceedings of the 13th International Conference on Human Computer Interaction with Mobile Devices and Services. New York: ACM, 2011)
If we map that research differently, we can see how much more room is needed between items in various parts of the screen. [CLICK] The sides are a little worse than the center, [CLICK] but the top and bottom require much more room, [CLICK] and corners are the worst.
I think these actually neatly correspond to sort of structural zones [CLICK] that already exist in much of our design. Think of the rows you already design to with mastheads, tabs, the big content area in the middle, and the chyron at the bottom.
(This whole principle detailed, with many references, in http://www.uxmatters.com/mt/archives/2013/11/design-for-fingers-and-thumbs-instead-of-touch.php)
Design by zones, spacing selectable items to prevent interference based on how well people touch parts of the screen. [CLICK]
You can almost get away with calling this tip “Avoid the corners” instead. Edges and corners have less accuracy. When putting items here, space them further apart, and use fewer tabs or menu bar items. The sides are also a bit worse, so avoid actions that take place only at the left ends of a list; take advantage of the natural middle-selection preference and improved accuracy.
Whether you check digitally or, as I’ll show later, with real-world tools, you measure space between centers. Center the sizing circle on the tappable target [CLICK], and if anything else falls inside the circle, it has a chance of being tapped by accident.
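Checked digitally, that center-to-center test is one comparison. A hypothetical helper (the names are mine; the zone radius is whatever your accuracy data, such as the R95 values, says for that part of the screen):

```javascript
// Two targets risk interference when the distance between their
// centers is less than the accuracy radius for the screen zone they
// occupy. Coordinates and radius must share a unit (px, mm, etc.).
function targetsInterfere(a, b, zoneRadius) {
  const centerDistance = Math.hypot(a.cx - b.cx, a.cy - b.cy);
  return centerDistance < zoneRadius;
}

// Example: centers 5 mm apart, in a zone that needs 6 mm separation.
console.log(targetsInterfere({ cx: 0, cy: 0 }, { cx: 3, cy: 4 }, 6)); // true
```

Because the required radius grows toward the edges and corners, the same two targets can be fine mid-screen and interfere badly in a tab bar.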
These top icons are a bit too close together, [CLICK] and the tabs are far too short, so you can tap the action buttons or a tweet by accident.
[CLICK] the icons in the middle of the page are small, but most important is that missing them selects the tweet for viewing. This is a tactic; if interference is likely, design for resilience, so the user can make do, but certainly never so there’s an unrecoverable condition.
Email format controls, for example, should never be right next to the Send button. Send is unrecoverable.
Once you’ve accounted for interference, [CLICK] you want to design to not annoy. Things like this, where you try to click the link and instead open the reply dialog are not a catastrophe, but could be better.
There isn’t much data on how we use tablets. Till now. I went and got my own as part of this latest round of research, and was able to confirm that these same pointing accuracy levels apply to 7, 8 and 10 inch tablets as well.
People click most accurately in the center…
a little less well along the sides…
And notably less well along the top and bottom of the screen, and especially in the corners.
This test app is hybrid, with their default target sizes and many users couldn’t select the menus — on handsets or tablets — as a result.
Design by zones, spacing selectable items to prevent interference based on how well people touch parts of the screen.
The sides are less accurate than the centers, so if you use lists, avoid things like delete or select being only along the left or right side. If you have to do this, then pick your vertical list spacing from the side accuracy.
When you place controls along the top or bottom use as few items as possible and space them out. The Android Action Button spacing is too tight, so loosen it up. More than 4 items on an iOS Menu bar is just asking for trouble.
And remember to plan for interference, and space unrecoverable or annoying-to-exit items far from others, provide undo features, etc.
I still get clients asking for easter-egg level hidden gestures, with the theory being that users have to explore your app to find the neat features.
Sure. If you want. But start with what works. Simple controls, that work in expected ways. And the most expected controls are those that are visible, and communicate what they will do.
Make sure selectable items are clearly selectable. I haven’t done a study specifically about this, but I am seeing enough in other research, and in usability studies for clients, that I am comfortable saying what seems obvious:
If it doesn’t look clickable, people don’t know it is. Underlines aren’t bad for inline text, but especially in apps you mostly need to bound items. That doesn’t mean everything has to be a bold box or default button. Here, translucent backgrounds on the menu and controls [CLICK], and a sort of circular tab strip, suffice to define them as functions.
I am starting to see that any bound item is considered selectable. If a visual designer had boxed the title element [CLICK] for visual “consistency,” people would assume they could tap it to get more details. This is better; it’s differentiated by a distinct style, and combined with a few typical icons like play, it clearly says “these are clickable.”
Clickable items need to not just afford their action (making it clear what it does) but do so consistently.
Someone tell me why my calendar name [CLICK], attendance and the participants are selectable rows, but the location [CLICK] is a link and I have to click exactly where the link is. Be consistent, and make whole contained areas (rows, boxes) selectable as that is what is expected.
But devices have huge numbers of sensors that make them aware of the user, and the world they live in.
Design by zones examples.
Design by zones examples. Bad ones here. More than 4 actions on the menu bar is too many. And all tabs are too short, or too near other elements.
Understanding when to use mousedown vs. mouseup (even just to show the visual click on mousedown, but not activate the action until mouseup) can be a really good way to improve the overall experience of the interaction. But not all touch behaviors are equally supported. Check compatibility before you implement, and make sure the platforms you need to support will work. For the web, as shown here, make sure there’s a useful fallback, so the design works no matter what, even if some platforms have better features.
http://www.quirksmode.org/dom/events/index_mobile.html
http://www.quirksmode.org/blog/archives/2014/01/touch_action_te.html
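One way to structure that down/up split so the fallback stays easy is to keep the press logic pure and wire events around it. This is a sketch, not any platform’s API; makePressable and the wiring names are illustrative.

```javascript
// Show the pressed state immediately on pointer-down, but only fire
// the action if the pointer comes up while the press is still live.
function makePressable(onPress) {
  let pressed = false;
  return {
    down() { pressed = true; },     // e.g. add a "pressed" CSS class here
    cancel() { pressed = false; },  // pointer left the target, or a scroll began
    up() {                          // fires the action only for a live press
      if (pressed) { pressed = false; onPress(); return true; }
      return false;
    },
    isPressed: () => pressed,
  };
}

// Wiring it to an element, with plain `click` as the fallback for
// browsers without Pointer Events (feature-detect first):
// const p = makePressable(onPress);
// if (window.PointerEvent) {
//   el.addEventListener('pointerdown', () => p.down());
//   el.addEventListener('pointerup', () => p.up());
//   el.addEventListener('pointercancel', () => p.cancel());
// } else {
//   el.addEventListener('click', onPress);
// }
```

The payoff is that a scroll that starts on a button cancels the press instead of triggering it, which is exactly the resilience the zones discussion asks for.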
Another example of how people hold devices in many ways. They solve problems. This is from Kelly Goto. While traveling, one of her researchers got tired of holding her phone, so built a phone holder from the barf bag.
If you miss these addresses, just Google my name and you’ll find me.