
The Mobile Revolution: Reconstructing Familiar Interactions

by Mary Brodie

On a recent trip to Boston, I wanted to check my phone to learn more about the tornadoes that were happening in Dallas (I figured I could do this for a few minutes during dinner). I decided to use both the iOS weather app and the Weather Channel app. The iOS weather app defaulted to Boston weather, so I swiped to view weather in another city, but ended up getting an hour-by-hour weather prediction. I just couldn’t access my Dallas weather screen with swipes for some reason. Frustrated, I gave up and switched apps.

The Weather Channel app automatically localized the weather display for Boston, so I had to tap around to find the weather for Dallas. It was easier to use, but I had to sift through a lot of content before I found the radar map that showed a time-lapse view of the weather and what was happening with the tornadoes.

What I had anticipated to be a minute-long experience turned into a semi-frustrating 30-minute dinner distraction.

This experience demonstrated to me how mobile needs to change:

  • User experience: Gestures may have elements of surprise and delight, but sometimes you don’t want to discover surprises – you want to get information and do something else, like enjoy dinner.
  • Physical device: I couldn’t get my fingers to operate the device properly. There were too many options too close together, almost running into each other. I felt like a failure using the apps. I have a mid-sized phone and wondered if a larger phone would solve the problem. But the larger phones seem too much like tablets.
  • Content: Both apps presented me with everything I could possibly know about the weather rather than the content and functionality I regularly access. I wanted the apps to present my favorite tools first and almost anticipate what I needed to know at that moment to answer my questions. I felt like I had to work to find what I wanted.

Mobile device hardware has evolved so that you can now carry a device with the power of a mini-PC everywhere you go. With it, you can access content and tools on demand. The smaller size of these devices should demand a very different interaction model and design patterns than the tapping, keyboard typing, and swiping we know from the PC/desktop. In some ways, I wonder if we have replicated the PC/desktop experience on these smaller devices out of familiarity.

New technologies are never familiar – especially not mobile

Mobile is a fairly new technology – just a little over 15 years old. But according to Jef Raskin, we tend to relate to technology using previous experiences and have a hard time accepting something new and different.

The present rating systems of the magazines and the similar thinking of many users, managers, and marketers about products with significant human interface components serves to preserve the status quo, even when it can be shown that a feature that is completely familiar (intuitive) is deficient. This tendency makes it more difficult for major advances in human interfaces to achieve commercial realization. When I am able to present the argument given here that intuitive = familiar, I find that decision-makers are often more open to new interface ideas. –Jef Raskin, Intuitive Equals Familiar

At first glance, there is very little that is familiar about leapfrog or disruptive innovations, from how to use them to how we integrate them into our lives. Such products often leverage existing technology in a different way. Netflix allowing customers to rent movies through the mail and later online was disruptive. In some cases, when customers rented movies online they didn’t always have the right technology available to watch them; some were still limited to DVDs. It didn’t immediately fit into people’s lives, but people adopted it. 3D printing is disruptive. Initially, it “printed” knick-knacks; then people used it to create food and clothing, and in the future, perhaps chips and computers. Many of us now have a vision of the possibilities that 3D printing offers.

Smartphones and mobile technologies have been disruptive. They combine a number of diverse tools into a single device using software.

  • Communications – a phone, email system, text messaging, video phone/chat, social media, Web browser
  • Productivity – a calculator, Microsoft-like tools, file sharing, compass
  • Entertainment/Education – video, TV shows, movies, books, music, games
  • Commerce – shopping, banking, renting, trading

We know how to use each application as its own device or software package, but it is an innovation to translate device functionality, such as a calculator, into software and then bundle a number of these “devices” into a single device. Each app has its own requirements for a slightly different interaction, often based on what the user expects to experience today. Designers of such apps need to leverage that familiarity and usability and incorporate a pattern library consistent with the device operating system.

With the rise of consolidated, mobile devices, we are moving away from creating and using purpose-built devices and in-person interactions to a world where software mirrors reality, content has become entertainment, and everyone is literally a touch away.

In our desire for familiarity between mobile and PCs, we have associated the mouse with fingers and added an onscreen keyboard. We are still using traditional interaction methods with these devices because we haven’t yet figured out a different interaction approach – or how to really integrate mobile devices into our lives.

UX: Moving beyond clicks and keyboards

An adult finger needs a button to be at least 57 pixels wide to tap comfortably and accurately; an adult thumb requires 72 pixels. Android’s developer manual offers advice for adding spacing and encourages designers to make buttons at least 7mm wide, or 48 CSS pixels. Looking at some apps and sites across varying device sizes, it seems that we have forgotten this.

For example, iTunes has a great feature that lets you access music you recently purchased via a link above a ribbon of previous purchases. On an iPhone 5, the link is 38 pixels high; the complete area including margins is 79 pixels. To successfully access the list, you need to tap precisely on the link – that 38-pixel area. If you use your thumb, which requires a minimum of 72 pixels, it is easy to miss the exact link.

I’m sure this is larger on a larger phone, but we should consider scaling buttons, and especially links, for human-sized fingers on all device sizes.

Generally on devices, we use rectangular buttons and links, which work better for mice and pointers – which can slide slightly across a screen – than for fingers, which press in oval and circular areas. We need to consider the ergonomics of the situation: we can’t go below 57 pixels, with some padding around each target. We need more organic interfaces that are also more ergonomic for us.
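To make these numbers concrete, here is a minimal sketch (in TypeScript, for a web view) that audits a page for tap targets smaller than the finger and thumb sizes cited above. The selector and thresholds are illustrative assumptions, not values from any platform guideline.

```typescript
// A minimal sketch: flag tap targets smaller than the finger/thumb
// sizes discussed above. Thresholds and selector are illustrative.
const MIN_FINGER_PX = 57; // comfortable for an adult fingertip
const MIN_THUMB_PX = 72;  // comfortable for an adult thumb

function findUndersizedTargets(
  selector = "a, button, [role='button']",
  minSize = MIN_FINGER_PX
): Element[] {
  return Array.from(document.querySelectorAll(selector)).filter((el) => {
    const { width, height } = el.getBoundingClientRect();
    return width < minSize || height < minSize;
  });
}

// Example: list the links and buttons a thumb would likely miss.
console.log(findUndersizedTargets(undefined, MIN_THUMB_PX));
```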

But is optimizing for taps the only way for us to design better mobile interactions?

Speak more, tap less

Voice-to-text functionality is fast emerging as the next trend for mobile interactions in China and Argentina. Some use Siri to find places and get directions, although her advice can be irregular based on her programmed interaction style. I regularly have Google read me directions while I drive so I don’t have to juggle watching the road and reading a map or a list. There are a number of times when it’s more convenient to speak information and choices than to type, read, or select items onscreen.

There is a more logistical, ergonomic reason for perfecting voice-to-text technology. Thumbs have emerged as the darling digit of the mobile device, yet mobile keyboards are not designed for thumbs. The keys are 52-pixel-wide buttons, allowing a 72-pixel-wide thumb to easily hit two keys at once. Most people hold a phone with one or two hands, and the thumb is the only finger available to type. This invites mistakes – and invites the need for speaking over typing.

Mobile devices simply don’t have the physical space to accommodate a keyboard. Autocorrect and type-ahead features improve this flawed typing approach. On Android devices, the keyboard responds with a vibration on each keypress, maintaining awareness of what you are typing. These features improve the experience, but they don’t solve the underlying problem: the keyboard that worked in the PC/typewriter model doesn’t scale down to a mobile smartphone.
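Speaking instead of typing can already be prototyped on the web with the browser’s Web Speech API (support varies; Chrome exposes it under a webkit prefix). A minimal sketch, assuming a browser that implements it:

```typescript
// A minimal "speak instead of type" sketch using the Web Speech API.
// Browser support varies; this is illustrative, not production-ready.
const SpeechRecognitionImpl =
  (window as any).SpeechRecognition ?? (window as any).webkitSpeechRecognition;

function dictateInto(field: HTMLInputElement): void {
  const recognition = new SpeechRecognitionImpl();
  recognition.lang = "en-US";
  recognition.interimResults = false;

  recognition.onresult = (event: any) => {
    // Fill the field with the top transcript of the first result.
    field.value = event.results[0][0].transcript;
  };
  recognition.start();
}
```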

There is a drawback to voice-to-text: security. Using voice to enter data isn’t secure at all. There are no privacy protections if you speak your address or social security number in public – anyone can hear it. We have this challenge today with the phone. Typing data provides a level of privacy on the device (transmission is a different matter). However, there is a workaround.

Imagine if we were able to leverage the autofill values from our desktop across our mobile devices, creating a networked device system of available data, almost like an identity wallet. It’s not a new concept. But this approach would remove our need to speak personal identifying information or type it.
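A hypothetical sketch of that idea: personal values stored once in a synced “wallet” are filled into form fields by their autocomplete names, so nothing sensitive has to be spoken or typed in public. The names and data shapes here are illustrative, not an existing API.

```typescript
// A hypothetical "identity wallet": values keyed by standard
// autocomplete names, synced across devices, filled in locally.
interface IdentityWallet {
  [autocompleteName: string]: string; // e.g. "street-address", "tel"
}

function autofillFrom(wallet: IdentityWallet, form: HTMLFormElement): void {
  form
    .querySelectorAll<HTMLInputElement>("input[autocomplete]")
    .forEach((input) => {
      const key = input.getAttribute("autocomplete") ?? "";
      if (wallet[key]) input.value = wallet[key];
    });
}
```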

Fewer decisions per screen

Mobile experiences were always intended to be simple and quick. Tap. Tap. Done! Displaying more than two or three options per screen invites too many decisions on a smaller screen, and fewer decisions and actions better accommodate finger gestures. Transactional apps have perfected this; it is harder for content-driven experiences.

Nielsen Norman Group wrote a piece about prioritizing content and functionality display on a mobile site. The key takeaway of the article was to display what is most important. However, if you can’t make that decision, rather than prioritize, maybe it’s time to reconsider content goals – what you are displaying on the screen and why. We revert to scrolling because it’s an easy solution for those difficult debates. We really should be examining content and determining if it contains the right data for someone to make a next-step decision.

Some thoughts on how to do this:

  • Determine what your audience really needs to read to make the next decision. We often include content on a mobile screen that doesn’t drive a user to an immediate decision. Sometimes we don’t think about how to lead a user to take the next step; we think about what’s required for the ultimate, final step. When designing content for a page, consider what decision you want someone to make on that page. Do you want them to call? Fill out a form? What’s the minimum information necessary for that person to take the desired action?
  • Include complex information in video, or provide highlights and a way to view the full content later. It’s easier to watch and listen than to read on mobile devices. If you need to express a complicated idea, like a demo or instructions, consider presenting it in a 3-5 minute video. It may be easier for a mobile reader to consume.
  • Encourage tap/click to call. There are times when someone requires additional content and context before making a decision, and a phone call may be an easier way to receive complex information. Although people are drifting away from phone conversations and prefer texting, texting is difficult to use for a complex dialog. If you have questions and want quick answers, calling may be best.
  • Content can be transactional as well as entertaining. Infinite scroll is ideal for entertainment-oriented content – it’s easier to scroll than tap on a mobile device. For shopping or browsing environments, consider presenting pieces one by one in a scrolling view. Balance scrolling content with what’s needed to move someone along a process or make a decision.

Gestures and surprise and delight

Have you ever swiped on your screen and something happened that you didn’t expect? Like you thought you swiped left to right, but the screen scrolled instead, or vice versa?

Gestures are fantastic shortcuts to replace taps and complete complex onscreen actions like accessing an app or entering a security code. However, we usually don’t talk much about what happens when gestures don’t work.

Discovery

Gestures provide the surprise and delight experience that users and designers crave. However, how do users know what different gestures mean? We like to think of gestures as being intuitive, but given Jef Raskin’s definition of familiar, is a gesture on a new device truly intuitive? If you need a manual or video to tell you how to use it, it’s not intuitive, never mind familiar.

For example, when NN Group first tested the iPad, many people didn’t really understand how to use it.

The iPad etched-screen aesthetic does look good. No visual distractions or nerdy buttons. The penalty for this beauty is the re-emergence of a usability problem we haven’t seen since the mid-1990s: Users don’t know where they can click. — Raluca Budiu and Jakob Nielsen, Usability of iPad Apps and Websites, First Research Findings, p. 6

This perception changed after people learned how to use the iPad and understood its app patterns, and after designers learned better ways to engage users. However, that doesn’t mean operating through gestures alone is the best solution.

Not always repeatable

Even though humans have muscle memory, it takes a lot of training to hit the exact right spot time and time again – more than the 57 pixels for a finger or 72 pixels for a thumb. Gestures can be hard for people to land in the exact spot 100% of the time. Furthermore, there are no onscreen guidelines to show the ideal location for these gestures. There can be hints, but the target is not clearly defined.

This is why you don’t want a lot of options per screen. There needs to be room for the finger to make the action and padding space to accommodate inaccuracies, or else you get mixed action responses. If you need to accommodate left and right swipes for page changes, a tap to select an item, and pulldown drawers from the top and bottom, that doesn’t leave a lot of action space on the device screen. It is difficult to swipe a button next to a pulldown drawer at the bottom of a screen; if the swipe isn’t perfect, the drawer opens. It may be user error, but these actions are frustrating because the result isn’t what the user expected. Imagine the user’s experience if this happens too often.
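One common mitigation is to reserve a “dead zone” along the drawer’s edge so an imperfect page swipe can’t trigger it. A minimal sketch; the thresholds are illustrative assumptions, not platform values:

```typescript
// A minimal sketch: ignore horizontal page swipes that start inside
// a strip reserved for a bottom pull-up drawer. Values are illustrative.
const DRAWER_DEAD_ZONE_PX = 72;   // roughly a thumb's width
const SWIPE_MIN_DISTANCE_PX = 40; // minimum travel to count as a swipe

let startX = 0;
let startY = 0;

window.addEventListener("touchstart", (e) => {
  startX = e.touches[0].clientX;
  startY = e.touches[0].clientY;
});

window.addEventListener("touchend", (e) => {
  const dx = e.changedTouches[0].clientX - startX;
  const dy = e.changedTouches[0].clientY - startY;
  const nearDrawer = startY > window.innerHeight - DRAWER_DEAD_ZONE_PX;

  // Treat as a page swipe only if it is mostly horizontal, long enough,
  // and did not begin in the drawer's reserved strip.
  if (!nearDrawer && Math.abs(dx) > SWIPE_MIN_DISTANCE_PX && Math.abs(dx) > Math.abs(dy)) {
    console.log(dx > 0 ? "swipe right: previous page" : "swipe left: next page");
  }
});
```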

The physical side of mobile devices: is bigger better, or just familiar?

Why are people buying larger smartphones?

Jana also polled respondents on whether they use their phone for watching video content. The overwhelming majority said yes, and that reinforces the focus on entertainment.

–Jon Russell, Report: People in emerging markets prefer big smartphones. Here’s why., TNW News

73% of smartphone owners said they use their phones for mobile web browsing. 36% read books or articles on their smartphones. 31% use their phones to watch TV or movies. 26% create documents or take notes on their phones. And just 16% don’t do any of those activities on their phones.

–Anne Pilon, Phablets Survey: More Likely to Buy Large Phones in the Future

Much of what we do on mobile devices – commerce, entertainment/education, productivity, and communication – mirrors what we do on PCs and televisions. When it comes to media consumption, TVs are a distant cousin of mobile, and we are quite familiar with that experience. It logically follows that phones will get bigger: a user wants to watch great effects on a bigger screen and have room to type and interact, much like the desktop/PC or television he is used to.

Like anything, there are benefits and drawbacks to the various device sizes:

Phablet (4.7”–6.5” display)
  • Benefits: better for typing – more room for keyboards; better for watching video or pictures (like a little TV); great for reading
  • Drawbacks: difficult to hold and interact with using one hand

Mid-small phone (3.5”–4.6” display)
  • Benefits: easier to carry in a pocket; can put to your face to talk (harder with a larger device); easier to use single-handedly
  • Drawbacks: challenging to browse and binge-watch anything

Watch/Google Glass (<3.5” display)
  • Benefits: great for information as you need it – literally at your fingertips or before your eyes
  • Drawbacks: not really meant for browsing

Tablet (6.5”+ display)
  • Benefits: quasi-laptop; great for book reading and movie viewing
  • Drawbacks: hard to put in a pocket and carry around

Keep in mind, not everyone wants larger phones. There are size preferences emerging, mainly to accommodate those who want more of a phone and less of a tablet. And there are those with simpler needs – like having the ability to slide a phone into the pocket of slim cut jeans.

And there are those who find the grip required to use a larger phone downright uncomfortable.

To use a cell phone, we use one of three grip styles: the one-handed grip was most popular at 49%; 36% cradled the phone in one hand and jabbed with the finger or thumb of the other; and the remaining 15% adopted the two-handed BlackBerry-prayer posture, tapping away with both thumbs. –Josh Clark, Designing for Touch, How we hold gadgets

In most cases, thumbs are the darling digit, and they require more real estate to select anything (72 pixels). And there is added complexity to using a larger phone.

…More screen means more ways to hold, making things unpredictable. The rule of thumb still applies, but with a special headache: the thumb zone isn’t consistent even for individual devices; it varies depending on stance and posture. –Josh Clark, Designing for Touch, How we hold gadgets

In Clark’s thumb-zone diagram, green represents natural reach; red is out of reach when holding and manipulating the phone with a single hand. We need to consider where we place access to core functionality when we design a product. (This doesn’t account for left- versus right-hand usage, another factor.)

If we view an iPhone 6 Plus as being more of a tablet device, then one could say we have reverted to designing and using the familiar – tablets have more in common with a PC than a phone. And there are many out there who don’t want to spend the extra money for a tablet, so they buy a 2-in-1 device that’s tablet-like. But is this moving towards using a mobile device, or having a mobile PC?

Going small is definitely not familiar

The Apple Watch, Google Glass, and other wearables are intriguing because of their size, but we are challenged as to what to do with them – as demonstrated by sales (demand for the Apple Watch and the removal of Google Glass from the mainstream market to the enterprise) – and by how to design for them or integrate them into our daily lives. Google Glass in particular struggled in this area.

If we continue to use devices mainly for entertainment, we are indirectly treating content as sticky, encouraging users to browse and transforming content into knowledge entertainment. We scroll. We explore. We ramble to find what we want, if we find it at all. This experience mirrors television – keeping you watching for hours, clicking the remote to find something “good.” And this experience succeeds on larger screens. But does it work for smaller devices? In many ways, we are creating online information orgies when mobile device sizes dictate that we should be having an information snack. Or, as Google puts it, micro-moments.

For small mobile devices and wearables to succeed, people need to understand how they fit into their lives. This requires us to shift our content focus from getting lost in large, rambling experiences to succinct, direct experiences that allow you to access the right content at the right time.

But are we there yet?

The decline of sticky and the emergence of the micro-moment

Micro-moments are a better complement for our mobile, digital age.

Micro-moments occur when people reflexively turn to a device—increasingly a smartphone—to act on a need to learn something, do something, discover something, watch something, or buy something.

–Google, How Micro-moments are changing the rules


I described a micro-moment at the beginning of this piece: I wanted to get high-level weather information about the tornadoes in Dallas and continue with my dinner. Instead, I had to swipe and navigate the phone to find the information and spend more time interacting with the device to get to the data. I had to participate in sticky thinking when my brain wanted to fulfill a micro-moment diversion.

Maybe this is the root of our mobile/PC familiarity problem – until we are able to present relevant content that’s important to the user, it will be difficult to shift how we use mobile devices or envision a world where device use and real-life are seamless and truly integrated.

There are a number of metrics that could be leveraged to immediately change and personalize the content displayed to a user. Based on what users tap or click, we can easily learn the types of content they find interesting or need. With the weather apps, for example, I frequently select the same 2-3 charts to view in detail. Why not display those commonly referenced charts first in the app?
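A minimal sketch of that idea: count how often each chart is opened and surface the most-used ones first. The chart ids and in-memory storage are hypothetical; a real app would persist counts per user.

```typescript
// A minimal sketch: rank charts by how often the user opens them.
// Ids are hypothetical; counts would be persisted per user in practice.
const tapCounts = new Map<string, number>();

function recordChartTap(chartId: string): void {
  tapCounts.set(chartId, (tapCounts.get(chartId) ?? 0) + 1);
}

function chartsByUsage(allCharts: string[]): string[] {
  // Most frequently tapped first; untapped charts keep their order.
  return [...allCharts].sort(
    (a, b) => (tapCounts.get(b) ?? 0) - (tapCounts.get(a) ?? 0)
  );
}

// e.g. after a few recordChartTap("radar") calls,
// chartsByUsage(["hourly", "radar", "ten-day"]) puts "radar" first.
```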

We could also translate personalized experiences from Web sites to apps for consistent, seamless experiences. This would need testing to validate, but there is a high likelihood that if someone finds something interesting in the desktop/PC experience, he will want something similar available on the mobile device.

We may also want to examine metrics for gesture usage. Have we increased the complexity of an app by adding gestures? If a user isn’t adopting a gesture, it obviously isn’t useful for him. Consider a way to remove it from the experience and let him add it back later if he finds it useful. Why couldn’t a user create his own experience based on his preferences?

We could also consider moving away from larger content compilations – books, articles, pages – to smaller chunks of content at the paragraph or sentence level. We would need to take context into consideration; some sentences don’t make sense alone and would need tagging to be linked to other thoughts. At that smaller level, we could reasonably create custom pages, almost as we do with personalized shopping experiences. But again, we can’t do this based on business needs – it has to be based on user preferences, history, and usage. We should show users what they want to read and what matters to them.

For us to truly embrace mobile and wearables, we need to move away from the familiar Desktop/PC experience and the infinite scrolling ideal for entertainment, and incorporate voice commands, voice-to-text and micro-moments in mobile experiences. We need to be more selective about content and leverage metrics more frequently to inform our displays in real time. We need to provide users with what they need for functionality and content – not what we in a business believe they should have. Maybe we need to take personalization to the next level – personalize the complete experience of the app.

The size of devices will probably shrink again once we have the data structures in place to support presenting the right content at the right time, as well as interaction technologies like voice.

We don’t have the infrastructure in place right now to support a true mobile world. But we could. We need to move the definition of mobility away from the PC and set new standards for integrating device use into our lives. We are doing it already; we only need to take it a step further.

I look forward to going to dinner and being able to quickly access the weather charts I like to use, with the app prioritizing the content on my display based on historic activity and preferences I have indicated. An app that simplified and tailored an experience for me would be gold. Using such an app during dinner would make the experience more enjoyable.


About the author

Mary Brodie has been designing and producing user experiences for over 15 years in the Web and mobile spaces. In her past 7 years of freelancing (her company is Gearmark), she has become highly familiar with Agile methodologies and how UX fits into the process. Her projects range from creating visions to collaborating with clients to make small, yet valuable, incremental changes that meet business or technology goals.
