We live in a touchy-feely, device-driven world.
Your phone, tablet and likely your laptop have some ability to interpret multitouch input and use it to assist you in a wide variety of ways. These inputs help us interact with on-screen elements, mimic real-world gestures and lower the wall between the metaphor of icon-driven graphics and physical objects.
We’ve likely seen videos of new computer users, children and even animals interacting with smartphones and tablets naturally, easily and with few troubles.
Reflect on this for a moment in the scope of your experience as a designer for computer-based experiences.
Here is a video of some toddlers flipping through a magazine, expecting it to be an iPad, for example:
Or watch this video of a young child unlocking an iPad, launching her spelling app and playing it pretty effectively:
These devices are intuitive and natural to use for people of all ages and skill levels.
When the original eLearning experiences arrived on CD-ROM and the Web, courses were often accompanied by an introduction that read more like Using a Computer 101 than anything related to the core learning objectives the course was created for. Tutorials like “here is how you use a mouse” (and more) permeated these interactive pieces. This was largely necessary, because many learners had little computer experience, or had just moved from green-screen terminals to a GUI-driven PC.
These tutorials were also needed because pointer devices like a mouse and cursor have virtually no parallel in the real world. There is no physical equivalent, and the metaphor of files, folders, and clicking and dragging is an artificial contrivance at its root.
Why tap “Next” when you can swipe right to left? Why click or tap a magnifying glass icon when you can pinch and zoom? The rotate or flip buttons are pointless when you can grab an object and spin it with two fingers. The possibilities are incredible.
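To make the pinch-and-zoom example concrete: the zoom factor a pinch produces is simply the ratio of the distance between the two fingers now to their distance when the gesture began. A minimal sketch of that math (function names are illustrative, not from any particular library):

```javascript
// Euclidean distance between two touch points of the form { x, y }.
function touchDistance(a, b) {
  return Math.hypot(b.x - a.x, b.y - a.y);
}

// Zoom factor: finger spread now, relative to when the pinch started.
// > 1 means zoom in (fingers moving apart); < 1 means zoom out.
function pinchScale(startA, startB, nowA, nowB) {
  return touchDistance(nowA, nowB) / touchDistance(startA, startB);
}
```

Fingers that start 100 pixels apart and end 200 pixels apart yield a scale of 2, which you would then apply to the on-screen object.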
So, with the new vocabulary of input and control at our fingertips, why are so few of us taking advantage of it in our mLearning experiences?
In my experience, there are three primary drivers holding the industry back from adopting true multitouch and gestural inputs in our mLearning work:
- Lack of consideration in designing and creating a mobile-first experience.
- Lack of experience and vision in designing for multitouch and gestural metaphors.
- Lack of support for these input methods in the common tooling we use to produce typical learning products.
Let’s explore these issues and suggest some ways to overcome them.
Creating A Mobile-First Experience
In Learning Everywhere, one of the four main content types I explore is “Content Converted from Other Sources.”
This approach, while valid and appropriate for many pieces of content in your library, is by its nature not a mobile-first experience. This is evident when interactions and elements are brought over from a mouse-and-keyboard-driven environment.
Artifacts like mouse hovers, prompts to “click here”, Next buttons and many other elements that are needed or helpful on a computer are inappropriate, if not completely unusable, on a tablet or smartphone.
Consider the clichéd exploratory interface that requires you to hover over items to get more information on them. This simply will not work on a mobile device, because a touchscreen has no hover state.
How do we get away from these conventions? The answer is to stop “converting” and start redesigning. Take into account the target device’s capabilities and re-examine any and all user interface or user experience factors that could or should change. On-screen prompts, user interface controls and deeper interactions in your applications and websites all need proper attention.
Designing For Multitouch and Gestural Metaphors
Design disciplines all have their own patterns and conventions, visual languages and standardized approaches for conveying a design to the engineers, developers or manufacturers who must interpret the plans to create the final product.
Software designers use Garrett IA and UML diagrams. Architects use a standard set of views, elevations and projections, and specific symbols for plumbing, doors and the like. Electrical engineers all use the same set of figures to represent transistors, resistors and the other items needed in their schematics.
Likewise, a similar vocabulary for gestural and multitouch input, and a set of conventions on when, where and why the various gestures should be used, has emerged. Covered at great length and in amazing detail in Dan Saffer’s 2008 book, Designing Gestural Interfaces, this set of rules and the accompanying visual vocabulary that informs developers how to employ them is, by and large, something new to the training community.
A quick Google image search for “multitouch gestures” results in a wide array of visual depictions of these gestures.
To begin incorporating these gestures intelligently in your work, it would be wise to read up on them via articles like this, but you will also want to find libraries of graphics to incorporate into your sketches and wireframes. Stencils for Visio, PowerPoint, Keynote and OmniGraffle are readily available, so get started now.
Just having access to the templates certainly doesn’t make you an expert, but it does give you a framework you can start to explore and an expanded toolkit to enable you to design mobile-first interfaces.
Supporting Gestural Input in Your Development Workflow
If you have started creating mobile-first experiences and documenting your design process with properly annotated gestural inputs, you may be wondering where to take things now.
If you are primarily a rapid eLearning tool user, you have no real options available at this time. At the time of this writing, no major software package, whether or not it claims to support mobile, can directly address gestural or multitouch input out of the box.
This certainly throws a wrench into the works, but it doesn’t need to completely stop you from trying out long presses, swipes and more in your next mobile learning project.
For the most part, HTML authoring tools don’t support this sort of input directly, either. There are documentation pages in the respective device and OS developer areas to show you how to support these gestures, but you are going to be mostly on your own when it comes time to write the code.
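To give a sense of what writing that code yourself involves, here is a minimal swipe detector built on the raw touch events browsers expose. The `touchstart`/`touchend` events and their coordinate properties are standard; the element id, the distance threshold and the logged actions are assumptions for the sketch:

```javascript
// Classify a finger movement as a swipe direction, or null for a tap.
// dx/dy are the deltas between where the touch started and ended.
function classifySwipe(dx, dy, minDistance = 30) {
  // Movements shorter than the threshold on both axes are probably taps.
  if (Math.abs(dx) < minDistance && Math.abs(dy) < minDistance) return null;
  // The dominant axis decides the direction.
  if (Math.abs(dx) >= Math.abs(dy)) return dx > 0 ? "right" : "left";
  return dy > 0 ? "down" : "up";
}

// Browser wiring (guarded so the pure logic above also runs outside a DOM).
if (typeof document !== "undefined") {
  const el = document.getElementById("lesson"); // hypothetical element id
  let startX = 0, startY = 0;
  el.addEventListener("touchstart", (e) => {
    startX = e.touches[0].clientX;
    startY = e.touches[0].clientY;
  });
  el.addEventListener("touchend", (e) => {
    const t = e.changedTouches[0];
    const dir = classifySwipe(t.clientX - startX, t.clientY - startY);
    if (dir === "left") console.log("advance to next screen");
    if (dir === "right") console.log("go back");
  });
}
```

Even this small sketch has to make judgment calls — the tap threshold, the dominant axis — which is exactly the kind of detail the developer documentation leaves to you.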
Recently, some enterprising and bright developers have risen to the challenge, creating third-party libraries that make it much easier to get started building Web experiences with multitouch and gestural input.
The library Hammer.js adds robust support for the most common gestures and touch events. On top of that, it has the rather punny tagline, “You can touch this!” to boot, if that’s your sort of thing.
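A sketch of what wiring those gestures up with Hammer.js (the v2 API) can look like — the element id, handler bodies and the idea of passing the library in as a parameter are my own illustrative choices, not prescribed by the library:

```javascript
// Wire common gestures on an element using Hammer.js.
// HammerLib is the Hammer constructor (the global `Hammer` in a browser);
// injecting it as a parameter keeps this function easy to test.
function wireGestures(el, HammerLib) {
  const mc = new HammerLib(el);

  // Pinch and rotate recognizers are disabled by default in Hammer v2.
  mc.get("pinch").set({ enable: true });
  mc.get("rotate").set({ enable: true });

  mc.on("swipeleft", () => console.log("advance to next screen"));
  mc.on("swiperight", () => console.log("go back"));
  mc.on("pinch", (e) => console.log("zoom factor:", e.scale));
  mc.on("rotate", (e) => console.log("rotation (deg):", e.rotation));
  return mc;
}

// In the browser, after loading Hammer via a <script> tag:
// wireGestures(document.getElementById("stage"), Hammer);
```

Compare this with the hand-rolled swipe detection above: the library handles thresholds, recognizer conflicts and cross-browser event quirks for you.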
The world of multitouch is an expansive and interesting one to delve into. The intuitiveness of the use of these gestures to interact with content is real and demonstrable. Your users will value the time you take to craft mobile-first interfaces.
With some minor adjustments to your design workflow, and an attention to the changes needed in your development toolset, you can accommodate and embrace these new ways to empower your users.
The main thing holding you back at this time is your own willingness to grab hold of something new.