Beyond the mouse: the evolving human/machine interface

It’s a remarkable piece of research, and you can’t argue with the science.

A U.S. study published a couple of years ago in Psychological Science reached a clear conclusion: for memory retention, the pen is mightier than the keyboard. The context was college students taking notes in lectures – and the insight was that those using pen and paper took fewer notes but retained the information better than those who typed more detailed notes on their laptops.

As to the science, the connection with handwriting lies in how the brain works: we process information in multi-sensory ways, and it turns out that using a pen is a more multi-sensory experience than typing on a keyboard.

Different situations, of course, call for different computing tools. There’s no doubt the keyboard is critical for faster, more efficient writing in certain settings. To me, this brings me back to something I’ve written about on the subject of customer experience and analytics: we each use our devices in all manner of ways, so you cannot assume people want to use technology in the same way as each other. Many of our ThinkPad customers purchase the product specifically because the keyboard experience is so good.

Fast and efficient… the latest ThinkPad X1 Yoga features a rise-and-fall keyboard that retracts fully flat in tablet mode.

Providing choice is the key. Pens are useful for taking notes, annotating, drawing or sketching on a PC. Keyboards are best for long emails or for writing articles. And the mouse is best when you need to point at something specific on your screen and double-click.

But the interesting thing today is, after years of PC usage where the keyboard and the mouse have been the prevailing paradigms, we’re accelerating into a new era of choice where the human/machine interface is rapidly evolving, taking on different forms as personalized computing morphs into something that is all around us.

What do I mean by different forms? Consider the uptake of the stylus for drawing, sketching and taking notes on touch screens. Or voice-activation to control multiple interactions with all kinds of devices including harnessing digital assistants as AI evolves (by 2018, Gartner predicts that 30% of our interactions with technology will happen through conversations with smart machines[1]). And today, the evolution of the interface includes virtual reality (VR) and augmented reality (AR) enabled movement and gesturing – a user interface in its relative infancy but one with exciting potential.

The evolution of the stylus

The telautograph was a precursor to the modern fax machine. A stylus at the transmitting end of the communications system inputs a message or drawing, which is re-created automatically at the receiving end.

In the field of “pen computing,” a stylus or digital pen is used to write or draw directly onto a touchscreen or tablet. Not too surprisingly, pen computing actually predates the keyboard and mouse as a user interface, which makes sense when you consider centuries of human handwriting. The earliest example was the telautograph stylus, an innovation patented as far back as 1888! The first modern instance of handwritten text recognition using a tablet, however, arrived with a device called the Stylator in 1957.

Today, people use a stylus in a multitude of settings – artists enjoy sketching and capturing ideas directly onto a touchscreen; students and industrial workers use a stylus as a pointing device instead of a mouse or trackpad to record information on mobile devices; and gamers use them on smaller handheld consoles.

As an interface, we’re seeing an increase in the use of pen technology for a couple of key reasons. PCs today can accept a greater degree of “touch” input and process it back to the user faster, so it feels more natural, without a long wait while the information is computed. We also see many more PCs and devices with touchscreens, and more 2-in-1 options in the market, with more software being developed all the time to support stylus use. Finally, styluses themselves have become lighter, with a better battery experience thanks to faster charging.

The best stylus user experience mimics the familiarity of drawing or writing with a real pen or pencil, detecting varying degrees of pressure sensitivity to vary thickness and density. By way of example, at Lenovo, based on feedback from our customers who use a stylus regularly, we embedded 4,096 levels of pressure sensitivity into the Lenovo Active Pen 2 to assist with the stylus writing customer experience, finessing the degree of accuracy for the user. We also made the pen full size, so it feels like a normal writing pen. 
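To make the pressure-sensitivity idea concrete, here is a minimal sketch (my own illustration, not Lenovo’s implementation) of how a digitizer’s raw pressure reading – one of 4,096 discrete levels – might be mapped to a stroke width on screen. The function name and the width range are hypothetical:

```python
def pressure_to_width(raw_pressure: int,
                      min_width: float = 0.5,
                      max_width: float = 6.0,
                      levels: int = 4096) -> float:
    """Map a raw digitizer pressure reading (0 .. levels-1) to a stroke
    width in pixels using simple linear interpolation."""
    if not 0 <= raw_pressure < levels:
        raise ValueError(f"pressure must be in [0, {levels - 1}]")
    t = raw_pressure / (levels - 1)        # normalize to 0.0 .. 1.0
    return min_width + t * (max_width - min_width)

# A light touch produces a thin line; full pressure a thick one.
print(pressure_to_width(0))      # 0.5
print(pressure_to_width(4095))   # 6.0
```

With 4,096 levels rather than, say, 256, each step changes the stroke width by only a fraction of a pixel, which is why a finer-grained sensor feels closer to a real pen. Real drawing applications often apply a non-linear curve instead of this straight-line mapping.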

The first user interface - writing and drawing with pens feels natural. Here, artists sketch directly on to a Yoga 720 convertible laptop with an optional Active Pen stylus.

You talkin’ to me?

If the stylus represents one variation on the keyboard and mouse (or, perhaps more accurately, a complementary interface), voice activation in personalized computing represents another.

In fact, widespread universal use of voice recognition on our PCs and devices is the near future, and we’re very, very close. Andrew Ng, chief scientist with Baidu, explains why: “As speech recognition accuracy goes from say 95% to 99%, all of us in the room will go from barely using it today to using it all the time. Most people underestimate the difference between 95% and 99% accuracy – 99% is a game changer . . . No one wants to wait 10 seconds for a response. Accuracy, followed by latency, are the two key metrics for a production speech system . . .”

Though the increase seems small, closing in on the last few percentage points creates a comfort level that unlocks many more usable, user-friendly voice scenarios.
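A rough back-of-envelope calculation (my own, assuming for simplicity that each word is recognized independently) shows why that last jump matters so much – consider the probability that a whole utterance comes through with zero errors:

```python
def sentence_success(word_accuracy: float, words: int = 20) -> float:
    """Probability that a 'words'-long utterance is transcribed with
    zero errors, assuming each word is recognized independently."""
    return word_accuracy ** words

for acc in (0.95, 0.99):
    print(f"{acc:.0%} per-word accuracy -> "
          f"{sentence_success(acc):.1%} chance of a flawless 20-word utterance")
```

At 20 words, 95% per-word accuracy yields only about a 36% chance of a flawless utterance, versus roughly 82% at 99% – the difference between correcting the machine most of the time and barely noticing it.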

Having computers “understand” what we’re saying to them is certainly an ideal the industry is working toward. Great steps have been made via devices such as the Lenovo Smart Assistant using Amazon’s Alexa tech and other smart home products. If sci-fi writers have long envisioned this human/computer interaction (think Star Trek), we’re finally well on the way to realizing such scenarios.

The rise of IoT has seen voice-activated digital personal assistants, like the Lenovo Smart Assistant powered by Alexa, enter the family home.

A recent, on-point article from Chatbots Magazine traces the history of human–computer interfaces, noting that 90% of all human communication still happens through voice. Little wonder, then, that voice activation technologies will continue to be such an integral part of personalized computing. Voice search queries on Google, for example, have spiked in the last three years. The Cortana personal assistant voice recognition capabilities that come with Microsoft Windows are increasingly popular with Lenovo customers. The consumer benefits are many: hands-free interaction with devices using voice is easier, faster, and more convenient in certain scenarios.

At Lenovo, our own customer research has highlighted how people see voice recognition with regard to desktop and laptop computing specifically. The primary motivation for wanting voice activation on PCs is to help people multitask as they go about using their devices. Our research is supported by analysis from Statista, which recently revealed the primary reason for using voice in 2016 was that it is “useful when hands/vision are occupied”, a point made by 61% of respondents.

I’m excited for the next wave of personalized computing as human/machine interfaces evolve further still. In this article, I’ve briefly mentioned how a new generation of AR- and VR-enabled interfaces and devices will help to redefine the user experience. For more on that topic, take a look at an article I’ve recently written about our own steps as a business into AR-enabled smartphones and a VR entertainment hub for movies, TV shows and gaming content.

We’re only just getting started… let’s see where the evolution takes us next!

What is your favorite technology user experience? Reply in the comments below. You can also join the conversation with me on Twitter.


[1] https://www.gartner.com/doc/3021226/market-trends-voice-ui-consumer



Really thought-provoking – in particular I liked the quote about the significance of going from 95% to 99%. That makes so much sense, although I had never heard it put that way. It helps explain why Gartner's most recent "Hype Cycle" for Machine Interfaces puts Speech Recognition on the "Plateau of Productivity" (i.e., mainstream adoption).

Good take, Dilip! It's the right combination of these technologies (along with touch) meeting up with the right use cases, at the right time in the future, that will give us the next gigantic leap forward in human interaction with our personal devices. One day I will order from my favorite restaurant on a ThinkPad using Lenovo Smart Assistant; it will project an AR/VR menu in the air that I will touch or use a stylus on to input my order, read out my total, then give me status updates when it's on the way/arriving. :-)

Dilip, great article – interesting statistics and insights on pen and voice usage. I too look forward to seeing how these technologies evolve.

Dilip, have we ever thought of entering the gaming console category?

Creating all of these new experiences will require a new way of designing and building the world around us. We can help make sense of it all @ Design|Research: http://bit.ly/DRsearch

