Reimagining the Computer Keyboard

At a special event in October announcing Apple’s latest MacBook Pro lineup, SVP Phil Schiller introduced the new Touch Bar feature by explaining that it was designed to provide a dynamic and adaptive replacement for the row of physical function keys that has accompanied computer keyboards since the early 1970s. Why, he asked, should interface design be constrained by the legacy of a 45-year-old technology?

Yet, just to the south of the new Touch Bar on this sleek, ultra-modern device sits a nearly 145-year-old technology that continues to artificially constrain computer interface design — one that I believe is way overdue for a radical reimagining:

The physical keyboard.

You’d probably think that, as a guy who makes his living herding words, I’d be the one yelling the loudest that you can have my keyboard when you pry it from my cold, dead hands. But before I can explain why I believe the future of writing absolutely demands the disappearance of the physical keyboard, first I need to go off on a highly pedantic tangent for just a moment.

The Way I Want to Touch You

The near-concurrent release of Microsoft’s impressive Surface Studio has inspired a lot of discussion about the divergent approaches that Microsoft and Apple are taking to interface design. The most common form of the argument is that, when it comes to their full-featured devices, Microsoft is cool with you touching the screen and Apple isn’t.

I don’t think that’s the real difference, though.

Now here’s the part where I get pedantic. macOS, Apple’s flagship operating system, has always been touch-enabled (as has Windows, for that matter). You move the mouse with your hand and press keys with your fingers, and these actions make things happen on the screen. (And this is the part where you groan and roll your eyes.)

The difference is that these are intermediated inputs. Your hand is moving a device over here that makes changes appear over there. When you put your finger directly on the screen where you want something to happen, you are disintermediating the input from the output.

What Apple seems to be saying is that it defines the boundaries of its OS ecosystems not in terms of the devices they are used on or the things people use them for, but in terms of whether the method of input is intermediated or disintermediated from the output. On an iPhone or iPad, the input and output primarily take place on the same plane. On a laptop or desktop, the input takes place on a separate device from the output.

This is a perfectly valid design philosophy. Whether it will end up winning the survival-of-the-fittest competition of the marketplace to become the industry standard remains to be seen. But either way, I think it offers the potential for a coherent, consistent worldview.

The problem, as the new MacBook Pros demonstrate, is that for this input philosophy to have a fair shot at success, it has to ditch the physical keyboard, which, along with the monitor, is one of the things that traditionally define what a computer is.

Invisible Touch

As it stands now, Apple’s flagship laptop has two context-sensitive touch interfaces on the horizontal slab that allow users to interact with the system using a vocabulary of gestures that are both universal to the entire system and particular to the application or even to a specific function within the application.

And between these two sophisticated, modern, highly adaptive inputs sit four rows of single-purpose, permanently affixed mechanical switches.

Apple’s demonstration of the new Touch Bar featured a parade of people using it to do all kinds of magical things. We had a photographer using the trackpad and Touch Bar to edit a photograph, a filmmaker using them to splice a video, and a musician using them to lay down tracks.

Conspicuously, none of them ever touched the alphanumeric keys that took up most of the horizontal real estate.

What if that space could have been used as a light table for the photographer, a digital moviola for the filmmaker, and a full-on mixing board and platters for the musician? And then, when it came time to name a file or type a caption, have the alphanumeric keyboard appear for just long enough for them to type it, then disappear again to be replaced once again by a suite of controls appropriate for the particular application?

And when you’re using office-y applications, the keyboard of your choice — QWERTY, Dvorak, scientific calculator, adding machine, mathematical formula, shorthand, editing symbols — would be the context-appropriate interface that appears on the horizontal input surface, just when you need it.

Apple is already exploring this philosophy on its mobile devices. Want your QWERTY keys to travel and click like they do on a physical keyboard? Haptics have you covered. Don’t need a dedicated trackpad? On the iPhone and iPad, the keyboard itself fills in: rest two fingers on it and it instantly transforms into a trackpad, then snaps back to keyboard mode when you’re done. So not only is it doable, it already exists within the Apple design philosophy.

Touch the Sky

The QWERTY keyboard is just one of many touch-based input methods that we use to create, manipulate, save, and share what we make. Requiring that it take up permanent residence on a device in the form of physical, single-purpose mechanical switches just looks and feels increasingly antiquated. It forces designers to add modern interface tools around its periphery just as, 45 years ago, computer designers had to add keys for functions, commands, and navigation around the basic, and familiar, typewriter keyboard.

The more things change in computer interface design, in other words, the more they stay the same — as long as those physical QWERTY keys have to be there.

Of course, physical keyboards have tangible advantages — quite literally. Physical keys provide the tactile feedback that permits sight-free touch typing; a typist who has to keep glancing down to check finger placement types more slowly and makes more typos. This is certainly the hurdle that virtual keyboards will have to overcome, but I’m not convinced that it’s insurmountable.

If the focus is on trying to recreate a digital analogue of a physical keyboard, that’s the equivalent of trying to invent a faster horse. If the advantage of physical keys is that they let us discern the locations and boundaries of keys by touch, then why can’t we invent other, non-mechanical ways to mark those boundaries besides physical edges? If we unpack what it is about keys that works, and why, from the keys themselves, we might come up with some surprising options we haven’t thought of before.

For example, perhaps there could be a way to charge the glass with localized spots of electrostatic resistance that feel “sticky” when your finger glides over them. It wouldn’t have to mimic the exact feel of a physical key so much as provide some equivalent sensory cue that marks the boundaries of the key zones.

Users who really want a physical keyboard, or who simply can’t do without one, will always have the option, of course. But if it’s done really well — which is what will make or break the idea — typing on virtual keyboards will be a perfectly normal experience for most users, while also offering options that are impossible with physical keys: adjustable key spacing, variable sensitivity, automatic key repositioning for maximum comfort, customizable haptic responses, multiple alphabets, accuracy adjustments based on typing speed, user-customizable keys, and much more.

Another way to interpret Apple’s dictum that it doesn’t want you touching the screen in macOS is that it doesn’t want you touching the output screen. Apple’s laptops now provide two touch screens on the input side: the trackpad and the Touch Bar. The next step is to unify them — along with the alphanumeric keys that sit between them. October’s MacBook Pro event demonstrated pretty clearly that the future of intermediated input is adaptive, context-sensitive, and virtual. And that means doing away with the mechanical QWERTY keyboard — and possibly even with the QWERTY arrangement itself.

The QWERTY configuration was a fine solution to a problem that effectively disappeared with the advent of the IBM Selectric and digital computers more than 40 years ago. It has stuck around since then because it works, of course, but also because it has a lot of inertia behind it, and inertia is hard to nudge in a new direction.

Plus, if we abandon QWERTY we’re probably looking at a protracted period of experimentation akin to the early days of typewriters before we find the new standard, but with the potential for much more disruption since QWERTY keyboards are ubiquitous today. What kind of effect will that have on communications?

The shift to virtual keyboards for traditional computers — perhaps along with other features such as predictive chording and voice input — represents a fundamental sea change with unpredictable consequences. Who out there is willing to take such an enormous risk? Will it start at the top with Microsoft or Apple, or with one of the hardware manufacturers like Dell or Lenovo? Or will it be a startup coming out of nowhere with nothing to lose? There is no way to know yet. But someone has to go first and say “come on in, the water’s fine.” Otherwise, we’re going to remain where we are — and where we’ve been for 145 years.

An earlier version of this post was previously published on Sotto Voce. Many thanks to Tom Chandler and Richard Polt for their invaluable feedback and suggestions, many of which are reflected in this version.

Author: Paul Lagasse

Paul Lagasse provides expert-to-expert communications services to nonprofit, business, and government clients in the metro Baltimore-DC area. Specialties include science and medical writing, technical report editing, and content marketing.