How can Bio help us build better interfaces?

Mirella Mashiach and Adi Mashiach, MD

Today it is mainly we humans who reach toward the devices we interact with. It should be the other way around. Technology should integrate seamlessly with our daily lives. We believe Bio will eventually become a bridge between humans and technology.

The interface we use helps us as humans operate and communicate with computers. The early solutions were very technical and were “natural” only to computers. Punch cards became the dominant form of computer interaction in the early decades of computing, though they were perhaps the least intuitive: we translated our own letters and words into “computer language”, holes punched in patterns a computer could read. Years later, text-based user interfaces were developed, at first on typewriter-like terminals without any monitor. Although we could now use our own letters and numbers to form a series of commands, it was still not intuitive. Computers were operated by highly technical users who knew the ins and outs of the computer’s command language.

Eventually, a more intuitive approach to computer interaction came in the form of graphical user interfaces. It required new input devices, such as the mouse, so that we could use this new form of interface. Today, most computer interactions are still based on WIMP (windows, icons, menus, pointer), developed at Xerox PARC in 1973. We still rely on a keyboard-mouse/trackpad-screen interface. Interfaces have evolved since the 1970s, but mostly as improvements on the same basic concept: humans need to learn and initiate the interactions with computers.

Mobile computing has been an area of great advancement, in which the interface has developed beyond the keyboard-mouse-screen paradigm. The ubiquity of touch-based interfaces has brought about more direct, more intimate interactions with technology. Touch is, however, merely a newer, more intuitive way of communicating with a computer through the same old icon-menu-pointer paradigm. Mobile processors continue to evolve and provide more computing power, and more computing power means more capabilities for mobile devices, enabling further development of their interfaces through multi-touch gestures, voice recognition, haptic feedback and facial recognition. As this trend continues, mobile computing interfaces will move away from the decades-old icon-menu-pointer paradigm.

Recently, some consumer devices have taken interface design a step further. These devices have dropped traditional interfaces like the screen or keyboard, and rely instead on other types of user interaction. Voice-activated speakers such as the Amazon Echo, Google Home and Apple HomePod do not require us to approach and operate them with the same principles as graphical user interfaces. These devices also demonstrate a trend towards seamless integration of the interface; they provide “background” interaction, waiting to hear our voice rather than waiting for us to approach them. Activation precedes interaction. In our view, this demonstrates how technology is moving from being a passive interactor to a more active, human-facing one. Instead of relying on us to approach the device, turn it on and type or click what we want to do, these devices are constantly listening, waiting for us to communicate in our natural language.
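
To make “activation precedes interaction” concrete, here is a toy sketch of a background listener. It is our own illustration with made-up names (a hypothetical wake word and stand-in functions), not how any of these products are actually implemented:

```python
# A toy sketch of "activation precedes interaction": the device listens in the
# background and only acts on speech that follows a wake word. The wake word,
# function names and console input are illustrative stand-ins, not a vendor API.

WAKE_WORD = "hey device"  # hypothetical wake word


def listen_for_audio() -> str:
    """Stand-in for a microphone stream; here we just read a line of text."""
    return input("(mic) ")


def handle_command(text: str) -> None:
    """Stand-in for the assistant's natural-language processing."""
    print(f"Processing request: {text!r}")


if __name__ == "__main__":
    while True:
        snippet = listen_for_audio().lower()
        # Background mode: ignore everything until the wake word is heard.
        if WAKE_WORD in snippet:
            command = snippet.split(WAKE_WORD, 1)[1].strip()
            if command:
                handle_command(command)  # activation first, interaction second
```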

We believe this human-facing approach will continue, and that Bio will be used to build better interfaces that allow more natural interactions with technology. Our ever-growing understanding of Bio will be used to develop new means of interacting with technology, supported by new technologies such as machine learning, smaller and cheaper sensors, and more computing power in mobile devices.

There are already companies turning to Bio to look for additional ways to interface with technology. These companies start by understanding our biology, how our bodies work, and then figure out how they can harness this information to create new interfaces. For example, our understanding of gait allows us to analyze and find unique walking signatures to be used as a security measure, like the product developed by UnifyID, which raised $20M from NEA and Andreessen Horowitz. Another example is gesture-based interfaces, which give us a natural, non-verbal way of communicating. Take, for example, Thalmic Labs, which, with $135M total funding raised from Amazon, First Round Capital and Spark Capital, developed a product used to control other devices through simple hand gestures, even while we are doing something else. Or the year-old CTRL-Labs, which raised $11M from Spark Capital and Fuel Capital and developed a technology that reads delicate finger movements so precisely that users can type on an empty desk. These companies are developing products based on hard science, enabled by their understanding of neurobiology. They bring Bio into the consumer world with new interfaces that can be used by all of us. They are taking interface design a step forward, towards more seamless integration with the way we behave and work.

There are others who believe the ultimate interface with technology will be achieved when we implant electronics in our brains to directly communicate with computers. Some, like Elon Musk, who launched Neuralink, and Bryan Johnson, who founded Kernel, believe implants could enhance our memory and intelligence by augmenting and connecting computing devices with our brains. It seems that such devices would have to be implanted through a complex surgical procedure, and would certainly require FDA regulatory approval, making them, for the moment, far from being a widespread consumer product, unless a simpler, less invasive design emerges.

We believe brain interfaces could develop, but we think they should be non-invasive. Moreover, we believe that physically connecting our bodies with technology should not be done by directly connecting tissue with electronics. We believe it should be through an intermediary, a human-biology-machine interface. We envision that biological developments will allow us to synthesize artificial tissue that has the capacity to interface with both neurons and electronics, acting like an electronic conductor between metal and flesh. This kind of synthetic tissue would biologically adapt to our brains, leading to a more natural adaptation process between the two tissues, rather than our biology having to adapt to hardware. That’s the whole point of making technology come to us. We’ll talk more about human-biology-machine interface in an upcoming post.

We further believe that an important part of our human interaction is still being ignored. The interfaces we use today, and most of the ones being developed, are focused on the “physical modalities” of our being: touch, gesture, and voice. As humans, we interact with each other using words, gestures and visual aids, but we also have a subconscious line of communication: our emotions. We feel and respond to the emotions of those around us and those we directly interact with. These emotions do not necessarily exhibit themselves in an apparent way, and we may not notice them unless we are focused on them and trained to read them - just like body language. Recent research shows there is no single location in the brain that processes our full range of emotions, which means that even implanting a brain interface would not uncover all of our emotions.

We believe that interface technologies will evolve to understand the range of our emotions and respond to them. The ultimate step of technology interacting with us as human beings will be software-based emotional intelligence, or Artificial Emotional Intelligence (AEI). AEI is not about the artificial expression of emotions by software or robots; it is about reading and understanding our human emotions and integrating this emotional information in order to better understand and interact with humans.

Developments in Bio, supported by developments in machine learning, computer vision and sensors, will allow for this ultimate step. Computers and artificial intelligence software could sense and observe the many data points that come from multiple modalities: our facial expression, a change of color in our skin, a change in our tone, our gait, our posture and more. Although physical modalities still serve as the data sources, it is the integration of that data, and its analysis with the aim of uncovering our emotions, that makes this approach different.
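
As a rough illustration of what such integration could look like, here is a minimal sketch of combining per-modality emotion scores into a single estimate through a simple weighted “late fusion”. The modality names, emotion labels and weights are all hypothetical, not a description of any existing product:

```python
# A minimal, hypothetical sketch of multimodal "late fusion" for emotion estimation.
# Each modality (face, voice, posture, ...) produces its own scores over a small
# set of emotions; the fusion step combines them with per-modality weights.
# All modality names, emotion labels, and weights here are illustrative assumptions.

from typing import Dict

EMOTIONS = ["calm", "happy", "upset"]


def fuse_modalities(scores: Dict[str, Dict[str, float]],
                    weights: Dict[str, float]) -> Dict[str, float]:
    """Combine per-modality emotion scores into one weighted estimate."""
    fused = {e: 0.0 for e in EMOTIONS}
    total_weight = sum(weights.get(m, 0.0) for m in scores)
    for modality, emotion_scores in scores.items():
        w = weights.get(modality, 0.0)
        for emotion in EMOTIONS:
            fused[emotion] += w * emotion_scores.get(emotion, 0.0)
    # Normalize so the fused scores sum to 1 (when any weight was applied).
    if total_weight > 0:
        fused = {e: s / total_weight for e, s in fused.items()}
    return fused


if __name__ == "__main__":
    # Hypothetical per-modality readings: the facial expression hints at "upset",
    # tone of voice is ambiguous, posture leans "calm".
    readings = {
        "face":    {"calm": 0.1, "happy": 0.2, "upset": 0.7},
        "voice":   {"calm": 0.4, "happy": 0.2, "upset": 0.4},
        "posture": {"calm": 0.6, "happy": 0.3, "upset": 0.1},
    }
    weights = {"face": 0.5, "voice": 0.3, "posture": 0.2}
    print(fuse_modalities(readings, weights))
    # -> {'calm': 0.29, 'happy': 0.22, 'upset': 0.49}, i.e. leaning "upset"
```

The point of the sketch is the one the paragraph makes: no single physical signal is treated as the answer; it is the weighted integration across modalities that produces the emotional estimate.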

Emotionally-enabled technology could assist us when we interview babysitters, helping us judge whether they are trustworthy. It could read and understand our feelings in reaction to a product and offer us a more suitable one. Emotionally intelligent virtual assistants could sense that we are upset and finally respond in a way other than, “Sorry, I didn't quite get that”.