
Next Generation User Interfaces

It's interesting how the computer user interface has evolved over the decades. Here is a very brief history: first there was nothing more than switches and lights, and you had to enter and read information in binary. This evolved to punch cards and tape, later keyboards and printers, and eventually terminals with keyboards and monitors. For decades, the text user interface was the status quo.

In the 1960s, Doug Engelbart introduced the concepts of the mouse and the graphical user interface (GUI). In the 1970s and 1980s, Xerox took the concept and made it a reality. In the 1980s, Apple popularized the GUI with the Lisa and Macintosh lines of computers. Then, again for decades, the graphical user interface was the status quo.



In the 1980s, Jaron Lanier introduced the concept of 'virtual reality' (VR). The first generation of this technology was crude by today's standards, but it has advanced drastically since. I remember seeing some early VR prototypes of future GUIs in the 1990s and being blown away by the concepts.

The first optical virtual keyboard was invented and patented by IBM engineers in 1992. It optically detects and analyzes human hand and finger motions and interprets them as operations on a physically non-existent input device, such as a surface with painted or projected keys. In 2002, the start-up company Canesta developed a projection keyboard using their proprietary "electronic perception technology." (More Information)
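The core step in such a keyboard is simple to illustrate: once the optical sensor reports a fingertip position, it is mapped onto a key in the projected layout. Here is a minimal sketch in Python; the layout, key sizes, and coordinate units are illustrative assumptions, not the IBM or Canesta design:

```python
# Sketch of the key-lookup step in a projection keyboard: an optical
# sensor reports a fingertip (x, y) position in millimeters, and we
# look up which projected key that position falls on.
# The layout and key dimensions below are made-up examples.

KEY_WIDTH_MM = 18.0
KEY_HEIGHT_MM = 18.0

# A simplified projected layout: rows of key labels.
LAYOUT = [
    list("QWERTYUIOP"),
    list("ASDFGHJKL"),
    list("ZXCVBNM"),
]

def key_at(x_mm: float, y_mm: float):
    """Return the key label under a detected fingertip, or None."""
    row = int(y_mm // KEY_HEIGHT_MM)
    col = int(x_mm // KEY_WIDTH_MM)
    if 0 <= row < len(LAYOUT) and 0 <= col < len(LAYOUT[row]):
        return LAYOUT[row][col]
    return None

print(key_at(5.0, 5.0))    # fingertip over the first key of the top row -> Q
print(key_at(200.0, 5.0))  # outside the projected area -> None
```

The real systems, of course, do the hard part before this step: recognizing fingers and detecting the "press" from camera or infrared data.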



In 1993, Apple released an early natural speech recognition feature as part of their OS for the Quadra. Microsoft first included a comparable feature with the release of the Windows Vista OS.

In 1999, Steve Mann introduced the EyeTap glasses; in 2012, Google announced their Project Glass (aka Google Glass) project.


Touch screens are passé, but when this technology is enhanced by advancements like a multi-touch gesture interface and combined with other technologies, you get a GUI that begins to be on par with some of the user interface concepts introduced in the 2002 movie 'Minority Report' (excerpt of the scenes I am referring to). Microsoft now includes a multi-touch gesture interface feature as part of the Windows 7 OS.
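One multi-touch gesture is easy to sketch: pinch-to-zoom. Given the two touch points in consecutive frames, the ratio of the distances between them gives a zoom factor. This is an illustrative simplification, not any vendor's actual gesture engine:

```python
import math

# Minimal sketch of pinch-to-zoom recognition: compare the distance
# between two fingers across frames. A factor > 1 means the fingers
# spread apart (zoom in); < 1 means they pinched together (zoom out).
# Coordinates are in pixels; this is a toy model.

def distance(p, q):
    """Euclidean distance between two (x, y) touch points."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def pinch_zoom_factor(prev_touches, curr_touches):
    """Zoom factor from two touch points in the previous and current frame."""
    d_prev = distance(*prev_touches)
    d_curr = distance(*curr_touches)
    if d_prev == 0:
        return 1.0  # degenerate case: fingers started at the same point
    return d_curr / d_prev
```

A real gesture recognizer would also track touch identities over time and distinguish pinches from rotations and swipes, but the distance ratio is the heart of the zoom gesture.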

In 2006, Nintendo introduced the Wii and its motion-sensing 3D wireless controllers. Also in 2006, IO2 Technology released a new version of their Heliodisplay called the M2i 3D Holographic Display multimedia projector. This new projector can support a display image up to a 30-inch diagonal area with a 4:3 aspect ratio, a 1280x1024 maximum resolution, up to 2200 lumens of brightness, and interactive cursor controls (i.e., a virtual touchscreen).


In 2007, Apple introduced the iPhone and popularized the multi-touch gesture interface on this device. Also in 2007, the following video claims to show the world's first mobile augmented reality advertisement. The technology came from the HIT Lab NZ and was created for the Wellington Zoo's Close Encounters exhibition. The mobile augmented reality advertisement was credited with a 32% growth in visitors at the zoo.



In 2009, Yelp helped popularize the augmented reality concept (computer-generated information overlaid on real-world video) by introducing an Easter egg on the iPhone called Monocle. Also in 2009, Microsoft introduced Project Natal, now called 'Kinect' (proof of concept video), and at TEDIndia, Pranav Mistry demoed concepts from his SixthSense project, showing how computer data can be overlaid on the physical world and made interactive. As of 2010, Intel is also working hard on advancing the user interface; see the following article.
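The geometry behind a Monocle-style overlay can be sketched: compute the compass bearing from the user to a point of interest, and map the angle between that bearing and the device's heading to a screen x-coordinate. The field of view and screen width below are assumed values, and real apps also use GPS accuracy, tilt, and distance:

```python
import math

# Sketch of the placement math in a location-based AR overlay: where
# on screen does a point of interest (POI) belong, given the user's
# position and compass heading? Screen size and camera field of view
# are illustrative assumptions.

SCREEN_WIDTH_PX = 640
CAMERA_FOV_DEG = 60.0  # assumed horizontal field of view

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial compass bearing from point 1 to point 2, in degrees."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = (math.cos(phi1) * math.sin(phi2)
         - math.sin(phi1) * math.cos(phi2) * math.cos(dlon))
    return math.degrees(math.atan2(y, x)) % 360.0

def screen_x(user_heading_deg, poi_bearing_deg):
    """Map a POI's bearing to a screen x-pixel, or None if off-camera."""
    # Signed angular difference in (-180, 180]
    diff = (poi_bearing_deg - user_heading_deg + 180.0) % 360.0 - 180.0
    if abs(diff) > CAMERA_FOV_DEG / 2:
        return None  # outside the camera's field of view
    return int((diff / CAMERA_FOV_DEG + 0.5) * SCREEN_WIDTH_PX)
```

For example, a POI whose bearing matches the user's heading lands at the horizontal center of the screen; one more than half the field of view off to either side is not drawn at all.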

All the technologies I mentioned above might seem a little disjointed in how I presented them, although they all share a few things in common. All of them either popularized existing concepts based around advancements in user interface technologies, or introduced new concepts that had not been used before.

As mobile devices become more powerful and gain more features (such as the motion sensors and gyroscopes already built into them), I personally believe that software developers will build on these features and concepts to bring about more advanced next generation user interface enhancements, which I believe will include location-aware social networking concepts (such as Gowalla, Foursquare, and now Facebook Places), augmented reality, and object and speech recognition.

From a technological standpoint, I don't see developers limited by hardware anymore. I see the greatest limitation as the imagination needed to integrate all these technologies into a killer application that gains mass adoption. However, with all these advancements come greater concerns about privacy and security. I hope these issues will be addressed before they become a problem.

A possible future could look like this...

 Original (2010)
 Updated (2013)



Other products worth mentioning:
  • 2009: The Magic Mouse is the first consumer mouse to have multi-touch capabilities. Taking after the iPhone, iPad, iPod Touch, and multi-touch trackpads, the Magic Mouse allows the use of gestures such as swiping and scrolling across the top surface.
  • 2011: Apple Siri - an intelligent personal assistant and knowledge navigator which works as an application for Apple's iOS. The application uses a natural language user interface to answer questions, make recommendations, and perform actions by delegating requests to a set of Web services. In 2012, Google introduced a competing service for Android called Google Now.
  • 2012: Oculus Rift - an upcoming high field of view (FOV), consumer-priced virtual reality (VR) head-mounted display (HMD). It is being developed by Oculus VR, which raised $2.4 million through a Kickstarter campaign.
  • 2012: Tobii Technology is a Swedish high-technology company that develops and sells products for eye control and eye tracking.
  • 2012: Nike FuelBand - not unique; during this time period, several wrist-worn products in the same category came out that monitored the wearer's activity (moving, sleeping, etc.), data that could later be downloaded to a smartphone or computer.


Other prototypes and projects:



At Microsoft TechFest 2013, Microsoft researchers showed off a 3D touchscreen with "haptic" feedback. Haptic feedback provides a form of 'active resistance,' so when you push 3D objects around in a virtual workspace, it gives you the proper sensation of that object. For example, if you push a rock, it will feel different than if you push a stone. (More Information)
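The 'active resistance' idea can be sketched as a simple spring model: the force pushed back at the user grows with how far the finger presses into a virtual object, scaled by a per-material stiffness. The materials and stiffness values below are invented for illustration and are not from the Microsoft demo:

```python
# Toy spring (Hooke's-law) model of haptic 'active resistance':
# the actuator pushes back with a force proportional to how deep the
# finger has pressed into a virtual object, scaled by a per-material
# stiffness. All values here are illustrative assumptions.

STIFFNESS_N_PER_MM = {
    "rock": 5.0,    # rigid: strong resistance almost immediately
    "sponge": 0.3,  # soft: yields easily under the same press
}

def resistance_force(material: str, depth_mm: float) -> float:
    """Force in newtons pushed back at the user for a given press depth."""
    k = STIFFNESS_N_PER_MM[material]
    return k * max(depth_mm, 0.0)  # no pull when the finger isn't touching
```

Pressing 2 mm into the "rock" yields far more force than the same press into the "sponge," which is exactly the difference in sensation the demo describes.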