Ever see the futuristic crime thriller Minority Report starring Tom Cruise?
In one scene, Tom walks into a Gap store, sporting his newly transplanted black-market eyeballs. A computer scans his Franken-eyeball and a holographic sales associate merrily chirps, “Hello, Mr. Yakimoto! Welcome back to The Gap. How’d those assorted tank tops work out for you?”
From this quick exchange, the viewer can deduce two things: 1) Tom’s eyeball donor supported the “sun’s out, guns out” philosophy, and 2) this future world has mastered the “Ambient User Experience.”
Ambient User Experience is the idea that people should be able to interact with electronic devices with minimal user interface. The devices work together, learning and adapting to the user’s habits and providing assistance in the background, to the point where their functionality becomes an unnoticeable but critical part of the user’s environment and life: essentially “ambient.”
The reasoning behind the trend: connect all of a user’s smart devices, improve their learning, pattern-recognition, and sensory technology, and users could experience a truly smart world.
The Ambient User Experience can be broken into three phases: 1) individual devices learn user patterns, but only same-manufacturer devices communicate; 2) all of a user’s devices communicate with one another, regardless of manufacturer; and 3) all devices work for the user, regardless of ownership.
Current technology places us comfortably in Phase 1. Individual devices learn from user patterns to simplify the life of the user with minimal user interface. Google Maps is one of the first visible examples of technology learning the patterns of users and utilizing that information to assist in users’ everyday lives.
It is a little unsettling the first time your smartphone independently informs you to leave the house now to accommodate a fifteen-minute slowdown on the commute to work.
In Phase 1, compatible devices can communicate with each other, yet devices created by different manufacturers are often incompatible and therefore cannot talk to one another. Apple devices communicate with other Apple devices, and Amazon devices communicate with other Amazon devices. Across manufacturers, there is a language barrier.
In Phase 2, all devices owned by a user or family can communicate with one another, regardless of manufacturer, configuration, or function. In this phase, the user can truly live a “smart life.”
The alarm clock monitors breathing patterns and wakes us at exactly the right point in the circadian rhythm to ensure maximum alertness for the day. As the alarm sounds, it triggers the thermostat to raise the room to a comfortable seventy-two degrees, since it has learned we sleep hot and drops the temperature to a frosty sixty-five while we rest. The morning playlist on the home stereo plays automatically while we dress for the day.

As we walk out the front door, the interior lights shut off, the thermostat switches to eco-saving mode, the doors lock, and the security alarm activates. The music left playing on the stereo automatically begins to stream through the phone and wireless headphones. As we get in the car, the music switches to the car stereo and the navigation system calculates the fastest route to work given current traffic.
When we pull into the parking garage of the office, the workstation picks up the signal from the phone, and begins its startup process so it is ready and waiting as we walk in the door.
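For the technically curious, here is what the plumbing behind a morning like that could look like: a simple event-driven rules engine, where each event fans out to actions on connected devices. This is a minimal sketch in TypeScript; the event names, devices, and commands are invented for illustration, not drawn from any real smart-home platform.

```typescript
// Hypothetical event-driven smart-home rules engine.
// Devices, events, and commands mirror the scenario above.

type HomeEvent = "alarm.sounded" | "door.front.exited" | "car.entered";

interface Action {
  device: string;            // e.g., "thermostat", "lights.interior"
  command: string;           // e.g., "setTemperature", "off"
  value?: number | string;   // optional command argument
}

// Each rule maps a single event to the actions it should trigger.
const rules: Record<HomeEvent, Action[]> = {
  "alarm.sounded": [
    { device: "thermostat", command: "setTemperature", value: 72 },
    { device: "stereo", command: "play", value: "morning-playlist" },
  ],
  "door.front.exited": [
    { device: "lights.interior", command: "off" },
    { device: "thermostat", command: "setMode", value: "eco" },
    { device: "locks", command: "lock" },
    { device: "security", command: "arm" },
    { device: "audio", command: "handoff", value: "headphones" },
  ],
  "car.entered": [
    { device: "audio", command: "handoff", value: "car-stereo" },
    { device: "navigation", command: "routeTo", value: "work" },
  ],
};

// Dispatch: in a real system each action would be sent to a device API;
// here we just log what would happen.
function handleEvent(event: HomeEvent): void {
  for (const action of rules[event]) {
    console.log(`${action.device} -> ${action.command}(${action.value ?? ""})`);
  }
}

handleEvent("door.front.exited");
```

The point of the sketch is the shape of the problem: every step in the morning routine is just an event fanning out to actions on devices that all speak the same protocol. Phase 2 is less about new hardware than about that shared protocol existing at all.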
The main barrier preventing us from fully entering Phase 2 is the need for a common platform or software that makes device manufacturer and configuration irrelevant. The Amazon Echo and Google Home are two examples of technology beginning to break down this barrier, putting their owners into a kind of Phase 1.5.
Both devices allow the user to enable “skills” for other devices, such as smart thermostats, smart lights, and security systems. Compatible devices are manufactured by many different companies, yet they have the functionality to communicate with and be controlled through the central “smart home” hub of the Echo or Home.
However, to fully transition into Phase 2, the Echo and Home would need to make device compatibility nearly universal, and develop functionality for connected devices to communicate with one another.
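In software terms, what the Echo and Home are groping toward is an adapter layer: wrap each manufacturer’s API so every device satisfies one shared interface, and the hub never has to care who built what. Here is a rough sketch of that pattern, with entirely fictional vendor SDKs (“Acme” and “Globex”) standing in for real ones:

```typescript
// Hypothetical adapter layer: one shared interface, many vendor APIs.

interface Thermostat {
  setTemperature(fahrenheit: number): void;
}

// Imaginary vendor SDKs with incompatible methods and units.
class AcmeThermostatSDK {
  setTempF(value: number): void { console.log(`Acme set to ${value}F`); }
}
class GlobexClimateAPI {
  updateSetpointCelsius(value: number): void { console.log(`Globex set to ${value}C`); }
}

// Adapters translate the shared interface into each vendor's dialect.
class AcmeAdapter implements Thermostat {
  constructor(private sdk: AcmeThermostatSDK) {}
  setTemperature(fahrenheit: number): void {
    this.sdk.setTempF(fahrenheit);
  }
}
class GlobexAdapter implements Thermostat {
  constructor(private api: GlobexClimateAPI) {}
  setTemperature(fahrenheit: number): void {
    // Unit conversion happens inside the adapter, invisible to the hub.
    this.api.updateSetpointCelsius(Math.round(((fahrenheit - 32) * 5) / 9));
  }
}

// The hub only ever sees the shared interface.
const thermostats: Thermostat[] = [
  new AcmeAdapter(new AcmeThermostatSDK()),
  new GlobexAdapter(new GlobexClimateAPI()),
];
thermostats.forEach((t) => t.setTemperature(72));
```

The conversion inside GlobexAdapter is the whole point: differences in units, protocols, and naming get absorbed at the edges, so the hub speaks one language to every device in the house.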
Sometime in the not-so-distant future, the Ambient User Experience may move into Phase 3. That is both exciting and terrifying.
In Phase 3, all devices can work for you, regardless of ownership. Let that sink in…
Following the example in Minority Report, a Phase 3 Ambient User Experience would use a unique identifier, such as an eyeball scan, an implanted microchip, or a wearable device, to identify us and adjust to our preferences.
Walk into a hotel room, and the room adjusts temperature and lighting levels to our liking. Borrow a friend’s iPhone, and the phone reflects our contacts, settings and photos instead of theirs. Billboards reflect ads tailored to us when we walk into a store, and digital restaurant menus display options based on our dietary preferences and restrictions.
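Strip away the science fiction and Phase 3 is, at its core, a lookup: a unique identifier resolves to a preference profile, and whatever device happens to be nearby applies it. Here is a deliberately simplified sketch, with a made-up identifier format and profile fields:

```typescript
// Hypothetical Phase 3 lookup: identifier -> preference profile -> device.

interface Profile {
  name: string;
  roomTemperatureF: number;
  lightingLevel: number;          // 0-100 percent brightness
  dietaryRestrictions: string[];  // used by digital restaurant menus
}

// In reality this would be a massive networked service, not an in-memory map.
const profiles = new Map<string, Profile>([
  ["retina:8f3a9c", {
    name: "Mr. Yakimoto",
    roomTemperatureF: 72,
    lightingLevel: 40,
    dietaryRestrictions: ["gluten-free"],
  }],
]);

// Any device, regardless of owner, personalizes itself from the profile.
function personalizeHotelRoom(identifier: string): void {
  const profile = profiles.get(identifier);
  if (!profile) return; // unknown guest: fall back to defaults
  console.log(`Welcome back, ${profile.name}!`);
  console.log(`Thermostat -> ${profile.roomTemperatureF}F`);
  console.log(`Lights -> ${profile.lightingLevel}%`);
}

personalizeHotelRoom("retina:8f3a9c");
```

The in-memory map above is quietly doing the work that would actually require a massive shared network of personal information, which is exactly where the challenges begin.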
The most obvious challenge to Phase 3 is developing the technology to create and facilitate this massive network of information. The more concerning and relatable challenge is user willingness to be put “on the grid.”
While the promise of a custom-tailored smart world sounds convenient, there is no denying the creepy Big Brother factor. When every device is potentially “your device” or “someone else’s device,” concerns about security and privacy increase exponentially. A hack to a network of this scale could compromise sensitive personal information for everyone on the grid.
And the idea of implanting microchips in people or making retina scanning part of everyday life is enough to make even the most pro-technology early adopter say, “ehhh, not me.”
The demand for Ambient User Experience has the potential to drive technology development for decades to come. Market research has already established that the average Millennial or Gen-Z consumer is highly tech-savvy and values convenience and instant information over almost anything else. They are the perfect target market for this experience.
If implemented correctly, with a focus on security and privacy, Ambient User Experience will become a way of life in the next few decades and will change the way we live, work, shop and interact with technology. However, it will be up to consumers to draw the line, telling technology developers how far is too far and how much information is too much.
And maybe we don’t want everyone in The Gap to know about our affinity for sleeveless garments.
This article has been adapted from a chapter of Trenegy’s book, Jar(gone).
Trenegy is a non-traditional consulting firm dedicated to helping companies clarify the latest business jargon, putting it into useful terms and solutions that actually benefit your company. Find out more: info@trenegy.com.