Augmented Reality almost always refers to a visual overlay of digital content on the real world. Bose is trying to change that by augmenting reality with audio instead of visuals. Their idea is to create wearables that use GPS coordinates from the phone, along with the body's orientation in space, to pinpoint exactly where a person is looking. In the demo they presented at SXSW 2018, they created a spatial interaction driven by auditory feedback: as I move my head from left to right, a different playlist gets selected, and I can nod to accept the one I want to listen to (see the sketch below). This extends to location, too: a gallery of playlists is waiting for me when I reach work, the gym, or other hotspots.

For the Mirrorworld, this concept creates a new layer of data on top of the real world that requires no screen at all. As voice assistants get better at handling instructions, voice-based interaction is gaining ground. Combine that with smart glasses that handle the visual overlays, and we are gradually transitioning to a screenless interface. What our hands did a decade ago will now be done with our face and voice.

It is important to note that this change in interaction requires a change in habit from users. Since the next generation of products will typically be hands-free, the young people preparing to build them will need this skill set. With 3D printing, AI systems, and robotics, machines will take over the mundane and repetitive tasks of building. Jobs will change as they did during the Industrial Revolution.
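To make the demo's interaction model concrete, here is a minimal sketch of how head yaw might drive playlist selection and a nod might confirm the choice. The playlist names, angle thresholds, and function names are all illustrative assumptions on my part, not Bose's actual API or logic.

```python
# Hypothetical sketch of the head-gesture interaction described above:
# head yaw (from the wearable's IMU) selects a playlist, and a nod
# (a quick pitch dip followed by a return to level) accepts it.
# All names and thresholds here are assumptions for illustration.

PLAYLISTS = ["Morning Mix", "Focus", "Workout", "Chill"]

def playlist_for_yaw(yaw_deg: float) -> int:
    """Map head yaw in [-60, +60] degrees onto a playlist index."""
    clamped = max(-60.0, min(60.0, yaw_deg))
    # Divide the sweep into equal sectors, one per playlist.
    sector = (clamped + 60.0) / 120.0 * len(PLAYLISTS)
    return min(int(sector), len(PLAYLISTS) - 1)

def detect_nod(pitch_samples: list[float], dip_deg: float = 15.0) -> bool:
    """Treat a dip below -dip_deg followed by a return near level as a nod."""
    dipped = False
    for pitch in pitch_samples:
        if pitch < -dip_deg:
            dipped = True
        elif dipped and abs(pitch) < 5.0:
            return True
    return False

# Example: the user looks slightly right, then nods to accept.
if __name__ == "__main__":
    index = playlist_for_yaw(35.0)
    print("Previewing:", PLAYLISTS[index])
    if detect_nod([0.0, -8.0, -18.0, -6.0, 1.0]):
        print("Accepted:", PLAYLISTS[index])
```

The appeal of this design is that the whole loop runs on sensor data the wearable already has, so the "interface" is simply how you move your head.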
For 2030, I foresee audio AR becoming an everyday interaction, one in which sound reaches our ears more seamlessly than it does today, without noise-canceling headphones blocking out the real world. Just as you can pull out your phone at a crowded concert today, you would be able to hear a digital assistant and respond with your voice and facial gestures. The future of user experience is exciting.