Tags: Artificial Intelligence, Augmented Reality, Cloud Communications, Cloud Solution, Machine Learning, Webex Hologram
Two years ago, I wrote about the Future of Meetings in 2030 and hinted at an effort my team was building to make this a reality. Now, we have publicly unveiled Webex Hologram and brought the reality of a real-time, end-to-end holographic meeting solution to life.
With Webex Hologram, you can feel co-located with a colleague who is thousands of miles away. You can share real objects in incredible multi-dimensional detail and collaborate on 3D content to show perspective and to share and approve design changes in real time, all from the comfort of your home workspace. Webex Hologram is a true game changer that you have to experience to fully appreciate. Perhaps the best way I can describe the feeling of co-presence is the reaction of our Product Manager when she first joined the team. Despite being on 2D video conferences all day, the first time a colleague appeared holographically in her home office via Webex Hologram, she said she immediately felt the need to tidy up, because she felt as if he were really there. That sense of presence is the game-changing differentiator of Webex Hologram.
We still have a ways to go before the vision in that blog becomes a reality, but we are getting closer! Now that photorealistic holograms rendered in real time right in your living room are a reality, what technological advancements should we expect to see around 3D interactions over the next 12-24 months? Here are my five predictions:
1. Thanks to Mark Zuckerberg's heralding of the metaverse, there is currently an over-rotation toward virtual experiences, particularly in the enterprise. Virtual staff meetings, virtual offsites, virtual watercooler conversations… But just because you can doesn't mean you should. As the hype dies down, the focus on entirely virtual experiences in fanciful environments will abate, and focus will return to augmented experiences: injecting virtual content into the physical world around you for an enhanced experience that blends the best of physical and virtual.
2. The ability to have curated information at one's fingertips still holds an incredible value proposition that has yet to be realized. Applying AI to predict, find, and present this kind of augmented information in both 2D and 3D formats will become incredibly useful. While it's not the sexiest use case, I predict the sheer functional value of curated content displayed in a way that augments your physical interactions will drive a drastic increase in its adoption. This applies not only to the 3D form factors mentioned above; access to augmented, curated information in a 2D form factor is an underutilized area where I expect to see a drastic uptick as well.
3. One of the biggest points of friction in 3D interactions today is simply starting them. Most interactions require putting on some sort of headset to enter the 3D experience. Despite the leaps and bounds the AR/VR headset industry has made over the past few years, headsets are still clunky: they take some getting used to to position just so, they contribute to flat hair and forehead indentations, and whether you use a controller, voice commands, or hand gestures, there is a steep learning curve between fumbling to select an object for the fifth time and feeling like you can navigate with ease. Right now, switching from a 2D interaction to a 3D interaction (and back again) is incredibly awkward. But as more attention is paid not just to the 2D and 3D experiences themselves, but to the experience of transitioning between them, much of that friction will be removed and the transitions will become far more fluid.
4. As with any still-blossoming technology, the first few generations are always awkward. AR/VR headsets will get lighter, more ergonomic, and easier to use. Magic Leap has announced that its ML2 form factor is coming, and rumors abound about other players releasing their own headsets as the race continues toward something as close to a pair of reading glasses as possible. Beyond wearable form factors, several companies are pushing the envelope on 3D displays that require no headset at all, ranging from single-viewer, tablet-sized displays to massive, room-sized screens. If the number of contenders showing off headsets or screen-based displays at CES this year is any indicator of the investment in this area, we can expect many more options in the near future.
5. Imagine if you couldn't feel the chair you just sat in, touch the tabletop you placed your coffee on, or feel the grip of a handshake. Touch is the first sense we experience and an essential part of how we process and interact with the world around us. It helps ground us (literally!) and makes us feel present and secure. Yet most 3D interactions today gloss over the sense of touch entirely. As the visual and auditory fidelity of 3D interactions continues to improve, we'll see a rush to add haptic feedback that lends realness to the virtual objects and experiences around us.
Check out our session “The Reality of AR in the Enterprise” at Enterprise Connect on March 24th
Explore the possibilities with Webex Hologram