Multi-experience solutions use a combination of various interaction modalities (touch, no-touch, voice, vision, gesture, …) to create a seamless and consistent digital user experience. This is part of a long-term shift from computers as individual devices to multiple connected devices with sensors adapting to different environments around us. The resulting people-centric smart spaces that blend the digital and physical worlds are allowing humans to interact with computers in novel ways.

For example, imagine a scenario where you plan a route on your desktop or mobile device using keyboard, mouse, touch, or speech-to-text input. Then you get into your car, and it already has the route loaded for navigation, with a smart speaker system guiding you step by step through the directions and the option to change the route, start music, etc. along the way via voice command for a completely hands-free experience. Taken a step further with autonomous vehicles, your car could use the route to autonomously drive you to the destination while you continue, on your car's entertainment system, the movie you started on your home TV. All of this requires a unified experience approach, with user-centered design and a strong cloud backend to deliver each step of the experience on the frontend. Great multi-experience solutions are seamless: the user notices no friction or delay when switching between interfaces, such as computer to car navigation screen or TV to entertainment system.


We introduced the concept of Multiexperience (MX) in our Top Trends for 2021 post as a trending technology to watch in 2021. In this post, we go a bit deeper and share market research demonstrating the opportunities for business leaders to begin integrating more user-experience focus into their business processes now, in order to prepare for changing customer needs and expectations in the future. We will also highlight some multi-experience concepts and tools like Zero Interfaces, chatbots, Spatial Computing with AR/MR/VR and the potential for increased accessibility in a diverse and inclusive work environment.

Market Analysts See a Shift to Multiexperience

Gartner and other market research companies categorize Multiexperience (MX) as being in an early, emerging phase with a transformational character, their highest benefit rating. Analysts predict implementations will grow over the next five years, accelerated by the COVID-19 pandemic. MX will provide a unified digital experience that is seamless, collaborative, consistent, personalized, and ambient.

Personalization can also be fueled by Digital Twin of the Person (DToP), which we covered in our previous post. However, such personalization must be protected by strong digital ethics, security, and data governance policies from the start, to secure highly sensitive data and also meet evolving legal/compliance requirements. MX technologies, with always-on smart speakers and the like, raise similar privacy, security, and social questions. Therefore, security along with ethical and responsible usage will be crucial to successful applications of these technologies and societal acceptance.


Figure 1: Gartner Emerging Tech Hype Cycle including Multiexperience.

The MX goal is to remove friction and effort for users through applications of digital technologies, in the context of the user's environment and preferences. This mentality shift to a user-centric approach with outstanding UX has proven to help application leaders better align with business objectives and deliver positive business outcomes. Gartner analysts advise a focus on understanding how unified digital experiences could impact your business through evolving multi-experience technologies that create targeted solutions for customers or internal constituencies. For this, a handful of high-value proofs of concept (PoCs) should be identified and developed together with a strong multidisciplinary team of IT, business leadership, HR, facility management, UX, product, and experience design experts. A key challenge currently faced is that many OEMs (original equipment manufacturers) and large tech companies develop their systems independently with proprietary interfaces, creating strong barriers to seamless integration. Valorem Reply is a system integration specialist; our experts work each day to turn such heterogeneous devices, applications, and services into homogeneous, smooth user experiences (UX).

As the name implies, multiexperience (MX) comes in various shapes and forms. It is also called Zero Interfaces, because the interface has zero friction and moves into the background. Zero Interfaces is an emerging term: our colleagues in the Reply network analyzed it with their innovative SONAR trend platform, which uses AI to identify and measure trends ranked by the growth and citation volume of each topic. You can get an overview of the MX results in the Zero Interfaces trend report here.

Multiexperience with Chatbots

One very popular and growing application of multiexperience concepts is conversational AI platforms such as text- or voice-based chatbots, which are already in use today with smart speakers and voice assistants. Valorem's research analyst team recently released their analysis of the chatbot market, which was estimated at $430.9 million in 2020 and is forecast to grow at a compound annual growth rate (CAGR) of 23.7%. That means the market value could reach $1250.1 million by 2025!
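As a quick sanity check on those figures, compounding the 2020 estimate at the stated CAGR over five years lands very close to the 2025 forecast (a small illustration only; the slight gap comes from rounding in the published numbers):

```python
# Project a market size forward using compound annual growth rate (CAGR).
def project_market(value_millions: float, cagr: float, years: int) -> float:
    """Compound `value_millions` at rate `cagr` for `years` periods."""
    return value_millions * (1 + cagr) ** years

# 2020 estimate: $430.9M, CAGR 23.7%, horizon 2020 -> 2025 (5 years).
projected_2025 = project_market(430.9, 0.237, 5)
print(f"${projected_2025:.1f} million")  # ≈ $1248.0 million, close to the $1250.1M forecast
```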

Figure 2: Global Chatbot market growth according to our own market research data analysis.

According to MIT Technology Review, 90% of businesses report faster complaint resolution with the help of chatbots. Companies are already leveraging this technology to provide more personalized customer experiences and build long-term relationships to boost revenue. This has led to the incorporation of chatbots into diverse messaging applications, such as WhatsApp, Facebook Messenger, etc.

Chatbots are especially popular in the healthcare, utilities, banking and financial services, retail, and entertainment industries. The US retail, healthcare, and utilities industries are seeing the highest consumer engagement with chatbots at 40%, 22%, and 21%, respectively. You can find more in-depth analysis, including global market overviews, key trends and more, in our chatbot industry analysis reports for healthcare, fintech, and retail, coming soon.

There is no better time than now to get started with chatbots, as a growing set of tools makes creating, deploying, and managing them simpler by the day. At Build 2021, Microsoft announced new features for Azure Bot Service, including a visual authoring tool that enables users to test, debug, and publish bots to multiple channels with minimal code changes. If you are considering your own bot, make sure to read my colleague Liji Thomas's great article "In Bots We Trust - 7 Guidelines for Building A Chatbot Your Users Can Believe In" that goes more in depth on the technical side.
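To make the idea concrete, here is a minimal, self-contained sketch of the kind of intent matching a text chatbot performs. This is a deliberately simplified toy (the intent names and responses are invented for illustration); production bots built on Azure Bot Service use the Bot Framework SDK and trained language-understanding models rather than keyword rules:

```python
import re

# Toy keyword-based intent matcher illustrating the core loop of a text chatbot.
# Each intent maps trigger keywords to a canned response.
INTENTS = {
    "greeting": (["hello", "hi", "hey"], "Hello! How can I help you today?"),
    "hours":    (["hours", "open", "close"], "We are open 9am-5pm, Monday to Friday."),
    "human":    (["agent", "human", "person"], "Connecting you to a live agent..."),
}

def reply(message: str) -> str:
    """Return the response of the first intent whose keyword appears in the message."""
    words = re.findall(r"[a-z]+", message.lower())  # tokenize, dropping punctuation
    for keywords, response in INTENTS.values():
        if any(k in words for k in keywords):
            return response
    return "Sorry, I didn't understand. Could you rephrase?"
```

A real conversational AI platform replaces the keyword table with a statistical language model, but the request-in, intent, response-out loop is the same.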

Multiexperience for Accessibility

User interfaces need to be built in an inclusive and accessible manner, and multiexperience tech like conversational AI chatbots can even help solve for ability constraints. Designing digital interfaces so that people with disabilities can use them just as effectively as people without is finally table stakes these days. Doing so helps drive adoption and usage among users who require color contrast, rely on screen readers, or navigate UI using only keyboards, voice, or eye tracking.

Chatbots, for example, are often integrated with Voice User Interfaces (VUI), which are mostly seamless to use and provide broader accessibility for vision-impaired audiences. MX solutions by nature help improve accessibility with diverse user interfaces and adaptive input modalities. For example, Natural User Interfaces (NUI) using hand or eye tracking not only unify experiences but also provide input interfaces for people who cannot use classical input controls like keyboard or mouse. Another innovative example is adaptive controllers, like the Xbox Adaptive Controller, that support a range of dedicated devices and can therefore be customized to the specific accessibility needs of the user.

Figure 3: Xbox Adaptive Controller and external devices such as switches, buttons, mounts, and joysticks.

Microsoft in general is at the forefront when it comes to responsible and accessible tech design. The Microsoft Inclusive Design toolkit provides a great framework for integrating accessibility into your design considerations. Also, the MSFTEnable YouTube channel provides helpful content on accessibility design in well-produced video form. You can also watch recordings from the recent Ability Summit 2021, an inspirational online conference where I learned many new things, including that Microsoft is one of the few companies with a Chief Accessibility Officer, showing its deep commitment to enable all people around the world, of all ages and abilities, to do more. Microsoft Office users may have already noticed some of these strong accessibility efforts, with the Office 365 Accessibility Checker now enabled by default in the most popular business productivity tools out there.

The Ability Summit also made me think about how we can improve accessibility by leveraging the latest human-parity Image Captioning AI available via Computer Vision Cognitive Services. One example I came up with is an idea for a browser plugin that provides AI-suggested image descriptions for Twitter (and other social media) before one posts a photo. Ideally, the plugin would recognize when the user wants to share an image, then pre-fill the image description / alt text using the automatic image captioning AI. Reducing the friction of adding alt text through automatic generation will encourage more people to include it in their posts, helping the screen reader tools that let the visually impaired fully participate. I have shared this idea on social media, which sparked some great conversations in the community. In fact, Microsoft Cloud Advocate Lead Jen Looper pointed me to an exciting new project called Elly, built by Microsoft Student Ambassadors, that is aiming for exactly this concept! I just got access to the preview and would encourage everyone to also sign up here and give it a try to make your social posts with images more accessible, with minimal effort.
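As a sketch of how such a plugin's backend might request a caption, the snippet below assembles a call to the Computer Vision image-description REST endpoint and picks the highest-confidence caption to pre-fill as alt text. The endpoint and key are placeholders, and the API version and response shape should be verified against the current Azure Cognitive Services documentation before relying on them:

```python
import json

# Placeholders -- substitute your own Azure Computer Vision resource values.
ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"
API_KEY = "<your-key>"

def build_describe_request(image_url: str, max_candidates: int = 1):
    """Assemble URL, headers, and JSON body for an image-description call."""
    url = f"{ENDPOINT}/vision/v3.2/describe?maxCandidates={max_candidates}"
    headers = {
        "Ocp-Apim-Subscription-Key": API_KEY,
        "Content-Type": "application/json",
    }
    body = json.dumps({"url": image_url})
    return url, headers, body

def best_caption(response_json: dict) -> str:
    """Pick the highest-confidence caption from the service response."""
    captions = response_json["description"]["captions"]
    return max(captions, key=lambda c: c["confidence"])["text"]
```

Sending `build_describe_request(...)` with any HTTP client and feeding the JSON response to `best_caption` would yield the suggested alt text to present to the user for review before posting.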

To keep accessibility top of mind at Valorem, we created a dedicated Microsoft Team where we exchange ideas, share barriers, and discuss how to solve them. Our experts also leverage tools like Accessibility Insights to ensure each innovative app we modernize or create from scratch for clients provides high accessibility.

Multiexperience with Spatial Computing

Spatial Computing innovation with Augmented, Virtual and Mixed Reality (AR/VR/MR), wearables, smart speakers, etc. is very relevant for multi-experience scenarios and accessibility as well. Devices like the HoloLens 2, with built-in voice recognition, text-to-speech, and hand/eye tracking, provide true-to-life natural user interfaces (NUI): for example, an experience where you can touch virtual holograms with your hands in the context of the physical world around you, or leverage eye tracking to digitally interact with objects in your physical environment just by looking at them. Azure Mixed Reality services such as Azure Spatial Anchors, which create a spatial map of the world, also enable accessibility use cases like navigation for the visually impaired with the help of computer vision and Spatial Sound in the Seeing AI app.


Of course, there is still a long way to go for Spatial Computing in the context of accessibility to reach success similar to the Xbox Adaptive Controller's. Thankfully, there is already great research happening on additional accessories like the Dots project, which uses an external tracker to provide gesture recognition capabilities for people with disabilities. You can learn more about the challenges and opportunities of accessibility tooling in this great interview with Regine MaryAnne Gilbert, an expert in accessibility specifically for Spatial Computing with Augmented, Virtual and Mixed Reality (AR/VR/MR).

Figure 4: Dots gesture recognition system used with a HoloLens.


A combined shift in perception and interaction models through technologies like Spatial Computing is leading to multi-sensory, multi-device, and multi-interaction experiences for all. In addition to accessibility, communicating with your users across multiple human senses in this way provides a richer, more meaningful environment for delivering nuanced information.


Immersive Workspaces are another interesting implementation of running a consistent, multi-platform experience with adapted interfaces. We covered Immersive Workspaces, including Microsoft Mesh, in an earlier post highlighting the benefits for our modern, hybrid work world and would encourage you to read that article next.

Figure 5: Microsoft HoloLens 2 Mesh app Preview in-action for immersive collaboration across 9 time zones without delay.



At Valorem Reply, we are hyper-focused not just on the latest and greatest technology, but on the right mix of technology, people, and process for our customers to create outstanding user experiences regardless of their current stage of digital maturity. Leveraging innovative design and development processes with great UX, our clients can meet their customers wherever they are and collect the insight needed to prepare for the next customer need. Do you already have a project in mind? Email us to schedule a consultation with our experts!