
Bean Tech
AI Product Design Internship

Designing an AI-driven conversation feature that empowers drivers

Beantech illustration.png

OVERVIEW

I was an AI Product Design intern on Bean Tech's AI team. Bean Tech designs intelligent and autonomous vehicles in the era of IoT, boosting the digitization of the automobile industry as it shifts from auto manufacturer to service provider. Over the 10 weeks, I designed and pre-launched an AI-driven conversation feature called Q&A on a VR model, as part of our team's effort to optimize the user experience of the Fun-life intelligent connection system in the car.

TEAM

Bean Tech AI

WORK

Motion Design, Visual Design, UX Research

TOOLS

ProtoPie, Adobe XD, Principle, Photoshop

DURATION

July 2020 - August 2020 (2 months)

Objective

Create an easy and direct way, using a voice assistant and voice interactions, to provide additional information to drivers and collect their feedback on the VR driving experience, thereby optimizing the usability and functionality of the car's intelligent system.

Challenge

How might we collect drivers' feedback from their VR driving experience to optimize user experience on the car's intelligent and connection system?

We tend to think about what users see and touch, but their voices matter as well.

My Responsibilities

Though I was the only product design intern on the AI team working on the conversation design phases of this project, I was lucky to meet and work with a group of inspiring product managers, designers, engineers, and marketing researchers during the research and ideation phases. I was mainly responsible for designing the wireframes and hi-fi prototypes of the conversational experience.

Research

stateholder interviews.png

Secondary research

  • Audit of existing voice and conversational products: Alexa, Siri, Google Assistant, Cortana, etc.

  • Audit of existing conversation design practice in driving and in VR products (very few)

  • The user group of the car models equipped with Bean Tech's intelligent in-car system

  • The background of the Three Kingdoms storyline in the VR driving scenario

  • The role each stakeholder plays in the ecosystem

Stakeholder interviews

Nearly 70% are middle-aged (30–45 years old) male drivers

stakeholder1.png

Users might not be comfortable if they wear the VR headset for a long time

stakeholder2.png

We want to collect drivers' latest feedback on our new car models equipped with Bean Tech's intelligent in-car system

stakeholder3.png

Actionable insights

From the interviews and research results, I synthesized the qualitative data into three actionable insights.

Creating Voice Assistants

Designed voice assistants based on the characters from the Three Kingdoms

Deciding the length of conversation

Collected user feedback through a quality conversation while keeping it time-efficient

Showing Guidance

Provided instructional content on how users can and will interact with the machine

Solution

We created an 85-second conversational feature called Q&A, placed at the end of the VR driving experience to collect drivers' questions and feedback immediately.

CSA flow chart.png

VUI flow diagram

Because this was a brand-new project, there was no existing user flow or VUI flow. Following the guidelines provided by the marketing team and our stakeholders, I created the user flow and VUI flow diagrams myself. The guidelines helped me understand where and how conversation was needed.

VUI Flow diagramming.png
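A VUI flow like the one diagrammed above is essentially a state table of dialog states and the events that move between them. The sketch below is purely illustrative — the state and event names are hypothetical, not the project's actual implementation:

```python
# Hypothetical sketch of a Q&A VUI flow as a state table.
# States and events are illustrative stand-ins, not Bean Tech's code.

VUI_FLOW = {
    "onboarding": {"user_speaks": "listening"},
    "listening": {"question_detected": "answering", "end_now": "confirm_exit"},
    "answering": {"answer_done": "listening"},
    "confirm_exit": {"contact_me": "session_ended", "anything_else": "listening"},
}

def next_state(state: str, event: str) -> str:
    """Return the next dialog state for a given event; stay put if unknown."""
    return VUI_FLOW.get(state, {}).get(event, state)
```

For example, `next_state("listening", "question_detected")` moves the dialog into the answering state, which mirrors how the diagram branches when the assistant recognizes a question.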

User journey

Here's an example Q&A between the voice assistant ("Liu Bei") and a user (Monica) who asks a question about the car model after finishing the VR driving adventure.

CSA simple user journey_2x.png

Personalities for Voice Assistants

To guide better design decisions, we introduced four VR characters from the Three Kingdoms (the storyline of the VR driving adventure) as voice assistants, each embodying a personality we wanted to evoke. While I adopted one of the characters ("Liu Bei") in my final design, our team collaborated with our stakeholder VR firm to create four characters with the personalities below:

  • Empathetic

  • Culturally Aware

  • Straightforward

  • Respectful

beantech_illustration3.png
Ideation

This project involved three cross-functional stakeholders (car manufacturers, suppliers, and the VR firm) and four collaborating teams, each with requirements necessary for the product's success. I therefore created different design explorations as paper sketches intended to address those requirements. The team voted, and we continued with the sketch that received the most votes. I then iterated on the design based on their feedback and created the interactive prototypes.

Q&A-2.JPG
Q&A-1.JPG
Design Decisions

Onboarding

beantech_step1_animation.gif

After the adventure, users leave the home screen and are led directly to a zoomed-in screen where the voice assistant (Liu Bei) guides them to ask questions about their driving experience and the car model. A black square chat box appears, containing sample questions users can refer to, an 85-second countdown timer that lets users track the time used and remaining, and a wave illustration that changes shape as the user speaks.

Text-to-speech (TTS) powers Liu Bei's side of the conversation, while speech-to-text (STT) handles the user's. The design recognizes the user's voice input and transcribes it into written text, and it converts the system's scripted responses into natural speech through speech synthesis.

*Hints for steps you might take when talking to Liu Bei: 1) 智能网联系统支持的功能有哪些? ("What features does the intelligent connected system support?"); 2) 结束体验 ("End the experience"); 3) 联系我 ("Contact me").
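One conversational turn in this TTS/STT split can be sketched as a small loop: transcribe the user's speech, look up a reply, and synthesize it back as voice. The `transcribe` and `synthesize` functions below are hypothetical stubs standing in for the real speech engines:

```python
# Sketch of one conversational turn: STT on the user's side, TTS on Liu Bei's.
# transcribe() and synthesize() are hypothetical stubs, not real speech APIs.

def transcribe(audio: bytes) -> str:
    """STT stub: pretend the audio decodes to a fixed sample question."""
    return "What features does the intelligent connected system support?"

def synthesize(text: str) -> bytes:
    """TTS stub: pretend to render Liu Bei's reply as audio bytes."""
    return text.encode("utf-8")

def conversation_turn(user_audio: bytes, answer_for) -> bytes:
    question = transcribe(user_audio)   # user speech -> written text (STT)
    reply = answer_for(question)        # look up the scripted answer
    return synthesize(reply)            # written reply -> Liu Bei's voice (TTS)
```

In a real build the stubs would be replaced by the speech recognition and synthesis engines, but the turn structure stays the same.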

Q&A

beantech_step2_animation.gif

After the system detects the user's question, it follows the hierarchy of information and provides the corresponding answer (a synthesized image from the VR car plus a short paragraph of text). The user's question appears at the bottom of the chat box, and the wave illustration transitions into dedicated speech-recognition circles. The circles signal that users can interrupt the conversation at any time, either to end the Q&A or to ask the next question.
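Matching a detected question to its scripted answer can be sketched as a keyword lookup over the information hierarchy. Every entry below is an invented example for illustration — not the product's actual content:

```python
# Illustrative answer lookup: map keywords in a recognized question to a
# scripted answer (a VR image asset plus a short text paragraph).
# All table entries are hypothetical examples.

ANSWER_TABLE = [
    ({"features", "support"},
     ("feature_overview.png", "The system supports navigation, voice control, and more.")),
    ({"battery", "range"},
     ("range_chart.png", "Range depends on the model and driving conditions.")),
]

FALLBACK = (None, "Sorry, could you rephrase your question?")

def look_up_answer(question: str):
    """Return (image_asset, answer_text) for the first matching entry."""
    words = set(question.lower().replace("?", "").split())
    for keywords, answer in ANSWER_TABLE:
        if keywords & words:  # any keyword present -> use this answer
            return answer
    return FALLBACK
```

A production system would use proper intent classification and NLU rather than keyword overlap, but the lookup shape — question in, image-plus-text answer out — matches the interaction described above.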

Ending & Exit

If users are satisfied with the answer, or simply want to leave the conversation, they can say the first trigger word, "End Now." To make sure they really do want to leave, the assistant asks them to confirm with a second trigger word, "Contact Me," before ending the session. Afterwards, the assistant asks the user to take off the VR headset.

The trigger words were provided by the marketing team based on its content strategy.
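The two-step exit described above is a tiny confirmation state machine: the first trigger phrase arms the exit, the second confirms it, and anything else cancels. This sketch uses the English trigger phrases from the text and is an assumption about structure, not the shipped logic:

```python
# Two-step exit confirmation: "End Now" arms the exit, "Contact Me" confirms.
# Any other utterance in between cancels the exit and returns to the Q&A.
# A structural sketch only, not the production dialog code.

class ExitConfirmation:
    def __init__(self):
        self.armed = False

    def handle(self, utterance: str) -> str:
        phrase = utterance.strip().lower()
        if not self.armed:
            if phrase == "end now":
                self.armed = True
                return "confirm"       # assistant asks the user to confirm
            return "continue"          # stay in the Q&A
        if phrase == "contact me":
            return "session_ended"     # confirmed; prompt to remove headset
        self.armed = False
        return "continue"              # anything else cancels the exit
```

Requiring a distinct second phrase guards against accidental exits from a misrecognized "End Now."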

beantech_step3_animation.gif
Visual Guidance

I followed Bean Tech's design language when choosing the fonts and color palette. I wanted to create an easy, approachable application that users would find convenient to use. Green (#60cc18) is the primary color, representing the firm's themes of sustainability and innovation, while blue and yellow are secondary choices that support important UI components such as the wave illustration and the speech-recognition circles, creating dynamic user interactions.

CSA topography & colors_2x.png
Testing

CSA iterations.png

User testing

Here's a series of example questions a user asked the voice assistant.

At that stage, I passed the design to engineers, who built a simple TTS prototype based on the dialog flow. After updating the vocabulary generalization, we conducted TTS user testing with 25 randomly selected users, who talked directly to the trained models. The feedback I received strengthened the existing design solutions while highlighting what we should take into account to refine the product.

Doing well:

talking.png

  • Users can focus on talking, hands-free

digital-assistant.png

  • Solid conversational structure and interaction

Things to improve:

speech.png

  • Limitations in speech recognition & NLU

doc.png

  • Need more word/vocabulary generalization

Software testing

I created a chart to explain why ProtoPie works better for VUI and VUX than other design tools, such as Principle for Mac. My prototype started in Principle for team critique, where it had no sound. Later, while researching and collecting feedback, I found ProtoPie, a tool that has been rapidly developing voice-design features, and learned to build prototypes that users can actually talk to. This let me upgrade my interactive prototype to a higher level: a fully self-supported voice interaction experience.

CSA design software comparison@2x.png
Takeaway 🚀

By the end of my internship, given the time limitation, I finalized the interactive voice prototypes in ProtoPie and created a video to demonstrate my work to the team.

 

In October, my team implemented the design in VR devices for the car exhibition in Guangzhou, China, where more than 500 people came for the VR car journey. Overall, our project combined cultural interaction with intricate ML decisions to make the driving experience more accessible and operable.

Bean space1.JPG