Lyft - OTTO

Creating Trust in Artificial Intelligence. VoiceUI + Multi-Modal UX

Summary
& End Results

BACKGROUND

OTTO is a speculative service exploring Lyft’s fully autonomous ride-sharing. By leveraging dynamic A.I. assistants and multi-modal interactions, OTTO allows customers to build trust through bespoke and personal experiences.

OBJECTIVES & OUTCOMES

Our design brief asked for a speculative design of a virtual assistant for the future of automobiles. Set at least five years from now (2020), our design would need to pursue a novel idea for an existing company while being grounded in research and the current problem space.

The final design was a virtual assistant system called OTTO. This service was framed within Lyft’s ride-sharing offering and addressed the problem of trust in A.I. drivers. OTTO solved this problem by leveraging many unique A.I. drivers rather than one character. To represent this shift across platforms and reimagine Lyft’s dashboard, we created animations and concept videos using Wizard of Oz techniques and 3D renderings of a city. This speculation on tomorrow’s design applies computational efficiency while enabling more trust in a service built around human connections.

 

My Roles:

  • Product Designer

  • Product Manager/Director

  • Motion Designer

My Team:

  • Carol Ho

  • Lulin Shan

  • Matt Geiger

My Tools:

  • Figma/ Framer/ Principle

  • After Effects

  • Adobe Illustrator/ Audition

My Timeline:

  • 4 weeks to design

  • 2 weeks of production


Complete Story
& Design Process

Introduction to the Brief

This project took place in IxD Studio Class at Carnegie Mellon University in Fall 2020, under the instruction of Dina El-Zanfaly and Kyuha Shim. Our brief was to create a speculative design for a vehicle, set at least five years into the future. We would need to include some form of virtual assistant and choose a brand to build within. While they were not a direct sponsor, our team chose Lyft’s brand and service. This decision came from our initial research around trends in travel and barriers created by public distrust in autonomous vehicles.

 The Process

Investigation

Exploratory research and prototypes for Lyft’s future VUI and driverless ridesharing service.


Researching the Future of Autonomous Vehicles:

Our research began with establishing a clear idea of what the autonomous future would look like and what problems currently stood in its way. Familiarizing ourselves with a variety of sources and reviewing each with a S.T.E.E.P. framework, the team established three major challenges that Lyft’s autonomous ridesharing service would likely face.

  1. Nervous and Scared

    • A public opinion of nervousness and fear towards autonomous vehicles

  2. Poor Communication

    • Technology that doesn’t yet communicate driving style or intent

  3. Lack of Trust

    • No trust for A.I. driver’s decision-making capability

Along with these problems, our research allowed us to create a profile for our potential users. Because Lyft - OTTO would be a service for everyone, we delved into the broad category of urban commuters between 25 and 35. This large group, along with their key needs, helped us identify what our users would need most.


Design Principles

While automated drivers might improve Lyft’s service, they would need to exist within an established brand. Exploring how we would evolve and address the existing brand, we created three design principles: authentic, personal, and responsive. These themes centered our brand and design.

Initial Voice Interface Designs

We felt an immediate connection to the rounded angles across Lyft’s brand. In many ways, this shape represents the spirit of Lyft, symbolizing harmony, openness, and structure all at once. Modeling our VUI after it was a perfect choice for both brand and function.


While the form meshed with Lyft’s brand, we had concerns that its simplicity fell short of the “person” feeling of OTTO’s brand. However, before we could fully address this, we needed to know how users would interact with the interface. Setting aside this potential issue, the team reviewed the most common ways a user would need to interact with OTTO. This analysis showed smartphones could do the job, but testing also showed us that users didn’t want to be stuck staring at their phones. The solution was an update to Lyft’s iconic Amp.


Reinventing the Amp to display our virtual assistant allowed passengers to focus on a “face” during the ride and meant they could close their phones. This design showed promise, but we would need further iteration as the customer’s journey revealed itself.

Journey Map

To understand the potential journey of a customer, we generated feature lists from our research. From this collection, we organized and developed seven categories of interactions we would need to manage.


Based on comments from Professor Paul Pangaro and others, we distilled these features into simple user flows, focusing only on the elements that would be highlighted in the final concept video. To test and validate some of our early assumptions about these interactions, we relied on Wizard of Oz techniques and other simulations created by our team. By envisioning and acting out the experience with ourselves and others, we quickly generated a speculative journey.


This exercise surfaced many as-yet-to-be-designed elements of OTTO, but the completed picture gave us a bird’s-eye view of the components working together, what states would be needed, and where we still needed to work.

Iteration

Prototyping, evaluating, and refining UI interactions and service features.


OTTO’s VUI Motion Design

With OTTO’s Amp as the display for our virtual assistant and a journey map in hand, we had clarity around how users would interact with OTTO. To manage all these behaviors, we began to explore animations that would support the VUI’s need for clarity and sincerity. To begin, we established definitions and roles for each UI state.

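As a way to make those definitions concrete, here is a minimal sketch of how the states and their roles could be encoded; the state names, roles, and motion notes below are illustrative assumptions rather than our exact documentation.

```typescript
// Hypothetical encoding of OTTO's VUI states and the role each one plays.
// State names and descriptions are illustrative, not the final specification.
type VuiState =
  | "idle"
  | "listening"
  | "thinking"
  | "speaking"
  | "confirming"
  | "alert";

interface StateDefinition {
  role: string;   // what the state communicates to the rider
  motion: string; // the animation treatment attached to the state
}

const stateDefinitions: Record<VuiState, StateDefinition> = {
  idle: { role: "OTTO is present but not engaged", motion: "slow, breathing pulse" },
  listening: { role: "OTTO is actively hearing the rider", motion: "ring opens toward the speaker" },
  thinking: { role: "OTTO is processing a request", motion: "segments rotate steadily" },
  speaking: { role: "OTTO is responding", motion: "ring ripples with speech" },
  confirming: { role: "OTTO acknowledges a completed action", motion: "ring closes with a single flash" },
  alert: { role: "OTTO needs the rider's attention", motion: "ring tightens and brightens" },
};
```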

Once each state was well defined, we began expanding on our original circular shape.


From our initial work, the ring forms aligned best with our needs, allowing for identity, personality, and clarity throughout the design of each state. However, even this one shape offered thousands of options for motion and color. Without more guidelines, we realized our animations could become disconnected or inconsistent with our communication needs. In order to iterate the design further without creating divergent patterns, we established universal rules for animation and color references based on Lyft's brand.

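As a rough sketch of what those universal rules could look like in practice — the timing, easing, and scale values here are placeholders, and the hex codes only approximate Lyft’s public brand colors:

```typescript
// Hypothetical shared animation rules applied to every VUI state so that
// motion never diverges from a single family. All values are placeholders.
const animationRules = {
  baseDurationMs: 400, // every transition is a multiple of one base duration
  easing: "cubic-bezier(0.4, 0.0, 0.2, 1.0)",
  maxScaleChange: 0.15, // the ring never grows or shrinks by more than 15%
  loopingStates: ["idle", "listening", "thinking"], // only these states may loop
};

// Color references drawn from Lyft's brand (approximate hex values).
const colorReferences = {
  lyftPink: "#FF00BF",
  lyftMulberry: "#352384",
  neutralWhite: "#FFFFFF",
};
```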

With these rules applied, we went through several iterations, eventually removing shadows and adjusting the animations’ subtlety and speed.

The completion of these essential interactions allowed our voice assistant’s inputs and outputs to link directly to specific reactions and stimuli, giving OTTO a foundational non-verbal communication system. Our final consideration was the display’s visibility. Testing revealed back-seat passengers would have a hard time seeing the OTTO Amp. Our solution was to move the Amp to the ceiling (replacing the mirror) and to design OTTO’s accompanying app to also display each reaction. With these final adjustments, we ensured accessible communication between drivers and riders.
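Under the hood, that non-verbal system amounts to pairing ride stimuli with the reactions OTTO displays; a minimal sketch, with invented event names:

```typescript
// Hypothetical mapping from ride stimuli to the reaction OTTO displays on the
// Amp and in the app. Event names and pairings are illustrative assumptions.
const reactionMap: Record<string, "listening" | "thinking" | "speaking" | "confirming" | "alert"> = {
  "wake-word detected": "listening",
  "route recalculating": "thinking",
  "rider question answered": "speaking",
  "carpool stop accepted": "confirming",
  "sudden braking ahead": "alert",
};
```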

 

Designing Adaptive, Personal, and Authentic A.I.

The experience was coming together but still missing elements that could change the balance of trust and elevate Lyft’s brand. Looking to resolve this conflict, and inspired by Scylla from Greek mythology, our team imagined an OTTO where each car would be a unique A.I. while sharing the same body of collective knowledge. While this design was inspired by fantasy, research also suggested that a unique A.I. personality could provide heightened trust and comfort to users. For such a system to work, we needed clear methods to generate dynamic personalities and to display a different icon for each vehicle.

To create each driver’s personality, we established baseline profiles from the Big Five personality model; from these, each A.I. would develop further through rider feedback and star ratings. Functioning similarly to Lyft’s system today, the least successful personalities would eventually improve or be removed based on community preference.
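A minimal sketch of that mechanism, assuming a Big Five trait vector and a simple running star rating; the update rule and retirement thresholds are illustrative assumptions, not a specification:

```typescript
// Hypothetical Big Five baseline for an A.I. driver; each trait is 0-1.
interface PersonalityProfile {
  openness: number;
  conscientiousness: number;
  extraversion: number;
  agreeableness: number;
  neuroticism: number;
}

interface AiDriver {
  id: string;
  profile: PersonalityProfile;
  averageRating: number; // running average of star ratings (1-5)
  rideCount: number;
}

// Fold a new star rating into the driver's running average after each ride.
function applyRiderFeedback(driver: AiDriver, stars: number): AiDriver {
  const rideCount = driver.rideCount + 1;
  const averageRating =
    (driver.averageRating * driver.rideCount + stars) / rideCount;
  return { ...driver, averageRating, rideCount };
}

// Mirroring Lyft's driver ratings today: consistently low-rated personalities
// are flagged to improve or be retired. Thresholds are placeholders.
function shouldRetire(driver: AiDriver, minRating = 4.0, minRides = 50): boolean {
  return driver.rideCount >= minRides && driver.averageRating < minRating;
}
```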

 
 

While the A.I.’s personality would have a baseline and a method of growth, we also wanted each driver to adapt to its familiar passengers. Just as a person might speak differently around familiar faces, OTTO would become familiar and more personable over time. Each personality would therefore be a combination of the A.I.’s baseline personality, the context of the ride, and the user’s stated and unstated preferences. With such a mechanism in place, each A.I. driver could provide each customer with a bespoke experience.
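One way to picture that combination is the sketch below, which assumes baseline traits, a ride-context shift, and a rider preference each contribute to the personality the rider actually meets; the weighting scheme is invented for illustration:

```typescript
// Hypothetical blend of a driver's baseline traits with ride context and a
// familiar rider's preferences. Weights and trait names are illustrative.
type Traits = Record<
  "openness" | "conscientiousness" | "extraversion" | "agreeableness" | "neuroticism",
  number
>;

function blendPersonality(
  baseline: Traits,
  contextShift: Partial<Traits>,    // e.g. a quieter tone on late-night rides
  riderPreference: Partial<Traits>, // learned from stated and unstated cues
  familiarity: number               // 0 = first ride together, 1 = frequent rider
): Traits {
  const blended = { ...baseline };
  for (const trait of Object.keys(blended) as (keyof Traits)[]) {
    const shifted = blended[trait] + (contextShift[trait] ?? 0);
    const preferred = riderPreference[trait] ?? shifted;
    // Familiarity pulls the contextual baseline toward the rider's preference.
    blended[trait] = Math.min(1, Math.max(0, (1 - familiarity) * shifted + familiarity * preferred));
  }
  return blended;
}
```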


The consideration for the UI display was influenced by today’s public transit. When you enter a ride-share or climb aboard a bus, there is a moment of connection and trust when you see the driver’s face. For OTTO to be trusted, we knew it needed to create a similar moment. To recreate this trust, we relied on the phenomenon of people recognizing a smile even though every face is different. We called this concept a Facet: one of many faces, all belonging to the same object. The design required each A.I. to display a unique icon while always following the same familiar pattern.
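Read as code, the Facet idea might look like the sketch below: each driver’s icon parameters are derived deterministically from its ID, but constrained to a shared template so every Facet still reads as OTTO. The parameter ranges are hypothetical.

```typescript
// Hypothetical Facet generator: every A.I. driver gets a unique icon, but all
// icons vary only within one shared template so the pattern stays familiar.
interface Facet {
  segments: number; // number of arcs in the ring
  hue: number;      // kept near Lyft pink so the family reads as one brand
  rotation: number; // starting angle of the pattern, in degrees
}

// Small deterministic hash so the same driver always renders the same Facet.
function hashId(id: string): number {
  let hash = 0;
  for (const char of id) {
    hash = (hash * 31 + char.charCodeAt(0)) >>> 0;
  }
  return hash;
}

function facetFor(driverId: string): Facet {
  const hash = hashId(driverId);
  return {
    segments: 6 + (hash % 4), // 6-9 arcs
    hue: 300 + (hash % 40),   // a narrow band of pinks
    rotation: (hash >> 8) % 360,
  };
}
```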

 

This shift to many Facets also created a business advantage by isolating the negative impact of a bad ride to a single driver rather than the entire Lyft brand. Most importantly, this update would keep the service anchored to a deeply authentic, adaptable, and personal ride experience, mirroring human ride-sharing today. OTTO’s Facets allow Lyft to maintain a trustworthy experience, even with fewer humans driving.

Finalizing the Mobile Demo

Throughout our building process, we knew that a mobile design would be key. Most notably, our mobile app would be the primary controller when voice commands weren’t practical, such as at the start and end of rides. Keeping within Lyft’s brand, we added a new primary color and established “Lyft Pink” as the primary association with the A.I. driver.

The design of the app followed the established heuristics of ride-sharing and Lyft’s current app. For our journey, we only needed the customer to schedule a trip, accept a carpool, and rate the experience. However, we also designed and built out passenger profiles and a trivia game for passengers to challenge each other with.

To ensure customers had access to OTTO features across their devices we also designed UI for wearables that gave access to the same functions and quick confirmations.

 
 

Initiate

Tying it together with storyboards and script, evaluative testing, and a final concept video.


Storyboard & Script

Now that we had our largest pieces developed and built, we needed to create a world in which all of them would interact. This meant further developing our user journey into scripted dialogs, interactions, and storyboards. To begin, each of us sketched out the most critical flows for using OTTO. The goal of these flows was to evaluate the many ways a person might interact with OTTO, even within our set parameters.

 
 

Our initial sketches pointed us in the right direction, but we needed a framework to analyze the interactions between the A.I. and the human. We found this structure in an Input-Output map, which allowed us to ensure consistent interaction flows between the user, the interface, and even backend systems.
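A minimal sketch of the Input-Output map as a data structure; the three rows below are invented examples of the kind of pairings the map held, not our complete map:

```typescript
// Hypothetical rows from the Input-Output map: each rider input is paired with
// the interface response it should produce and the backend work it triggers.
interface IoMapping {
  userInput: string;       // what the rider says or does
  interfaceOutput: string; // what OTTO shows or says back
  backendAction: string;   // what the system does behind the scenes
}

const ioMap: IoMapping[] = [
  {
    userInput: "Requests a ride in the app",
    interfaceOutput: "Shows the assigned driver's Facet and arrival time",
    backendAction: "Matches a vehicle and loads the rider's preferences",
  },
  {
    userInput: "Asks why the car chose this route",
    interfaceOutput: "Enters the speaking state and explains the detour",
    backendAction: "Pulls live traffic data and the routing rationale",
  },
  {
    userInput: "Accepts a carpool request",
    interfaceOutput: "Confirms verbally and updates the in-app route",
    backendAction: "Re-routes the vehicle and notifies the joining passenger",
  },
];
```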

With this structure to reference, we wrote and developed a complete script of what each scene would need, surfacing many of the low-fidelity sketches and dialogs to test and confirm.

Now that we had the raw material for what would become our concept video, we started iterating and gathering feedback around the best way to execute this story.

Prototyping & Testing the Experience

While we had tested our designs throughout the process, our biggest output would be a concept video. To create a video that would sufficiently communicate our ideas and outcomes, we tested several prototypes of how to display our content.

Our first attempt worked within a very flat, comic-strip-style narrative. The focus of this concept was showing the conversation happening between the users and the A.I. while still allowing the viewer to see the VUI’s language of motion.


While this concept worked to display some of our core features like the conversational element, it failed to show viewers how the VUI would interact with users on a day-to-day basis. A common thread in our feedback was uncertainty about interactions with the A.I. once inside the car.

Inspiration for our second attempt came from a promotional image from Lyft: a beautifully created 3D rendering of a Lyft office. This seemed to answer many of our problems; we could use a 3D framework to show the interaction between the A.I. and the passenger while still presenting OTTO’s full integration into the city’s context. Because the project took place during COVID-19, we also knew that we wouldn’t have the chance to shoot any real footage and that creating a 3D rendering of a city would be our best chance to show OTTO in action.


Final Production

As we came closer to a finished concept, organization became a key component of our success. With testing finished and feedback gathered, we finalized scripts, voice recordings, images, and scene descriptions. To surface all of this and ensure that the entire team was on the same page, we collected and updated a large spreadsheet almost hourly.

As the story finalized, so did the characters involved. Across the six scenes we wanted to show, we had seven humans and six A.I. drivers, and to ensure that each scene felt real, every A.I. and person had a different voice, personality, and mood.

Once I had mapped out the scene direction, camera angles, and dialog cues, two members of our group went to work rendering the city using SketchUp and Blender. Once the city was made, it was my job to create the final animations and scenes in After Effects, stitching together voices, Amp animations, screen demos, and dialogs, all set to the pace of my voice-over recording.

5 Key Moments of the Concept Video:

 

01. User Coordinating with A.I. for a Pickup

 

02. Context-Aware Reactions

 

03. Applying Personal Preferences

 

04. User Confirms A.I.’s Carpool Initiative

 

05. OTTO Creating Passenger Connections

Full Concept Video:

Outcomes + Reflections

Final takeaways and learnings from the experience. Some potential next steps if the project were to go further and some areas to improve on.


Outcomes

Throughout the building of our concept video and the concepts behind OTTO, our team was able to push into the future and ask questions not about how the technology might make things easier, but about how it could make A.I. more human. Our goal was to develop a speculative world where A.I. is fully present and trust is still held at the highest value.

While our team was fully committed to this idea, our logic arose not from our own fancies, but from studies on the subject and from the importance of trust in driverless ride-sharing. In this case, business and design outcomes both led us to the same solution: A.I. with a human touch.

Reflections

One of our biggest hurdles was balancing the rendering of a 3D city against the time it takes to output and edit such large files. While we made this concession due to COVID, the final quality of the video is not quite where we would want it to be. As a team, we learned the immense complications of balancing a 3D workflow alongside a 2D one.

However, as the primary animator, director, writer, editor, and product manager, I’m quite pleased with the novel idea we developed. As with any strong outcome, none of it would have been possible without a collaborative approach and effective communication throughout the process.
