Mei Yi Tan

Design Process

An outline of the stages of the design process covered during this project.

Empathy
  • Research
  • Competitive Analysis
  • User Interview
Define
  • Problem Statement
  • User Flows
  • User Personas
  • Storyboard
Ideate
  • Mid-fidelity designs
  • Style Guide
  • Site Mapping
  • Design Language
Prototype
  • High-fidelity prototyping
Test
  • User Testing Interviews

Project Summary

A Texas hospital noticed that users of its chat bot symptom checker were choosing to speak to a live nurse 37% of the time. This put strain on the nurses, who were answering questions the chat bot had already addressed earlier in the chat. After reviewing the chat bot, we assumed that the information it presented was not obvious enough and even incentivized users to speak with a nurse. The experience was redesigned in hopes of not only resolving the initial issue, but also giving users more useful information that would convince them to follow the care recommendation the chat bot proposes at the end of its flow.

Impact: Increased nurse communication efficiency by 20%.

Roles

  • Sole UI/UX design contributor (employee at the hospital)
  • Communicated with key stakeholders to gather requirements and feedback
  • Collaborated with software team to understand technical limitations
  • Collaborated with Research team to create artifacts to be shown to users

Project Type

Project at a large hospital

Tools

  • Figma
  • Teams

Duration

July 2025 – December 2025, 6 Months

Empathy

Background

This Texas hospital has a for-profit branch that aims to provide employees of subscribed businesses with an enhanced care system. Part of that system is a triage chat bot whose major functionality is a symptom checker that redirects potential patients to clinically appropriate care types (virtual provider, at-home care, ER, etc.) depending on the severity of their combination of symptoms.

The example on the right shows a completed questionnaire. The symptoms expressed were somewhat urgent, so the care recommendation points the user toward scheduling with urgent care.

Original state of care recommendation results

Research

One of the KPIs for the chat bot was successfully routing a patient to schedule an appointment matching their care recommendation. However, our statistics showed that less than 34% of users were scheduling follow-up appointments after using the tool, with the majority waiting to speak to a nurse only to ask questions that our recommendation should have answered upfront.

For example, one chat history showed that a patient had waited 5-10 minutes for a nurse to respond only to ask what their care recommendation was. Many others showed that users simply wanted help scheduling their appointment, which another department handled instead of our nurses. Users were therefore not being routed to the most appropriate person to resolve their issues.

We hypothesized that:

  • The recommendation is hidden in text that users did not read
  • There is no clear next step for users
  • Our first displayed button is for the nurse, so users are clicking it

 

Therefore the first iteration of this card, a “quick fix,” focused primarily on emphasizing the care recommendation and scheduling an appointment. It addressed these points by making the schedule button more apparent and turning the recommendation into a large title.

Define

  • Project Objectives
    • This project aimed to reduce human resource workload for redundant and simple questions. 
  • Project Deliverables
    • Mobile-first visuals to be added at the end of a symptom checker questionnaire flow that outlines what care recommendation was given along with any relevant information a patient could want in relation to the recommendation.
  • Project Assumptions
    • Patients will understand their care recommendation with a design change.
    • Patients will be less likely to feel the need to reach out to nurses about their recommendation.
     

Problem Statement

How might we design a mobile-first visual experience that communicates important information effectively for a chat bot solution whose questionnaire results are often overlooked?

Ideate

Gathering Information

Collaboration with Developers

During this phase, I made sure to contact the developers on my team about technical limitations. The chat bot had very limited functionality, curbed by what Microsoft Adaptive Cards could do. They linked me to Microsoft’s resource page, and I asked them about their understanding of the format. This helped me create a design that would be easily implementable, with a fast turnaround and approval for at least our MVP.
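For context, Adaptive Cards are declarative JSON payloads rendered by the host client, which is what constrains the layouts a bot can produce. The sketch below illustrates the general shape of a care recommendation card with a primary schedule action and a secondary nurse option; the specific text, version, and `data` values are hypothetical, not the hospital’s actual payload.

```json
{
  "type": "AdaptiveCard",
  "$schema": "http://adaptivecards.io/schemas/adaptive-card.json",
  "version": "1.3",
  "body": [
    {
      "type": "TextBlock",
      "text": "Recommended care: Urgent Care",
      "size": "Large",
      "weight": "Bolder",
      "wrap": true
    },
    {
      "type": "TextBlock",
      "text": "Based on your symptoms, we recommend visiting an urgent care clinic today.",
      "wrap": true
    }
  ],
  "actions": [
    {
      "type": "Action.Submit",
      "title": "Schedule an appointment",
      "data": { "action": "schedule" }
    },
    {
      "type": "Action.Submit",
      "title": "Chat with a nurse",
      "data": { "action": "nurse" }
    }
  ]
}
```

Everything on the card has to be expressible in this element/action vocabulary, which is why the design had to stay within simple text blocks, buttons, and basic layout containers.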

Collaboration with Design System Designers

After understanding the technical limitations, I confirmed the design system limitations. This type of care display was a net-new pattern that did not exist in the design system at the time. To ensure this would fit into the overall experience of the digital hospital experience, I made sure to communicate my intent and needs to the greater design department. From there, I could collaborate both on a call and async on how these care recommendations could be visualized in a way cohesive to the greater system.

Initial concepts

The first low-fidelity concepts were based around re-arranging our current content in a way that would emphasize scheduling while still allowing chatting with a nurse as a secondary option.

Impact

The team started rolling out a more finalized concept of the care recommendation card, featuring a primary button to draw users toward that action and a more subtle, link-like secondary option to chat with a nurse.

Between the beginning of the year and now at the end, requests to talk to nurses dropped by an average of 20 percentage points across all care recommendation types, from 37% of the time to 17%.

Below is the breakdown:

The final look of the Quick Fix

Preparing for User Testing

Design Evolution

While this fix was being developed and launched, the product team started to explore how to further enhance the experience, aiming to reduce the time nurses spent explaining care recommendation basics. This included information about a user’s care recommendation regarding cost of care, when to see a provider, and, most importantly, why the recommendation was given, along with explanations of newer types of care (e.g. customers reported not knowing what urgent care was).

3 Prototype Variations

To further refine the care recommendations, the team looked into some variations on how to most efficiently display projected user-desired information based on chat log data.

Additionally, I had to consider other areas of the hospital, such as other programs that would eventually want a chat solution. Due to my unique position in the hospital, I was heavily involved in another project at the same time and understood that whatever precedent this project set would affect the future of all chat interfaces in other departments. Therefore, when testing, I took into consideration functionality other programs had expressed interest in for their own chat experience, such as a pill that prompts conversation.

The base design

This design, taken to test, was reflective of the original quick fix: simply a display of all information in one scroll.

Very similar to the quick fix with no frills.

The hybrid conversation pill and list display design

This concept would shorten the amount of scrolling and introduce an interaction that flips through optional secondary information, much like a content switcher. It compromised between showing what we considered urgent information upfront and letting users see other information one piece at a time, to reduce potential overwhelm.

The pill conversational flow design

The head researcher overseeing this project insisted that when it came to pills that prompt conversation, a conversational flow made the most sense. I wasn’t sure about this, but we tested it regardless to cover a wider array of interaction types and ensure user engagement and reading.

User Test Results

Participants ultimately preferred the base version that displayed all information upfront. They noted that when experiencing a medical condition or in times of illness, they wanted fewer barriers to information. The majority of participants also liked having the schedule appointment CTA emphasized as a clear next step.

One participant expressed concern about the information getting too lengthy and becoming hard to scan, but said that if the information was condensed enough, it would not be a problem at all.

Next Steps - Evolution

UI Revisions

From the research conducted, there were four key areas where users desired more clarification on their care recommendations:

  • Where to go
    • When clinically appropriate, show secondary non-virtual options (Requires variations of follow-up questions or pills)
  • When to go there
    • Summary of their triage to demonstrate “active” listening and personalization
    • Explain why this is the most clinically appropriate option
    • Make clear this isn’t just ChatGPT; this is clinically backed
    • Elegant “labor” indications: don’t make it feel magical, make sure it conveys accuracy
    • Explanation should be context-aware if it is an escalation or de-escalation
  • What the recommendation is
    • Explain what the recommendation is. (e.g. telemedicine is a video visit with a provider…)
    • Cost expectations (range is acceptable)
  • How to get there
    • Prominent CTAs – proceed to scheduling
    • As secondary – consider an FAQ or secondary menu ahead of escalating to Nurse queue
    • For Urgent Care and Virtual Urgent Care, it is easiest to pull in some carrot of availability

Enhanced Care Recommendation Prototype Proposal

Based on the key takeaways outlined, a second prototype provides an enhanced chat experience that is more tailored to a user’s potential needs.

Not all items taken away from the research were explicitly displayed in this design, since further consideration was needed for more tech-involved modifications (the illusion of labor) or content design (a summary of the user’s triage with “active” listening and personalization).

Not tested, but included here, is the ability for a user to ask for further help. Clicking “I have more questions” leads the user to a menu that further clarifies whether they need a nurse to talk about symptoms or a scheduler for appointment aid.

Reflection

Thinking back on this project, there were a few gaps where I thought we could have performed better had there not been a rush, and had the product team not made so many assumptions upfront.

More Research, Upfront

There were gaps we discovered later, when we realized we didn’t know the full extent of what information a user would need. This was especially apparent when we were trying to figure out what kind of sub-menu options, alongside chatting with a nurse, would make the most sense. Had we done more exploratory research upfront that was more open about all the possible information a user could want regarding their care recommendations, we would have had a better idea of what visual form the results needed to take.

Prioritized Tech Spikes

It felt as though we did not understand the limitations of our tech well enough prior to design, which caused some back and forth during the design phase when the developers were unsure whether something was possible. At one point, it wasn’t even certain whether we could display the CTA button at the same time as the rest of the recommendation information. I had to create several backups just in case, but during the implementation stage it turned out alright. It just didn’t feel efficient, and perhaps if development had been included in design and product considerations much earlier, this could have been prevented.

Guessing at What Content Would Feel Personalized

I believe there needs to be another research project in the future, in collaboration with the content designers, on how to explain to a user why they got the recommendation they did. Not enough thought was put into content in general for this project, despite it being the key information the project revolved around.