Kieran Powell, EVP of Channel V Media
As artificial intelligence (AI) continues to advance and integrate into various aspects of business, decision-makers at the highest levels face the complex task of determining where AI can be most effectively utilized and where the human touch remains irreplaceable. This series seeks to explore the nuanced decisions made by C-Suite executives regarding the implementation of AI in their operations. As part of this series, we had the pleasure of interviewing Sarah Jacob Singh.
Sarah Jacob Singh is Chief Product Officer at Medbridge. Sarah has been a product leader in the digital health technology space for over 10 years, both at businesses that were acquired and ones that have gone public. She’s built world-class product organizations in startup and scaling environments along with enterprise and consumer-facing solutions. Prior to this role, Sarah served in multiple product leadership roles with companies including Optimize Health, Accolade, and CareCloud. Sarah is an expert at developing differentiated products and loves to build and mentor product organizations.
Thank you so much for your time! I know that you are a very busy person. Our readers would love to “get to know you” a bit better. Can you tell us a bit about your ‘backstory’ and how you got started?
Sure! I guess I was always destined for healthcare. I thought I wanted to be a doctor but actually dropped out of medical school almost immediately. I fumbled around for a bit before I found myself in product management at an EHR startup in Miami, Florida, called CareCloud. That’s where I really learned my product chops — built products (both successful and otherwise), learned how to build teams, and really fell in love with product management.
I moved out to the West Coast, to Seattle, to join Accolade, a care navigation health tech company in the employer health space. I learned how different it was to build products and sell to employers than it was to sell to providers! But it was a great experience in learning what goes beyond building great products: the importance of understanding the market, launching products, managing P&Ls, and all of the product marketing that surrounds great products. I was there when Accolade went public in 2020, and it was an incredible experience to go through.
Accolade gave me the startup itch, being a part of a successful IPO — so I joined a small startup called Optimize Health that serves the remote patient monitoring (RPM) space. That was really a start-from-scratch experience where I built the product and engineering team from zero, rebuilt the product from the ground up, and launched the product to a new category entirely. It was a lot of fun, and I was lucky enough to work with some really great people.
From there, I joined Medbridge as their first Chief Product Officer to really move the company from a sales-led organization to one that’s product-led. We really wanted to make the shift in the market into the digital care space, and that’s exactly what we’ve done in the last couple of years. It’s been a thrill to work with such brilliant and hard-working individuals, to build products using the latest technology and innovation, and to really help people feel better, move better, and live better.
It has been said that our mistakes can be our greatest teachers. Can you share a story about the funniest mistake you made when you were first starting? Can you tell us what lesson you learned from that?
Boy, can I. It’s hard to choose! But I’ll never forget, back when I was just moving through the ranks of product management, I remember my CEO asked me to present what’s coming on the product road map to our commercial organization during our annual sales kickoff conference. I was excited to do this, partly because there were so many exciting things on the road map, and partly because I got to expand my role a bit into the commercial space. So I started my presentation, got about halfway through it, and then we had an intermission.
During the intermission, my CEO came over to me and said, “Listen … you’re sucking the air out of the room here. This is not good. You need to do much better, get the team much more excited, if you expect anyone to want to sell anything on this road map.”
After my jaw dropped and I was finished being humiliated, I started realizing the importance of storytelling and “selling” as a product person. What’s the point of building great products if you can’t get the commercial teams excited about them? It’s a lesson I’ve never forgotten and one that I try to mentor my teams on every day — albeit, not necessarily in the terms my old CEO did with me.
Are you working on any exciting new projects now? How do you think that will help people?
Here at Medbridge, I get to work on some of the most exciting projects I’ve worked on in my career. Our mission is truly to help people move better, feel better, and live better. This means the products we get to build really help our users improve their lives. And it’s been insanely exciting to get to start using AI to build some of this functionality.
Thank you for that. Let’s now shift to the central focus of our discussion. In your experience, what have been the most challenging aspects of integrating AI into your business operations, and how have you balanced these with the need to preserve human-centric roles?
There are a couple of things that have made this transition tough. One is the logistics around readying the business to take this head-on. This is everything from employees being comfortable using AI, to the R&D teams being willing to take on such a new challenge. It’s not something we learned or were taught — it’s something we need to truly make the time to learn and embrace. And that’s not always for everyone.
The second is the industry we’re in — healthcare — and the fact that it’s generally behind when it comes to new innovation. ChatGPT has been out in the market for almost three years now, if you can believe that. And of course AI existed well before that — but it just goes to show how slow industries are to embrace change, and healthcare is no different there.
I believe it’s especially difficult to make these changes in healthcare because healthcare professionals and clinicians are so educated, and many are real subject matter experts in their field. They don’t want to feel like they’re being replaced. After all, what were all those years of schooling and experience for? This is the balance we’re constantly after.
Can you share a specific instance where AI initially seemed like the optimal solution but ultimately proved less effective than human intervention? What did this experience teach you about the limitations of AI in your field?
This is a fine line we walk every day, particularly at Medbridge, where we’re building products that support our clinician users in their patient care. So determining which products or features are actually better than human intervention can easily backfire.
And I’ve seen this both in my field and outside of it. For example, in the mental health space, we’ve seen the proliferation of AI mental health chatbots used in lieu of human therapists. And we’ve seen how much this backfired. It was supposed to be a scalable solution, particularly for patients with limited access to therapists. But the chatbots weren’t as empathetic, weren’t great listeners, and weren’t as personalized as actual humans. Seems kind of obvious after the fact, but we really had to see it come up short to understand the nuance.
Another very public example of this is Duolingo, the language-learning app. Duolingo tried replacing a lot of their language tutors with AI and were very public about this move. But the reality is it’s fallen short in fostering deep conversational skills, cultural nuance, and contextual understanding that human tutors were providing. This was especially obvious when students were learning idiomatic expressions or nuanced pronunciation.
These are good lessons in the limitations of AI. Most AI solutions today in the healthcare space are trying to help clinicians be more efficient (scribe software, diagnostic tools, patient scheduling, etc.), but we look at it a little differently.
We want our AI technology to act as an extension of the provider. So when we use AI-enabled motion-capture computer vision, it actually does the work of reading the patients’ movements during exercise, something providers would normally have to take the time to do themselves. Similarly, we’re working on an AI care coordinator that could do a lot of things a human care coordinator can do, but the reality is a human care coordinator can only manage 200–300 patients at a time. A care coordinator’s role is to check in on patients, see how they’re doing with their at-home exercise programs, monitor them in case things go wrong, and be a sounding board for adjusting their programs. We think a lot of this can be done with AI, so that organizations can manage even more patients remotely. The reality is, our providers can’t keep up as it is today. We want to be an extension for them so more patients can receive the care they need.
How do you navigate the ethical implications of implementing AI in your company, especially concerning potential job displacement and ensuring ethical AI usage?
It’s a good question and one that’s top of mind for us. We’re a bit lucky on that front in healthcare, specifically as it relates to job displacement, because the job of AI in healthcare is really to empower providers, not replace them. We see this every day at Medbridge; there are real care shortages in therapeutic areas, especially conservative care. Meaning, there are not enough therapists and providers to see patients who need conservative care, like physical therapy.
We recently released a digital pelvic health program. Can you guess what the waitlist times are for some of our organizations that have patients waiting to see a pelvic health provider post-pregnancy? At least six months! This is not acceptable, and the care shortages are real and getting worse. So the purpose of AI for us is to help empower providers and clinicians to see even more patients — not to replace them. We need more of them!
To address displacement concerns, we prioritize reskilling. We help our organizations train staff to work alongside AI, ensuring they remain integral to care delivery.
As far as ethical AI usage, we embed principles like transparency, fairness, and patient safety into our development process. We ensure AI tools are rigorously validated to avoid biases. And we also maintain human oversight. Our clinicians always review the AI outputs to ensure they align with patient needs. That is always a gate. We’re also very transparent with our patient users in that they always know AI is a part of their care.
Ultimately, our approach at Medbridge is to use AI as a force multiplier — enhancing human expertise, not replacing it — while staying vigilant about its ethical implications to protect both our workforce and our patients.
Could you describe a successful instance in your company where AI and human skills were synergistically combined to achieve a result that neither could have accomplished alone?
Our development of AI-powered motion-capture technology is a prime example of AI and human skills working together to achieve outcomes neither could accomplish alone. In this case, our AI motion-capture technology analyzes patient movements during home exercises, and it provides real-time feedback on form and progress. This allows for precise tracking of patient adherence and performance, which is just impossible for clinicians to monitor remotely at scale. However, the AI doesn’t operate in isolation. Our clinicians use their expertise to interpret the AI-generated data, adjusting treatment plans based on nuanced patient needs, such as pain levels or mobility limitations, which the AI might not fully contextualize. For example, the AI might flag a patient’s improper form, but a therapist’s human judgment is critical to determine whether the issue stems from discomfort, misunderstanding, or a need for a modified exercise.
This combination has led to improved patient adherence and outcomes in our hybrid care programs. Clinicians can manage more patients effectively, as the AI handles repetitive monitoring tasks, while therapists focus on personalized coaching and building therapeutic alliances. It’s also a good example of our commitment to using AI as a tool to empower clinicians, not replace them — ensuring better patient outcomes while also addressing care access challenges.
Based on your experience and success, what are the “5 Things To Keep in Mind When Deciding Where to Use AI and Where to Rely Only on Humans, and Why?” How have these 5 things impacted your work or your career?
1. Task repetition and scalability — use AI
AI is great when it comes to processing large datasets, automating repetitive tasks, and scaling operations where human resources are limited. This frees up humans to focus on high-value, creative, or empathetic tasks. At Medbridge, we use AI to summarize many tasks that patients complete so the provider can spend less time reviewing each patient’s chart. This allows us to track progress at scale, something no clinician could do manually for each and every patient. This way, our therapists can dedicate more time to personalized patient interactions, improving engagement and outcomes.
2. Need for emotional intelligence — use humans
AI struggles to replicate the emotional intelligence required for building trust or navigating complex human emotions. In healthcare, empathy is critical for patient motivation and adherence, and this is an area where humans excel. AI can support, of course, but humans should lead here. Medbridge has been shaped by prioritizing patient trust, which drives our product design. Recognizing that AI cannot replace empathetic interactions has guided us to position AI as a support tool, ensuring clinicians remain at the heart of care delivery.
3. Complexity and contextual nuance — use humans
AI can perform really well with standardized datasets, but it struggles with rare or ambiguous cases requiring contextual understanding. In healthcare, complex diagnoses or treatment adjustments often demand human judgment to integrate diverse factors like patient history or social determinants. Our AI motion-capture tool is a great example of this, as it analyzes patient movements during rehab exercises. There have been cases where the AI flagged a patient’s knee angle as incorrect, but the therapist was aware of the patient’s recent injury and recognized the deviation was due to compensatory movement from pain. By overriding the AI suggestion and then tailoring the exercise, the therapist prevented potential harm. This human oversight ensures safety and is critical in these cases.
4. Ethical considerations and bias mitigation — use humans
AI can perpetuate biases or make errors if not carefully monitored — and we’ve seen this in the news. Human oversight is important here because it builds trust with clinicians and patients. It’s also important that final accountability in healthcare applications sits with humans.
5. Patient and user experience — balance AI efficiency with human-centered design
AI can streamline processes, but we can’t alienate users with impersonal app experiences. A human-centered design approach ensures technology enhances user satisfaction. This is critical to driving engagement. At Medbridge, we’ve implemented AI chatbots in our support workflows but not within our patient-facing workflows. This is because most people have come to rely on relatively impersonal support Q&As, but when it comes to their at-home programs and nuanced Q&A, we found that chatbots weren’t the answer there.
Looking towards the future, in which areas of your business do you foresee AI making the most significant impact, and conversely, in which areas do you believe a human touch will remain indispensable?
In healthcare, it’s becoming incredibly clear where AI will have the most significant impact, and that’s likely in operational efficiency, as well as predictive analytics for proactive care. AI will streamline clinic operations, from scheduling to resource management. Optimizing therapist schedules, prioritizing urgent cases, and even predicting equipment needs based on patient volumes will become the norm. The technology is too good in this area. And in predictive analytics, AI is better than humans in terms of processing longitudinal patient data. It can just remember and comprehend more. This means it will be able to predict risks like nonadherence or injury recurrence before they occur. Truly future-facing stuff.
When it comes to human touch, I still believe it will come out on top when it comes to empathetic patient relationships as well as complex clinical decision-making. There have been years of data showing the importance of empathy when it comes to patient recovery, and this is something only human touch can provide. Building trust and motivation is at the heart of rehabilitation. Patients recovering from surgery or managing chronic conditions often need emotional support to stay committed. And complex clinical decision-making is far more nuanced than pattern recognition, which is what AI does well. Complex cases require human judgment.
How can our readers further follow your work online?
All of the exciting work we’re doing to serve our patients and providers can be found at medbridge.com.
This was very inspiring. Thank you so much for joining us!