March 19, 2026

How to develop AI that addresses health inequities



Joe Alderman has a great ability to break complex things down into a good, hearty discussion. Joe is an anaesthetist by night and an AI academic by day. His insights on what it takes to deploy and monitor AI in healthcare, with a lens on not leaving people behind, are relevant wherever in the world you are.


We discussed some of the nuance and challenges around AI in healthcare, as well as some anti-patterns:


  • If you are a data or AI scientist, or shipping AI products, what can you do starting tomorrow to think more proactively about underserved communities and ensure you're not creating unintended harm to vulnerable groups?
  • Stuff the industry is not doing great on: we're starting with the dataset or the technology we happen to have, instead of working backwards from what we are fundamentally trying to achieve in the world. (If I nodded any more at this point my head would have rolled off.)
  • A revisit of where things are with the STANDING Together initiative, something Xiao Liu talked about in the very first episode. In the UK, there are big areas where life expectancy is falling. Health inequalities are rising. Our datasets have bias. It's tempting to fall into the narrative that this is a tech-deficiency problem. It is not. It is a whole-society problem, and we're not going to solve it with JUST tech.
  • If governments, regulators and health systems find it challenging to stay on top of the safety issues with LLMs, how are lay people going to? Joe covers how the upcoming Health Chatbot Users' Guide can help people navigate some of the key things they need to consider.


We cover a LOT in this one-hour discussion, including how patients and clinicians can think more critically about which tasks Gen AI is best used for, and how to think about safety and liability.



This one is for AI and data scientists, product folk, developers and clinicians. Get it on your weekend playlist.

Links:

MHRA’s National Commission into the Regulation of AI in Healthcare: https://www.gov.uk/government/groups/national-commission-into-the-regulation-of-ai-in-healthcare


Building The Health Chatbot Users’ Guide: https://healthchatbotguide.org/


The NHS Fellowship in Clinical AI: https://www.nhsfellowship.ai/


NHS Digital Clinical Safety Training: https://digital.nhs.uk/services/clinical-safety/clinical-risk-management-training


Chapters
00:00 Introduction and Joe's work
06:36 The Role of AI in Patient Care
07:52 Navigating AI Risks and Benefits
18:13 Addressing Health Data Poverty
24:28 Building Responsible AI Systems
30:18 Exploring Large Language Models in Healthcare
34:05 Addressing Bias and Representativeness in AI Models
40:04 Navigating Regulatory Challenges for AI Technologies
40:51 Empowering Patients in the Age of AI
44:32 Public Perception and Trust in Digital Health
47:00 Identifying Real Problems in Healthcare Technology
52:00 The Future of AI in Healthcare
55:29 Joe's insights for those building in low resource settings
56:57 A book recommendation and critical thinking for patients and clinicians

About Joe

Joe is an NHS anaesthetist and an NIHR clinical lecturer (assistant professor) in AI and digital health at the University of Birmingham, UK.

He leads mixed-methods research and policy engagement to help get the most from AI in healthcare, including the STANDING Together initiative. He recently founded an international initiative to build 'The Health Chatbot Users' Guide'.



Joseph Alderman (00:00)
One big thing that frequently annoys me about the sector is solving the easy things or fixing problems that aren't really problems.


So I think those are fundamental issues. Look, not every technology that comes about comes about because of a need. Sometimes a truly transformational technology just arrives and changes the world. No one needed ChatGPT before it was developed, and now everyone needs ChatGPT. But I think it's fair to say that most of the world's best products have come about to solve something that really matters to people. And so I think


then probably what you're trying to do is fix a problem that healthcare professionals are experiencing or patients are experiencing. And it's often tempting to reach for the easy, or to say, we have this dataset, what can we do with it? Rather than saying: fundamentally, what are we trying to achieve in the real world? What pain points are we trying to fix? What system are we trying to make more efficient? How are we planning to make an organisation save money or save time? And if you don't really have that ability to think through what the goal is and then work


from right to left, back through the steps to try and fix that. If you don't understand the system you're developing for, if you don't understand the people that interact with that system, then it's likely that your quote unquote solution will be off piste. And there are so many examples, as a health professional, where I'm forced to use bits of software which just don't work for the workflows that we have as professionals, because they just don't understand the way that the system is.


but also not thinking that you can pay lip service to this by speaking to one doctor or maybe one patient,


because that's not enough. Truly understanding a system is much more than having one perspective. It's deeply understanding it. So meaningful engagement with clinical teams who actually live and breathe this stuff, and meaningful engagement with patients who actually live with these diseases, to understand what solutions will be most pertinent to them. I think that's where the best products will come from.


Shubhanan Upadhyay (01:56)
The people who need healthcare the most are the least likely to receive it. And that's generally true in healthcare; it's called the inverse care law. But health data poverty can exacerbate that, in the sense that if you're developing an AI algorithm, you are essentially training your model on the data that is available.


And the data that exists inherently excludes underserved communities: people who are not accessing healthcare or find it more difficult, people who are in rural areas, people whose records are on paper or who have no records at all.


And so there's a lot of work that needs to be done. And this is linked to this other big buzzword in the industry: responsible AI. You know, we need responsible AI. Well, I think that does the concept a disservice. It links these two words and it removes the most critical component, which is us.


It's us who need to be responsible. It's us who need to make the hard choices, to think about where to slow down and ask:


what work are we doing, wherever we work in the ecosystem, to make sure we're making choices that ensure people are not left behind?


It's with this in mind that I'm really, really thrilled to be able to talk to Joe Alderman. He is by day an academic fighting the fight around policy, regulation, and what it takes to develop, validate and evaluate AI models, with the lens of not leaving people behind. He's part of an amazing group at the University of Birmingham.


And by night, he is an anaesthetist, working and using data to inform life-saving decisions. So I'm really, really excited to bring these two areas of the conversation together, because that's really what we're talking about: the promise and the guardrails of models, and how they interact with what needs to happen in the real world,


often for very timely, critical, life-saving, life-altering decision-making.


So let's get into it. It's gonna be worth it.


Shubhanan Upadhyay (04:09)
Joe Alderman, welcome! What a delight it is to have you on the show. We'll definitely be getting into your deep experience both as a clinician and working in research and policy. But Joe, please tell us about yourself and some of the things you're working on.


Joseph Alderman (04:27)
Yeah, of course. Well, first up, thanks so much for having me. Really great to be here today and to be talking to you about this stuff. So my name's Joe. I am an anaesthetist and intensive care doctor working in Birmingham in the UK.


I'm somewhere in the middle of my training. I faffed around a little bit doing a PhD, which we'll talk about a little bit later on. And now I work as a postdoctoral researcher, as an NIHR clinical lecturer in anaesthetics. But predominantly my non-clinical time, my university time, is spent worrying deeply about artificial intelligence in healthcare: thinking how can we make the most of these exciting, powerful technologies to improve patient care, improve all of our lives really, and make our society healthier and happier. But also, how do we do that in a responsible way, recognising


that there is great potential in these tools, but also great opportunity to get things wrong and to cause misery at population level. So I guess some bits I'm working on at the moment: I recently finished my PhD in the summer of last year. That was centred around the evaluation, governance and regulation of clinical predictive models, these widely used tools in medicine, often statistical models rather than AI models, but deeply ingrained in the way we practice healthcare.


As a team at the University of Birmingham we're working on a couple of different projects. One is to create an AI readiness checklist, to help particularly NHS provider organisations in the UK, but also healthcare organisations maybe outside the UK, think through: am I ready to deploy a particular piece of AI technology for a particular purpose? What are the considerations I need to think about for my organisation? And maybe what's the delta, where are the gaps where I need to make strategic investments or changes to safely deploy this tool? And also a bit of work, hot off the press


this one Shubs, to create a health chatbot users' guide, recognising that millions of patients worldwide are already using direct-to-consumer large language models, ChatGPT, Claude, Gemini, etc., to manage their own health. And it's not our job as physicians to say no, but it's really important that we help the public understand what the risks are in this kind of new world of LLM-enabled healthcare, but also what the opportunities are to make tangible improvements to healthcare. So just a few things I'm working on.


Shubhanan Upadhyay (06:36)
I think this particular collection of experiences that you have, where on one hand you're worrying deeply about the implications of this, both for your practice but also for patients, is very, very timely,


in terms of the Health Chatbot Users' Guide. There's definitely been an inflection point, at the time of recording this, where suddenly people have the opportunity to put their own personal health record into OpenAI. And whilst this is amazing, like any medical intervention there's this critical question to ask, which is,


you know, in this case, do the benefits outweigh the risks? Do we have a good handle on the risks? How do we understand this? How do we mitigate that, so that we can have the right balance of that equation? And what's the role of


policy and regulation? And as clinicians who help people decide how to navigate their healthcare:


you're a clinician using algorithms to help with your decision-making, often very critical, life-saving, timely decisions. You know, if you're in intensive care, we're talking minutes, seconds;


you're using data, you're using guidelines, you're using static or maybe dynamic algorithms for your decision making. How are you thinking about this stuff?


Joseph Alderman (07:52)
Gosh, there's an awful lot in that one. So why don't we touch on the large language model bit first, then we can move on to my experience of algorithms. So it's interesting. My mind's changed a lot over the last few years. I think sometimes we have these kind of like black and white positions in healthcare, in particular when it comes to AI in healthcare. People are worried about...


Shubhanan Upadhyay (08:00)
Yeah.


Joseph Alderman (08:14)
two extreme scenarios and don't really consider a middle path. One extreme scenario: we'll all be replaced by AI in the future and there won't be any doctors. The other extreme scenario: AI is too dangerous and we'll never be able to use it. And I think...


Well, no one should predict the future because it's a fool's game. You'll probably get it wrong. But thinking of the short to medium term, isn't it much more likely that the way we practice healthcare will change? That the way patients experience management of their health will change? In much the same way that, you know, physicians haven't been around forever. We've not operated on people forever. We've not had evidence-based medicine, randomised controlled trials, those methodologies forever. Things change. The way we practice changes. And it feels to me like AI is one of those points where we are likely to see shifts in the way that healthcare is delivered.

So I don't think patients need doctors for every aspect of their healthcare anymore. There are great examples out there of patient self-advocacy groups: the international movement in type 1 diabetes towards insulin pumps and closed-loop systems, where you have a continuous glucose monitor talking to a pump and continuously managing your blood sugar. That wasn't necessarily invented by industry. That was actually


an initiative driven by patients who said, we want this. Patients can have an enormous ability to push towards changes they want to see. I think AI could give people the ability to self-manage more and more of their healthcare. Think about someone with asthma or eczema or some other condition which is normally well controlled and has a chronic nature, but where they want to try and prevent flare-ups and worsenings. Well, they might see their respiratory or


dermatologist once every three months; they might have access to a nurse whenever they need to; but what they certainly don't have is day-by-day counselling, almost coaching, on how to manage their health. And direct-to-consumer language models can deliver that. Whether or not they're any good at it is a different question, but in terms of actually delivering that care at scale, on a daily basis, you could do that. And I think as the technology improves,


it seems to me certain that the way we practice that kind of outpatient medicine will shift somewhat, and patients will do more of the management themselves if they want to do so. I think, though, the critical question then comes: how do we enable that safely? You talked about regulation. Well, clearly if I am a company or a healthcare provider and I provide a large language model or any form of software to my patients for the management of their health, then in most jurisdictions that will fall under some sort of regulatory framework for medical devices. And so I'll need to demonstrate


safety, quality, effectiveness, those sorts of things. And I can be held to account; I can go to prison if I get that wrong. It's very different if we're talking about direct-to-consumer general-purpose AI systems. So if I, for instance, as a patient log into ChatGPT or Google Gemini or Claude or whatever, and ask it for advice on managing my asthma, first of all, it may or may not give me advice. It might say, actually, you should go see a doctor, but...


Increasingly we're seeing fewer of those warnings from language models. We're seeing them actually engage with that sort of questioning a lot more. The governance frameworks underpinning those tools are evolving, and I think the public probably aren't well placed at the moment to understand some of the risks and benefits of those tools, in the sense that not all of us are AI safety experts. I certainly am not an AI safety expert. Although I try to keep up to date, I'm not fully apprised of the latest literature.


And so I think it's really important that we build mechanisms for the public to understand the risks, and that we communicate them clearly. We need to meet people where they are and recognise that there is great potential in these tools. So we don't want to ground our thinking in risk avoidance; that's impossible. It's more about risk mitigation and benefit maximisation. We can think about other parts of healthcare where we know that risky behaviours might happen,


but rather than trying to eliminate that behaviour, we just try to take the risk out of it as much as possible. I think that's kind of where we are, maybe, with direct-to-consumer language models. You talked a bit about my experience of algorithms as well. Yeah, it's a really good point. On a daily basis in my anaesthetic practice I'll be asked to make some sort of estimation of someone's risk. You know, if they're going to have, let's say, an emergency laparotomy, major abdominal surgery as an emergency procedure. In the UK, the standard of care


Shubhanan Upadhyay (12:16)
Yep.


Joseph Alderman (12:36)
is that we estimate the patient's risk of dying or having a serious complication using a tool called the National Emergency Laparotomy Audit Risk Calculator. And that's been available for a few years now. And look, it's certainly better than me licking my finger and feeling the wind. That's what we used to do before that. But whenever we drop an algorithm into practice, whether that's an AI algorithm or a statistical model, inevitably there'll be performance gaps. It'll work well in some cases and work less well in others.


it will have a predictable error rate. And if we don't understand the nature of those errors, then we risk putting certain groups at risk of harm. And I'm not sure we've always had great discourse when it comes to that. You know, we talk about headline performance, 99% accurate. Well, that's 1% inaccurate, isn't it? And at population level, that's tens of thousands of people getting the wrong answer.
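To make that arithmetic concrete, here's a minimal sketch; the population size, headline accuracy and subgroup split are hypothetical figures for illustration, not from the episode:

```python
# Headline accuracy vs. absolute harm at scale -- all figures hypothetical.
population = 5_000_000          # people screened per year (assumption)
accuracy = 0.99                 # the "99% accurate" headline

errors = population * (1 - accuracy)
print(f"Overall: {errors:,.0f} people get the wrong answer")  # 50,000

# The same 99% overall accuracy can hide very different subgroup error rates:
subgroups = {"well-represented group": (4_500_000, 0.995),
             "under-represented group": (500_000, 0.945)}
for name, (n, acc) in subgroups.items():
    print(f"{name}: {n * (1 - acc):,.0f} errors ({1 - acc:.1%} error rate)")
```

Note that the weighted average of the two subgroup accuracies above is exactly the 99% headline figure: the smaller group's 5.5% error rate is invisible unless you look for it.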


Shubhanan Upadhyay (13:21)
Mm-hmm.


I mean, there are so many things to pick out of that. You talked about patients seeing a need. There's also the global context of there being fewer and fewer clinicians available, and lots of reasons for that, obviously. So in any case,


there's a new way of thinking about how healthcare is going to be delivered. And so what you've highlighted is really important: making sure that we're thinking about this in a way that works backwards from where we're trying to get to. What do we want healthcare to look like, both from a delivery-of-care perspective for a clinician and an experience-of-care perspective for a patient? And that's the opportunity to say, hey, where are we doing things not that great right now?


I really liked what you said as well about the mitigation of risks. We've had to do that with humans. Doctors, clinicians are fallible themselves; they just don't change in their kind of black-box nature and decision-making as much as algorithms do. And so I guess my view of the ecosystem is that


health systems have built safety nets around human fallibility in a way that recognises it occurs, for better or worse. Sometimes someone might triage someone on the phone and say, hey, based on what I think you have, I think you need to go to the emergency department.


Sometimes that won't be right. And when you have all the information as the emergency physician on the other end, you might say, actually, you didn't need to send this person in. But the emergency department will know that some people will be directed there who don't need to be there. That's the built-in capacity in the system, the way the system works, knowing that there's human fallibility.


I think that's really important because, you know, now we're in a situation where the order of magnitude has changed and the rate of change has changed massively. So we're now suddenly having to mitigate things and risks that are themselves changing on a monthly basis.


Joseph Alderman (15:23)
Yeah, no, it's a really good point. You know, human beings, human professionals, we all have error rates. We could probably estimate them if we wanted to do that sort of study, though it's very hard. Without wanting to get too lost in philosophy this early in the morning, it's difficult to actually define what we mean by right and wrong sometimes, or true and false. We might have different perspectives. Take a patient with an advanced tumour,


where they might have surgery, and there's no guarantee that surgery will be successful, and a significant risk of comorbidity or complications. If we don't operate, then they might only have months to live, but they don't have the complications of surgery and the recovery time. What's the right answer here? Now, I can quote statistics from observational studies and maybe give you tolerances around those risks, but what I can't tell you is what the right thing to do is, because it comes down to a patient's personal values. What do they want from their life? Is their priority to extend their life at all costs?


Is their priority to live comfortably and to avoid injury? I don't know. Most people will be somewhere in the middle. So I think healthcare is littered with examples where we presume there is a correct answer and it's just out there for us to find. Whereas I don't think that's always true. I think that's part of what makes this so difficult.


But leaning back from that very high-level thinking for a moment: yes, human beings, we get hungry, we get tired, we get stressed. We have a kind of fixation on recent bad events. So if I'm involved in a surgery which goes wrong in some way, it might not be my fault; it might just be one of those things that happened, an unavoidable, unforeseeable risk that happened because we know that those things happen. It will affect my practice for weeks afterwards, maybe longer. And I will make different decisions. I will be more risk-averse.


And risk aversion has other implications. If I move more slowly, if I see fewer patients, if I do things differently, then more patients wait longer. You know what I mean? I'm talking about me personally, but I don't really mean me, I mean physicians. So yeah, humans are fallible in that way. I think one of the protective factors in healthcare, though, is that humans are probably fallible in different ways, in the sense that my error is probably slightly different to your error. And at a population level that might average out to mean


Shubhanan Upadhyay (17:19)
100%.


Joseph Alderman (17:36)
actually quite a low level of error. Well, that's probably not true, but you know what I mean: it's not like they all multiply in the same direction. The risk with algorithms is that algorithms might be systematically wrong. Whereas if I make the wrong call throughout my career, I might affect, you know, a few hundred patients; if an algorithm makes the wrong call, it could affect millions of patients in one morning, which I think is more worrying.


Shubhanan Upadhyay (17:57)
Let's get into some nitty-gritty. One of the reasons I was excited to speak to you is that this is very linked to one of the first episodes of the show, where we talked about health data poverty. And it's also linked to quality and risk, I think, as well.


Essentially, there's this concept in medicine called the inverse care law, which is that those who need healthcare the most are the ones who are least likely to receive it, i.e. the most vulnerable in society. And we talk about poverty and social deprivation, and every country around the world has its own distribution of this.


And we talk about this big promise of healthcare AI and digital health. And depending on our choices, we can make this better or make this worse. I talked about this with Xiao, and she talked about the STANDING Together initiative, which was this multi-layered, whole-of-ecosystem academic paper about how we all need to stand together as an ecosystem to really solve this problem of health data poverty.


And if we're thinking about this algorithmically driven healthcare future, the people who are going to be excluded are people who are invisible or poorly represented in datasets, right? And that's a fundamental problem. If I'm a developer, I'm like, hey, I'm developing this model; obviously I'm going to train it on the data that's available. And so it creates this systemic problem in our ecosystem where then it just


makes, you know, it's like a cascading effect. And those people will then be excluded from algorithmically driven healthcare. You've been involved in that STANDING Together initiative and writing that paper. You've seen it evolve over the five or six years since it was written, and it's two years since I spoke to Xiao. What's your current thinking on this? It'd be good to get your arc of where things are at.


Joseph Alderman (19:50)
Yeah, look, it's a really good question, and I'm glad you mentioned STANDING Together. It's been a big piece of work and a project very close to my heart, which I and colleagues have led for the last few years. Look, yeah, I've seen discourse change enormously over the last few years. When I first joined the team back in 2022, I think it'd be fair to say that the understanding and recognition and salience of algorithmic bias was still developing. So


we would give talks to both clinicians and AI audiences where it would be a bit of a mic-drop moment when you talked about an algorithm being biased and disadvantaging a group of people. You know, Obermeyer's seminal work showing that an algorithm in the States to prioritise healthcare


expenditure systematically disadvantaged black patients and probably caused disadvantage to millions of people in the US. That was one of those moments where you used to get gasps from the audience. And I think that's now much better understood. So I think that's been an enormous shift, really. I think we can't take credit for that in its entirety, because lots of people internationally have been talking about algorithmic bias, the need for better data diversity, and the need for better understanding of the way that biases can translate from society through to datasets and into algorithms, and the amplification that can happen across that pathway. So I think that's much better understood now. I don't think it is solved, nor do I think it's solvable. I think it's one of those things where we can work together to reduce that phenomenon as much as possible. But if we are going to continue to have societies which advantage certain people over others, if we're going to continue to have social deprivation, the effects of poverty,


You know, it's been 30 years since the social determinants of health rainbow diagram was developed, showing that your living conditions, the place you live, your employment, etc. all impact your health. None of that's changed. And in fact, slightly depressingly, work in the UK by Professor Michael Marmot shows that inequalities are rising in the UK; life expectancy, and healthy life expectancy in some areas, is falling. If anything, the gaps are widening. And so we can't really expect our datasets not to show bias


when we have biases at societal level, if that makes sense. So some of the problem here is not a data problem, it's not an algorithm problem, it's actually a social problem. And insofar as there is political will and a movement to try and solve those things, we can see improvements. But you can't really fix an underlying social issue with technical fiddling with datasets, unfortunately. You know, we can do our best to mitigate it, again, but we can't really eliminate it, I think.


Shubhanan Upadhyay (22:33)
I think that's what makes it so hard, because, and I've talked about this with several people, when something's an everyone problem, it becomes a nobody problem, right? Everyone's like, it's someone else who has to fix this. And it gives us this kind of learned helplessness. And so I think it's really, really interesting that political will has gone down on this.


It's the reality that we're in. But I wonder if we could get into these mitigations then. There's not going to be one day where suddenly we're now fair and equitable, right? I guess the idea is that it would be a graph that's ideally always going down as we learn more about what it takes to mitigate, but it will never quite get to zero. And so we're never done. We have to always be on it.


We have to be always aware and proactive. And I think that's a key shift: how do we make sure we're not reactive about this? So I work with, you know, product teams and teams who are developing models, et cetera, and they do care about this. The tough thing is they say, well, what can I do in my corner of the world


to improve this? And often it's a trade-off, right? It's a trade-off between shipping fast versus, you know, do I need to take the harder road here? What does that mean? What does the harder road look like? Can I even feasibly do this? Or do I just stick to the well-trodden path of, okay, well, we know that there's a problem, but this is the data we have available, et cetera. So let's get into the head of a data scientist


or a product team in the development phase, and they're saying: hey, we want to develop an algorithm; we want to find a good, representative dataset. I wonder if you could go into what good looks like here, in terms of what's feasible for a team with very limited resources. How do they


go beyond the first layer of just grabbing what's available?


Joseph Alderman (24:28)
Yeah, look, it's a difficult question to answer, that one. The first thing I'll flag is that there is, I think, quite helpful guidance out there. So first up, obviously, we've been talking about STANDING Together. The STANDING Together recommendations, I think, are a good place to start. They're certainly not the end solution, but what we do in our big paper, admittedly a 20,000-word article, not the most approachable piece of text...


But certainly at a quick glance it will be helpful. We link out to a lot of resources and other references. So for a casual reader, you can skim through the introduction and then go straight to the recommendations, and you'll see that those are referenced, with a glossary as well. And, you know, we link out to the good machine learning practice principles that were co-developed by the FDA, the MHRA and Health Canada. That's not a bad place to start. That talks about the need to ensure that datasets are representative of the intended use population. Now, I've used a bit of regulatory terminology there: the intended use population. So


what's your algorithm for? What is it intended to do? Because I talked earlier on about this being a regulated sector. You know, if you're making a healthcare product which is a piece of software, and it's intended to be used for the purposes of managing someone's health, diagnosis, treatment, prevention of disease, etc., then you are producing a piece of software which is a medical device, and you must adhere to the legislation and regulations in most jurisdictions. And that includes saying what your thing is meant to do and who it is meant to help. And if you don't understand who it's meant to help,


then that's the first step really: understanding what this thing is for. You can't say this is for everyone, because, you know, something that's for everyone will be very difficult to assess. So who specifically is this product for? Is this for secondary care patients in hospitals? Is it for primary care patients, people at home? Is it for adults, for kids, for both? And then, when you've understood who you're trying to help, it's then thinking through who are the people, who are the groups, that we are


most worried about hurting. So that might be because we know that they're already disadvantaged in the system. We know this particular group has a higher incidence of this disease, or we know that this particular group already experiences social disadvantage. There are groups who are frequently disadvantaged, where almost by nature of their group identity, they experience worse health and worse kind of social positioning. We often talk about race and ethnicity, but it can go beyond that. Think about people who


are experiencing homelessness or job insecurity, or who have care responsibilities. In the UK, people who live in rural and coastal communities with limited job prospects, or people who are suffering with addiction, those sorts of things. These are people who are often very vulnerable across the board, not just for this particular health condition. And then there might be very specific groups you're worried about because of this particular algorithm. If you know that there's not enough data for group X; if you're developing, I don't know, an


algorithm trying to diagnose skin cancer, and most of the datasets only have images of white skin, well, it stands to reason that your algorithm might not work so well for people with darker skin tones. So it's understanding, both on the technical and on the social side, who the groups are that are at risk of injury if this tool goes wrong, and then proactively taking steps to identify those effects if they are happening. So structuring some sort of evaluation, whether that's a trial or an observational study or


whatever, which can help you answer that question, to try to disprove the hypothesis that my algorithm is biased. And then to be transparent about that: to acknowledge that you thought this would be a problem, you've collected some data about it, and it turns out it wasn't a problem; or it turns out it was a problem, so here are our steps to mitigate that risk in some way. I think that whole piece of understanding the nature of the algorithm, understanding the populations it's for, being open and honest about the risks and benefits, and


structuring good-quality evaluations, that's a really good step in the right direction. For small teams that might be really hard, because, you know, for a small startup with maybe only three members, where one's a fractional medic also working full-time, that's going to be really challenging to do well. But I think it's really important, because many products will live and die based on their safety and quality metrics, and so many startups fail. So I think seeing this as a nice-to-have would be a real mistake. This is a mission-critical step that's worth investing in:


it improves the odds that your product will be successful and adoptable in the future.
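For data science teams wondering what this looks like in practice, here's a minimal sketch of the kind of per-subgroup evaluation Joe describes; the column names ('y_true', 'y_pred', 'ethnicity') are hypothetical placeholders for whatever labels and attributes your own evaluation data holds:

```python
# Editor's sketch of a per-subgroup evaluation, not from the episode.
# Assumes a pandas DataFrame with hypothetical columns 'y_true' and 'y_pred'
# plus a subgroup attribute such as 'ethnicity'.
import pandas as pd
from sklearn.metrics import precision_score, recall_score

def subgroup_report(df: pd.DataFrame, group_col: str) -> pd.DataFrame:
    """Sensitivity and precision per subgroup, with sample sizes, so that
    small, poorly represented groups are visible rather than averaged away."""
    rows = []
    for group, sub in df.groupby(group_col):
        rows.append({
            group_col: group,
            "n": len(sub),
            "sensitivity": recall_score(sub["y_true"], sub["y_pred"], zero_division=0),
            "precision": precision_score(sub["y_true"], sub["y_pred"], zero_division=0),
        })
    return pd.DataFrame(rows).sort_values("sensitivity")

# Usage: report = subgroup_report(eval_df, "ethnicity"); print(report)
```

Publishing a table like this alongside the headline metric is one concrete way to make the transparency Joe asks for routine rather than exceptional.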


Shubhanan Upadhyay (28:53)
Yeah, absolutely. And I think part of it is people might disinvest in that step because, well, these are just edge cases and we'll work it out as we go, right? And that might be the case, but I think what I like about what you've laid out


is the notion that you don't need to be perfect at the beginning. What I reflect on from what you say is that it comes from asking the right questions, being honest about where you are and where you want to get to, and being intentional. So knowing that, okay, this is one of the key limitations that's important; here's how we're going to approach this.


Therefore you are then being proactive rather than reactive, right? We know that data is limited here, so let's actually observe this and then use that.


You're then actually creating the data, because you've been intentional about it, and you've been intentional about serving a particular demographic. And if you put that into product and design ways of thinking, it's, you know, certain key personas that you have prioritised. And that's really important. It moves you from helplessness, or analysis paralysis, where you're like, we have to be perfect, and therefore we can't move forward, and therefore let's just not do anything at all, to:


these are milestones that we can predictably and confidently get to in a way that utilises our limited resources.


Joseph Alderman (30:18)
Yeah. Yeah.


Shubhanan Upadhyay (30:19)
Thank you. That's really helpful. And just to follow on from this: you've talked about this with predictive algorithms. The other thing that lots of teams are working on is LLMs, which is the same thing but different, right? You suddenly have an input space that's much wider. You now have data that is much more subjective


and contextual. I liked some language that someone used once: language is lossy compression of what's actually going on in the world. And what that means for a patient is: I have a lived experience of what's happening, and I'm trying to translate this within the constraints of language.


And so some of it helps capture a lived experience and what is important to a patient in that moment. Do you have any thoughts around kind of applying what you've just said to developing language models?


Joseph Alderman (31:23)
Oh yeah, again, a pretty big question. Look, I'll caveat: I'm not an LLM expert. I'm an anaesthetist who has a special interest in AI evaluation and regulation, but I'm definitely not a computer scientist. So caveat all of what I say with the lens of relative inexpertise. I think your point around the limits of language is well made. The world is more complicated than the words we speak, and


it's unreasonable to expect a large language model to develop a coherent understanding of the universe through human language alone. Whilst they're obviously very powerful, and we're seeing enormously impressive performance from these tools in narrow tasks, they're very fluent, they're very interesting, very engaging, they


don't currently think in the same way that people do. Whether or not that matters is a different question. So I think, yes, your comment around the loss of information is important. Now look, this will change over time, of course. We're already seeing the development of computer systems which have multimodal capability, that can use recorded video, recorded audio, or maybe even live-streamed video and audio,


Shubhanan Upadhyay (32:38)
Yes.


Joseph Alderman (32:45)
to make real-time analysis of the world. And models trained on those different data streams will obviously have different performance and get closer to understanding the real world. And as we move forward into the 2030s and beyond, again, I'm not going to make predictions about the future, but one possible future would be almost having embodied intelligence, where we have systems with sensors so they can measure the things that we perceive:


light, other forms of electromagnetic radiation, gravitational attraction, that sort of stuff. And then, when you build systems which can interpret those rich data streams, and think about the application to healthcare, where you might have invasive devices with sensors attached to them, where there's high-fidelity monitoring and multiple data streams, then I think you get towards a place where algorithms start to quote unquote understand, whatever that means, the world and our bodies a bit more. But I don't think we're there yet.


Shubhanan Upadhyay (33:18)
Yep.


Joseph Alderman (33:44)
I don't know how far off that is, not working for a hyperscaler. Maybe it's closer than we think.


Shubhanan Upadhyay (33:44)
Yep.


If you're in a product team or a data science team and you're developing with large language models, recognising that there's some inherent bias in what you're doing, how do you be proactive within this modality?


Do you have any recommendations about improving representativeness, or recognising the limitations and then making sure you're proactively improving things for the specific people you're trying to help?


Joseph Alderman (34:20)
Yeah.


Representativeness, when it comes to the data fed into language models, is very complicated because, well, frankly, many of the models, many of the highest-performing models, are closed source. So you don't necessarily know what the data is that trains them, or the full nature of it anyway. And whilst there are open-source models, and it's theoretically possible to examine all the data that feeds into them, it's practically impossible, isn't it? If we're talking about billions or trillions of documents, no human can look through all that to assess the representativeness of the information.


So, yeah, it is challenging. You know, the uncertain provenance of information, and the fact that most information will be in the English language, probably limits performance in other languages to some extent. And the fact that, you know, we know that when we speak to language models in certain forms of dialect or with certain ways of phrasing, we get different responses back. Again, this will be engineered out in time, but right now, if you use words which are


maybe more common in certain communities, then the algorithm gives you different responses, which is obviously undesirable, but engineering teams might not even think about that. If I'm giving users access to this tool and the way they ask the question differs and it gives them different responses, that's undesirable behaviour. I think it's incumbent on those using LLMs or trying to experiment with LLMs for healthcare purposes to really understand that this is a much


less certain and much riskier proposition than standard, you know, boring, narrow AI systems which have more predictable failure modes. And these tools


can be more complicated. If you're working with the latest frontier models from one of the hyperscalers, the hyperscaler might update the model, and that could change the performance characteristics of your tool. You have the considerations of software of unknown provenance, where you will have a real tough time explaining to a regulator exactly what the thing is, because you don't really control the engine, you just lay on top of it. So all these things make it much more challenging than for other AI systems. Not to say it's impossible, just that right now it is,


I think, more difficult.
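One practical pattern for teams facing this moving target, sketched here under stated assumptions (a frozen prompt suite, a vendor that exposes version strings; `call_model` and the cases file are hypothetical placeholders, not a real API), is to pin the model version where possible and re-run a fixed evaluation suite, including phrasing and dialect variants, whenever anything upstream changes:

```python
# Editor's sketch: detect silent upstream changes by pinning a version string
# where possible and re-scoring a frozen prompt suite on every release.
# All names here are hypothetical placeholders.
import hashlib
import json
from typing import Callable

PINNED_MODEL = "vendor-model-2026-01-15"   # hypothetical pinned version

def run_eval_suite(call_model: Callable[[str, str], str],
                   cases_path: str = "eval_cases.json") -> float:
    """Re-run a frozen set of prompts (including dialect and phrasing
    variants for underserved groups) and return the pass rate."""
    with open(cases_path) as f:
        cases = json.load(f)
    passed = sum(case["expected_flag"] in call_model(PINNED_MODEL, case["prompt"])
                 for case in cases)
    return passed / len(cases)

def fingerprint(canary_response: str) -> str:
    """Hash responses to fixed canary prompts so that unannounced model
    updates show up as a changed fingerprint, triggering a full re-run."""
    return hashlib.sha256(canary_response.encode()).hexdigest()[:12]
```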


Shubhanan Upadhyay (36:35)
And I think what you've alluded to there is: something's out there, deployed and implemented in the world, and then, if it's based on an off-the-shelf model and those things change, what do you need to be doing as a team


to make sure that once you've deployed, you're on top of that process? How are you doing your monitoring and evaluation in a way that addresses what you've just said, but also, going back to the beginning of the thread, thinking about unintended consequences for underserved populations, for vulnerable communities? How do you monitor that? How do you think about that?


Joseph Alderman (37:13)
It's an important question. I don't have a very good answer for this, partly because I think it might not be known yet, and partly because I'm not an expert in the regulation of LLMs. But look, I think getting in the expertise you need is very important here, because if you're working in LLMs for health, you're at the very cutting edge of health technology. And when you are at the cutting edge of anything, forging a new path, it's really important that you have the expertise you need so that, you know, you don't


Shubhanan Upadhyay (37:20)
Yeah.


Joseph Alderman (37:43)
burn through investment money, for a start, so you're not going to waste your resources; but also, and I think worse than that, so you don't cause injury to real people, because that's ultimately what could happen if you get this wrong. If you don't have well-designed quality assurance systems, if you don't fully understand the nature of your model and how it will perform, in short it would be irresponsible to push it out to wider use, because the worst thing that could happen is that you could harm or kill patients. And that's not a good place to be if you're developing anything, frankly. It's not a good use of any resource for society


at large. One thing I will flag is that right now, and I touched on this earlier, the regulation of large language models for healthcare is a bit challenging, because regulatory systems worldwide tend to be written with static products in mind, which don't really shift that much. You push a version update every two or three years, whatever, and you apply for a new regulatory clearance. That's not the nature of language models, because they update much more rapidly and their performance is so much less certain than for other forms of AI. There are moves


internationally


to try and update, or at least to improve, regulatory frameworks for AI. One thing I'll mention: the MHRA, the UK's medical device regulator, the Medicines and Healthcare products Regulatory Agency, currently has an AI commission live, which is creating evidence and guidance for the MHRA on how the regulation of healthcare AI could be different in the future for the UK jurisdiction. And we're probably going to see similar efforts internationally. And part of those considerations


will be how we regulate these emerging tools, these new forms of AI,


which aren't really necessarily well served by existing thinking and regulation. Now, what's the right way of doing this? Balancing the absolute need for patient safety and quality of healthcare, that's a must, but also, where possible, enabling new technologies to come into force. Because we can comment a lot on the risks, as we have in the last few minutes, but the reality is, and we touched on this at the start of our conversation, healthcare already has deep problems, and many people can't access healthcare internationally. Even in advanced jurisdictions


and wealthy countries, there are groups of people who are experiencing very long waits for care. So in a system which already has inherent risk, dropping in very risky technology can make things worse, but doing so in a responsible way, where you de-risk that technology, can actually make things a lot better too.


Shubhanan Upadhyay (40:05)
one of the important things that you've touched on is it's hard enough if you're a developer and implementing these technologies to kind of stay on top of the risks.


and safety and of the critical nature of that. as a regulator, they're still also trying to catch up with how do we make this legislation in a way that recognizes the shifting nature of this technology, the changing risk profile, the way that we enforce this, but still allow the conditions to where there are real benefits and people really need them.


people get them quickly, right, And then you're a patient, you are a person with a health problem who has their own difficulties with a health system, difficult enough to navigate in kind of high income economies, but imagine low resource settings where there's often nothing.


Do you have a way to think about that or do you have any, do you know any resources that will help patients to think about?


Joseph Alderman (40:58)
Yeah, good questions. So on the regulatory angle, experts in regulation would say that the current framework does cover large language models and other forms of emergent AI. It's just that it's very difficult to provide evidence that such a thing is safe and effective, basically. So the requirements are that your thing is safe and that it does what it says it's going to do, and you have to be able to prove that.


The regulations as currently written make that very challenging for large language models. So it's not that the regulations don't apply; it's just that it's very difficult, as the producer of such a tool, to actually get your product through that process. And that could be for a good reason. If ultimately we conclude that this technology is too risky for healthcare, that the risk can't be mitigated adequately and the risk to patients is too high, and that is a position people could argue for,


then actually you could argue that's exactly what regulation is for: it's to protect patients and the public and consumers. So that's not a bug, it's a feature. So I think, first up, we shouldn't just assume that the answer has to be that we need better regulations. We might conclude that the regulations are fine, thank you very much. But if we wanted to make some alterations, and maybe that will come in time, I already flagged the MHRA's AI commission, which will be thinking about those things, and other jurisdictions will be thinking about similar considerations. How do we allow these products to market?


Shubhanan Upadhyay (41:49)
Yeah.


Joseph Alderman (42:11)
How do we recommend people produce evidence? How do we manage those risks, frankly? How do we think about risks differently? In healthcare, we sometimes assume that what we do is not risky and that therefore the new thing is risky. But that's just not true either, is it? We know that there are risks all the way throughout healthcare, and that current practice, the status quo, is risky, and might even be more risky than the thing that looks risky that's coming to follow. So whilst,


to use this specific example, a large language model product for healthcare purposes might introduce new and different risks, those risks might actually be better at system level than the current situation, which is 12-hour-plus waits in emergency departments and two-year waits for elective orthopaedic procedures. Insofar as we can create efficiency at a system level, that could be a net improvement. But I think for patients this will be bewildering, because frankly most patients turn up to see their


clinician, and they get given either a drug or some advice, and they have a lot of trust in that system, by and large. You know, the NHS in the UK and other health systems are trusted entities, because we know that there are lots of checks and balances in place to protect patients. And whilst those go wrong in very high-profile ways, healthcare is really complicated and most of the time it gets it right. So patients can normally trust that system. But when we're talking about new technology types, in particular


if we're delivering the health technology outside of the healthcare sector, so if we're going to direct-to-consumer technologies where there's no doctor or nurse involved, it's just the patient and the technology company, then clearly that introduces different kinds of risks. And communicating that with the public, I think, is really important, because what we're effectively doing is taking one of the players out, taking the healthcare professional out of the system in that particular way, and leaving a bit of a void.


And so I think it's important that patients understand any responsibilities they're now taking on which they didn't have before, but also that health technology companies understand what responsibilities they're taking on which maybe they didn't have before. So yeah, difficult questions. I don't think we have an answer yet. This will definitely need thinking about properly over the years that follow. I think having a proper public discourse around AI, and public use of large language models, is really important.


Shubhanan Upadhyay (44:32)
Yeah, and I think what you've laid out there in terms of like public perceptions and trust is going to be such an important driver for good adoption, but


like you said, there are so many considerations needed and we don't have the answers yet, but it's really, really important how you've laid that out. What do you think we're getting wrong? Do you still see anti-patterns that we really need to get right and do better on?


Joseph Alderman (44:56)
Look, one big thing that frequently annoys me about the sector is solving the easy things or fixing problems that aren't really problems.


So I think those are fundamental issues. Look, not every technology that comes about comes about because of a need. Sometimes a truly transformational technology just arrives and changes the world. No one needed ChatGPT before it was developed, and now everyone needs ChatGPT. So I'll caveat my answer by saying that sometimes if you build a thing that's good enough, then the need will arise. But I think it's fair to say that most of the world's best products have come about to solve something that really matters to people. And so I think unless you think you're the next Sam Altman


then probably what you're trying to do is fix a problem that healthcare professionals are experiencing or patients are experiencing. And it's often tempting to reach for the easy, or to say, we have this dataset, what can we do with it? Rather than saying: fundamentally, what are we trying to achieve in the real world? What pain points are we trying to fix? What system are we trying to make more efficient? How are we planning to make an organisation save money or save time? And if you don't really have that ability to think through what the goal is and then work


from right to left, back through the steps to try and fix that. If you don't understand the system you're developing for, if you don't understand the people that interact with that system, then it's likely that your quote unquote solution will be off piste. And there are so many examples, as a health professional, where I'm forced to use bits of software which just don't work for the workflows that we have as professionals, because they just don't understand the way that the system is. You know, having to put a password in 10 times


during the course of a ward round, when you want to see 30 patients, is infuriating, when the alternative, back when I first started training, was a piece of A4 paper. A piece of A4 paper is not very good for digitalisation, but it's extremely good as a blank template for writing my notes down. So, yeah: better understand the nature of the problem and understand the system you're developing for, but also don't think that you can pay lip service to this by speaking to one doctor or maybe one patient,


because that's not enough. I see so many examples of, you know, 'but we spoke to a cardiologist and they said that...'. Well, every physician will have their own perspective on this. And truly understanding a system is much more than having one perspective. It's deeply understanding it. So meaningful engagement with clinical teams who actually live and breathe this stuff, and meaningful engagement with patients who actually live with these diseases, to understand what solutions will be most pertinent to them. I think that's where the best products will come from.


Shubhanan Upadhyay (47:30)
100%. And there are lots of models for thinking about this; Amazon talk about working backwards, working backwards from the outcomes you want to achieve. So we need to think about that as a society. And, to use the same term in a different way, I think we've gone backwards: you know, 10 years ago we were thinking, think about the problem first, et cetera.


And now we've got this once-in-a-generation opportunity of LLMs, and we're trying to shoehorn it into every task possible, because that's where the money is. And that's not necessarily always developers' fault either. The system and the decision makers and health systems and buyers are like, hey, this is where the money is. If you can give me something with an LLM and it's going to save me money, because I read it in a Forbes article, of course


you know, that's what people are going to respond to. And therefore the incentives of the system are making us think the wrong way. They're making us think from left to right. And so really, yes, I definitely agree with thinking about this from: what do we want the experience of healthcare to look like, what do we want the delivery of healthcare to look like?


I really liked what you said as well about clinicians all being different. Each


trust in the NHS, each workflow, each specialty, has its own way of working. A rural clinician practices differently to an urban clinician. Someone who's practicing in a socially deprived environment or a district hospital is very different to a super-tertiary centre. There are so many different things to understand about a system and a context. And so, yes, I agree that those who will win will actually do the work


to understand that, and then prioritize what's important. And I think some of that should lead to hard truths, like: actually, this problem might not be best solved by an LLM, but by good system redesign, or by something not so tech and sexy. But here are the tasks that


LLMs will really help with; here are the ones that other types of AI, other types of technology, other types of innovation will help with. And I think we need to go back to thinking at that level. So what you said really resonated. Anything else we're getting wrong?


Joseph Alderman (49:49)
I mean, this isn't necessarily a thing that developers are getting wrong, but a thing that maybe society is getting wrong: particularly in the UK, maybe less so in the US, we don't currently have a way of enabling that kind of cross-pollination between clinical expert teams and very enthusiastic and intelligent development teams.


So there's a big gulf when it comes to the development of products, where you have massive technology corporations, pharmaceutical companies, and increasingly big tech companies developing. And they maybe have their own in-house medical teams, but then healthcare is over here doing something else. And actually those two teams aren't necessarily speaking to each other that well. So part of the problem, I think, is that physicians and medical teams are too busy,


and we don't give them enough time or resource or, frankly, freedom to be innovators. And people on the technology side don't have anywhere near enough exposure to how healthcare works, so that kind of development of really nice systems just doesn't happen. My smartphone is a joy to use. It's very easy; a three-year-old could pick it up and immediately get to TikTok videos. You know, they're user-friendly in a way that's kind of terrifying.


Why don't we have that for healthcare? Why are healthcare software packages so awful? Well, it's got to be because there's not enough discussion between those two silos. And look, we're seeing improvements here. There's an increasing amount of innovation funding for clinicians. I only have to browse LinkedIn to see so many people who are 10 years my junior leaving medicine to forge startups. So I think this is getting better, and I'm excited about the future, but I don't think we're there yet.


Joseph Alderman (51:34)
If I had a magic wand and unlimited resources, I'd be pumping money into pump-priming: giving clinical teams access to innovation funding, high-risk money, and letting them experiment and fail and learn and build stuff in protected spaces. Not necessarily for patient care, just literally because they want to experiment and learn, and recognising that that can be good for both the health system and for the economy at scale. You don't get the most use out of people like me


by locking us in clinic rooms and forcing us to work clinically; we can also help on the innovation side. And enabling that expansion of the market, I think, will be a really good thing for society at large.


Shubhanan Upadhyay (52:12)
100%, and you know, there are tools available for clinicians to do that, but there are also ways that clinicians and developers can work more hand-in-hand, and there are lots of innovative ways this happens. In one of my podcast episodes, about a small country called Eswatini, there are two developers on a ward with the nurses, developing an electronic health record and a kind of daily task-managing tool. They're on the wards,


shadowing, seeing how it's being interacted with and where it gets in the way, then going out into the corridor, refactoring the code and the UI, and retesting it. I'm not saying that everyone needs to have that short a feedback loop, but that's what you need. That's what good co-design is, I think. And there are good examples of that, but it's definitely not the incumbent, slow-moving, big


electronic health record systems that are investing in that. It's the new players coming in who see that that's really what drives adoption and change. So definitely, I agree.


Do you have any kind of hot takes or uncommon-sense views around AI or LLMs?


Joseph Alderman (53:24)
Well, I guess a hot take is that where we are now, and where the future might be in the short term, isn't necessarily the same as the long term.


What I'm trying to say is that it's tempting to worry about job replacement and task replacement, and to have very extreme discourse around this. I think what history teaches us is that there's an inevitability to transitions over time, and that we're not going to wake up tomorrow and find that every doctor has been replaced by a large language model or an AI system. But it's quite likely that over decades, or longer,


more and more of the individual tasks currently done by a healthcare professional, whether that's a doctor, a nurse, a physio, or whoever, will be done better by machines in some way. Now that might be a large language model. It might be some other form of emergent AI system. It might be something else that follows on. But I think there's an inevitability to that slow transition away from human-led task delivery. And so the hot take really is that


the healthcare profession needs to move with the times and recognise that this is going to happen, not tomorrow, but at some point. You can either resist that change and go into a protectionist mindset of "only I can do what I do, and therefore my job is to advocate for human-led delivery of healthcare", which I think is a mistake, both because it's inevitably not going to hold and because it will deprive our patients of many of the benefits we talked about today. The alternative, of course, is to realise that


jobs will change, tasks will change, and the nature of work will be different in the future. So yeah, there's an importance in having this discussion, not in a worried, breathless way, but in a pragmatic, how-can-we-plan-for-the-future way. How can we work alongside the machines to deliver healthcare for our patients well? Because ultimately that's what we care about, isn't it?


Shubhanan Upadhyay (55:29)
Perfect, that's so good. Do you have any recommendations for people in low-resource settings in particular, people developing models, finding datasets, training and validating, who are trying to create impact at the last mile?


Joseph Alderman (55:44)
Yeah, good question. One piece of work I'll flag is that some members of our team at the University of Birmingham have been collaborating with an NGO called PATH to explore this exact area. In resource-limited situations, the risk-benefit profile inverts, doesn't it? In an advanced healthcare system where healthcare is already pretty good, to be honest, notwithstanding some of the issues we talked about, you can do great harm by deploying LLMs, because people already have access. If the baseline you're working from is


no one having access to healthcare, or very limited access, or extremely inequitable access, well, even if there's risk to having a level of LLM-enabled healthcare, if the alternative is no healthcare, that is different, I think. So first of all, it's recognising that the risk is different, the benefits are different, the calculus is different overall, but also recognising that the responsibilities aren't different. You still have a responsibility to do the good engineering work, to understand the problem, and to do the evidence generation.


Shubhanan Upadhyay (56:37)
So key. And to do the work to mitigate risks. It doesn't absolve you of the responsibility just because you're in that setting. I really like that. That's really great. Any resources, books,


things that you've read that have shaped your thinking?


Joseph Alderman (56:57)
Yeah, good one. So I've read a couple of books recently that have changed my mind about things. We talked earlier on about quote-unquote solutioneering and the importance of understanding the problem you're trying to fix; there's a really good book on that by Bent Flyvbjerg, I think, I can't pronounce his name, called How Big Things Get Done. It's not AI-specific actually, to be honest. It's a discussion about big projects internationally: the Sydney Opera House,


HS2, Heathrow Terminal 5, and other things. Examples of where it's gone well, examples of where it's not gone well, and what the common themes are around planning. So that's a really good one. The other one that has been really influential recently is The Future of the Professions by Richard Susskind and Daniel Susskind, which is a discussion about what the future of work might look like in the professions, including healthcare.



Joseph Alderman (57:48)
Definitely worth a read.


Shubhanan Upadhyay (57:49)
Amazing, thank you. If you're a patient and you've got all these amazing tools at your disposal, how do you think critically about, A, which ones to use, and B, how best to use them? What types of things should you use them for in healthcare?


Joseph Alderman (58:06)
If I were a patient right now, I would be bewildered by the whole system, and I'd be looking for a bit of external guidance. So look, I talked about the Health Chatbot Users' Guide already. That will be coming this year; we're hoping to launch it by July. So there will be guidance out there. But I guess the main message would be to think very carefully about this, because ChatGPT is great at giving you recipes and helping you think through your next trip to Europe or whatever. But when it comes to your health, there are lots of things that could go wrong.


So it's really understanding where it could be helpful, where it might be harmful, and when actually you should just speak to a healthcare professional. Because the risk of getting your health wrong is very different to the risk of getting a trip wrong.


Shubhanan Upadhyay (58:46)
Final question: how should clinicians think about which tools to use, which of their tasks to outsource, and how to think critically about this?


Joseph Alderman (58:56)
Yeah, so clinicians need to understand that healthcare is regulated. You can't just pick up ChatGPT, start asking a patient questions, and put identifiable information into it. The AI tools we use for healthcare will be medical devices, so you should be using ones that are appropriately regulated. But ultimately, anything you do on behalf of your patients, you're responsible for. So whilst you might share a bit of liability with a company that provides a service,


if you use that service incorrectly or you give bad advice to your patients, then you're the one who's responsible for that. So it's about understanding your obligations. And if you don't understand what you're doing, then it's time to upskill and learn. There are some great resources out there. For those training in the UK, there's the NHS Fellowship in Clinical AI, and there are all sorts of online resources, including clinical safety officer training. So definitely worth checking those out if you don't already know about them.


Shubhanan Upadhyay (59:46)
Super. Joe, this has been such an insightful discussion. We started really philosophical, then went into how we think about the front line of developing models and how that impacts the front line of delivering good care. We got into some of the anti-patterns. We talked about the Standing Together initiative and how we avoid health data poverty. Really not an easy problem to solve; it's a whole-society problem.


But you've given us some important things: as a developer, as someone who's developing a model, some important questions you can ask yourself, and a mindset you can have in terms of being proactive, not thinking of it as a state of perfection you have to reach at the beginning, but as milestones to make sure you're thinking about unintended consequences for underserved populations, who are inevitably going to be impacted by this. So I think that was really, really helpful.


We've talked a little about regulation and safety and the impact of this. We talked about the struggles and challenges a patient, or any person, will have with the plethora of tools that are out there and how they need to think about this. And yes, some of the things that we need to do better as an industry.


Joe, how insightful to talk to you. We'll share the links to the resources that you've talked about as well. And thank you for sharing your experience and your deep passion for this topic.


Joseph Alderman (1:01:09)
Thanks, Shubs.