Want to build a personal AI doctor? Crack these 5 data challenges first

At Babylon, we are building an artificial intelligence (AI) platform that will enable everyone, everywhere to have the benefit of a personal doctor in the palm of their hands. We don’t want to stop at helping the sick. We want to give people the whole Circle of Care: from triage to treatment, prevention to long-term health management. It’s an enormous ambition - the greatest in global health - and to achieve it we have to solve some of the toughest data challenges in AI. In this blog series, we explore five of them.


Introduction

Babylon is using AI to transform the way the world handles healthcare. In a world where 50% of the population doesn’t have access to basic healthcare, we aim to bring quality healthcare to everyone, everywhere, at any time.

Whether you’re checking a symptom you’re worried about, managing your blood glucose, keeping an eye on your daily step count or changing your diet to reduce the risk of heart disease - our aim is to let you do it all through one app. We are designing an AI to give personalised support on demand, covering sickcare and healthcare, urgent needs and continuous support.

This is our vision and we call it the Circle of Care.

Diagram 1: The Babylon Circle of Care, which covers our AI services, virtual services and physical services

Building a personal AI doctor that provides the whole Circle of Care is no mean feat. Where do we even begin?

We begin by getting to know each person we serve (we call them members) as a whole. That means we need to gather all the health data that we can about our members - always for the purposes of improving their care, with their permission and within regulatory boundaries of course.

Gathering the data is the easy part. Doing something useful with it is a lot harder, and we’ve encountered all kinds of problems along the way. Some will be familiar to other data-driven companies, while others are more exotic and stretch the boundaries of modern AI.

Here are five major data challenges we realised we had to crack before we could stand a chance of successfully building a personal AI doctor:

  1. How do we decide what data is relevant?
  2. What can we do about members purposefully or unknowingly giving us inaccurate information about themselves?
  3. How do we get a holistic view of our members’ health when not all their healthcare happens through Babylon?
  4. How do we extract meaning from free-form, unstructured medical text?
  5. How do we make sure that data generated by any Babylon service can be understood by everyone who needs to understand it?

Stay tuned as we explore these questions in turn and share the lessons we learnt as we strove to answer them. First up...


Challenge #1: How do we decide what data is relevant?

This is part one of a five-part blog series exploring some of the challenges we’ve been facing while building a personal AI doctor.

When we say our ambition is to give our members the whole Circle of Care, what does it really involve?

It involves making a series of evidence-based decisions that result in a diagnosis, prescription, suggestion to go to hospital, advice on how to reduce disease risk etc. When humans make evidence-based decisions, we may have a lot of evidence to hand, but we don’t use all that we know, all the time.

Let’s take a simple example we can all relate to: you’re about to head outside and are deciding whether to bring an umbrella with you. You’ll probably take a quick look at the weather forecast - if it’s forecast to rain soon, you bring an umbrella. You may not even notice that you’re filtering out a lot of other information as you’re making the decision - the humidity level, wind direction, UV index, to name but a few! In fact, displays that are crowded with all this other information can make it harder for you to find the information you need and slow down your decision making.

Similarly, regurgitating absolutely everything we know about our members to our doctors (whether human or AI) would overwhelm and slow them down. That’s why we choose to focus only on relevant data.

Problem: How do we decide what data is relevant?

There are many definitions out there for “relevance” but one that makes a lot of sense in the context of building products is:

"Something (A) is relevant to a task (T) if it increases the likelihood of accomplishing the goal (G), which is implied by T." - Hjørland & Sejer Christensen, 2002

What we can take from this is: whether or not something is relevant depends on the goal we’re trying to achieve.

And at Babylon, there are many different goals we’re trying to achieve through many different products.

As an example, one goal during consultations is to support our doctors to make safe, effective clinical decisions. They tell us they need to know their patient’s symptoms, conditions, disease risk factors, medications and allergies - so we ensure they have all this.

Our Healthcheck product has a different set of goals. For example, Healthcheck wants to let people compare how healthy they are against their peer group. To support this goal, we supply Healthcheck with data on our members’ lifestyles, disease risk factors and demographics.


LESSON 1: Clarify goals first to decide what is relevant.

But just deciding which categories of data to include isn’t enough.

Let’s say you approach a doctor about a splitting headache you’ve got. When the doctor asks follow-up questions they probably won’t be asking about symptoms in your feet - because they only want to gather data that is clinically pertinent to the presenting complaint.

We mirror this logic when building our AI doctor. And to give our members truly joined-up care, we need to know when data from different parts of the Circle of Care are clinically pertinent to the presenting complaint.

Diagram 2: Identifying when historical data-points about a member from all around the Circle of Care are relevant to the condition currently being considered

Another dimension of relevance is temporal validity: how likely it is that a data-point still holds true at the point in time of interest.

Going back to the headache example above: whilst the doctor may care that you had a knock to the head last night, they are unlikely to consider a minor head injury three years ago to be relevant to your current headache.

Similarly, our AI needs to both remember things that happened to you previously and estimate how likely it is that those things are still true.
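As a toy illustration of this idea (not Babylon's actual model), temporal validity could be estimated with a simple exponential decay, where each kind of data-point has its own half-life: a chronic condition stays valid for years, while an acute event like a knock to the head fades within days. The `DataPoint` type, the half-life values and the decay formula below are all assumptions for the sake of the sketch.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta


@dataclass
class DataPoint:
    concept: str           # e.g. "minor head injury"
    recorded_at: datetime
    half_life_days: float  # how quickly this kind of fact goes stale


def temporal_validity(point: DataPoint, now: datetime) -> float:
    """Estimate how likely a historical data-point is still true.

    Exponential decay: the estimate halves every `half_life_days`.
    """
    age_days = (now - point.recorded_at).total_seconds() / 86400
    return 0.5 ** (age_days / point.half_life_days)


now = datetime(2020, 1, 1)
last_night = DataPoint("knock to the head", now - timedelta(days=1), half_life_days=7)
years_ago = DataPoint("knock to the head", now - timedelta(days=3 * 365), half_life_days=7)

print(temporal_validity(last_night, now))  # ~0.91: probably still matters
print(temporal_validity(years_ago, now))   # vanishingly small: safe to de-prioritise
```

In a real system the decay curve would itself be clinically informed (and many facts, like blood type, should never decay at all), but the shape of the problem is the same: remember everything, then weight it by how likely it is to still be true.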


LESSON 2: Relevance is a function of the interaction between clinical pertinence and temporal validity.

Diagram 3: Concepts must be both highly clinically pertinent and highly temporally valid to be considered highly relevant

So we’ve narrowed down the field of view with clinical relevance and temporal validity to provide focus and efficiency for both our doctors and AI.

But we need to be careful with where we draw the line. If we narrow it down too much, we risk accidentally omitting information that could be key to their decision-making process. Clinical safety is our top priority, so we prefer to include too much data than too little.


LESSON 3: Clinical safety must not be compromised by relevance measures.
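Putting the three lessons together: relevance combines clinical pertinence with temporal validity, but the filter must never hide safety-critical information. The scoring function, the product rule and the threshold below are purely illustrative assumptions, not a description of Babylon's implementation.

```python
def is_relevant(clinical_pertinence: float,
                temporal_validity: float,
                safety_critical: bool = False,
                threshold: float = 0.3) -> bool:
    """Decide whether to surface a data-point to a doctor (human or AI).

    A data-point must score well on *both* axes (combined here as a
    simple product) to clear the threshold, except that safety-critical
    concepts such as allergies are always surfaced.
    """
    if safety_critical:
        return True  # never filter out data that could affect clinical safety
    return clinical_pertinence * temporal_validity >= threshold


# A stale, tangential allergy record is still shown...
assert is_relevant(0.2, 0.1, safety_critical=True)
# ...while an old data-point with weak clinical pertinence is filtered out.
assert not is_relevant(0.4, 0.2)
```

Note the asymmetry: the threshold only ever suppresses low-scoring data, and the safety override biases the system towards including too much rather than too little, mirroring the point above about where to draw the line.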

Found this post useful? Stay tuned for the second blog post in this five-part series next week.

Can you help us learn?

If so, get in touch because we’d love to hear from you! Just email ai.blog@babylonhealth.com 

With many thanks to...

AI Engineering: Mohammad Khodadadi, Domenico Corapi, Jonathan Moore - AI Product: Maurizio Morriello, Martin Robbins - AI Research: Jet Shamdasani, Giorgos Stoilos - AI Clinical: Alex Szolnoki - Science PR: Edward Sykes, Amy Palin - Design: Matt Jakeway - Data Trust & Privacy: Cormac West