Our technology

We’ve made great progress

We have a long way to go to reach our mission, but we’re incredibly proud of our achievements in AI so far:

  • We’re proving the credibility of AI in primary care

    In early pilot studies, carried out in association with Stanford and Yale, our AI was compared against seven highly experienced primary care doctors on 100 independently devised symptom sets (or vignettes). In these specific tests, our AI scored 80% for accuracy, while the seven doctors achieved an accuracy range of 64-94%. We’re currently preparing a number of studies to evaluate the impact of our AI in real-world settings.

  • We’ve built an AI for medicine that is not just a ‘black box’

    Our probabilistic graphical model is fully explainable, meaning humans can understand how a conclusion was deduced; this is also referred to as being interpretable or transparent (a minimal sketch of this kind of inspectable inference follows this list). Because of this, the AI can continue to develop behind the scenes whilst each release that reaches the public through the app goes out only once our clinicians have approved the specific changes. We’re also constantly improving our privacy-preserving techniques.

  • We’re deploying AI in healthcare at scale

    Our models reach millions of people in multiple countries across the globe through our app. We believe our AI has the largest reach of its kind.

  • We’ve pushed what natural language processing can do.

    We have published numerous pieces of peer-reviewed research on natural language processing. Our AI has been built to understand medical terms and data, so it can gather information from medical datasets. It can also read and learn from patient health records, including the consultation notes made by our clinicians in the different countries where we work.
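
To make ‘explainable’ concrete, here is a minimal, purely illustrative sketch (nothing like our production model, and with hypothetical placeholder numbers) of how even a two-node Bayesian network can answer a question whilst exposing every step of its reasoning:

    # A two-node network, Disease -> Symptom, with illustrative numbers only.
    P_DISEASE = 0.01               # prior: P(disease)
    P_SYMPTOM_IF_DISEASE = 0.90    # likelihood: P(symptom | disease)
    P_SYMPTOM_IF_HEALTHY = 0.05    # likelihood: P(symptom | no disease)

    def posterior_disease_given_symptom() -> float:
        """Apply Bayes' rule, printing every intermediate quantity so a
        human reviewer can audit exactly how the conclusion was reached."""
        joint_disease = P_SYMPTOM_IF_DISEASE * P_DISEASE
        joint_healthy = P_SYMPTOM_IF_HEALTHY * (1.0 - P_DISEASE)
        evidence = joint_disease + joint_healthy   # P(symptom)
        posterior = joint_disease / evidence
        print(f"P(symptom, disease)    = {joint_disease:.4f}")
        print(f"P(symptom, no disease) = {joint_healthy:.4f}")
        print(f"P(symptom)             = {evidence:.4f}")
        print(f"P(disease | symptom)   = {posterior:.4f}")
        return posterior

    posterior_disease_given_symptom()

Every number in the chain of reasoning is explicit and inspectable, which is precisely the property that lets clinicians review and approve the specific changes in each release.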

There’s much more to come

It’s fantastic to see progress but it’s also important to recognise that we’ve got so much more to do. One of the things we’re most excited about is that we’re pushing a whole new area of AI:

Causality

Machines have been programmed to reason by associating a potential cause with a set of conditions. As Judea Pearl, Turing Award winner and professor of computer science at UCLA, puts it, the current common practice in AI “amounts to curve fitting.”

Machines don’t actually know much about the causal relationship between variables. Put another way, they have the ability to associate fever and malaria, but not the capacity to reason that malaria causes fever. Once this kind of causal framework is in place, it becomes possible for machines to ask counterfactual questions — to ask how the causal relationships would change given some kind of intervention. This is a whole new area of AI, and its implications are colossal. For further reading on Deep Learning Models with Constrained Adversarial Examples.

We’re part of the community

We contribute to the AI community by publishing papers, speaking at conferences and open-sourcing some of our work for the benefit of all.