Our technology

We’ve made great progress

We have a long way to go to reach our mission, but we’re incredibly proud of our achievements in AI so far:

  • We’re proving the credibility of AI in primary care

    In early pilot studies, in association with Stanford and Yale, our AI was shown to perform on par with a group of GPs in 80% of commonly presenting conditions in primary care medicine. We’re currently preparing a number of studies to evaluate the impact of our AI in real-world settings.

  • We’ve built an AI for medicine that is not just a ‘black box’

    Our probabilistic graphical model is fully explainable, meaning humans can understand how a conclusion was deduced; this property is also referred to as interpretability or transparency (a toy sketch of this kind of traceable inference follows this list). Because every conclusion can be traced, the AI can keep developing behind the scenes while each release that reaches the public through the app goes out only once our clinicians have approved the specific changes. We’re also constantly improving our privacy-preserving techniques.

  • We’re deploying AI in healthcare at scale

    Our probabilistic graphical model reaches millions of people across multiple countries. We believe this is one of the largest deployments of its kind in the world.

  • We’ve pushed what natural language processing can do

    We have published numerous pieces of peer-reviewed research on natural language processing, some of which are listed below. Our AI has been built to understand medical terms and data so it can gather information from medical datasets, but it can also read and learn from patient health records, including the consultation notes made by our clinicians in the different countries where we work (a minimal sketch of how embeddings can compare medical terms follows this list).
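
As a rough illustration of why inference in a probabilistic graphical model can be explained, here is a minimal sketch in Python. The two-variable network and every probability in it are invented for this example; it shows the general technique, not our actual model.

    # Toy Bayesian network: Disease -> Fever. All numbers are invented.
    # Bayes' rule makes every factor behind the conclusion inspectable,
    # which is what "explainable" means for this kind of model.

    priors  = {"flu": 0.05, "cold": 0.20, "neither": 0.75}   # P(disease)
    p_fever = {"flu": 0.90, "cold": 0.40, "neither": 0.02}   # P(fever | disease)

    joint    = {d: p_fever[d] * priors[d] for d in priors}   # P(fever, disease)
    evidence = sum(joint.values())                           # P(fever)

    for d in priors:
        # The full reasoning trace can be printed for a clinician to review:
        print(f"P({d} | fever) = {p_fever[d]:.2f} * {priors[d]:.2f}"
              f" / {evidence:.3f} = {joint[d] / evidence:.3f}")

Because the posterior is assembled from named, human-readable factors, a reviewer can audit exactly why the model ranked one condition above another.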
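
In the same illustrative spirit, the sketch below shows one common way an NLP system can judge whether two medical terms mean similar things: by comparing their embedding vectors. The four-dimensional vectors here are toy values made up for this example; a real system would use embeddings trained on clinical text.

    import math

    # Toy embeddings: invented 4-dimensional vectors standing in for
    # representations learned from clinical text.
    embeddings = {
        "myocardial infarction": [0.9, 0.1, 0.4, 0.0],
        "heart attack":          [0.8, 0.2, 0.5, 0.1],
        "influenza":             [0.1, 0.9, 0.0, 0.6],
    }

    def cosine(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        norms = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
        return dot / norms

    # Synonymous clinical terms should score far higher than unrelated ones:
    print(cosine(embeddings["myocardial infarction"], embeddings["heart attack"]))  # ~0.98
    print(cosine(embeddings["myocardial infarction"], embeddings["influenza"]))     # ~0.17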

There’s much more to come

It’s fantastic to see progress, but it’s also important to recognise that we’ve got so much more to do. One of the things we’re most excited about is that we’re pushing into a whole new area of AI:

Causality

Machines have been programmed to reason by associating a potential cause with a set of conditions. As Judea Pearl, Turing Award winner and professor of computer science at UCLA, puts it, the current common practice in AI “amounts to curve fitting.”

Machines don’t actually know much about the causal relationships between variables. Put another way, they can associate fever with malaria, but they cannot reason that malaria causes fever. Once a causal framework is in place, machines can ask counterfactual questions: how would the rest of the system change if some variable were altered by an intervention? This is a whole new area of AI, and its implications are colossal.
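
To make the distinction concrete, here is a toy structural causal model in Python. The malaria and fever probabilities are invented; the point is only the contrast between conditioning on an observation and forcing a variable with an intervention (Pearl’s do-operator).

    import random
    random.seed(0)

    # Toy structural causal model: malaria causes fever, never the reverse.
    # All probabilities are invented for illustration.
    def sample(do_fever=None):
        malaria = random.random() < 0.01                    # exogenous prior
        if do_fever is None:
            fever = random.random() < (0.9 if malaria else 0.1)
        else:
            fever = do_fever                                # intervention: set fever directly
        return malaria, fever

    def p_malaria_given_fever(n=100_000, **kw):
        draws = [sample(**kw) for _ in range(n)]
        feverish = [m for m, f in draws if f]
        return sum(feverish) / len(feverish)

    # Observing fever raises belief in malaria (association) ...
    print("P(malaria | fever observed) ~", round(p_malaria_given_fever(), 4))
    # ... but forcing a fever does not cause malaria (causation):
    print("P(malaria | do(fever = 1))  ~", round(p_malaria_given_fever(do_fever=True), 4))

The first estimate comes out well above the 1% base rate, while the second sits at it: an associational model cannot tell these two questions apart.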

We’re part of the community

We contribute to the AI community by publishing papers, speaking at conferences and open-sourcing some of our work for the benefit of all.

  • Estimating Mutual Information Between Dense Word Embeddings

    Vitalii Zhelezniak, Aleksandar Savkov, April Shen, Nils Hammerla

  • Hybrid Reasoning Over Large Knowledge Bases Using On-The-Fly Knowledge Extraction

    Giorgos Stoilos, Damir Juric, Szymon Wartak, Claudia Schulz, Mohammad Khodadadi

  • Can Embeddings Adequately Represent Medical Terminology? New Large-Scale Medical Term Similarity Datasets

    Claudia Schulz, Damir Juric