Babylon statement on the Forbes article

Forbes’ article has fundamentally misrepresented Babylon by giving greater weight to the views of two anonymous individuals than to the findings of multiple government regulators. In creating this narrative, the journalist neglected evidence that did not support their story. They have used tone and inference to build a deeply misleading account of how Babylon operates, to question the integrity of Babylon’s medical staff, and to dismiss the science that Babylon has shared with the world. Babylon is working to help improve the lives of millions of people, and is doing so whilst meeting, and often surpassing, all required regulations.

There are serious flaws in primary care across the globe caused by a drastic shortage of doctors; even in the richest countries, patients are at risk because of lengthy waiting times. Babylon’s aim is not to create AI to replace doctors, but to use technology to empower doctors - helping them make more accurate decisions and reducing waiting times for patients.

Babylon operates to the highest levels of safety, with procedures embedded throughout the company and hundreds of doctors working to ensure that patients always receive the best care. Last month, NHS England put on record that each technology used in Babylon GP at hand ‘meets the standards required by the NHS and has been completed using a robust assessment methodology to a high standard.’

The science done at Babylon is cutting-edge and relies on clinicians, scientists and engineers - our 11 peer-reviewed publications are testament to this. In June 2018 Babylon took the unprecedented step of publishing a study into the effectiveness of Babylon AI. In this study, which was conducted under test conditions, the AI was found to perform with 80% accuracy.

Babylon’s aim is to put accessible and affordable healthcare into the hands of everyone on earth and we will continue in this endeavour.

Key factual inaccuracies in the article:

STATEMENT IN ARTICLE: “Interviews with current and former Babylon staff and outside doctors reveal broad concerns that the company has rushed to deploy software that has not been carefully vetted, then exaggerated its effectiveness”

BABYLON’S RESPONSE: We have provided evidence of our high levels of safety and have satisfied every regulator and independent audit, including NHS England, the Care Quality Commission, NHS Digital and local NHS Clinical Commissioning Groups. All our clients, including some of the world’s best-known corporations such as Samsung and Prudential, have carried out their own extensive due diligence, often employing professional third-party advisors with corporate liability. Last month alone, NHS England put on record that: ‘The DCB 0129 safety cases submitted by Babylon and GP at hand for each of the Babylon technology products used in the GP at hand service have been considered. These are the Artificial Intelligence symptom checker, the Babylon clinical portal, and the Babylon Healthcheck service. Each safety case meets the standards required by the NHS and has been completed using a robust assessment methodology to a high standard.’

STATEMENT IN ARTICLE: “Late one Friday evening in December 2017, a group of worried Babylon Health doctors sat down for a meeting with Ali Parsa”

BABYLON’S RESPONSE: In healthcare everyone has a duty of care, and all Babylon employees are encouraged to raise any concerns. All of Babylon’s clinical staff are independently covered by their professional codes of conduct (e.g., the General Medical Council and Nursing and Midwifery Council), which have their own rules for the escalation of safety issues. If doctors, or anyone else at Babylon, had any concerns, they would be encouraged and duty-bound to raise them, and they would be listened to. Our CEO regularly meets with employees from all sections of the company, including clinicians whose role specifically includes tailoring and overseeing the rollout of new clinical features, but this process is controlled by our Chief Medical Officer and Clinical Safety Officer. No product is ever released until all safety issues have been resolved and it has passed industry-standard safety tests.

STATEMENT IN ARTICLE: “In June, a British doctor who was testing the new diagnostic chatbot on Babylon’s app found an error: It had missed symptoms of a hypothetical pulmonary embolism”

BABYLON’S RESPONSE: This person, who claims to be a doctor but remains anonymous - behaviour that is against General Medical Council regulations - runs hundreds of tests on our AI output and has publicised the few outlier problems they find. We have repeatedly tried to engage with this individual, but they have refused any constructive engagement. As part of our rigorous testing programme, Babylon’s own staff also repeatedly test the system and make post-market safety checks, whilst clinicians review interactions with the chatbot. Any inaccuracies that are discovered are quickly resolved.

STATEMENT IN ARTICLE: “Babylon has yet to publish any of its research in a peer-reviewed medical journal, a process that takes time.”

BABYLON’S RESPONSE: To date, the main focus of Babylon’s efforts has been on developing AI. We have published 11 peer-reviewed pieces of research in AI journals and at AI conferences, as other scientists do. Medical safety is covered by checks from the MHRA, NHS England and other regulatory bodies. Babylon operates fully within the law and would never release anything found to be unsafe.

STATEMENT IN ARTICLE: “Hamish Fraser, a Brown University biomedical informatics professor who disputed Babylon’s assertions [that the AI had achieved equivalent accuracy with human physicians] in a recent article in The Lancet […] points out that Babylon’s software had answered only 15 of the 50 exam problems and was allowed to give three answers to each question.”

BABYLON’S RESPONSE: Babylon’s technology is built for everyday use, rather than for sitting exams. Babylon asked the Royal College of General Practitioners to collaborate on a methodology to enable benchmarking of Babylon’s technologies, but was turned down. Babylon therefore tested the technology on a subset of publicly available questions. Some of these questions do not have definitive answers, and thus accommodations were made in the methodology, which were shared publicly by Babylon. Babylon has never claimed to be better than a doctor. We have a tool that offers answers as accurately as doctors on an ever-growing set of questions, and that accuracy is increasing. As stated in our paper, ‘further studies using larger, real-world cohorts will be required to demonstrate the relative performance of these systems to human doctors’; we are currently in the process of doing this with our academic partners. We intend to publish the findings in a peer-reviewed journal.

STATEMENT IN ARTICLE: Regarding Circle

BABYLON’S RESPONSE: The journalist was informed that “The hospital was in debt when Circle took it over and it was in the bottom 10% of performance rankings. Under Ali Parsa’s leadership the hospital broke even and attained top 10% scores for performance. It went on to win the CHKS quality of care award for performance during Ali Parsa’s tenure, becoming the first district general hospital ever to win this award, normally reserved for top university and academic hospitals.”