Q&A with OIA’s Founder: Complexity Frontiers

A causal network with hundreds of multilayer cause-and-effect chains underlying every aspect of disease and intervention

We thought it would be useful to provide a summary of the opening keynote presentation that our founder and CEO, Dr. Hector Zenil, was invited to give at the Complexity Frontiers event. In his talk, he discussed the potential for an approach to Artificial General Intelligence (AGI) to be used in a controlled, transparent way to help reveal disease signatures from longitudinal data, in particular enriched blood data. Dr. Zenil explained how OIA’s research in causal discovery based on symbolic computational intelligence offers an opportunity to produce and generalise models and to explain each step of critical decision pathways. This complements the work of healthcare professionals by offering a self-improving solution that, over time, is able to reason in a way similar to a medical expert.

Given how much discussion there is at the moment around the future of AI and what is required to deliver more sophisticated autonomous applications, this summary is designed to provide an overview of some of the principles behind OIA inherited from our founder’s vision. In this question-and-answer session, he explains why he agrees that there is far too much adherence to the belief that deep learning (DL), as originally put forward, can deliver the ultimate goal of AGI. He suggests that DL models traditionally ignore much of the complexity of human behaviour and the fundamental higher processes of the human mind, such as model abstraction and generalisation, that make the human mind intelligent. This calls for different approaches to AI, such as mechanistic model generation and causal deconvolution from first principles, which Hector has been researching for a number of years. This is particularly relevant when applying AI to fields such as biology and medicine, where clinicians must have faith that algorithms are making justified decisions from the available information using the chosen method, rather than just relying on crunching vast amounts of data. To replicate human intelligence, Hector contends that having a more transparent model which can cope with the complexity of the real world is more important than having access to ever more data.

1.     Why are you using your research into causal discovery to identify disease signatures in longitudinal blood analysis?

Because of its dynamic nature and importance, blood carries most of the information about an individual’s health. The bloodstream is basically the highway of information in and out of the human body. About 70% of all medical decisions for diagnosis are based on laboratory blood results. Blood carries the messages and main indicators of the immune system, the first responder and ultimate guardian of a person’s health, and so it holds the key to understanding the transitions between different health states, in particular when the body moves from health to disease.

Understanding these transitions and building a mechanistic view of the first principles of health over time is key to transforming medicine and finding solutions to preventable diseases. We want to transform the current system from taking care of the sick to keeping people healthy. We can see this in drug discovery and drug availability: drugs are for sick people because we understand health very poorly. We don’t think of drugs for healthy people.

2.     How are you convincing clinicians and patients to believe in the potential of AI?

Most misdiagnoses come from symptom confounding, as human intelligence has a hard time connecting symptoms to root causes. And, as mentioned above, about 70% of those decisions are based on laboratory blood results. Therefore, if AI is to be used to maximise its impact, it should be connected to the blood testing pathway, but it must also be accountable and mechanistic. Mechanistic means that, in principle, a doctor would be able to understand the process step by step if they wanted to inspect it, as opposed to most black-box deep learning approaches followed today. At OIA we are convinced this can be achieved by making AI relatable and not a stranger to human reasoning. We are also convinced that, although such approaches to AGI may fall short, they are more open-ended and powerful than current deep learning approaches, which means that ultimately they will have a better chance of driving healthcare automation. A clinician or patient must be able to follow the partial chains of reasoning behind the algorithm, or have access to the model explaining a causal connection, as human experts’ minds do; we explained this in a paper published in the journal Nature Machine Intelligence and in the video that Nature produced to explain our research.
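As a purely illustrative sketch of what "inspectable, step-by-step reasoning" can mean in practice, the toy example below applies a handful of invented rules over hypothetical blood markers and records every inference it makes. The marker names, thresholds and rules are made up for illustration only; they are not OIA’s Algocyte models and not clinical guidance.

```python
# Toy, fully inspectable reasoning chain over hypothetical blood markers.
# All marker names, thresholds and rules are invented for illustration.

RULES = [
    # (name, condition on findings so far, conclusion added if it fires)
    ("R1", lambda f: f["crp_mg_l"] > 10, "inflammation_suspected"),
    ("R2", lambda f: f["neutrophils_x10e9_l"] > 7.5, "neutrophilia"),
    ("R3", lambda f: "inflammation_suspected" in f["flags"]
                     and "neutrophilia" in f["flags"], "possible_bacterial_infection"),
]

def explainable_assessment(sample):
    """Apply rules in order, recording every step so it can be audited."""
    findings = {**sample, "flags": set()}
    trace = []
    for name, condition, conclusion in RULES:
        if condition(findings):
            findings["flags"].add(conclusion)
            trace.append(f"{name}: fired -> {conclusion}")
        else:
            trace.append(f"{name}: did not fire")
    return findings["flags"], trace

flags, trace = explainable_assessment({"crp_mg_l": 24.0, "neutrophils_x10e9_l": 9.1})
print("Conclusions:", flags)
print("Reasoning trace:")
print("\n".join(trace))
```

The point of such a trace is that a clinician can see exactly which observation triggered which conclusion, rather than being handed a score with no account of how it was reached.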

We want AI to shed light while operating invisibly in the background, and to achieve that in healthcare, AI must strive to approach human-expert levels of reasoning. This cannot be achieved with current AI approaches, given the infinite potential variations and, more importantly, the special cases of symptoms and conditions that may contribute to understanding and therefore diagnosing a disease. The most successful application of AI in healthcare will be one that does not have to disclose it is AI, because clinicians and patients will focus only on better and faster outcomes or on maintaining health, confident that they can access the technology to understand how decisions may have been taken.

3.     What benefits do you think it could deliver to clinicians and patients?

The world is not training healthcare professionals at the rate it requires, and staff shortages are impacting healthcare providers worldwide. In the UK alone, there were 4,029 full-time equivalent (FTE) district nurses in the NHS in July 2022. These are the nurses sent into the community to collect blood samples and care for people who are unable to travel (e.g. after surgery, severely immunocompromised, or with limited mobility). This is a third fewer than the 6,101 FTE district nurses a decade before, and there has been a drop of 1% in their number since July 2017. While community nurses are central to the NHS’s plans to provide out-of-hospital care, their numbers were reported to be falling as early as 2016, and this will only become worse if technology does not fill the gaps.

The AI revolution in healthcare will bring faster and better diagnosis through early detection and better health management. By understanding health better, we will be able to prevent a person from transitioning to unwell states and keep them in their optimal form. On top of saving lives and preventing suffering, the system and taxpayers will save billions of pounds as national health systems become preventive instead of reactive. Our technology, Algocyte, will enable NHS district nurses to do much more and to get results on the spot, rather than nurses and doctors having to wait hours or days even in urgent cases. The NHS will save millions of pounds by deploying an Algocyte fleet to support nurses in the community, and billions when we turn sick care into health care with automation tools and super-human intelligence to prevent illness.

4.     What is the difference between your approach and deep learning?

Deep learning as traditionally introduced and used is not accountable and is prone to simple mistakes that human minds, reasoning from cause-and-effect models, would never make. Our approach uses deep learning only when it does not obscure a fundamental explanation: for example, in representational tasks, to capture an object in a computable numerical fashion, but not to take a decision that cannot be completely accounted for, unlike a mechanistic model, which is able to explain the chain of cause and effect. We recently surveyed some of these approaches in a paper published in the journal Entropy and, in the context of cancer dynamics, in another paper published in the journal Frontiers in Oncology. Deep learning is a great statistical method for dealing with large combinatorial variance in data; it always comes back with an answer and does not crash, even if that answer may be wrong. But it is not equipped, or has found it very difficult, to drive innovation in more critical areas of human activity.
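Here is a minimal sketch of that division of labour, with all names, reference values and thresholds invented for the example rather than taken from OIA’s systems: a stand-in "learned" encoder produces a numerical representation, while the decision itself is taken by a transparent layer whose every step can be written out.

```python
# Illustrative only: a learned component is confined to representation,
# while the decision is made by a transparent, auditable layer.

def learned_encoder(raw_measurements):
    """Stand-in for a learned representational model (e.g. a neural embedding).
    Here it just normalises raw values against invented reference points; in
    practice this is the only place a black-box component would be allowed."""
    reference = {"wbc": 7.0, "crp": 5.0}
    return {k: raw_measurements[k] / reference[k] for k in reference}

def mechanistic_decision(representation):
    """Transparent decision layer: each comparison is recorded, so the chain
    from input to conclusion can be audited step by step."""
    steps = []
    elevated = [k for k, v in representation.items() if v > 1.5]
    steps.append(f"markers elevated beyond 1.5x reference: {elevated or 'none'}")
    decision = "refer for review" if elevated else "no action"
    steps.append(f"decision: {decision}")
    return decision, steps

rep = learned_encoder({"wbc": 12.3, "crp": 18.0})
decision, steps = mechanistic_decision(rep)
print(decision)
print("\n".join(steps))
```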

5.     There is much debate currently between leading researchers about the capabilities of deep learning to deliver artificial general intelligence – what are the limitations of Deep Learning models?

Deep Learning (DL), as it was originally introduced, is a combinatorial approach that captures a large number of features from the data, but often not the right features. When it goes right, it can be helpful and very powerful in problems of classification, but it is useless for gaining a deeper understanding of an object of study unless symbolic reasoning, in the form of human or machine intelligence, is involved. New approaches to DL (it is debatable whether they can continue to be categorised as DL) involve layers of complexity that take advantage of more symbolic approaches, which are better able to deal with dynamic physical models rather than just mathematical objects seeking to maximise some variable. Deep learning researchers used to think, for example, that the differentiability of the learning space was fundamental, but we have shown that it is not in a paper we published in the journal Frontiers in Artificial Intelligence. So DL researchers have been updating their beliefs after realising these limitations.
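To make the non-differentiable alternative concrete, here is a purely illustrative sketch, not drawn from the paper cited above: candidate models are small symbolic expressions searched discretely and scored by exact fit and description length, with no gradient or differentiable loss surface involved. The observations and candidate rules are invented for the example.

```python
# Model search without gradients: enumerate small symbolic candidates and
# keep the shortest one consistent with the data.

from itertools import product

# Hypothetical observations of a target function on a few inputs.
observations = {0: 1, 1: 2, 2: 5, 3: 10}   # secretly x**2 + 1

# A discrete space of candidate symbolic models.
candidates = [
    ("x + {a}",    lambda x, a: x + a),
    ("{a} * x",    lambda x, a: a * x),
    ("x**2 + {a}", lambda x, a: x ** 2 + a),
]

best = None
for (label, f), a in product(candidates, range(-3, 4)):
    if all(f(x, a) == y for x, y in observations.items()):
        model = label.format(a=a)
        # Prefer the shortest consistent description (a crude simplicity bias).
        if best is None or len(model) < len(best):
            best = model

print("recovered model:", best)   # -> x**2 + 1
```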

6.     Why do you believe OIA’s approach can be more applicable in healthcare settings?

Because it is able to do causal discovery that reflects the complexity of the diagnostic pathway. It can produce explainable mechanistic models and is based on the best theory of general intelligence. We have proven its applicability and power in dozens of papers and in industrial applications in areas such as time series prediction and the reconstruction of causal explanations from observations of complex dynamical systems. In a chapter published in the Springer Nature book Cancer, Complexity, Computation, we explain our Causal Diagnostics approach, taking into account all the features of model accountability that we believe will drive the future of medicine.
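As a toy illustration of what reconstructing a mechanism from observations can look like (not OIA’s actual pipeline, and with the series and rule family invented for the example), the sketch below exhaustively tests small linear recurrences against an observed sequence and keeps the ones that reproduce it exactly.

```python
# Reconstruct a candidate generating mechanism x[t] = a*x[t-1] + b from an
# observed time series by exhaustive search over small integer coefficients.

observed = [1, 3, 7, 15, 31]   # hypothetical measurements over time

def fits(a, b, series):
    return all(series[t] == a * series[t - 1] + b for t in range(1, len(series)))

mechanisms = [(a, b) for a in range(-5, 6) for b in range(-5, 6) if fits(a, b, observed)]
print("candidate mechanisms (a, b):", mechanisms)       # [(2, 1)]

# The recovered rule can be run forward as a prediction and inspected directly,
# rather than being hidden inside a trained black box.
a, b = mechanisms[0]
print("next predicted value:", a * observed[-1] + b)    # 63
```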
