Researchers aim to predict cardiac events with AI technique used to analyze earthquakes
Sebastian Goodfellow, an assistant professor in the University of Toronto's department of civil and mineral engineering, and his team have partnered with researchers at The Hospital for Sick Children (SickKids) to help detect and diagnose heart arrhythmias.
The project, supported by a grant from the Canadian Institutes of Health Research, aims to leverage techniques Goodfellow and his colleagues developed in their previous work. While the goal in those projects was to learn how to recognize the signals that precede seismic events, such as earthquakes, the new project will focus on a different kind of data: that generated by electrocardiograms, or ECGs.
Machine learning techniques developed in the context of geology could be adapted to pick up on the warning signals that precede cardiac events, such as arrhythmias, which affect roughly 700 critically ill children at SickKids each year. Goodfellow and his team are collaborating with Dr. Mjaye Mazwi, a staff physician in the department of critical care medicine at SickKids and an associate professor in U of T's Temerty Faculty of Medicine, and the team at Laussen Labs.
Writer Phill Snell sat down with Goodfellow to talk about the project.
How did you get involved in this project?
I joined Laussen Labs in 2017 to bring my signal-processing expertise to the group. My PhD research focused on applied seismology, which is the study of seismic waves generated by engineering processes such as mining.
At the time, Laussen Labs had just started acquiring physiologic waveform data, such as ECGs, which record the electrical activity of the heart. Analyzing and modelling high-frequency time-series data requires a skill set called digital signal processing, which I acquired analyzing earthquake seismograms during my PhD and, later, in the private sector.
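As a rough illustration of the kind of digital signal processing involved, the sketch below band-pass filters a raw ECG trace with SciPy before any further analysis. The sampling rate and cutoff frequencies are illustrative assumptions, not values from the SickKids pipeline.

```python
# Minimal DSP sketch: remove baseline wander and high-frequency noise from an
# ECG trace. All parameters below are assumptions chosen for illustration.
import numpy as np
from scipy.signal import butter, sosfiltfilt

FS = 250.0  # assumed ECG sampling rate in Hz


def bandpass_ecg(raw_ecg: np.ndarray, low_hz: float = 0.5, high_hz: float = 40.0) -> np.ndarray:
    """Band-pass filter a 1-D ECG signal between low_hz and high_hz."""
    sos = butter(4, [low_hz, high_hz], btype="bandpass", fs=FS, output="sos")
    return sosfiltfilt(sos, raw_ecg)


# Example: filter ten seconds of synthetic noisy data.
t = np.arange(0, 10, 1.0 / FS)
noisy = np.sin(2 * np.pi * 1.2 * t) + 0.3 * np.random.randn(t.size)
clean = bandpass_ecg(noisy)
```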
Is it unusual to see this kind of collaboration between mineral engineering and medicine?
It's more common than you may think. Many of the important problems of today and tomorrow spill across borders, cultural divides and fields of knowledge.
For example, Laussen Labs developed a bespoke time-series database for the storage of physiologic waveform data at SickKids. The lead database architect was a hydrologist by training whose previous experience was developing a database for the storage of drone photography for a flood plain mapping application.
Over time, the gap between AI in mineral engineering and AI in health care has become smaller and smaller for me. Beyond publishing proof-of-concept studies in academic journals, deploying AI models in the real world is very hard and the challenges span mineral engineering, health care, and beyond.
Can you describe the new project in more detail?
We are building and deploying a model that detects and diagnoses common pediatric heart arrhythmias using continuous ECG data. Currently, this is a task staff physicians in the ICU can do very well.
The challenge is that there are only two staff physicians on duty at any given time to service 42 ICU beds, and detecting and diagnosing heart arrhythmias is just a small part of their job. As a result, these arrhythmias often go undiagnosed for a period of time, and the longer the delay, the worse the outcome for the patient.
The idea is to use our expert clinicians to train an AI, which can match their performance and monitor all ICU beds 24 hours a day, seven days a week, looking for arrhythmias.
This animation shows an ECG signal transitioning from a normal rhythm to an arrhythmia. In the top right corner is the model score for a particular pediatric arrhythmia called Junctional Ectopic Tachycardia (JET). When the signal transitions, you can see the model score increase.
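Below is a minimal sketch of how continuous, window-by-window scoring of an ECG stream might look in code. The `score_window` function is a hypothetical stand-in for the trained model, and the sampling rate, window length and alert threshold are assumptions for illustration only.

```python
# Sketch of sliding-window scoring of a continuous ECG stream.
# score_window is a placeholder for a trained arrhythmia classifier.
import numpy as np

FS = 250             # assumed sampling rate in Hz
WINDOW_SEC = 10      # assumed window length fed to the model
JET_THRESHOLD = 0.8  # assumed alert threshold on the model score


def score_window(ecg_window: np.ndarray) -> float:
    """Placeholder: return a JET probability in [0, 1] for one ECG window."""
    # In practice this would be a classifier trained on clinician-labelled ECG.
    return float(np.clip(np.std(ecg_window), 0.0, 1.0))


def monitor(ecg_stream: np.ndarray, step_sec: float = 1.0):
    """Slide a fixed-length window over the stream and flag high-scoring windows."""
    window_len = int(WINDOW_SEC * FS)
    step = int(step_sec * FS)
    for start in range(0, len(ecg_stream) - window_len + 1, step):
        score = score_window(ecg_stream[start:start + window_len])
        if score >= JET_THRESHOLD:
            yield start / FS, score  # time in seconds and the model score


# Example run on one minute of synthetic data.
for t_sec, score in monitor(np.random.randn(60 * FS)):
    print(f"possible JET at t={t_sec:.0f}s, score={score:.2f}")
```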
What are the key challenges to developing such a model?
The algorithms are actually pretty straightforward; the challenge is what we call the "translation gap."
You can find thousands of academic papers on AI, machine learning and deep learning applied to health care. However, if you dig a little deeper to see how many of those AI models actually made it to clinical deployment, it's less than 0.1 per cent. We made the decision to keep our model simple, so we could focus on translation.
The translation gap is a result of multiple factors. These include the difficulty of building computational infrastructure that can reliably ingest data for real-time classification, the need for a production-grade machine learning operations (MLOps) platform to serve, monitor and re-train AI models, the regulatory challenges of integrating AI models into clinical settings, and concerns about responsible validation and bias, sometimes described as "algorithmic fairness."
The team that can close this gap must include a wide range of expertise, including bio-ethics, MLOps, law, cloud computing, software development, human factors, cognitive psychology, digital signal processing and machine learning.
Can you talk about your experience in AI?
Before joining U of T, I was the AI lead at a startup in the mining industry called KORE Geosystems. We developed an AI product that automated various parts of geotechnical and geological core-logging workflows, such as rock type classification and fracture counting.
In this role, I had to deploy AI models that geoscientists relied on to do their work. I was able to bring this experience to Laussen Labs, where they were running up against similar challenges.
When you're building products, you're forced to start from the business requirements and work backwards to the technical solution. Because products are built for users, it's no surprise that this is the preferred approach.
Do you think people will be reluctant to rely on a machine rather than human experience to predict arrhythmia?
Trust is always a challenge when introducing any new technology into an established workflow, and AI is no exception. It is imperative to think about your AI model as a product from the very start, which means involving the end users, in this case doctors, in documenting requirements and ultimately building trust.
It also matters how we present the performance of a model to the end user. We need to use metrics that map to clinical key performance indicators, and we need to present those metrics in a transparent manner over long periods.
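As a rough sketch of what reporting tied to clinical key performance indicators could look like, the example below computes sensitivity, positive predictive value and false alarms per monitored day. The counts are hypothetical placeholders, not results from the SickKids project.

```python
# Sketch of clinically oriented performance reporting from detection counts.
# All input numbers in the example call are illustrative assumptions.
def clinical_kpis(true_pos: int, false_pos: int, false_neg: int, monitored_days: float) -> dict:
    sensitivity = true_pos / (true_pos + false_neg) if (true_pos + false_neg) else 0.0
    ppv = true_pos / (true_pos + false_pos) if (true_pos + false_pos) else 0.0
    return {
        "sensitivity": round(sensitivity, 3),                       # share of real arrhythmias detected
        "ppv": round(ppv, 3),                                       # share of alerts that were real
        "false_alarms_per_day": round(false_pos / monitored_days, 2),
    }


print(clinical_kpis(true_pos=45, false_pos=5, false_neg=3, monitored_days=30))
```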
Most people have no clue how a plane achieves flight or how a jet engine works, but they feel safe flying. The reason is that there is a one-in-20-million chance of dying in a commercial airline crash. So an arrhythmia model that consistently performs at the level of a board-certified cardiologist will build trust.