Necessity is the mother of invention.
Fools rush in where angels fear to tread.
The Covid-19 crisis is pitting these two proverbs against each other as well-intentioned people aim to care for the sick as responsibly as possible under dire circumstances. Consider Catholic Health Services of Long Island, which aims to deploy next week a relatively untested artificial intelligence product made by New Jersey-based ElectrifAi (pronounced electrify) to help ease the burden on ER docs. With ElectrifAi's tool, ER physicians at all six CHS hospitals hope to decide whether patients who show Covid-19-like symptoms but haven't been formally diagnosed should be admitted to the hospital, because they are more likely to deteriorate quickly, or can be safely sent home. However, an expert warned against rushing a tool that works well in theory into clinical practice, even in an emergency.
The desire for this capability stems largely from the fact that test results can sometimes take several days, said Dr. Craig Sherman, service line director of Neuroradiology at Catholic Health Services. Thus, when patients come to the ER, doctors have to decide quickly whether to admit them or send them home with instructions to return if symptoms worsen.
“We still do not have an onsite, point of care test that can give the doctor information back in 10 min or 15 min,” Dr. Sherman said in a phone interview on March 27. “They claim they are out there and being deployed but so far our system doesn’t have it.”
[On March 31, Abbott announced the distribution of its ID Now test that delivers results within 5-15 minutes. However, only around 5,500 tests are available for the entire country and the shortage of coronavirus testing is old news at this point.]
So, Dr. Sherman’s hope is that with ElectrifAi’s imaging technology and clinical parameters, physicians would be able to make a more informed decision about whom to admit.
“If the AI algorithm can be useful to the ER doctor, if they could do the X-ray, run the algorithm and say based upon our stats, they have a high probability of returning, doctors may, instead of sending patients home, keep the patients for observation and prevent them from worsening,” explained Dr. Sherman, who has a consulting contract with ElectrifAi.
Globally, there are more than a million Covid-19 cases, and it’s conceivable that an AI algorithm could be trained on a few thousand images, though there is no long-term data available. But here’s the catch — ElectrifAi doesn’t use, or even need, thousands of images. In a phone interview, the company’s CEO, Edward Scott, said that its AI model needs only between 150 and 200 X-rays.
“The difference is that a convolutional neural network will use thousands if not tens of thousands of images,” Scott said. “We don’t require that. That’s the difference.”
That is what deeply impressed Dr. Sherman.
“What’s attractive about ElectrifAi’s technology is that it was shocking to me they really only need about 200 X-rays,” he said. “That’s what our aim is — to get this model going.”
But is Dr. Sherman worried about implementing an untested AI algorithm that uses so few images inside ERs in his system in Long Island? He offered this response:
Any diagnostic test — whether established or not established — has false negatives and false positives. So the question is, that this is truly an emergency and if something can work and can help, I think it’s better to develop it and try it out. Listen, if we can save one patient from going home and crashing by keeping them around for observation and being able to nip it in the bud a bit sooner, I think it is worth it. So we are trying everything. People are testing all kinds of drugs and drug therapies and this is noninvasive. This is being done on data that’s already been obtained or going to be obtained. And if we can develop the algorithm, the reality of it is, this may come back next year, there may be other viruses, so if we can start now in these circumstances and deploy it, I personally don’t see the downside.
But how can it be that ElectrifAi needs so few images — be they CT scans or X-rays — to train its algorithm? The company’s LinkedIn post describes the technology as “minimal model” AI that can provide insights based on 200 2D annotated images. However, in the phone interview, Scott displayed impatience when asked about minimal model AI.
“We don’t think about it as minimal model AI the way you are thinking about it,” he countered, explaining that the company’s efforts in the AI space and medical imaging pre-dated Covid-19. “We’re extraordinarily good with feature segmentation and feature extraction and feature vector analysis and somewhere in there is our secret sauce that lets us do this with a minimal number of images.”
Scott said that most hospital imaging data is unstructured in that it isn’t properly annotated. ElectrifAi has been working to automate this annotation process thereby bringing life into what was previously a “frozen data lake” and allow hospitals to be able to query the database. That groundbreaking work in imaging, he said, has paved the way for the Covid-19 effort to see how the technology can be applied for good.
“We think of our advanced segmentation and extraction technology as creating an incredible tool to let medical institutions finally bring life to their decades of stuck or frozen imaging data,” Scott declared.
Following the interview, Scott rebuffed several requests to ascertain more clearly what the technology is. He declined to name hospitals that are current customers. But Scott did provide some clarification through a representative via email.
We’re not using a convolutional neural network (CNN) or deep neural network (DNN) – those are general tools. While they have their place, ElectrifAi is using an entirely proprietary mathematical approach that has been a much better fit for this problem. The results support our assessment, and are quite compelling so far.
From Analytics Consulting Firm to AI Company
Before July 2019, ElectrifAi was known as Opera Solutions, an analytics consulting firm with offices in India and China. In July of last year, the company changed its name to ElectrifAi with the goal of becoming an AI product company. That’s when Scott became CEO. The news release announcing the change also described a technological shift:
ElectrifAi had “re-architected its technology platform around an open-source, Spark-unified computational engine that allows large-scale distributed data processing and machine learning, with embedded Zeppelin notebook capability. Now, ElectrifAi’s data scientists – as well as those of its customers – can code and access data in any programming language. The incorporation of Docker Containers and Kubernetes enables ElectrifAi to build and deploy hybrid cloud enterprise solutions at scale, seeing results in weeks rather than months, thus increasing enterprise time to value dramatically.”
Beyond healthcare, the company works with eight other industries and entities, including government, travel and financial services. Its website says customers can use these AI and ML tools to manage procurement contracts, manage expenditures and reduce fraud and waste. In healthcare, the website lists the company’s capabilities in the following order:
- Increase profit through accurate billing
- Reduce fraud, waste and abuse
- Improve working capital
- Help clinicians make better diagnosis decisions
- Steer patients to the optimal point of service for care
When asked how a company that helps clients reduce fraud and waste using AI was equipped to help make clinical recommendations, Scott appeared frustrated.
“We are unapologetic for our innovation. The only thing that I guide in this company is that the innovation is practical in that it changes people’s lives and improves the way institutions that use our technology and use our products run their business every day,” he shot back. “We are [a] highly innovative, highly urgent, disruptive company.”
Are we rushing into something?
A computational biology expert with no knowledge of ElectrifAi’s model mused whether the company is relying purely on imaging biomarker data to make a recommendation to Catholic Health Services’ ER doctors or whether it is taking into consideration some clinical data as well.
“My guess is that there are a host of clinical parameters that would have greater predictive power than images” [as to who will rapidly decline], wrote John Quackenbush, Henry Pickering Walcott professor of computational biology and bioinformatics and chair of the Department of Biostatistics at the Harvard T.H. Chan School of Public Health, in an email response to questions.
Turns out ElectrifAi does use some clinical parameters — when patients were admitted, the date of their first X-ray, date of patient release, and date of return with more aggressive symptoms.
Later, in a Zoom interview, with a virtual background of Mount Doom and the Eye of Sauron from the Lord of the Rings trilogy aptly capturing our modern-day peril, Quackenbush reflected that this data “is a start” but that other clinical parameters would have been even more valuable.
“I’d love to know things like age, smoking status – other things. We know that there are confounding factors like pre-existing conditions, so someone who is diabetic, has asthma,” he said. “Since it seems the virus binds to ACE2 — is this someone who has hypertension who is on ACE inhibitors? I don’t know what the other factors are that could lead to a prediction and could confound the results that they are seeing.”
Quackenbush also offered a detailed description of why it’s difficult to understand how ElectrifAi’s technology works and the overall challenges presented in imaging AI:
I don’t know if they are using some algorithm that they are calling a minimal model or they are taking a small number of images, or they are taking those images and extracting a small number of features or they are taking those images and extracting all the features and then doing some kind of dimensional reduction.
This is one of the challenges of images and it is an interesting challenge and it comes back to having a small number of hospitals and small number of images. You can pull out a lot of imaging features that are correlated with each other. You can pull out thousands of features but some of them are going to be correlated. You pick the 5 most predictive features or the 10 most predictive features, and you run the risk that those 10 are really one and its 9 highest correlates.
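Quackenbush’s point about correlated features can be illustrated with a small simulation — a hypothetical sketch, not ElectrifAi’s data or method. Here, one underlying signal drives the outcome, but the extracted features include ten noisy copies of that signal, so a naive “top 10 most predictive features” ranking selects ten variants of the same thing:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500  # hypothetical number of patients

# One underlying signal drives the outcome...
signal = rng.normal(size=n)
outcome = signal + 0.5 * rng.normal(size=n)

# ...but the extracted "imaging features" include 10 noisy copies of that
# same signal, plus 90 unrelated noise features.
copies = signal[:, None] + 0.3 * rng.normal(size=(n, 10))
noise = rng.normal(size=(n, 90))
features = np.hstack([copies, noise])

# Rank all 100 features by absolute correlation with the outcome
# and take the "10 most predictive" ones.
corrs = np.array([abs(np.corrcoef(features[:, j], outcome)[0, 1])
                  for j in range(features.shape[1])])
top10 = np.argsort(corrs)[::-1][:10]

# The selected features are all mutual correlates of one signal: they are
# highly correlated with each other, so the model has far less independent
# information than "ten features" suggests.
pairwise = np.corrcoef(features[:, top10].T)
print(sorted(top10))  # the 10 redundant copy columns
print(pairwise[np.triu_indices(10, 1)].min())  # high pairwise correlation
```

This is exactly the risk he describes: with few hospitals and few images, apparent feature richness can collapse to one signal and its closest correlates.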
But, echoing Dr. Sherman, surely now is the time to try out new things as the world is reeling from a rapacious virus striking down the old, infirm and the young, no?
“It’s easy to understand the motivation and you could even justify it that way — you want something that is going to provide some additional information to physicians,” Quackenbush said. “But when you move something experimental and laboratory-based into clinical practice, there really is a bright line you have to draw to say, ‘Look if you are going to move into clinical practice, the algorithm has to be thoroughly vetted.’ You have to provide the training data and the method itself so that others can look at it and you really have to go through this robust and rigorous process to ensure that you’re not going to be making bad predictions, that you are not going to be making predictions that are going to harm people.”