Where technology meets compassion.
OpenPredictor uses a carefully developed machine learning model to predict a patient’s risk of post-operative complications, aiding medical professionals’ decision-making and helping hospitals optimise their pre-operative pathways for elective surgery.
OpenPredictor is currently used in UK hospitals, where it supports the streamlining and cost reduction of pre-operative assessment pathways and helps tackle extended waiting times and the backlog in elective surgery.
Features & Benefits
By accurately identifying each patient’s risk level for their operation, OpenPredictor empowers healthcare providers to streamline elective pre-assessment pathways and identify early opportunities to optimise patient health.
When integrated into a hospital system, clinicians can seamlessly access a patient’s report and risk level, ensuring efficient and informed decision-making.
At a click, staff can review predicted risk levels for entire cohorts of waiting patients, to help support optimal patient allocation, increased usage of elective hubs and ambulatory services, and increased productivity for pre-assessment teams.
With improved waiting list management, hospitals can make efficient use of elective hubs or ambulatory services, and better allocate patients to pre-surgical preparation initiatives, reducing last-minute cancellations.
OpenPredictor is proven to predict patient risk with an accuracy comparable to that of a trained clinician.
OpenPredictor is registered as a medical device able to provide clinical support in the pre-assessment pathway.
Responsible Development
The stakes involved in using medical software are very high. Throughout its development, OpenPredictor has been recognised as a beacon of good practice and safety in machine learning. As a guiding framework, OpenPredictor’s development follows the UK government’s AI Regulatory Principles.
Safety, security and robustness:
Nothing we build is used in a medical setting until we know, and have clinically proven, that it is safe to do so. We understand the risks of using AI in healthcare and we take every precaution we can to ensure our technology remains safe.
Transparency and explainability:
We are open about how our product has been developed and what it does. When we work with healthcare providers, we work collaboratively so that they know, at every step of the way, what we are doing and why we are doing it.
Built into OpenPredictor are explainability tools that allow us to trace and explain the outcomes it produces. The model’s performance can change when it works with different datasets and service providers; being able to track why helps us keep our stakeholders informed.
Fairness:
We recognise that there are variations in data and the way it is collected that can propagate biases when used with machine learning. Our work on understanding these biases, and on detailing how protected characteristics affect model performance, ensures that we develop models that are fair across the populations we serve.
We take a proactive approach to the post-deployment surveillance of our models: we monitor OpenPredictor’s machine learning models for changes in the data that could introduce biases over time, and we adjust how we train them to reduce healthcare-related inequities.
Accountability and governance:
As with any medical device, oversight is crucial to ensuring that OpenPredictor is used safely and in a way that benefits patients. Ensuring accountability and outcome review measures are in place is a central part of how we implement our software in healthcare practice.
We work with healthcare providers to ensure that OpenPredictor is used safely, with supervision, and according to its intended use as a clinical decision aid, so that a chain of accountability is maintained.
Contestability and redress:
Patients are our most important stakeholders, and they should understand how our software fits into their care so that they can hold us to account. We work with healthcare providers so that our role in healthcare is always well understood and contestable by our stakeholders.
We periodically review model performance and outcomes to ensure that OpenPredictor remains beneficial to the healthcare provider.
Technical Details
OpenPredictor is a machine learning tool that has been trained on anonymised patient data to predict the risk of post-operative complications for individual patients.
When a set of patient data is run through the software, OpenPredictor measures the data against its machine learning model to produce a predicted risk level. Trained clinicians can then use this to support their decision-making regarding patient treatment.
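To illustrate the final step, a model’s predicted probability of complications can be bucketed into a discrete risk level for clinicians to review. The function and threshold values below are hypothetical illustrations, not the product’s actual configuration:

```python
# Hypothetical sketch: mapping a predicted complication probability
# to a discrete risk level. The thresholds are illustrative
# assumptions, not OpenPredictor's actual configuration.

def risk_level(probability: float) -> str:
    """Bucket a predicted complication probability into a risk band."""
    if not 0.0 <= probability <= 1.0:
        raise ValueError("probability must be between 0 and 1")
    if probability < 0.05:
        return "low"
    if probability < 0.20:
        return "medium"
    return "high"

print(risk_level(0.02))   # low
print(risk_level(0.12))   # medium
print(risk_level(0.35))   # high
```

In practice such bands would be set and validated clinically; the point here is only that the software reports a categorical risk level rather than a raw probability.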
The software uses Azure Machine Learning Studio (AML) to train its model and Microsoft Responsible AI Tools to support understandability when analysing outputs.
The polynomial logistic regression model used by the software was selected through years of research and testing to ensure that it handles the specific nature of patient medical data as effectively as possible.
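For readers unfamiliar with the model family, the following is a minimal pure-Python sketch of a polynomial logistic regression trained on synthetic data. The features, polynomial degree, and training loop are illustrative assumptions and bear no relation to the production model or to real patient data:

```python
# Minimal sketch of polynomial logistic regression: expand the raw
# features into polynomial terms, then fit an ordinary logistic
# regression on the expanded features by gradient descent.
import math
import random

def poly_features(x1: float, x2: float) -> list:
    """Degree-2 polynomial expansion of two input features."""
    return [1.0, x1, x2, x1 * x1, x1 * x2, x2 * x2]

def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + math.exp(-z))

def train(rows, labels, lr=0.1, epochs=500):
    """Fit weights with per-sample gradient descent on the log-loss."""
    w = [0.0] * len(rows[0])
    for _ in range(epochs):
        for feats, y in zip(rows, labels):
            p = sigmoid(sum(wi * fi for wi, fi in zip(w, feats)))
            for i, fi in enumerate(feats):
                w[i] -= lr * (p - y) * fi
    return w

# Synthetic data with a non-linear boundary (x1^2 + x2 > 1), so the
# quadratic terms actually matter.
random.seed(0)
points = [(random.uniform(-2, 2), random.uniform(-2, 2)) for _ in range(200)]
rows = [poly_features(x1, x2) for x1, x2 in points]
labels = [1 if x1 * x1 + x2 > 1.0 else 0 for x1, x2 in points]

w = train(rows, labels)

def predict(x1: float, x2: float) -> float:
    """Predicted probability of the positive class."""
    return sigmoid(sum(wi * fi for wi, fi in zip(w, poly_features(x1, x2))))

print(predict(0.0, 2.0))   # a point inside the positive region
print(predict(0.0, -2.0))  # a point well outside it
```

The polynomial expansion lets a linear classifier capture curved decision boundaries, which is one reason the family suits data with non-linear interactions between variables.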
OpenPredictor’s user interface is designed to be as easy to use as possible: the front-end tool was developed according to the specifications of the NHS advisory toolkit.