
The AI doctor will see you now

By Michael Milad


[Image: a blue and white plastic toy robot holding a stethoscope to its ears. Source: https://www.flickr.com/photos/30478819@N08/50959554757]

Artificial intelligence (AI) has recently received significant hype, with promises that it will drive our cars, do our work, or potentially spell the end of civilisation. Despite all this excitement, few can accurately explain what AI is. Nick Bostrom, a philosopher at the University of Oxford, describes three major categories of AI:

  1. Narrow AI – this is the AI we are most familiar with. It uses a previously defined algorithm to continually improve at a single, restricted function. So, whilst a narrow-intelligence machine can beat the best chess player in the world, it still has an IQ of zero, as it fails to do anything else.

  2. General AI – a level of AI that has not yet been reached. At this level, the machine could one day have the same cognitive abilities as a human being, able to effectively reason, argue, memorise, and solve problems.

  3. Artificial Superintelligence – a theoretical, potentially dystopian machine which exceeds the combined cognitive capacity of humanity. Figures such as Stephen Hawking, Bill Gates and Elon Musk have warned that an artificial superintelligence could escape human control and take a treacherous turn that eventually results in the extinction of humanity. The fear of reaching this level is part of what drives Musk to integrate AI with the brain through Neuralink, aiming for symbiosis with superintelligence rather than competition.

These definitions illustrate the multifaceted nature of AI, which is best characterised by the complexity and cognitive ability of its algorithms. Narrow and general AI are the forms most likely to revolutionise medicine in the coming decades, and should be on the radar of aspiring and current clinicians.


Narrow AI has already shown promise in revolutionising medicine. Researchers at the John Radcliffe Hospital in Oxford have developed an AI system that is more accurate at diagnosing heart disease than doctors in over 80% of cases. Across the Atlantic, researchers at Harvard University have developed an AI-assisted diagnostic tool that can detect potentially lethal blood infections: they trained the machine on a series of 100,000 images drawn from 25,000 slides, and the system was able to recognise bacteria with 95% accuracy. Given that a recent workforce census found that only 3% of NHS histopathology departments have enough staff to meet clinical demand, AI could help alleviate such shortages.


Cogito, a behavioural analytics company, has been using AI-powered voice recognition to analyse and improve customer service interactions across a range of industries. It has now moved into healthcare with its recent “Cogito Companion” app, which tracks a patient’s behaviour, speech, and interactions. It does so by monitoring the patient’s phone for both passive and active behavioural signals, such as location data indicating that a patient has not left home for a long period, or communication logs suggesting they have not spoken to anyone for several weeks. Integrated apps such as this demonstrate the potential role of AI in personalised medicine: algorithms can monitor patients, observing behaviour as well as reminding them to take medication.

Furthermore, a general AI could learn the daily habits of each patient and time reminders to maximise adherence. This could improve not only medical outcomes but also patients’ subjective experience of care.


However, there remains a lot of uncharted territory when it comes to a machine handling our health. For example, whilst robotic tools are currently valuable parts of surgical practice, they may take on a more independent role, operating without the direct instruction of a surgeon. What happens if a mistake arises: can a patient sue a robot for malpractice? Traditionally, medical malpractice is thought to be the result of negligence on the part of the doctor, yet negligence presupposes an awareness that AI, particularly narrow AI, inherently lacks. If not the robot, who takes the blame – the doctor overseeing the procedure, the company manufacturing the machine, or the engineer who designed the algorithm? Another major issue is security: with medical apps likely to take a larger role in our healthcare, who will store and control all this data? These questions must be addressed soon if AI is to be fully integrated into care.


We must also remember that narrow AI relies on the data we feed it, and that abstractions which are obvious to humans are not obvious to machines. For example, dermatologists often use rulers to measure lesions they suspect are cancerous, and the ruler then features in any photo of the lesion added to the patient’s medical record. When a series of these images is presented to a machine, it learns to associate the presence of a ruler with malignancy, so the resulting algorithm is more likely to call a lesion cancerous if a ruler is present, regardless of the appearance of the lesion itself. This means the algorithm may fail to recognise malignant skin lesions in photos where there is no ruler. Algorithms may also inherit our biases when the data used to train them lacks diversity: white men still dominate clinical and academic research, and account for most of the patients involved in clinical trials.
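To make the “ruler” pitfall concrete, here is a minimal sketch (not from the original article; it uses entirely synthetic data and assumes the Python libraries numpy and scikit-learn) showing how a simple classifier can latch onto a spurious cue that happens to co-occur with the label, and how its performance collapses once that cue disappears.

# Toy illustration with synthetic data: a classifier learning a spurious cue.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# "True" lesion-appearance signal: only weakly separates benign from malignant.
malignant = rng.integers(0, 2, n)
appearance = malignant + rng.normal(0, 2.0, n)   # noisy, weak signal

# Spurious cue: in the training photos, a ruler appears in 90% of malignant
# cases but only 10% of benign ones (mimicking clinical photography habits).
ruler_train = rng.random(n) < np.where(malignant == 1, 0.9, 0.1)

X_train = np.column_stack([appearance, ruler_train])
model = LogisticRegression().fit(X_train, malignant)

# At "deployment", photos arrive without rulers: the cue is gone.
X_deploy = np.column_stack([appearance, np.zeros(n)])

print("accuracy with the ruler cue   :", model.score(X_train, malignant))
print("accuracy without the ruler cue:", model.score(X_deploy, malignant))
# The first number looks impressive; the second collapses toward chance,
# because the model leaned on the ruler rather than the lesion itself.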


Despite these issues, AI will become prominent in medicine, which calls the role of the physician into question.


Will doctors remain relevant when a general AI can answer any question and recommend more effective treatment in a fraction of the time it takes a human? The answer is yes, but physicians will have a remarkably different duty: instead of focusing their time and attention on diagnostics and admin, doctors will be at the side of the patient, providing the human touch of medicine.


It is important to remember that humans are the ones ultimately driving the change in AI – rather than fearing the future, physicians should be prepared to build it.

