
Types of AI

AI in Healthcare: A Practical Guide for Clinicians & Researchers

Artificial Intelligence (AI) refers to a variety of computer-driven techniques designed to perform tasks traditionally requiring human judgment and expertise. From rule-based decision support to advanced self-learning systems, AI is reshaping how care is delivered, diagnoses are made, and research is conducted. Whether you are a nurse, physician, public health expert, or healthcare researcher, this guide offers a clear, conversational overview of major AI approaches and real-world examples to help you begin planning and implementing AI initiatives in your setting.

  • Supervised Learning
  • Unsupervised Learning
  • Reinforcement Learning
  • Deep Learning
  • Natural Language Processing
  • Large Language Models
  • Computer Vision
  • Robotics and Automation

Supervised Learning

Supervised learning is like teaching a computer by example. You provide a labeled data set—say, thousands of X-ray images tagged as “pneumonia” or “normal”—and the model learns patterns that distinguish these classes. Once trained, the model can classify new cases automatically.
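To make the "learning by example" idea concrete, here is a deliberately tiny sketch in Python: a nearest-centroid classifier trained on invented two-feature "cases." The features, labels, and numbers are illustrative only, not from any real imaging study.

```python
# Minimal sketch of supervised learning: a nearest-centroid classifier
# trained on labeled feature vectors. All data here are invented toy
# examples, not a real radiology dataset.

def train_centroids(examples):
    """Average the feature vectors of each class ("pneumonia" vs "normal")."""
    sums, counts = {}, {}
    for features, label in examples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, x in enumerate(features):
            acc[i] += x
        counts[label] = counts.get(label, 0) + 1
    return {label: [x / counts[label] for x in acc] for label, acc in sums.items()}

def predict(centroids, features):
    """Classify a new case by its nearest class centroid (squared distance)."""
    def dist(center):
        return sum((a - b) ** 2 for a, b in zip(center, features))
    return min(centroids, key=lambda label: dist(centroids[label]))

# Toy training set: two invented features (e.g. opacity score, texture score).
train = [([0.9, 0.8], "pneumonia"), ([0.8, 0.9], "pneumonia"),
         ([0.1, 0.2], "normal"), ([0.2, 0.1], "normal")]
model = train_centroids(train)
print(predict(model, [0.85, 0.75]))  # falls near the "pneumonia" centroid
```

Real systems replace the centroid rule with a deep network, but the workflow is the same: labeled examples in, a decision rule out.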

Key Considerations

  • Data & Labels: Requires sizable, high-quality labeled data.
  • Compute Needs: Moderate to high, often needing GPU-based training.
  • Interpretability: Medium—techniques like feature importance maps can help, but deep models can feel opaque.
  • Challenges: High labeling costs, overfitting when data are scarce, and ensuring performance holds up in new patient populations.

Examples and Methodology

  • Diabetic Retinopathy Screening

    A deep CNN was trained on over 100,000 expert-graded retinal fundus images from the EyePACS program and multiple Indian clinics, and validated on two hold-out sets (EyePACS-1 and Messidor-2). At a high specificity operating point, the model achieved 90.3% sensitivity and 98.1% specificity on EyePACS-1, and 87.0% sensitivity and 98.5% specificity on Messidor-2, demonstrating ophthalmologist-level performance in detecting referable diabetic retinopathy. In a subsequent randomized trial in youth (ACCESS), autonomous deployment of this algorithm at the point of care increased diabetic eye-exam completion rates from 22% to 100% within six months, closing critical care gaps in diverse pediatric populations.

    Relevant Publications:

    • Gulshan, Varun et al. “Development and Validation of a Deep Learning Algorithm for Detection of Diabetic Retinopathy in Retinal Fundus Photographs.” JAMA vol. 316,22 (2016): 2402-2410. doi:10.1001/jama.2016.17216
    • Abràmoff, Michael D et al. “Pivotal trial of an autonomous AI-based diagnostic system for detection of diabetic retinopathy in primary care offices.” NPJ Digital Medicine vol. 1 39. 28 Aug. 2018, doi:10.1038/s41746-018-0040-6
    • Wolf, Risa M et al. “Autonomous artificial intelligence increases screening and follow-up for diabetic retinopathy in youth: the ACCESS randomized control trial.” Nature Communications vol. 15,1 421. 11 Jan. 2024, doi:10.1038/s41467-023-44676-z
  • Skin Lesion Classification

    Researchers fine-tuned a pre-trained CNN on a 129,000-image dermatology dataset. Transfer learning accelerated training and allowed the network to capture subtle color and texture cues that distinguish malignant from benign lesions (1).

    Relevant Publications:

    • Esteva, Andre et al. “Dermatologist-level classification of skin cancer with deep neural networks.” Nature vol. 542,7639 (2017): 115-118. doi:10.1038/nature21056
    • Escalé-Besa, Anna et al. “Exploring the potential of artificial intelligence in improving skin lesion diagnosis in primary care.” Scientific Reports vol. 13,1 4293. 15 Mar. 2023, doi:10.1038/s41598-023-31340-4
    • Sarı, Merve Okumus, and Kübra Keser. “Classification of skin diseases with deep learning based approaches.” Scientific Reports vol. 15,1 27506. 28 Jul. 2025, doi:10.1038/s41598-025-13275-x
  • ECG Arrhythmia Detection

    A hybrid deep learning model combining convolutional and recurrent layers learned to identify waveform morphologies from 90,000 single-lead ECG traces, matching cardiologist accuracy in detecting abnormal rhythms (1).

    Relevant Publications:

    • Hannun, Awni Y et al. “Cardiologist-level arrhythmia detection and classification in ambulatory electrocardiograms using a deep neural network.” Nature Medicine vol. 25,1 (2019): 65-69. doi:10.1038/s41591-018-0268-3
    • Kwon, Daehyun et al. “Deep learning-based prediction of atrial fibrillation from polar transformed time-frequency electrocardiogram.” PLoS One vol. 20,3 e0317630. 10 Mar. 2025, doi:10.1371/journal.pone.0317630
    • Fiorina, Laurent et al. “Artificial intelligence-based electrocardiogram analysis improves atrial arrhythmia detection from a smartwatch electrocardiogram.” European Heart Journal. Digital Health vol. 5,5 535-541. 6 Jul. 2024, doi:10.1093/ehjdh/ztae047

Unsupervised Learning

Unsupervised learning explores data without provided labels, revealing hidden structures—like grouping patients into meaningful subtypes or spotting unusual cases that fall outside normal patterns.

Key Considerations

  • Data: Works with unlabeled clinical measures, imaging features, or time-series data.
  • Compute Needs: Moderate; algorithms such as k-means or hierarchical clustering run quickly on standard hardware.
  • Interpretability: Low to moderate—clusters must be interpreted by domain experts.
  • Challenges: Choosing the right number of groups, validating that discovered patterns are clinically meaningful.
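As a concrete illustration of the clustering workflow, here is a minimal pure-Python k-means sketch (the algorithm the Key Considerations mention) on invented two-variable patient measurements. The data, the farthest-point initialization, and the choice of k are illustrative, not any published method.

```python
# A toy k-means loop: assign each point to its nearest center, then move
# each center to the mean of its assigned points, and repeat. The
# "patients" below are invented standardized measurements.

def sqdist(p, q):
    return sum((a - b) ** 2 for a, b in zip(p, q))

def kmeans(points, k, iters=10):
    # Farthest-point initialization keeps this demo deterministic.
    centers = [points[0]]
    while len(centers) < k:
        centers.append(max(points, key=lambda p: min(sqdist(p, c) for c in centers)))
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:                      # assignment step
            clusters[min(range(k), key=lambda i: sqdist(p, centers[i]))].append(p)
        for i, cl in enumerate(clusters):     # update step: centers -> means
            if cl:
                centers[i] = tuple(sum(xs) / len(cl) for xs in zip(*cl))
    return centers, clusters

patients = [(0.1, 0.2), (0.2, 0.1), (0.15, 0.15),   # one "subtype"
            (0.9, 0.8), (0.8, 0.9), (0.85, 0.85)]   # another "subtype"
centers, clusters = kmeans(patients, k=2)
```

Note that nothing here tells us whether the two groups are clinically meaningful; as the considerations above stress, that judgment belongs to domain experts.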

Examples and Methodology

  • Adult-Onset Diabetes Phenotypes

    In the Swedish ANDIS cohort of 8,980 newly diagnosed patients, k-means clustering on six baseline variables (age at diagnosis, BMI, HbA1c, HOMA2-B, HOMA2-IR, and GADA autoantibody status) revealed five distinct subgroups with divergent pathophysiology and rates of retinopathy, nephropathy, and cardiovascular complications; these clusters were subsequently validated in large Chinese and US cohorts, confirming their prognostic value for tailoring precision-medicine strategies.

    Relevant Publications:

    • Ahlqvist, Emma et al. “Novel subgroups of adult-onset diabetes and their association with outcomes: a data-driven cluster analysis of six variables.” The Lancet. Diabetes & Endocrinology vol. 6,5 (2018): 361-369. doi:10.1016/S2213-8587(18)30051-2
    • Zou, Xiantong et al. “Novel subgroups of patients with adult-onset diabetes in Chinese and US populations.” The Lancet. Diabetes & Endocrinology vol. 7,1 (2019): 9-11. doi:10.1016/S2213-8587(18)30316-4
  • ARDS Subphenotypes

    By applying latent class analysis to biomarker and clinical data from over 2,400 patients enrolled in the ARMA and ALVEOLI trials, researchers identified two reproducible ARDS subphenotypes, “hyper-inflammatory” (more shock and metabolic acidosis) and “hypo-inflammatory,” which differed markedly in mortality (≈46% vs. 23%) and showed opposite responses to high- versus low-PEEP ventilation strategies.

    Relevant Publications:

    • Calfee, Carolyn S et al. “Subphenotypes in acute respiratory distress syndrome: latent class analysis of data from two randomised controlled trials.” The Lancet. Respiratory Medicine vol. 2,8 (2014): 611-20. doi:10.1016/S2213-2600(14)70097-9
    • Maddali, Manoj V et al. “Validation and utility of ARDS subphenotypes identified by machine-learning models using clinical data: an observational, multicohort, retrospective analysis.” The Lancet. Respiratory Medicine vol. 10,4 (2022): 367-377. doi:10.1016/S2213-2600(21)00461-6
  • Sepsis Clinical Clusters

    Leveraging hierarchical clustering of serial vital signs and laboratory trajectories from 20,000 MIMIC-III sepsis episodes, investigators defined four clinical phenotypes most prominently a “delta” cluster with escalating lactate and SOFA scores, associated with a 30% higher in-hospital mortality; these data-driven subgroups now underpin stratified trial designs and targeted sepsis interventions.

    Relevant Publications:

    • Seymour, Christopher W et al. “Derivation, Validation, and Potential Treatment Implications of Novel Clinical Phenotypes for Sepsis.” JAMA vol. 321,20 (2019): 2003-2017. doi:10.1001/jama.2019.5791

Reinforcement Learning

Reinforcement learning (RL) teaches algorithms by reward. An RL agent interacts with a patient “environment”—either simulated or historical—and receives feedback (“rewards”) based on outcomes. Over many trials, it learns policies that maximize positive reward signals (e.g., survival, stable vitals).
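The reward-driven loop can be sketched with tabular Q-learning on a toy, fully invented "patient" environment with three severity states and two actions. Real clinical RL agents work with far richer state spaces and carefully engineered rewards; this only shows the mechanics of learning from reward.

```python
# A hedged sketch of tabular Q-learning. The environment dynamics and
# rewards below are invented purely for illustration.
import random

N_STATES, ACTIONS = 3, ("wait", "treat")    # state 2 = stabilized (terminal)

def step(state, action):
    """Deterministic toy dynamics: treating improves the state one level."""
    if action == "treat":
        nxt = state + 1
        return nxt, (1.0 if nxt == 2 else 0.0), nxt == 2
    return state, 0.0, False                 # waiting leaves the patient unchanged

def q_learning(episodes=500, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):
        state = 0
        for _ in range(20):                  # cap episode length
            # epsilon-greedy: mostly exploit, sometimes explore
            if rng.random() < eps:
                action = rng.choice(ACTIONS)
            else:
                action = max(ACTIONS, key=lambda a: q[(state, a)])
            nxt, reward, done = step(state, action)
            best_next = max(q[(nxt, a)] for a in ACTIONS)
            q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
            state = nxt
            if done:
                break
    return q

q = q_learning()
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(2)}
print(policy)  # the learned policy treats in both non-terminal states
```

The safety challenge noted below is visible even here: the agent only learns that "treat" is good by trying it, which is exactly why clinical RL is trained on simulations or historical data rather than live patients.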

Key Considerations

  • Data & Environment: Requires either detailed simulations or large repositories of longitudinal patient data with outcome markers.
  • Compute Needs: High, often needing extensive simulations or batch learning on GPUs.
  • Interpretability: Low—learned policies can be opaque decision maps.
  • Challenges: Ensuring safety for real patients, translating simulated performance into practice, sample inefficiency.

Examples and Methodology

  • AI Clinician for Sepsis

    Böck et al. and Wu et al. built on the MIMIC-III ICU repository to train an off-policy, distributional Q-learning agent that represents each patient’s state (vitals, laboratory values, administered treatments) and optimizes a reward function tied to survival. In retrospective simulations, the AI Clinician’s suggested fluid and vasopressor dosing policies aligned with lower predicted mortality than typical clinician choices, demonstrating superhuman performance on historical sepsis management data.

    Relevant Publications:

    • Böck, Markus et al. “Superhuman performance on sepsis MIMIC-III data by distributional reinforcement learning.” PLoS One vol. 17,11 e0275358. 3 Nov. 2022, doi:10.1371/journal.pone.0275358
    • Wu, XiaoDan et al. “A value-based deep reinforcement learning model with human expertise in optimal treatment of sepsis.” NPJ Digital Medicine vol. 6,1 5. 2 Feb. 2023, doi:10.1038/s41746-023-00755-5
  • VentAI for Mechanical Ventilation

    Liu et al. trained an RL agent on thousands of invasive mechanical ventilation episodes, encoding tidal volume, PEEP, FiO₂, and patient responses, to learn policies that maximize simulated patient “reward” (e.g., stable gas exchange, reduced ventilator-induced injury). Den Hengst et al. further infused ARDSnet guideline rules into the reward structure, and both studies showed that VentAI’s recommended ventilator settings outperformed retrospective clinician benchmarks in simulated outcome returns.

    Relevant Publications:

    • Liu, Siqi et al. “Reinforcement Learning to Optimize Ventilator Settings for Patients on Invasive Mechanical Ventilation: Retrospective Study.” Journal of Medical Internet Research vol. 26 e44494. 16 Oct. 2024, doi:10.2196/44494
    • den Hengst, Floris et al. “Guideline-informed reinforcement learning for mechanical ventilation in critical care.” Artificial Intelligence in Medicine vol. 147 (2024): 102742. doi:10.1016/j.artmed.2023.102742
  • RL-DITR for Diabetes Insulin Dosing

    Wang et al. and Desman et al. developed an RL framework that first constructs a patient-state model from longitudinal EHR glucose and insulin time-series in hospitalized patients with Type 2 diabetes. Using distributional RL, the agent then learns insulin dosing policies that optimize glycemic control metrics while penalizing hypoglycemia risk. In proof-of-concept trials, these learned policies achieved tighter glucose targets and eliminated severe hypoglycemia episodes compared to standard care.

    Relevant Publications:

    • Wang, Guangyu et al. “Optimized glycemic control of type 2 diabetes with reinforcement learning: a proof-of-concept trial.” Nature Medicine vol. 29,10 (2023): 2633-2642. doi:10.1038/s41591-023-02552-9
    • Desman, Jacob M et al. “A distributional reinforcement learning model for optimal glucose control after cardiac surgery.” NPJ Digital Medicine vol. 8,1 313. 27 May. 2025, doi:10.1038/s41746-025-01709-9

Deep Learning

Deep learning refers to neural networks with many layers that can extract hierarchical features—edges, shapes, textures—from raw data like images or waveforms. These models have driven breakthroughs in pattern recognition tasks once thought out of reach.
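The "hierarchical features" idea is easiest to see at the first layer: a small filter slides over raw data and responds to a low-level pattern such as an edge. Below is a hedged sketch with a hand-set 1-D edge filter; in a real deep network, many such filters are learned from data and stacked layer upon layer into shapes and textures.

```python
# A minimal sketch of the feature extraction a first convolutional layer
# performs. The filter here is hand-set to detect rising edges; deep
# models learn thousands of such filters automatically.

def conv1d(signal, kernel):
    """Valid 1-D convolution (cross-correlation, as in most DL libraries)."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

def relu(xs):
    """The usual nonlinearity applied between layers."""
    return [max(0.0, x) for x in xs]

signal = [0, 0, 0, 1, 1, 1, 0, 0]   # a raw trace with a step up, then down
edge_filter = [-1, 1]               # responds where the signal rises
features = relu(conv1d(signal, edge_filter))
print(features)                     # a single peak at the rising edge
```

Stacking more layers on outputs like these is what lets deep networks progress from edges to shapes to diagnostic patterns.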

Key Considerations

  • Data & Labels: Needs very large labeled or weakly labeled data sets (images, signals).
  • Compute Needs: Very high—GPUs or specialized accelerators are standard.
  • Interpretability: Low; research in model explainability is active but still maturing.
  • Challenges: Massive data demands, training stability, generalization to new settings.

Examples and Methodology

  • Google’s Diabetic Retinopathy Algorithm

    A systematic review and meta-analysis of 18 prospective studies showed that deep-learning-based diabetic retinopathy screening algorithms, including the original 10-layer CNN, achieved a pooled sensitivity of 87.7% and specificity of 90.6% across diverse clinical settings. In the ACCESS randomized trial, autonomous AI eye exams increased screening completion in youth with diabetes from 22% to 100% within six months, closing critical care gaps in under-resourced populations.

    Relevant Publications:

    • Wang, Zhibin et al. “Performance of artificial intelligence in diabetic retinopathy screening: a systematic review and meta-analysis of prospective studies.” Frontiers in Endocrinology vol. 14 1197783. 13 Jun. 2023, doi:10.3389/fendo.2023.1197783
    • Wolf, Risa M et al. “Autonomous artificial intelligence increases screening and follow-up for diabetic retinopathy in youth: the ACCESS randomized control trial.” Nature Communications vol. 15,1 421. 11 Jan. 2024, doi:10.1038/s41467-023-44676-z
  • Dermatology Lesion Classification

    Manzoor et al. introduced a dual-stage deep-learning pipeline that first segments lesion boundaries in dermoscopic images, then classifies them with a DenseNet backbone, achieving 92.3% overall accuracy and producing attention maps that highlight key image regions. Building on this, Arshad et al. developed a network-level fusion model combining multiple CNNs to localize and categorize over 20 lesion types, reaching a mean F1-score of 0.85 while offering saliency-based explanations for each decision.

    Relevant Publications:

    • Manzoor, Khadija et al. “Dual-stage segmentation and classification framework for skin lesion analysis using deep neural network.” Digital Health vol. 11 20552076251351858. 13 Jul. 2025, doi:10.1177/20552076251351858
    • Arshad, Mehak et al. “Multiclass skin lesion classification and localization from dermoscopic images using a novel network-level fused deep architecture and explainable artificial intelligence.” BMC Medical Informatics and Decision Making vol. 25,1 215. 1 Jul. 2025, doi:10.1186/s12911-025-03051-2
  • CheXNet Pneumonia Detection

    An et al. designed an ensemble of EfficientNetB0 and DenseNet121 augmented with attention modules, boosting the pneumonia F1-score to 0.82 on the ChestX-ray14 test set, surpassing earlier single-model baselines such as CheXNet. In a multicenter evaluation, Anderson et al. showed that radiologists aided by an FDA-cleared AI system improved overall chest-X-ray abnormality detection accuracy by 10.1% (AUC from 0.88 to 0.97), demonstrating substantial real-world assistive value.

    Relevant Publications:

    • An, Qiuyue et al. “A Deep Convolutional Neural Network for Pneumonia Detection in X-ray Images with Attention Ensemble.” Diagnostics (Basel, Switzerland) vol. 14,4 390. 11 Feb. 2024, doi:10.3390/diagnostics14040390
    • Anderson, Pamela G et al. “Deep learning improves physician accuracy in the comprehensive detection of abnormalities on chest X-rays.” Scientific Reports vol. 14,1 25151. 24 Oct. 2024, doi:10.1038/s41598-024-76068-2

Natural Language Processing (NLP)

Classic NLP pipelines break free-text notes into tokens, map phrases to medical concepts, and identify context such as negation or temporality. These processes turn narrative text into structured data for research and decision support.
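A toy version of such a pipeline — tokenization, dictionary lookup, and a ConText-style negation "scope window" — might look like the sketch below. The concept dictionary, trigger list, and window size are simplified stand-ins for what real systems such as cTAKES or MedLEE do with full UMLS/SNOMED-CT vocabularies and richer rules.

```python
# A toy clinical-NLP pipeline: tokenize, map tokens to concept codes,
# and mark a concept as negated if a trigger word appears within a
# fixed window before it. Dictionary and window size are illustrative.
import re

CONCEPTS = {"pneumonia": "C0032285", "fever": "C0015967", "cough": "C0010200"}
NEGATION_TRIGGERS = {"no", "denies", "without"}
SCOPE = 4   # a trigger negates concepts within the next 4 tokens

def extract(note):
    tokens = re.findall(r"[a-z]+", note.lower())
    findings = []
    for i, tok in enumerate(tokens):
        if tok in CONCEPTS:
            negated = any(t in NEGATION_TRIGGERS
                          for t in tokens[max(0, i - SCOPE):i])
            findings.append((tok, CONCEPTS[tok], negated))
    return findings

note = "Patient reports cough and fever but denies pneumonia."
print(extract(note))  # cough and fever affirmed, pneumonia negated
```

Even this crude sketch shows why the pipeline approach is relatively interpretable: each step (tokens, dictionary hits, negation scope) can be inspected and tuned on its own.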

Key Considerations

  • Data & Text: Large corpora of clinical notes, reports, and transcriptions.
  • Compute Needs: Low to moderate—most tasks run on standard servers.
  • Interpretability: Medium—each pipeline step can be reviewed and adjusted.
  • Challenges: Handling abbreviations, ambiguous language, domain adaptation across specialties.

Examples and Methodology

  • MedLEE

    MedLEE employs a multi-stage pipeline: it first parses radiology and pathology narratives using a domain-specific grammar, then maps parsed phrases to UMLS concepts via a coded dictionary, and finally filters results through semantic constraints. In a systematic review of clinical data warehouses, Bazoge et al. (2023) reported that MedLEE achieved approximately 83% recall and 89% precision when extracting findings such as “pulmonary opacity” or “hepatocellular carcinoma” from free-text reports, illustrating how rule-based parsing can reliably convert narrative text into structured, research-ready data.

    Relevant Publications:

    • Bazoge, Adrien et al. “Applying Natural Language Processing to Textual Data From Clinical Data Warehouses: Systematic Review.” JMIR Medical Informatics vol. 11 e42477. 15 Dec. 2023, doi:10.2196/42477
  • Apache cTAKES

    cTAKES chains together sentence splitting, tokenization, dictionary lookup (leveraging UMLS and SNOMED-CT), and the built-in assertion module to extract clinical entities and detect negation or uncertainty. Kim et al. demonstrated its scalability by processing over one million notes to identify housing and food insecurity, achieving a positive predictive value of 77.5% for housing-issue mentions. Lossio-Ventura et al. further benchmarked cTAKES on real-world EHR data, showing F1-scores above 0.80 for core concept recognition tasks and underscoring its versatility across specialties.

    Relevant Publications:

    • Kim, Min Hee et al. “Extracting Housing and Food Insecurity Information From Clinical Notes Using cTAKES.” Health Services Research vol. 60, Suppl. 3 (2025): e14440. doi:10.1111/1475-6773.14440
    • Lossio-Ventura, Juan Antonio et al. “Clinical concept recognition: Evaluation of existing systems on EHRs.” Frontiers in Artificial Intelligence vol. 5 1051724. 13 Jan. 2023, doi:10.3389/frai.2022.1051724
  • ConText

    Building on simple negation detection, ConText applies rule-based “scope windows” around trigger terms to label whether a finding is negated, historical, or pertains to someone other than the patient. Mirzapour et al. adapted the algorithm for French clinical notes, achieving F1-scores of 0.93 for negation and 0.86 for temporality; Slater et al. reimplemented ConText logic with dependency-grammar heuristics, processing discharge summaries at over 2,000 sentences per second while maintaining >90% accuracy in distinguishing present from historical mentions.

    Relevant Publications:

    • Mirzapour, Mehdi et al. “French FastConText: A publicly accessible system for detecting negation, temporality and experiencer in French clinical notes.” Journal of Biomedical Informatics vol. 117 (2021): 103733. doi:10.1016/j.jbi.2021.103733
    • Slater, Karin et al. “A fast, accurate, and generalisable heuristic-based negation detection algorithm for clinical text.” Computers in Biology and Medicine vol. 130 (2021): 104216. doi:10.1016/j.compbiomed.2021.104216

Large Language Models (LLMs)

LLMs are transformer-based models trained on massive text collections. They can perform a variety of language tasks—from generating summaries to answering questions—often with little or no task-specific training data.
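Several of the examples below pair an LLM with retrieval (RAG): relevant passages are fetched first and then placed in the prompt. The retrieval step can be sketched with simple bag-of-words cosine similarity over invented passages; production systems use dense neural embeddings and vector stores, but the top-k idea is the same.

```python
# A hedged sketch of RAG-style retrieval: score stored passages against
# a query by bag-of-words cosine similarity and keep the top k. The
# passages are invented examples, not a real clinical corpus.
import math
from collections import Counter

def cosine(a, b):
    dot = sum(a[t] * b.get(t, 0) for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def top_k(query, passages, k=2):
    q = Counter(query.lower().split())
    ranked = sorted(passages,
                    key=lambda p: cosine(q, Counter(p.lower().split())),
                    reverse=True)
    return ranked[:k]

passages = [
    "metformin is first line therapy for type 2 diabetes",
    "statins reduce cardiovascular risk in hyperlipidemia",
    "insulin titration targets fasting glucose in diabetes",
]
hits = top_k("diabetes glucose management", passages, k=2)
print(hits)  # the two diabetes passages rank above the statin passage
```

Grounding the prompt in retrieved text is one of the main levers for reducing the hallucination risk flagged above.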

Key Considerations

  • Data & Training: Billions of words from both general and medical sources.
  • Compute Needs: Very high—training and inference commonly run on GPU clusters.
  • Interpretability: Low—internal attention weights are complex to interpret.
  • Challenges: Hallucinations, bias, privacy, prompt design.

Examples and Methodology

  • Med-PaLM 2

    Med-PaLM 2 builds on PaLM 2 by fine-tuning on medical question-answering data from MultiMedQA and USMLE-style datasets, and employs ensemble refinement and chain-of-thought prompting to improve long-form reasoning. It scored 86.5% on the MedQA benchmark, roughly 19 percentage points above the original Med-PaLM, and achieved state-of-the-art accuracy on MedMCQA, PubMedQA, and MMLU clinical topics. In head-to-head physician evaluations on 1,066 consumer medical questions, clinicians ranked Med-PaLM 2’s answers higher than physician-written answers on eight of nine utility metrics (p < 0.001).

    Relevant Publications:

    • Singhal, Karan et al. “Toward expert-level medical question answering with large language models.” Nature Medicine vol. 31,3 (2025): 943-950. doi:10.1038/s41591-024-03423-7
    • Singhal, Karan et al. “Large language models encode clinical knowledge.” Nature vol. 620,7972 (2023): 172-180. doi:10.1038/s41586-023-06291-2
  • DistillNote

    DistillNote uses a retrieval-augmented generation (RAG) pipeline: it first retrieves the top-k most relevant passages from a clinical-note vector store, then prompts a Llama 2 (13B) model to condense key findings into structured summaries. In an aged-care study on EHR malnutrition data, zero-shot DistillNote summaries reached >90% accuracy against a gold-standard dataset and, when used as features in downstream predictive models, improved AUC by 7% over non-RAG baselines.

    Relevant Publications:

    • Alkhalaf, Mohammad et al. “Applying generative AI with retrieval augmented generation to summarize and extract key clinical information from electronic health records.” Journal of Biomedical Informatics vol. 156 (2024): 104662. doi:10.1016/j.jbi.2024.104662
    • Vithanage, Dhinithi et al. “Adapting Generative Large Language Models for Information Extraction from Unstructured Electronic Health Records in Residential Aged Care: A Comparative Analysis of Training Approaches.” Journal of Healthcare Informatics Research vol. 9,2 191-219. 20 Feb. 2025, doi:10.1007/s41666-025-00190-z
  • RAG-LLM CDSS

    This system augments an LLM with real-time retrieval of clinical-guideline passages: prescriptions and patient context are used to fetch relevant guideline snippets, which the LLM then uses to flag potential prescribing errors. In simulated vignette tests, the RAG-LLM framework achieved 92% precision in error identification and increased pharmacist correction rates by 35%. An ophthalmology implementation (ChatZOC) drew on a 300,000-document specialty corpus to answer 300 clinical questions, outperforming GPT-4 by 12% in guideline adherence (p < 0.01).

    Relevant Publications:

    • Xu, Guanhua et al. “Implementation of hepatocellular carcinoma interpretation by large language models: a retrieval augmented generation-based framework.” NPJ Digital Medicine vol. 7,1 102. 23 Apr. 2024, doi:10.1038/s41746-024-01091-3
    • Luo, Ming-jie et al. “Development and Evaluation of a Retrieval-Augmented Large Language Model Framework for Ophthalmology.” JAMA Ophthalmology vol. 142,9 (2024): 798-805. doi:10.1001/jamaophthalmol.2024.2513

Computer Vision

Computer vision applies convolutional and transformer-based networks to medical imagery—X-rays, endoscopy video, histopathology slides—automating detection, segmentation, and quantification tasks.
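The patch-wise pattern used on very large images — tile the image, score each patch, aggregate the scores — can be sketched as below. The patch "classifier" here is a toy stand-in for a trained CNN, and the image is an invented 4×4 grid rather than a gigapixel slide.

```python
# A minimal sketch of patch-wise scanning with max aggregation, the
# pattern used on whole-slide histopathology images. The scoring
# function is a toy placeholder for a trained CNN.

def tiles(image, size):
    """Yield non-overlapping size x size patches of a 2-D list."""
    for r in range(0, len(image) - size + 1, size):
        for c in range(0, len(image[0]) - size + 1, size):
            yield [row[c:c + size] for row in image[r:r + size]]

def patch_score(patch):
    """Toy stand-in for a CNN: mean intensity as a 'suspicion' score."""
    flat = [v for row in patch for v in row]
    return sum(flat) / len(flat)

def slide_prediction(image, size=2):
    # Aggregate patch scores into one image-level prediction via max.
    return max(patch_score(p) for p in tiles(image, size))

image = [[0.0, 0.1, 0.0, 0.0],
         [0.1, 0.0, 0.9, 0.8],
         [0.0, 0.0, 0.9, 1.0],
         [0.1, 0.0, 0.0, 0.1]]
print(slide_prediction(image))  # driven by the high-intensity patch
```

Max aggregation is only one choice; real pipelines may average, count suspicious patches, or feed patch scores into a second-stage model.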

Key Considerations

  • Data & Annotation: Requires large sets of accurately labeled images or video frames.
  • Compute Needs: High—training on GPUs is standard; inference can be optimized for edge devices.
  • Interpretability: Low; visualization methods (heatmaps) offer partial insight.
  • Challenges: Variations in equipment, imaging protocols, and patient populations; annotation cost.

Examples and Methodology

  • Real-Time Colonoscopy Polyp Detection

    A deep segmentation CNN was integrated into live endoscopic video feeds, where pixel-level annotation models flagged suspicious mucosal protrusions in real time. In a prospective randomized trial, this system delivered an absolute increase of 7.3 percentage points in adenoma detection rate, a 29% relative improvement, by alerting endoscopists to subtle, flat lesions they might otherwise miss.

    Relevant Publications:

    • Su, Jing-Ran et al. “Impact of a real-time automatic quality control system on colorectal polyp and adenoma detection: a prospective randomized controlled study (with videos).” Gastrointestinal Endoscopy vol. 91,2 (2020): 415-424.e4. doi:10.1016/j.gie.2019.08.026.
    • Liu, Jing et al. “Automatic Quality Control System and Adenoma Detection Rates During Routine Colonoscopy: A Randomized Clinical Trial.” JAMA Network Open vol. 8,1 e2457241. 2 Jan. 2025, doi:10.1001/jamanetworkopen.2024.57241
  • AI-STREAM Mammography

    Leveraging a two-stage object-detection network trained on over 24,000 multi-site mammograms, AI-STREAM overlays bounding boxes on suspicious masses and microcalcifications. In a prospective multicenter cohort study, radiologists assisted by the model achieved a 13.8% higher invasive-cancer detection rate without any increase in recall rates, demonstrating how AI can boost sensitivity while preserving specificity.

    Relevant Publications:

    • Chang, Yun-Woo et al. “Artificial intelligence for breast cancer screening in mammography (AI-STREAM): preliminary analysis of a prospective multicenter cohort study.” Nature Communications vol. 16,1 2248. 6 Mar. 2025, doi:10.1038/s41467-025-57469-3
    • Kwon, Mi-Ri et al. “Screening mammography performance according to breast density: a comparison between radiologists versus standalone intelligence detection.” Breast Cancer Research vol. 26,1 68. 22 Apr. 2024, doi:10.1186/s13058-024-01821-w
  • CAMELYON16 Metastasis Detection

    A patch-wise CNN ensemble scanned digitized lymph-node whole-slide images in 256×256-pixel tiles, assigning metastasis probability scores that were aggregated into slide-level predictions. In the CAMELYON16 challenge, this approach reached an AUC of 0.994 for detecting breast-cancer metastases, on par with expert pathologists, highlighting the power of deep learning on gigapixel histopathology datasets.

    Relevant Publications:

    • Ehteshami Bejnordi, Babak et al. “Diagnostic Assessment of Deep Learning Algorithms for Detection of Lymph Node Metastases in Women With Breast Cancer.” JAMA vol. 318,22 (2017): 2199-2210. doi:10.1001/jama.2017.14585
    • Campanella, Gabriele et al. “Clinical-grade computational pathology using weakly supervised deep learning on whole slide images.” Nature Medicine vol. 25,8 (2019): 1301-1309. doi:10.1038/s41591-019-0508-1

Robotics and Automation

Robotics & automation bring AI into both physical and administrative workflows—from robot-assisted surgery to software bots that automate repetitive tasks.
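On the administrative side, much of the "automation" is rule-based safety logic rather than learning. Here is a hypothetical sketch of a dispense-verification check of the kind such workflows perform; it is not any vendor's actual implementation, and the order IDs and barcode values are invented.

```python
# Hypothetical dispensing safety check: match a scanned barcode against
# the pending order before the robot releases medication. All IDs and
# barcodes below are invented for illustration.

ORDERS = {"RX1001": {"drug": "methadone", "dose_mg": 30}}          # pending orders
FORMULARY = {"0093-5421": ("methadone", 30),                        # barcode -> product
             "0093-5422": ("methadone", 40)}

def verify_dispense(order_id, scanned_barcode):
    """Return (ok, message); the robot dispenses only when ok is True."""
    order = ORDERS.get(order_id)
    item = FORMULARY.get(scanned_barcode)
    if order is None or item is None:
        return False, "unknown order or barcode"
    drug, dose = item
    if drug != order["drug"]:
        return False, "wrong drug"
    if dose != order["dose_mg"]:
        return False, "wrong strength"
    return True, "dispense authorized"

print(verify_dispense("RX1001", "0093-5421"))  # matches the order
print(verify_dispense("RX1001", "0093-5422"))  # wrong strength, held
```

This procedural style is why the section below rates interpretability as medium: every rule in the check can be read and audited.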

Key Considerations

  • Data & Control Logic: Sensor inputs, predefined task sequences, or rule sets.
  • Compute Needs: Variable—embedded controllers or cloud-based orchestration.
  • Interpretability: Medium—procedural logic can be reviewed.
  • Challenges: Safety, regulatory compliance, integration, user training.

Examples and Methodology

  • da Vinci Surgical System

    By translating surgeon hand movements into finely scaled robotic-arm actions with integrated tremor suppression and motion scaling, the da Vinci Surgical System aims to enhance precision in minimally invasive procedures; however, Csirzó et al. found no significant difference in clinical outcomes for endometriosis surgery compared with conventional laparoscopy, underscoring the importance of procedure-specific evaluation. Conversely, Wang et al. demonstrated that obese patients undergoing da Vinci-assisted radical prostatectomy achieved perioperative, functional, and oncologic results comparable to non-robotic approaches, suggesting that high BMI does not limit robotic efficacy.

    Relevant Publications:

    • Csirzó, Ádám et al. “Robot-assisted laparoscopy does not have demonstrable advantages over conventional laparoscopy in endometriosis surgery: a systematic review and meta-analysis.” Surgical Endoscopy vol. 38,2 (2024): 529-539. doi:10.1007/s00464-023-10587-9
    • Wang, Chong-Jian et al. “Perioperative, functional, and oncologic outcomes in obese patients undergoing Da Vinci robot-assisted radical prostatectomy: a systematic review and meta-analysis.” BMC Urology vol. 24,1 207. 23 Sep. 2024, doi:10.1186/s12894-024-01595-5
  • Automated Pharmacy Dispensing

    Automated dispensing cabinets combine barcode scanning with robotic-arm retrieval to assemble medication orders; a systematic review by Shbaily et al. reported an 80% reduction in overall dispensing errors immediately after automation, with these gains maintained when pharmacy support staff were integrated into the workflow. In surgical and ambulatory-surgery settings, Borrelli et al. found that ADC implementation reduced controlled-substance discrepancies by 16% to 62.5% and medication errors by 23% to 100% across studies, while user-satisfaction rates exceeded 81% and labor hours decreased, demonstrating broad clinical, operational, and economic benefits.

    Relevant Publications:

    • Shbaily, Enaan M et al. “Effectiveness of Pharmacy Automation Systems Versus Traditional Systems in Hospital Settings: A Systematic Review.” Cureus vol. 17,1 e77934. 24 Jan. 2025, doi:10.7759/cureus.77934
    • Borrelli, Eric P et al. “Appraising the clinical, operational, and economic impacts of automated medication dispensing cabinets in perioperative and surgical settings: A systematic literature review.” Journal of the American Pharmacists Association: JAPhA vol. 64,5 (2024): 102143. doi:10.1016/j.japh.2024.102143
  • Zing Methadone-Dispensing Robots

    In opioid-treatment facilities, automated liquid-handling stations integrated with EHR-driven barcoding prepare, label, and dispense methadone under nurse oversight; Al Nemari and Waterson reported that post-automation dispensing errors fell from 1.0% to 0.24%, incomplete prescriptions decreased from 3.0% to 1.83%, and total patient time in the department dropped from 17.09 to 11.81 minutes, freeing pharmacists for higher-value tasks. Complementing this, Takase et al. showed that robotic dispensing systems cut total dispensing errors by roughly 80% and nearly eliminated wrong-strength and wrong-drug incidents, underscoring their role in high-risk medication environments.

    Relevant Publications:

    • Al Nemari, Manal, and James Waterson. “The Introduction of Robotics to an Outpatient Dispensing and Medication Management Process in Saudi Arabia: Retrospective Review of a Pharmacy-led Automation Initiative.” Cureus vol. 16,4 e97905. 11 Apr. 2024, doi:10.7759/cureus.97905
    • Takase, Tomoyuki et al. “Medication errors prevented by robotic dispensing systems.” Journal of Patient Safety & Risk Management vol. 29,4 (2024): 213-220. doi:10.1177/2042098625020255