Artificial Intelligence Indocyanine Green (ICG) Perfusion for Colorectal Cancer Intra-operative Tissue Classification
METHODS
Intraoperative white-light and NIR-fluorescence multispectral videos of primary colorectal tumours in patients undergoing diagnostic and therapeutic intervention were used for both AI model training and testing. These contained mucosal tumours subsequently determined as being benign or malignant, along with their surrounding area of macroscopically normal tissue, as viewed in both white light and NIR after systemic administration of ICG (0.25 mg/kg; Diagnostic Green, Farmington Hills, MI, USA). For real-time intraoperative classification, video was captured directly on to a laptop using an Intensity Shuttle device (Blackmagic Design, Fremont, CA, USA). All patients had been referred for surgical opinion following colonoscopy owing to pathological or clinical concern of malignancy. As the available endoscopic NIR visualization system (PINPOINT® Endoscope; Novadaq/Stryker, Kalamazoo, MI, USA) is a rigid 30-cm optical insert, all tumours were located within this distance from the anal verge.
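For readers interested in the capture step, a capture card such as the Intensity Shuttle typically enumerates as a standard video device, so frames can be read directly with OpenCV. The sketch below is a minimal illustration only; the device index, frame rate, and output file name are assumptions rather than the study's actual configuration.

```python
import cv2

# Minimal sketch: read frames from a capture-card device and archive them.
# Device index (0), codec, frame rate, and file name are assumptions.
capture = cv2.VideoCapture(0)
writer = None

while capture.isOpened():
    ok, frame = capture.read()
    if not ok:
        break
    if writer is None:
        h, w = frame.shape[:2]
        writer = cv2.VideoWriter("intraop_nir.avi",
                                 cv2.VideoWriter_fourcc(*"MJPG"), 30.0, (w, h))
    writer.write(frame)                        # archive frame for later annotation
    cv2.imshow("intraoperative feed", frame)   # live display for the surgical team
    if cv2.waitKey(1) & 0xFF == ord("q"):      # stop capture on 'q'
        break

capture.release()
if writer is not None:
    writer.release()
cv2.destroyAllWindows()
```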
The study received Institutional Review Board approval (1/378/2092) and the protocol was registered at https://clinicaltrials.gov (NCT04220242).
Computer modelling
Computer algorithms were developed to distinguish the varying patterns of ICG perfusion through cancerous, benign, and normal tissues by estimating biophysical NIR intensity time-series parameters before classification (FIG. 1 and APPENDIX S1, SUPPLEMENTARY MATERIAL)14. In brief, following video capture and tissue annotation by the surgical team, multiple regions of interest (ROIs) were chosen from each surgical field of view (FOV), covering as much of the tumour area as possible while avoiding areas of obvious surface haemorrhage or oedema. A specifically developed video-tracking algorithm was applied to compensate for movements of the hand-held camera, patient respirations, and gas pressure variance while capturing time-varying pixel intensities representing ICG brightness within the ROIs across consecutive video frames (VIDEOS S1–S4, SUPPLEMENTARY MATERIAL); a simplified tracking sketch is given below. The extracted NIR intensity data were then fitted to a parametric curve derived from a biophysical model, constructed to capture both the inflow behaviour and the exponential decay of the ICG bolus. Machine learning classification models were trained, using supervised learning on a library of archival videos from cases with known pathology results, to differentiate healthy tissue from tumour within each ROI and to report confidence scores15. After training, the model assigns a pathology label to unseen cases, under the assumption that the learned patterns hold more broadly.
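As a rough illustration of the kind of motion compensation described (the study's actual tracker is bespoke and detailed in Appendix S1), the sketch below uses pyramidal Lucas–Kanade optical flow to follow an ROI across frames and records its mean NIR intensity per frame. All function and variable names here are illustrative assumptions.

```python
import cv2
import numpy as np

def track_roi_intensity(video_path, roi):
    """Follow one ROI across frames with Lucas-Kanade optical flow and
    return its mean pixel intensity per frame (a crude NIR time series).
    `roi` is (x, y, w, h) in the first frame; purely illustrative."""
    cap = cv2.VideoCapture(video_path)
    ok, frame = cap.read()
    if not ok:
        raise IOError("could not read video")
    prev_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    x, y, w, h = roi
    # Seed trackable corner points inside the ROI only.
    mask = np.zeros_like(prev_gray)
    mask[y:y + h, x:x + w] = 255
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=50,
                                  qualityLevel=0.01, minDistance=5, mask=mask)
    intensities = [prev_gray[y:y + h, x:x + w].mean()]

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        new_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
        good_new = new_pts[status.flatten() == 1]
        good_old = pts[status.flatten() == 1]
        if len(good_new) == 0:
            break  # tracking lost; a real pipeline would re-detect features
        # Shift the ROI by the median point displacement (robust to outliers
        # from respiration or instrument movement).
        dx, dy = np.median(good_new - good_old, axis=0).ravel()
        x, y = int(round(x + dx)), int(round(y + dy))
        intensities.append(gray[y:y + h, x:x + w].mean())
        prev_gray, pts = gray, good_new.reshape(-1, 1, 2)

    cap.release()
    return np.array(intensities)
```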
Schematic of process for classifier design
Each step of the pipeline assembles a data set from the corpus of multispectral videos in order to train a supervised classification algorithm, which is then evaluated using a leave-one-out test framework. Details of both the pipeline and the parametric curve derived from the biophysical model are given in APPENDIX S1, SUPPLEMENTARY MATERIAL. NIR, near-infrared.
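The exact parametric form used by the authors is given in Appendix S1. Purely as an illustration, the sketch below fits a generic first-pass bolus model (exponential wash-in multiplied by exponential decay after an arrival delay) to an ROI intensity trace with SciPy; the model form, parameter names, and starting values are all assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def bolus_model(t, baseline, amp, t0, tau_in, tau_out):
    """Generic first-pass ICG curve: after arrival time t0, intensity rises
    with time constant tau_in and decays exponentially with tau_out.
    An illustrative stand-in for the model described in Appendix S1."""
    dt = np.clip(t - t0, 0.0, None)  # flat baseline before bolus arrival
    return baseline + amp * (1.0 - np.exp(-dt / tau_in)) * np.exp(-dt / tau_out)

def fit_perfusion_features(t, intensity):
    """Fit the curve and derive interpretable perfusion features."""
    p0 = [intensity.min(), np.ptp(intensity),
          t[np.argmax(intensity)] / 2.0, 10.0, 100.0]  # crude starting point
    bounds = ([0, 0, 0, 1e-3, 1e-3], [np.inf] * 5)
    popt, _ = curve_fit(bolus_model, t, intensity, p0=p0, bounds=bounds)
    baseline, amp, t0, tau_in, tau_out = popt
    fitted = bolus_model(t, *popt)
    return {
        "time_delay": t0,             # an 'absolute' feature
        "uptake_rate": 1.0 / tau_in,  # a 'rate' feature
        "washout_rate": 1.0 / tau_out,
        "peak_intensity": fitted.max(),
    }
```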
Statistical analysis
Patient-level variations were observed in the data and were addressed by normalizing the features of every ROI with respect to a healthy reference region in the same patient, to create a prediction model that generalizes better to unseen patients. A distinction was made between ‘rate’ features (for example, uptake/washout of ICG), normalized as the ratio between the suspicious and healthy sample regions, and ‘absolute’ features (for example, time delay), normalized as the difference between the feature value and the reference value of the same feature. Given the cohort size limitations, model discrimination was assessed by leave-one-out (LOO) cross-validation, studying region-level and patient-level classification accuracy against pathology results, factoring in error rates. Kolmogorov–Smirnov testing evaluated differences in feature distributions.
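A minimal sketch of the rate-versus-absolute normalization described above follows; the specific feature names are assumptions for illustration.

```python
# Illustrative per-patient normalization against a healthy reference region.
RATE_FEATURES = {"uptake_rate", "washout_rate"}       # normalize by ratio
ABSOLUTE_FEATURES = {"time_delay", "peak_intensity"}  # normalize by difference

def normalize_roi(roi_features, reference_features):
    """Normalize one ROI's features against the same patient's healthy
    reference region so that values are comparable across patients."""
    normalized = {}
    for name, value in roi_features.items():
        ref = reference_features[name]
        if name in RATE_FEATURES:
            normalized[name] = value / ref   # ratio for rate features
        elif name in ABSOLUTE_FEATURES:
            normalized[name] = value - ref   # difference for absolute features
        else:
            normalized[name] = value
    return normalized
```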
RESULTS
Twenty-four patient videos (11 with cancer) (TABLE 1) were studied. In total, 526 ROIs (range 11–40 per patient) were selected for analysis from the videos. NIR intensities were extracted by tracking ROIs within each video, focusing on the initial ICG wash-in period of 100–300 s (FIG. 2). Several filtering rules ensured that the modelled physical parameters were meaningful (see APPENDIX S1, SUPPLEMENTARY MATERIAL). The data of four patients (3 with cancer) were thereby excluded owing to lack of parametric fit; the reasons were excessive camera movement, absence of a sufficient control ROI (normal rectal mucosa) within the FOV, or an atypical tissue perfusion pattern, perhaps because of previous multiple biopsies or radiotherapy. The resulting data set used for analysis had 435 ROI profiles from 20 patients (8 with cancer), each with 12 perfusion-characterizing features and balanced outcomes (198 benign, 124 cancer, 181 normal). Several classifier configurations were tested, with similar classification performance; results are reported for a gradient-boosted tree ensemble (FIG. 3).
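As a sketch of the kind of evaluation described (the authors' exact classifier configuration is not specified beyond a gradient-boosted tree ensemble and a leave-one-out framework), the snippet below runs leave-one-patient-out cross-validation with scikit-learn; the data loader and array names are hypothetical placeholders.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import LeaveOneGroupOut, cross_val_predict

# Placeholders: X is the (n_rois, 12) matrix of normalized perfusion
# features, y the per-ROI pathology label (benign/cancer/normal), and
# patient_ids maps each ROI to its patient so that all ROIs from one
# patient are held out together.
X, y, patient_ids = load_roi_dataset()  # hypothetical loader

clf = GradientBoostingClassifier()
predictions = cross_val_predict(clf, X, y,
                                groups=patient_ids,
                                cv=LeaveOneGroupOut())

roi_accuracy = np.mean(predictions == y)  # ROI-level LOO accuracy
print(f"LOO ROI accuracy: {roi_accuracy:.1%}")
```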
Table 1

| | Overall cohort (n = 24) | Benign group (n = 13) | Cancer group (n = 11) |
| --- | --- | --- | --- |
| Age (years)* | 69 (53–93) | 74 (58–90) | 71 (53–93) |
| Sex ratio (M : F) | 16 : 8 | 9 : 4 | 7 : 4 |
| Pathology | – | Adenoma (high-grade dysplasia in 2) | Invasive adenocarcinoma |
| Diameter of lesion (mm)* | 40 (13–120) | 45 (13–120) | 25 (9–90) |

*Values are median (range). Four patients (3 with cancer) were excluded owing to lack of parametric fit; reasons for exclusion included camera movement, an atypical tissue perfusion pattern, perhaps because of multiple previous biopsies (1 patient) or previous radiotherapy (1 patient), and absence of a sufficient control region of interest (normal rectal mucosa) within the field of view.
The mean LOO accuracy (percentage of ROIs correctly predicted in unseen patients) was 86.4 per cent, indicating that the software tool was discriminative. At patient level, the system correctly diagnosed 19 of 20 patients (95 per cent), with a sensitivity of 100 per cent and a specificity of 92 per cent. The one case in which the system incorrectly predicted cancer may have been driven by patient-specific variations not reflected in the training set, suggesting that greater patient numbers would further improve classification accuracy.
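The paper does not state how ROI-level predictions were pooled into a patient-level diagnosis. One plausible rule, shown below purely as an assumption, is a majority vote over each patient's suspicious ROIs, scored against pathology for sensitivity and specificity.

```python
import numpy as np

def patient_level_diagnosis(roi_predictions, patient_ids):
    """Pool ROI-level predictions into one call per patient.
    Decision rule (an assumption, not the paper's stated method):
    a patient is labelled cancer if the majority of their suspicious
    ROIs are predicted as cancer."""
    diagnoses = {}
    for pid in np.unique(patient_ids):
        preds = roi_predictions[patient_ids == pid]
        diagnoses[pid] = (preds == "cancer").mean() > 0.5
    return diagnoses

def sensitivity_specificity(predicted, truth):
    """predicted/truth: dicts of patient id -> bool (cancer yes/no)."""
    tp = sum(predicted[p] and truth[p] for p in truth)
    tn = sum(not predicted[p] and not truth[p] for p in truth)
    fp = sum(predicted[p] and not truth[p] for p in truth)
    fn = sum(not predicted[p] and truth[p] for p in truth)
    return tp / (tp + fn), tn / (tn + fp)
```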
Notably, the model was applied intraprocedurally in real time in two patients, with data acquisition, AI analysis, and tissue classification completed within 10 min (VIDEO S5, SUPPLEMENTARY MATERIAL).
DISCUSSION
Key surgical decisions are traditionally made by human visual judgements, which assume a biologically static FOV during the time frame of observation. Targeted agents for cancer imaging currently under trial adhere rigidly to this paradigm, in the main being administered systemically before surgery, with surgery scheduled for when maximum stable contrast between tumour and other tissues exists. Often this timing is unpredictable, can take days, and false positives still occur10. Clinical usefulness is further limited by dosing practicalities, scheduling vagaries, and patient-to-patient as well as cancer-to-cancer variance.
An alternative approach is to image, and use AI to analyse, the dynamic perfusion of the exogenous substance to reveal tissue-specific patterns. Reliance on fluorophore clearance pharmacokinetics is overcome by imaging early and continuously during surgery, with data interpretation by AI algorithms. Vascular architecture differs significantly between neoplastic and normal tissue, becoming increasingly disorganized with frank malignancy16 (observable in vivo with dynamic contrast-enhanced CT17). Within the cancer microenvironment, cell organization is chaotic, with apoptosis, angiogenesis, increased interstitial pressure, abnormal extracellular matrix, and inflammatory/immune cell infiltrates. Cumulatively, these anomalies dictate agent perfusion through tissues, causing temporal variations between cancerous and normal tissues during the first-pass wash-in phase18.
This early experience report describes the first realization of this concept using AI. The overall patient classification score supplemented expert human clinical impression in this series, and has applications as a real-time decision-support tool in both diagnosis (at initial endoscopy, as well as for unforeseen lesions encountered during surgery) and therapy (indicating the limits of early-stage cancers and identifying metastatic deposits). Larger training sets covering wider patient variation, and extension of the visualization period to include the wash-out phase, should improve the accuracy of tissue classification by AI algorithms. Importantly, the discriminative detection of accumulating fluorescence is applicable to other fluorophores, with increased and prolonged peritumoral concentration likely to accentuate differences between tissues19,20. Furthermore, as the underlying biological processes are not specific to colorectal cancer, the findings are relevant to other cancers and metastases. The next stages of this work include expanding tissue classification from operator-selected ROIs to the entire FOV. Additionally, an AI heat-map display of the classification results for the surgical team, and further fluorescence data-mining via AI for patient-specific surgery-guiding information, are envisaged.
ACKNOWLEDGEMENTS
This work was supported by funding from the Irish Government Department of Business, Enterprise and Innovation’s Disruptive Technology Innovation Fund.
Data collected for this study, including de-identified individual participant data and related documents such as the study protocol and informed consent form, will be available with publication and with investigator support, for up to 12 months after publication, following approval of a methodologically sound proposal for collaboration and a signed data access agreement.
Disclosure. R.C. is named on a patent filed in relation to processes for visual determination of tissue biology and has received speaker fees from Stryker, consultancy fees from Touch Surgery and DistalMotion, and research funding from Intuitive. D.F.O.S. has a financial interest in patents filed and granted relating to NIR fluorophores and processes for visual determination of tissue biology. J.P.E., P.M.A., R.N., and S.Z. are full-time employees of IBM Research, a division of IBM, which provides technical products and services worldwide to government, healthcare, and life-sciences companies. The authors hold, and have filed, patents concerning technologies related to the subject matter of this paper. F.K. and H.A.K. have no conflict of interest to report.
Supplementary material
SUPPLEMENTARY MATERIAL is available at BJS online.