Review Article

Use of artificial intelligence in breast surgery: a narrative review

Ishith Seth1,2, Bryan Lim1,2, Konrad Joseph3, Dylan Gracias4, Yi Xie1, Richard J. Ross1,2, Warren M. Rozen1,2

1Department of Plastic Surgery, Peninsula Health, Melbourne, Victoria, Australia; 2Central Clinical School at Monash University, The Alfred Centre, Melbourne, Victoria, Australia; 3Department of Surgery, Port Macquarie Base Hospital, New South Wales, Australia; 4Department of Surgery, Townsville Hospital, Queensland, Australia

Contributions: (I) Conception and design: I Seth, B Lim, K Joseph, D Gracias, Y Xie; (II) Administrative support: RJ Ross, WM Rozen; (III) Provision of study materials or patients: I Seth, B Lim, K Joseph, D Gracias, Y Xie; (IV) Collection and assembly of data: I Seth, B Lim, K Joseph, D Gracias, Y Xie; (V) Data analysis and interpretation: I Seth, B Lim, K Joseph, D Gracias, Y Xie; (VI) Manuscript writing: All authors; (VII) Final approval of manuscript: All authors.

Correspondence to: Ishith Seth, BSc, BBiomed(Hons), MD, MS. Central Clinical School at Monash University, The Alfred Centre, 99 Commercial Rd, Melbourne, Victoria, 3004, Australia; Department of Plastic Surgery, Peninsula Health, Melbourne, Victoria, Australia. Email: ishithseth1@gmail.com.

Background and Objective: Recent years have witnessed tremendous advances in artificial intelligence (AI) technologies. Breast surgery, a subspecialty of general surgery, has notably benefited from these technologies. This review aims to evaluate how AI has been integrated into breast surgery practices, to assess its effectiveness in improving surgical outcomes and operational efficiency, and to identify potential areas for future research and application.

Methods: Two authors independently conducted a comprehensive search of PubMed, Google Scholar, EMBASE, and Cochrane CENTRAL databases from January 1, 1950, to September 4, 2023, employing keywords pertinent to AI in conjunction with breast surgery or cancer. The search was limited to English-language publications; relevance was determined by screening titles, abstracts, and full texts, followed by review of the references of the included articles. The review covered a range of studies illustrating the applications of AI in breast surgery, from lesion diagnosis to postoperative follow-up. Publications focusing specifically on breast reconstruction were excluded.

Key Content and Findings: AI models have preoperative, intraoperative, and postoperative applications in the field of breast surgery. Using breast imaging scans and patient data, AI models have been designed to predict the risk of breast cancer and determine the need for breast cancer surgery. In addition, using breast imaging scans and histopathological slides, models have been used to detect, classify, segment, grade, and stage breast tumors. Preoperative applications include patient education and the display of expected aesthetic outcomes. Models have also been designed to provide intraoperative assistance for precise tumor resection and margin status assessment. AI has likewise been used to predict postoperative complications, survival, and cancer recurrence.

Conclusions: Further research is required to move AI models from the experimental stage to actual implementation in healthcare. With the rapid evolution of AI, additional applications are expected in the coming years, including the direct performance of breast surgery. Breast surgeons should stay abreast of advances in AI applications in breast surgery to provide the best care for their patients.

Keywords: Artificial intelligence (AI); breast surgery; breast imaging


Submitted Oct 09, 2023. Accepted for publication Feb 21, 2024. Published online Mar 22, 2024.

doi: 10.21037/gs-23-414


Introduction

Background

The concept of computer systems imitating human intelligence was first conceived by Turing in 1950 (1). Artificial intelligence (AI) refers to a computer system or machine that can solve problems usually requiring human intelligence. Early generations executed simple algorithms of ‘if, then’ rules, but subsequent developments in technology and coding have resulted in complex systems that can operate similarly to human intelligence, including the ability to learn from past errors and cross-check results (1-3). Such capacity, coupled with fast processing times and no requirement for rest, has created a formidable tool at the heart of the fourth industrial revolution.

Machine learning (ML) is a subset of AI in which the algorithm improves its performance (mode of analysis and patterns) by learning from new datasets without being explicitly re-programmed. The data used for learning may exist in the form of imported features (e.g., breast lesion density) or as raw data (e.g., radiological images). Deep learning (DL) is a subset of ML that involves stacking multiple algorithmic components into layers, each feeding into the next, so that the model operates on raw data and self-learns high-level features. DL models include convolutional, recurrent, and artificial neural networks (CNN, RNN, and ANN), generative adversarial networks (GAN), deep belief nets, and autoencoders (4-9). CNN are designed specifically to analyze and extract features from images, as seen in Figure 1 (10). Large language models (LLMs) are another type of AI that utilizes natural language processing methods to synthesize user inputs and generate human-like text (11-13). They have been used to aid diagnosis, support medical research, and improve hospital workflow (14-20).

Figure 1 Subsets of artificial intelligence. CNN, convolutional neural networks; RNN, recurrent neural networks; ANN, artificial neural networks; GAN, generative adversarial networks; DBN, deep belief network.

Rationale and knowledge gap

AI models are rapidly evolving and represent one of the most significant developments in information processing and problem solving in health care over the past 50 years (21). As widespread health data collection creates enormous volumes of information, increasingly complex systems are required to process these data. AI models are currently applied to optimize different aspects of patient care, including disease risk prediction, diagnosis, treatment decision-making, predicting treatment response, and predicting survival (2,4,5,22-24). By operating on large volumes of data with high precision, AI models offer distinct advantages over unassisted human performance. A recent publication elucidated the applications of AI technologies within breast reconstructive procedures, highlighting the promising role of AI in advancing breast reconstruction techniques (25). However, the authors note that AI algorithms require refinement through cross-disciplinary partnerships that prioritize high-quality datasets. The scope of breast surgery is much greater than reconstruction alone, and further research is needed to characterize the current and prospective implementation of AI in the field.

Objective

Breast cancer is increasing in prevalence and is the leading cause of cancer death among women (26-29). Breast surgery can be used as a prototypical example for the application of AI in healthcare. It is a field comprising population health, risk prediction, diagnostic tests, medical and surgical treatments and integrated health systems and economics, all of which can directly benefit from various mechanisms of AI (30,31). We performed this review aiming to summarize the current literature findings on the application of AI in diagnosing breast lesions as well as preoperative, intraoperative, and postoperative applications of AI in breast surgery. We present this article in accordance with the Narrative Review reporting checklist (available at https://gs.amegroups.com/article/view/10.21037/gs-23-414/rc).


Methods

PubMed, Google Scholar, EMBASE, and Cochrane CENTRAL databases were searched by two authors for relevant studies using the keywords (“artificial intelligence” [Mesh] OR “machine learning” [Mesh] OR “deep learning” [Mesh]) AND (“breast surgery” [Mesh] OR “breast cancer” [Mesh]) from January 1, 1950, to September 4, 2023. Relevant English-language publications were included in our review without further publication time constraints. Publication relevance was determined by title and abstract screening followed by full-text screening. In addition, the reference lists of the included publications were screened for further relevant studies. We included studies that discussed the applications of AI in different aspects of breast surgery, from breast lesion diagnosis to postoperative follow-up (Table 1). Publications focusing specifically on breast reconstruction were excluded from this review.

Table 1

Search strategy for this review

Item Specification
Date of search: 13/9/2023
Databases searched: PubMed, Google Scholar, EMBASE, Cochrane CENTRAL
Search terms used: #1 (“artificial intelligence” [Mesh] OR “machine learning” [Mesh] OR “deep learning” [Mesh]); #2 (“breast surgery” [Mesh] OR “breast neoplasm” [Mesh]); #1 AND #2
Timeframe: 1/1/1950 to 4/9/2023
Inclusion and exclusion criteria: studies that discussed any application of artificial intelligence in breast surgery were included in this review; studies reported in a language other than English were excluded
Selection process: I.S., B.L., K.J., D.G. and Y.X. conducted the selection, searched and discussed which studies were relevant until consensus was reached

Results

AI applications in breast lesion diagnosis

Recent advances in CNN-based computer vision algorithms and growing training datasets have allowed AI to be used in medical imaging and histopathology for breast pathologies (32-35). Such systems can not only create streamlined workflows for reporting clinicians but may also improve diagnostic accuracy. This is especially true in large population breast screening programs (6,7,33,34). Modern feedforward ANN use multilayer perceptrons to analyze images by separating them into different color channels, processing the pixel-level data with nonlinear functions, and outputting probability distributions (36). As such, these algorithms show promise in detecting lesions not easily visible to human observers.
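As a minimal, illustrative sketch only (none of the reviewed studies are tied to this code, and the architecture, input size, and class labels are assumptions), the following Python/PyTorch snippet shows the pipeline described above: a raw grayscale image patch passes through stacked nonlinear layers and is mapped to a probability distribution over benign and malignant classes.

import torch
import torch.nn as nn

class PatchClassifier(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        # Stacked convolutional layers learn low- to high-level image features.
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        # A fully connected head maps the pooled features to class scores.
        self.classifier = nn.Linear(32 * 16 * 16, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = self.features(x)                    # (N, 32, 16, 16) for a 64x64 input
        logits = self.classifier(feats.flatten(1))
        return torch.softmax(logits, dim=1)         # probability distribution over classes

probs = PatchClassifier()(torch.randn(4, 1, 64, 64))  # four hypothetical 64x64 grayscale patches
print(probs.shape)  # torch.Size([4, 2]); each row sums to 1

In practice, such a network would be trained on labeled image patches; the point here is only the structure of stacked layers ending in a probability output.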

Digital mammography (DM)

DM is a breast imaging technique that produces 2-dimensional radiographic images. This imaging modality is used for breast cancer screening because of its feasibility and efficacy in detecting asymmetries, distorted architecture, and abnormal calcifications in breast lesions. Nevertheless, DM image interpretation is difficult and requires extensive experience (37). Smaller lesions can be missed because of obscuration by the overlying breast tissue. This is encountered mostly in younger women, who have high breast tissue densities due to higher concentrations of fibroglandular tissue. Therefore, DM images are taken in both a mediolateral oblique view and a craniocaudal view (38).

The application of AI in DM image interpretation was introduced in the 1990s and has since evolved with the advances of DL (39-42). DL-based models such as CNNs autonomously learn to identify specific imaging features that differentiate benign breast lesions from malignant ones (43-45). Several studies have evaluated the efficacy of AI-based systems in detecting and classifying breast lesions on DM images and have found that AI-based DM image evaluation is noninferior, and may be superior, to radiologists (39,40,42,46-49). A study by Romero-Martín et al. evaluated the performance of DL-based systems in DM image assessment; their findings suggest that DL-based systems have a sensitivity equivalent to that of the best standard (radiologists) in detecting and classifying breast lesions. In addition, DL methods have been shown to decrease over-investigation by reducing breast imaging recall rates (subsequent imaging to evaluate a suspicious lesion) (48). In another study, Burhenne et al. found that subsequent application of AI detected the missed finding in 77% of false-negative mammographic images (50). Thus, AI applications in mammography can improve the efficiency of breast cancer screening programs with reduced need for human effort (51,52). Moreover, AI-based models have proven efficacious in predicting the future risk of developing breast cancer by utilizing data collected from DM images (53,54).

Digital breast tomosynthesis (DBT)

DBT is an X-ray-based imaging modality that takes images from different angles to create a partial tomographic 3-dimensional (3D) image, minimizing the problem of tissue superposition (55). However, the complexities associated with DBT result in difficult image interpretation, and longer reading times when compared to DM (56). This has represented another area for AI models to improve efficiency and accuracy.

When evaluated against the best available standard (radiologists), AI-based DBT image assessment models show non-inferior efficacy in detecting and classifying breast lesions, with reduced false-negative rates (39,46-48,57). AI-based DBT interpretation systems are cost-effective, as they improve radiologists’ performance and reduce DBT reading time (58,59). However, in contrast to DM evaluation, AI-based DBT image evaluation models can result in higher recall rates for further evaluation (48). This may be because DL models can pick up trivial microcalcifications in breast tissue (60).

There are differences in the utility of different AI models when it comes to DBT analysis. DL models that use multiple images as input to compare DBT images show better performance in detecting and classifying breast masses than their single-view counterparts (42,61-64). This benefit extends to techniques that use multiple views of the ipsilateral breast as input (64). In 2023, Ren et al. proposed a multi-view detection framework that adaptively refines single-view detection scores by matching lesions between the two ipsilateral screening views of each breast (65). Their framework, developed from 8,034 DBT cases, improved screening performance without significantly increasing analysis run-time. Another subset of DL, GAN, can generate new images from an input set of images. This has been successfully applied in breast imaging to generate DM images from existing DBT images, so that more patient data are acquired without additional radiation exposure (66).
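To make the multi-view idea concrete, the following toy Python sketch (our illustration only, not Ren et al.’s actual framework; the matching rule, tolerance, and score fusion are assumptions) refines single-view detection scores by matching candidate lesions between the two ipsilateral screening views.

from typing import List, Tuple

# Each candidate is (approximate depth from the nipple in mm, single-view detection score).
Candidate = Tuple[float, float]

def refine_scores(cc: List[Candidate], mlo: List[Candidate],
                  tolerance_mm: float = 10.0) -> List[float]:
    """Return refined scores for the craniocaudal-view candidates."""
    refined = []
    for pos_cc, score_cc in cc:
        # Look for a mediolateral-oblique candidate at a compatible depth.
        matches = [score for pos, score in mlo if abs(pos - pos_cc) <= tolerance_mm]
        if matches:
            refined.append((score_cc + max(matches)) / 2)  # reinforce lesions seen on both views
        else:
            refined.append(score_cc)                        # unmatched candidates keep their score
    return refined

# Example: one lesion seen on both views, one seen only on the CC view.
print(refine_scores(cc=[(42.0, 0.5), (80.0, 0.3)], mlo=[(45.0, 0.75)]))  # [0.625, 0.3]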

Images imported into AI-based diagnostic models are those suspected of containing lesions; these images are usually extracted by hand from entire DM or DBT scans (43). AI models can support radiologists in their work by preselecting suspicious lesions for subsequent assessment (51,52). These models can even calculate the regional probability of cancer within a DM or DBT scan (38). Accordingly, complete DM and DBT scans can be used as input to DL image assessment models (67-69). In 2017, Kooi et al. trained a CNN on a dataset of 45,000 mammographic images and found it non-inferior to radiologists at triaging images, and superior to a computer-aided detection model that relied on human input (43).

Ultrasound (US)

US of the breast is an imaging modality that depends on sending sound waves through the breast tissue and simultaneously detecting the backscattered waves to construct the image. Thus, US carries no risk of ionizing radiation. It is, however, an operator-dependent imaging modality that can be difficult to read. The images are displayed as they are generated, and breast US should therefore be performed by an expert for direct interpretation (69). Yet, resource constraints often prevent a radiologist’s expertise from being available at the time of imaging. This represents another opportunity for AI to reduce burden on healthcare systems.

DL was initially used in conjunction with US for classifying breast lesions as benign or malignant (68-72). Studies on breast lesion detection and classification using DL from US images have reported high accuracy when the input is full US images, and much higher accuracy when the input consists of US images of suspicious lesions (71,73-76). To classify US images of breast lesions, radiologists use the Breast Imaging Reporting and Data System (BI-RADS), which incorporates the probability of lesion malignancy and the recommended management (77). However, inter-observer variability can be high, and misclassification can result. DL models have been applied to effectively assist radiologists in choosing the appropriate BI-RADS class (78,79). DL systems have also been implemented for image segmentation of breast lesions (delineating lesion size and extent) (80-82). Moreover, DL applications with US have broadened to include predicting the molecular subtype of malignant breast lesions; this has been investigated for predicting triple-negative, HER2 (+), and HR (+) subtypes with high efficacy (1,78).

AI models increase radiologists’ classification specificity in cases where the radiologist has already detected a lesion (83-85). Some breast lesions could, however, be missed by the radiologist (86). Another proposed method is the integration of an AI system into the US device so that, as the examination is performed, the system directly analyzes the constructed images and provides timely detection of breast lesions (87).

Another application of DL in breast US imaging is the assessment of axillary lymph nodes for metastases from malignant lesions. DL models have superior accuracy compared to radiologists in detecting suspicious axillary lymph nodes for biopsy (88). DL models have also been used to predict axillary lymph node metastasis using the features of the breast lesion, without the need for axillary US images (89). One such model aids in extracting relevant information by retaining only the intermediate lesion position in the images, and it uses random horizontal flipping, elastic transformation, and random cropping during training to simulate varied imaging scenarios (89). Compared with radiologists, DL models display comparable sensitivity and specificity (90). Such models could be further improved and implemented in US imaging to reduce the time needed for axillary lymph node imaging.
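As an illustration of the augmentation strategy described above (our sketch, assuming a Python/torchvision pipeline; the cited study’s exact transforms and parameters are not reproduced), such augmentation can be composed as follows.

from torchvision import transforms

# The input frame is assumed to be a PIL image larger than 224x224 pixels.
augment = transforms.Compose([
    transforms.RandomHorizontalFlip(p=0.5),   # mirror the ultrasound frame
    transforms.ElasticTransform(alpha=50.0),  # simulate soft-tissue deformation
    transforms.RandomCrop(size=(224, 224)),   # vary the field of view
    transforms.ToTensor(),
])
# Applied on the fly during training (e.g., augmented = augment(frame)), so the
# model sees a slightly different version of each frame in every epoch.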

Another model was designed to predict response to neoadjuvant chemotherapy (NAC) using only the initial lesion US image (91). GAN have been applied in US imaging for reconstructing high-resolution images using low-resolution ones, for reducing the required time for 3D image acquisition, and for generating US images of the breast with and without lesions for educational purposes (for radiologists and DL models) (92,93).

Magnetic resonance imaging (MRI)

MRI of the breast depends on exciting water molecules using a strong magnetic field and short-pulsed radio waves. When the water molecules relax back to their ground state, radio waves are emitted; these are detected to create the 3-dimensional MR image. When intravenous contrast is administered, a 4-dimensional image is created, with time captured as a fourth dimension. It is worth mentioning that MRI is the most sensitive breast cancer imaging modality (94).

Several AI models have been applied to breast MRI for breast lesion detection, classification, and segmentation. Here, too, AI models show superior specificity and comparable sensitivity when compared to the best standard (radiologists) (95-98). Models have also been designed and successfully applied to predict the molecular subtype of breast cancer based on MRI image data (99-103). In 2021, Liu et al. evaluated the ability of a novel CNN architecture to predict 5-year cancer recurrence from MRI of breast lesions. The AI was able to identify image features relevant to prognostic outcomes and increased the accuracy of tumor classification (103).

As with US, DL models have been designed to detect axillary lymph node metastasis using MRI scans. These models have shown superior accuracy in detecting pathological axillary lymph nodes when compared to radiologists (104-106). AI models have also been used to predict the NAC treatment response of breast cancer; some models use pre- and post-treatment MRI scans, whereas others use only the initial MRI scans (107-110). GAN have been applied in breast MRI to normalize variations in MRI intensity and noise distribution between different brands of MRI machines (111). They have also been applied to minimize issues that arise from heterogeneous fat suppression (112).

Positron emission tomography (PET)

PET and scintigraphy scans are nuclear medicine imaging modalities that use radionuclide-attached metabolites circulating in the body. When radionuclides decay, photons are emitted, the detection of which can be used to construct 3D PET and 2D scintigraphy images. Thus, nuclear medicine scans represent the metabolic activity of tissues rather than anatomical structure alone (112).

In breast cancer, PET scans are used for cancer staging. DL has been used to assist radiologists in detecting axillary lymph node metastasis on PET scans (113). In 2021, Li et al. found that AI assistance considerably improved the diagnostic accuracy of clinicians in a retrospective trial involving 414 pre-procedure PET scans of the axilla from patients with biopsy-proven breast cancer (113). The sensitivity of the radiologists was improved, but their specificity remained unaffected. CNN have been similarly applied to detect distant breast cancer metastases from scintigraphy scans, displaying high accuracy (114). Another use of DL in conjunction with PET scans is the evaluation of whole-body tumor burden as measured by the metabolic tumor volume; however, DL models have not achieved satisfactory sensitivity in this application (115). In 2020, Choi et al. investigated the applicability of DL in predicting tumor response to NAC using PET scans as input, and their results showed improved performance in comparison with conventional predictors (116).

Thermal imaging

AI has also been applied to other proposed imaging modalities, including thermal imaging. On digital infrared imaging, thermal activity is increased in the breast tissues surrounding a malignant tumor. DL models have demonstrated high accuracy in detecting breast tumors from digital infrared images (117). The benefit of DL integration with thermal imaging extends to forecast modeling, where DL has been successfully applied to predict personal breast cancer risk (118).

Pathology

The gold standard for diagnosing breast cancer is pathological evaluation of a biopsy (119). This allows for classifying and grading breast cancer as well as detecting lymph node metastasis, planning treatment, evaluating resection margin status, and predicting patients’ prognosis (120-122). However, pathological evaluation of microscopic biopsies carries the risk of inter-observer variability.

Applying AI models to the analysis of microscopic images can assist pathologists in achieving faster, more precise, and reproducible breast cancer diagnoses (123,124). By reducing the workload on pathologists, AI integration can help compensate for resource strain within healthcare systems (12,125,126). In 2022, Cheng et al. applied CNN and RNN models to the pathological classification of breast fibroepithelial lesions into benign fibroadenomas and phyllodes tumors; these models could accurately differentiate between and classify lesion types using whole-slide images (127). AI-based models have also exhibited promising performance in assessing the risk of invasion in breast ductal carcinoma in situ (DCIS) (128-130).

Preoperative applications of AI in breast surgery

Decision-making in cancer treatment is complex, as it involves a diversity of data that must be considered (131). Moreover, with advances in medicine, new therapeutic options continue to be proposed. Given the large amount of data for consideration and the rapid updates in the field, AI assistance in treatment decision-making would reduce the burden on clinicians and help them revise their treatment decisions (132,133). Bouaud et al. designed a guideline-based decision support system that provides a complete patient care plan and investigated its performance in making treatment decisions for breast cancer patients. Clinicians changed their treatment decisions after reviewing the system’s recommendations in 17% of cases, and the changed decisions were beneficial in 75% of these cases (134). In 2019, Xu et al. also compared the decisions of their decision support system with those of oncologists. The compared decisions were not concordant in 45% of the assessed cases; this nonconcordance was attributed to variations in clinical judgment in 21% of cases, greater adherence of the oncologists to guidelines in 15%, and unavailability of the system-suggested treatment in 5% (135). Another evaluation of a decision support system for breast cancer patients was conducted by Xu et al. in 2020. Their system resulted in a treatment decision change by the physician in 5% of patients and thus higher concordance with breast cancer treatment guidelines. In 63% of these cases, physicians changed their decisions after considering the treatment option recommended by the system; other reasons for change included the system highlighting certain patient factors in 23% of cases and the system’s decision-making logic in 13% of cases (136). Applying ML in decision-making would allow surgeons with low operative volume to make decisions similar to those of the most experienced surgeons, as ML models learn and gain experience with each input (137).

The preferred management option for early-stage breast cancer is breast-conserving surgery with sentinel lymph node biopsy and subsequent radiotherapy (138-140). However, some patients achieve a complete response to neoadjuvant systemic treatment (NAST); for such patients, it may be reasonable to adopt a “watch-and-wait” approach before proceeding to therapeutic surgery (138). For that reason, precise detection of the patient’s response to NAST is necessary to avoid subjecting the patient to unnecessary surgery, while also eliminating the risk of missing residual malignant foci. AI models have been successfully applied in this area to detect responses to NAST using MRI images and pathological specimens. Subsequently, AI models were designed to evaluate patients’ responses to NAST by combining imaging and biopsy findings with patient data. These models showed high accuracy in excluding residual malignant foci in the breast and axilla following NAST and in determining eligibility for breast surgery (141-145).

Another application of AI models is educating breast cancer patients before breast surgery. A randomized controlled trial evaluating the ability of an AI model to educate women about the expected aesthetic outcomes following locoregional breast cancer surgery is currently being carried out. The model is expected to improve women’s satisfaction with breast surgery, improve their psychological well-being, and reduce the need for subsequent plastic surgery (146). An ML model has also been applied to predict the financial burden of breast cancer surgery, and the investigated model showed high prediction accuracy (147).

Intraoperative applications of AI in breast surgery

In breast-conserving surgery, ensuring clear margins is crucial to prevent the recurrence of breast cancer, as malignant foci in the resection margins necessitate subsequent re-excision surgery (148). Hence, intraoperative evaluation of resection margins is of significant value (149,150). Laser Raman spectroscopy (LRS) is an optical imaging technique that generates a biochemical tissue signature by detecting vibrations in molecular bonds; thus, microcalcifications as well as immortalized and transformed cancer tissues can be detected (151-156). In 2021, Kothari et al. developed an ML model integrated with LRS to evaluate resection margins intraoperatively in vivo. This model could rapidly generate multiple tissue-classification models and directly calculate the probability of malignancy in the margins (157). Applying this type of system in breast-conserving therapy could improve resection margin precision and reduce the need for re-excision surgery.
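As a purely illustrative sketch of how such a spectral classifier might be structured (the cited model’s actual architecture is not reproduced here; the data, pipeline, and parameters below are assumptions), a conventional scikit-learn pipeline can map a measured Raman spectrum to a probability of malignancy.

import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

# Hypothetical training data: each row is one Raman spectrum (intensity per
# wavenumber); labels are 1 = malignant margin, 0 = clear margin.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 1024))
y_train = rng.integers(0, 2, size=200)

model = make_pipeline(
    StandardScaler(),                  # normalize spectral intensities
    PCA(n_components=20),              # compress 1,024 wavenumbers into 20 components
    LogisticRegression(max_iter=1000), # linear classifier with probabilistic output
)
model.fit(X_train, y_train)

new_spectrum = rng.normal(size=(1, 1024))
print(model.predict_proba(new_spectrum)[0, 1])  # probability that the margin is malignant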

Postoperative applications of AI in breast surgery

Lymphedema is a devastating condition that can occur immediately following axillary procedures, such as mastectomy with axillary clearance, or up to 20 years thereafter, and it can present with a variety of symptoms (158). In 2018, Fu et al. designed ML models that assess the occurrence of lymphedema following breast surgery based on patient-reported symptoms; the model was tested and demonstrated high accuracy (159). LLMs, such as ChatGPT, are currently the most widely discussed AI tools in medicine, including breast surgery. Lukac et al. concluded that while ChatGPT has potential, its current version is incapable of providing suitable recommendations for patients with primary breast cancer (160). Another potentially devastating complication of axillary clearance is injury to the long thoracic, thoracodorsal, or intercostobrachial nerve, which sometimes must be sacrificed (161-163). AI could potentially be used to determine certain characteristics of breast tumors and axillary lymphadenopathy, making it safer to approach delicate structures such as neurovascular bundles. AI could also theoretically be employed to study patient anatomy from preoperative scans to help predict the risk of intraoperative nerve injury. At the time of writing, however, the authors were unable to find studies dedicated to this topic.

Applications of AI in predicting breast surgery outcomes

van Egdom et al. designed an ML model that uses patient data and breast cancer characteristics to predict patient-reported outcomes postoperatively. However, when investigated, the model could not establish a relationship between the input variables and postoperative patient-reported outcomes (164). ML has, however, been used to effectively predict complications of the abdominal flap donor site following autologous breast surgery. Using these predictions, surgeons can tailor their operative techniques to achieve better outcomes and minimize postoperative burden (165).

About 15% of women with breast cancer experience severe postoperative pain, which can last for years (166,167). Early identification of women susceptible to developing postoperative pain would allow early initiation of medical and psychological treatment for those in need, and avoidance of unnecessary interventions for those less susceptible (168,169). Using ML, Lötsch et al. designed and evaluated a system for predicting persistent pain following breast surgery; the model showed high accuracy in predicting persistent postoperative pain and an even higher negative predictive value (170). Another ML predictive model, designed by Sipilä et al., showed a high negative predictive value but low accuracy (171). In 2020, Juwara et al. designed an ML-derived model for predicting neuropathic pain following breast surgery, which outperformed the traditional prediction model (172).
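As a brief, hypothetical illustration of why negative predictive value matters for such screening models (the numbers below are invented purely for the arithmetic), the metric answers the question: of the patients the model classifies as low risk, how many truly remain pain-free?

from sklearn.metrics import confusion_matrix

y_true = [1, 0, 0, 0, 1, 0, 0, 0, 0, 1]  # 1 = developed persistent pain
y_pred = [1, 0, 0, 0, 0, 0, 0, 0, 0, 1]  # a hypothetical model's predictions

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
npv = tn / (tn + fn)                       # proportion of "low-risk" calls that were correct
accuracy = (tp + tn) / (tp + tn + fp + fn)
print(f"NPV = {npv:.2f}, accuracy = {accuracy:.2f}")  # NPV = 0.88, accuracy = 0.90

A model with high NPV can safely rule out patients who do not need early pain intervention, even when its overall accuracy is modest.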

Identifying women at high risk of recurrence would aid in providing the necessary follow-up and preventing potentially deadly disease progression. Lou et al. designed an ML-derived model that could accurately predict the risk of breast cancer recurrence within ten years of breast surgery (173). Other prediction models provide high accuracy in predicting breast cancer recurrence at three and five years after breast surgery (174,175).

AI has also been applied to predicting survival and mortality following breast cancer surgery. Huang et al. designed and evaluated an ANN model to predict five-year mortality following surgery for breast cancer; the model showed greater accuracy than conventional prediction methods such as the Nottingham prognostic index and breast cancer-specific survival estimates (176-178). An additional ML model was developed by Moncada-Torres et al. in 2021 to predict women’s survival after breast cancer surgery; it was at least as accurate as conventional prediction methods (179).


Discussion

AI technologies are rapidly evolving and gaining interest, and their applications in healthcare are broadening to improve patient outcomes (180). AI-based models learn from data, and their performance therefore improves as more data become available. Breast surgery for benign or malignant breast lesions has markedly benefited from these advances (4,5,12,13). AI systems can rapidly process vast amounts of data, update stored data, and operate logically on complex rules and decision trees. Thus, AI can outperform human cognitive performance in such tasks and could assist healthcare providers in a diversity of tasks related to breast surgery, from breast lesion detection and diagnosis to postoperative detection of breast surgery complications. AI models have also assisted in predicting patients’ response to therapy, postoperative breast appearance, cancer recurrence, and survival (11,132,133,181,182). Most AI models currently approved by the Food and Drug Administration are designed to assist in breast lesion diagnosis through imaging and histopathological evaluation. Various models have been designed to assist in detecting and classifying breast lesions, describing the breast tumor microenvironment and molecular subtype, predicting the risk of breast cancer, and predicting and evaluating treatment response. An AI-based model has been applied in US breast imaging to predict malignant lesion response to NAC using features of the lesion on US before versus after one or two courses of NAC. In addition, some AI models have the capacity to reconstruct or even generate breast images (4,5,13,14). Our search also revealed AI applications aimed at supporting oncologists in treatment decision-making and predicting postoperative outcomes (162,172,173,176,183).

Despite the notable breakthroughs of AI technologies, some limitations are encountered, and highlighting these drawbacks is essential for improving the models. Because AI models’ performance improves as more data are imported, the size of the datasets used for learning matters. For some models, large datasets are not available (as for breast US imaging); these models are therefore insufficiently trained and do not achieve satisfactory performance. To overcome this shortcoming, data could be shared across medical centers. This solution cannot always be pursued because of patient privacy policies, privatized health systems such as that of the USA, and ethical laws regarding the transfer of sensitive patient information (184). Alternative solutions, including federated learning and transfer learning, have been proposed. Federated learning implies sharing the algorithm after it has learned from data, while patients’ data remain within the medical center. Transfer learning refers to learning from different datasets (e.g., US models can learn from DM images) (35,185); a brief illustrative sketch of transfer learning is given below. Special care must always be taken when data are imported to train AI models: poor datasets could lead to inaccuracies (e.g., incorrect tumor diagnoses and inter-observer variability), and various biases could lead to underrepresentation of patient populations. For these reasons, large multi-center, multi-reader datasets are preferred for training AI models (186).

Prediction models that provide clinicians with justification for their predictions offer more comprehensive assistance (187,188). However, it was evident from the results of our search that not all AI models are effective in establishing relationships between variables and predicting outcomes. As computing power and data availability increase, prediction models should incorporate multi-dimensional predictors for stronger prediction evidence. When patients’ physical examination and laboratory data are incorporated alongside their disease characteristics, the model gains a holistic picture and its performance improves. When an AI decision support model was compared with oncologists in terms of adherence to breast cancer treatment guidelines, the oncologists showed better adherence; however, this was owing to the multiple inputs and factors driving the algorithm, which was designed to make decisions based not only on breast cancer treatment guidelines but also on selected literature and information from textbooks (134).

Finally, medicolegal dilemmas surround the application of AI in medical practice. Whether final decisions could be made by AI models, and who would take responsibility for wrong decisions, are questions yet to be answered; this endorses the need for a regulatory body for AI applications in medicine. Furthermore, if AI were proposed to replace humans, ethical issues of job losses would arise. It should be noted that articles with a specific focus on breast reconstruction, an important part of the recuperation process post-mastectomy, were not included in this review. The applications of AI in this domain have been elucidated in prior research; this theme was therefore excluded to maintain our objective of addressing current knowledge gaps.
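The following minimal sketch illustrates the transfer-learning approach mentioned above, assuming a Python/PyTorch and torchvision setting (the pre-trained network, class count, and training details are assumptions of this illustration and are not taken from the reviewed studies): a network pre-trained on a large generic image dataset is reused, its feature extractor is frozen, and only the final layer is retrained on a small breast-imaging dataset.

import torch.nn as nn
from torchvision import models

# Start from a network pre-trained on a large, generic image dataset.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the pre-trained feature extractor so its weights are not updated.
for param in backbone.parameters():
    param.requires_grad = False

# Replace the final layer with a new benign/malignant head; only this layer
# (trainable by default) will be fitted on the small local dataset.
backbone.fc = nn.Linear(backbone.fc.in_features, 2)

trainable = [p for p in backbone.parameters() if p.requires_grad]
print(len(trainable))  # only the new head's weight and bias tensors (2 tensors)

Because only the new head is fitted, even a modest single-center dataset can be used without re-learning low-level image features from scratch.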

Further improvements in AI are anticipated, and AI models are expected to move from the experimental phase to actual implementation in healthcare. In breast lesion biopsy, future applications of AI might allow identification of a small number of atypical cells within otherwise normal breast tissue. Regarding breast surgery, possible preoperative applications of AI involve surgical planning: models could analyze anatomical data to recommend individualized optimal approaches for breast surgery. Moreover, future intraoperative applications of AI might include assistance in timely image analysis for precise tumor resection and intraoperative decision-making. AI-integrated robotic systems, akin to the da Vinci system, that directly perform breast surgery or assist surgeons could also be introduced in the future (3,189,190). Postoperatively, AI could be applied in patient monitoring and follow-up for early detection of breast surgery complications or breast cancer recurrence. As uptake of these technologies increases within healthcare systems, the implications for training new clinicians involved in the surgical management of breast lesions must be considered. Healthcare education in the era of increasing AI integration will be a major topic for research in the coming years. Breast surgeons should stay abreast of the recent advances and applications of AI in their field to provide the best care for their patients (191,192).


Conclusions

AI algorithms are increasingly applied in all aspects of breast surgery. Different AI models were designed and evaluated to assist in breast tumor detection, classification, segmentation, staging, and grading. Preoperatively, AI models were applied in determining the need for breast cancer surgery and educating women. Intraoperatively, they enhanced surgical precision in tumor resection. Postoperatively, AI was able to predict breast surgery complications, survival, and cancer recurrence. However, more research is required to move AI from the experimental phase to widespread implementation in healthcare. Improved, novel applications of AI are already in development, and breast surgeons should stay updated to provide the best care for their patients.


Acknowledgments

Funding: None.


Footnote

Reporting Checklist: The authors have completed the Narrative Review reporting checklist. Available at https://gs.amegroups.com/article/view/10.21037/gs-23-414/rc

Peer Review File: Available at https://gs.amegroups.com/article/view/10.21037/gs-23-414/prf

Conflicts of Interest: All authors have completed the ICMJE uniform disclosure form (available at https://gs.amegroups.com/article/view/10.21037/gs-23-414/coif). I.S. serves as an unpaid editorial board member of Gland Surgery from September 2023 to August 2025. W.M.R. serves as an unpaid Associate Editor of Gland Surgery from March 2023 to February 2028. The other authors have no conflicts of interest to declare.

Ethical Statement: The authors are accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved.

Open Access Statement: This is an Open Access article distributed in accordance with the Creative Commons Attribution-NonCommercial-NoDerivs 4.0 International License (CC BY-NC-ND 4.0), which permits the non-commercial replication and distribution of the article with the strict proviso that no changes or edits are made and the original work is properly cited (including links to both the formal publication through the relevant DOI and the license). See: https://creativecommons.org/licenses/by-nc-nd/4.0/.


References

  1. Kaul V, Enslin S, Gross SA. History of artificial intelligence in medicine. Gastrointest Endosc 2020;92:807-12. [Crossref] [PubMed]
  2. Hamamoto R, Suvarna K, Yamada M, et al. Application of Artificial Intelligence Technology in Oncology: Towards the Establishment of Precision Medicine. Cancers (Basel) 2020;12:3532. [Crossref] [PubMed]
  3. Bini SA. Artificial Intelligence, Machine Learning, Deep Learning, and Cognitive Computing: What Do These Terms Mean and How Will They Impact Health Care? J Arthroplasty 2018;33:2358-61. [Crossref] [PubMed]
  4. Bhinder B, Gilvary C, Madhukar NS, et al. Artificial Intelligence in Cancer Research and Precision Medicine. Cancer Discov 2021;11:900-15. [Crossref] [PubMed]
  5. Kann BH, Hosny A, Aerts HJWL. Artificial intelligence for clinical oncology. Cancer Cell 2021;39:916-27. [Crossref] [PubMed]
  6. Litjens G, Kooi T, Bejnordi BE, et al. A survey on deep learning in medical image analysis. Med Image Anal 2017;42:60-88. [Crossref] [PubMed]
  7. Krizhevsky A, Sutskever I, Hinton G. ImageNet Classification with Deep Convolutional Neural Networks. Neural Information Processing Systems 2012;25.
  8. Choi RY, Coyner AS, Kalpathy-Cramer J, et al. Introduction to Machine Learning, Neural Networks, and Deep Learning. Transl Vis Sci Technol 2020;9:14. [Crossref] [PubMed]
  9. Jiang Y, Yang M, Wang S, et al. Emerging role of deep learning-based artificial intelligence in tumor pathology. Cancer Commun (Lond) 2020;40:154-66.
  10. Cheng PM, Montagnon E, Yamashita R, et al. Deep Learning: An Update for Radiologists. Radiographics 2021;41:1427-45. [Crossref] [PubMed]
  11. Marshall IJ, Wallace BC. Toward systematic review automation: a practical guide to using machine learning tools in research synthesis. Syst Rev 2019;8:163. [Crossref] [PubMed]
  12. Reading Turchioe M, Volodarskiy A, Pathak J, et al. Systematic review of current natural language processing methods and applications in cardiology. Heart 2022;108:909-16. [Crossref] [PubMed]
  13. Idnay B, Dreisbach C, Weng C, Schnall R. A systematic review on natural language processing systems for eligibility prescreening in clinical research. J Am Med Inform Assoc 2021;29:197-206. [Crossref] [PubMed]
  14. Jacob J. ChatGPT: Friend or Foe?-Utility in Trauma Triage. Indian J Crit Care Med 2023;27:563-6. [Crossref] [PubMed]
  15. Totlis T, Natsis K, Filos D, et al. The potential role of ChatGPT and artificial intelligence in anatomy education: a conversation with ChatGPT. Surg Radiol Anat 2023;45:1321-9. [Crossref] [PubMed]
  16. Lim B, Seth I, Bulloch G, et al. Evaluating the efficacy of major language models in providing guidance for hand trauma nerve laceration patients: a case study on Google’s AI BARD, Bing AI, and ChatGPT. Plast Aesthet Res 2023;10:43.
  17. Madadi Y, Delsoz M, Lao PA, et al. ChatGPT Assisting Diagnosis of Neuro-ophthalmology Diseases Based on Case Reports. Preprint. medRxiv 2023;2023.09.13.23295508.
  18. Mahuli SA, Rai A, Mahuli AV, et al. Application of ChatGPT in conducting systematic reviews and meta-analyses. Br Dent J 2023;235:90-2. [Crossref] [PubMed]
  19. Bagde H, Dhopte A, Alam MK, et al. A systematic review and meta-analysis on ChatGPT and its utilization in medical and dental research. Heliyon 2023;9:e23050. [Crossref] [PubMed]
  20. Garg RK, Urs VL, Agarwal AA, et al. Exploring the role of ChatGPT in patient care (diagnosis and treatment) and medical research: A systematic review. Health Promot Perspect 2023;13:183-91. [Crossref] [PubMed]
  21. Helm JM, Swiergosz AM, Haeberle HS, et al. Machine Learning and Artificial Intelligence: Definitions, Applications, and Future Directions. Curr Rev Musculoskelet Med 2020;13:69-76. [Crossref] [PubMed]
  22. Topol EJ. High-performance medicine: the convergence of human and artificial intelligence. Nat Med 2019;25:44-56. [Crossref] [PubMed]
  23. Huynh E, Hosny A, Guthier C, et al. Artificial intelligence in radiation oncology. Nat Rev Clin Oncol 2020;17:771-81. [Crossref] [PubMed]
  24. Benzekry S. Artificial Intelligence and Mechanistic Modeling for Clinical Decision Making in Oncology. Clin Pharmacol Ther 2020;108:471-86. [Crossref] [PubMed]
  25. Seth I, Bulloch G, Joseph K, et al. Use of Artificial Intelligence in the Advancement of Breast Surgery and Implications for Breast Reconstruction: A Narrative Review. J Clin Med 2023;12:5143. [Crossref] [PubMed]
  26. Bray F, Ren JS, Masuyer E, et al. Global estimates of cancer prevalence for 27 sites in the adult population in 2008. Int J Cancer 2013;132:1133-45. [Crossref] [PubMed]
  27. Ferlay J, Soerjomataram I, Dikshit R, et al. Cancer incidence and mortality worldwide: sources, methods and major patterns in GLOBOCAN 2012. Int J Cancer 2015;136:E359-86. [Crossref] [PubMed]
  28. Global Burden of Disease Cancer Collaboration. Global, Regional, and National Cancer Incidence, Mortality, Years of Life Lost, Years Lived With Disability, and Disability-Adjusted Life-years for 32 Cancer Groups, 1990 to 2015: A Systematic Analysis for the Global Burden of Disease Study. JAMA Oncol 2017;3:524-48. [Crossref] [PubMed]
  29. Sung H, Ferlay J, Siegel RL, et al. Global Cancer Statistics 2020: GLOBOCAN Estimates of Incidence and Mortality Worldwide for 36 Cancers in 185 Countries. CA Cancer J Clin 2021;71:209-49. [Crossref] [PubMed]
  30. Mok CW, Lai HW. Evolution of minimal access breast surgery. Gland Surg 2019;8:784-93. [Crossref] [PubMed]
  31. Chartrand G, Cheng PM, Vorontsov E, et al. Deep Learning: A Primer for Radiologists. Radiographics 2017;37:2113-31. [Crossref] [PubMed]
  32. Trister AD, Buist DSM, Lee CI. Will Machine Learning Tip the Balance in Breast Cancer Screening? JAMA Oncol 2017;3:1463-4. [Crossref] [PubMed]
  33. Wing P, Langelier MH. Workforce shortages in breast imaging: impact on mammography utilization. AJR Am J Roentgenol 2009;192:370-8. [Crossref] [PubMed]
  34. Suk HI, Lee SW, Shen D, et al. Hierarchical feature representation and multimodal fusion with deep learning for AD/MCI diagnosis. Neuroimage 2014;101:569-82. [Crossref] [PubMed]
  35. Suk HI, Wee CY, Lee SW, et al. State-space model with deep learning for functional dynamics estimation in resting-state fMRI. Neuroimage 2016;129:292-307. [Crossref] [PubMed]
  36. Rawat W, Wang Z. Deep Convolutional Neural Networks for Image Classification: A Comprehensive Review. Neural Comput 2017;29:2352-449. [Crossref] [PubMed]
  37. Miglioretti DL, Gard CC, Carney PA, et al. When radiologists perform best: the learning curve in screening mammogram interpretation. Radiology 2009;253:632-40. [Crossref] [PubMed]
  38. Mainprize JG, Alonzo-Proulx O, Jong RA, et al. Quantifying masking in clinical mammograms via local detectability of simulated lesions. Med Phys 2016;43:1249-58. [Crossref] [PubMed]
  39. Rodriguez-Ruiz A, Lång K, Gubern-Merida A, et al. Stand-Alone Artificial Intelligence for Breast Cancer Detection in Mammography: Comparison With 101 Radiologists. J Natl Cancer Inst 2019;111:916-22. [Crossref] [PubMed]
  40. Rodríguez-Ruiz A, Krupinski E, Mordang JJ, et al. Detection of Breast Cancer with Mammography: Effect of an Artificial Intelligence Support System. Radiology 2019;290:305-14. [Crossref] [PubMed]
  41. Chan HP, Doi K, Vyborny CJ, et al. Improvement in radiologists' detection of clustered microcalcifications on mammograms. The potential of computer-aided diagnosis. Invest Radiol 1990;25:1102-10. [Crossref] [PubMed]
  42. Wu N, Phang J, Park J, et al. Deep Neural Networks Improve Radiologists’ Performance in Breast Cancer Screening. IEEE transactions on medical imaging 2020;39:1184-94. [Crossref] [PubMed]
  43. Kooi T, Litjens G, van Ginneken B, et al. Large scale deep learning for computer aided detection of mammographic lesions. Med Image Anal 2017;35:303-12. [Crossref] [PubMed]
  44. Samala R, Chan HP, Hadjiiski L, et al. Deep-learning convolution neural network for computer-aided detection of microcalcifications in digital breast tomosynthesis. SPIE Medical Imaging 2016:97850Y.
  45. Huynh BQ, Li H, Giger ML. Digital mammographic tumor classification using transfer learning from deep convolutional neural networks. J Med Imaging (Bellingham) 2016;3:034501. [Crossref] [PubMed]
  46. McKinney SM, Sieniek M, Godbole V, et al. International evaluation of an AI system for breast cancer screening. Nature 2020;577:89-94. [Crossref] [PubMed]
  47. Kim HE, Kim HH, Han BK, et al. Changes in cancer detection and false-positive recall in mammography using artificial intelligence: a retrospective, multireader study. Lancet Digit Health 2020;2:e138-48. [Crossref] [PubMed]
  48. Romero-Martín S, Elías-Cabot E, Raya-Povedano JL, et al. Stand-Alone Use of Artificial Intelligence for Digital Mammography and Digital Breast Tomosynthesis Screening: A Retrospective Evaluation. Radiology 2022;302:535-42. [Crossref] [PubMed]
  49. Oh KE, Vasandani N, Anwar A. Radiomics to Differentiate Malignant and Benign Breast Lesions: A Systematic Review and Diagnostic Test Accuracy Meta-Analysis. Cureus 2023;15:e49015. [Crossref] [PubMed]
  50. Warren Burhenne LJ, Wood SA, D'Orsi CJ, et al. Potential contribution of computer-aided detection to the sensitivity of screening mammography. Radiology 2000;215:554-62. [Crossref] [PubMed]
  51. Yala A, Schuster T, Miles R, et al. A Deep Learning Model to Triage Screening Mammograms: A Simulation Study. Radiology 2019;293:38-46. [Crossref] [PubMed]
  52. Rodriguez-Ruiz A, Lång K, Gubern-Merida A, et al. Can we reduce the workload of mammographic screening by automatic identification of normal exams with artificial intelligence? A feasibility study. Eur Radiol 2019;29:4825-32. [Crossref] [PubMed]
  53. Dembrower K, Liu Y, Azizpour H, et al. Comparison of a Deep Learning Risk Score and Standard Mammographic Density Score for Breast Cancer Risk Prediction. Radiology 2020;294:265-72. [Crossref] [PubMed]
  54. Yala A, Lehman C, Schuster T, et al. A Deep Learning Mammography-based Model for Improved Breast Cancer Risk Prediction. Radiology 2019;292:60-6. [Crossref] [PubMed]
  55. Sechopoulos I. A review of breast tomosynthesis. Part I. The image acquisition process. Med Phys 2013;40:014301. [Crossref] [PubMed]
  56. Skaane P, Bandos AI, Gullien R, et al. Prospective trial comparing full-field digital mammography (FFDM) versus combined FFDM and tomosynthesis in a population-based screening programme using independent double reading with arbitration. Eur Radiol 2013;23:2061-71. [Crossref] [PubMed]
  57. Zackrisson S, Lång K, Rosso A, et al. One-view breast tomosynthesis versus two-view mammography in the Malmö Breast Tomosynthesis Screening Trial (MBTST): a prospective, population-based, diagnostic accuracy study. Lancet Oncol 2018;19:1493-503. [Crossref] [PubMed]
  58. Conant EF, Toledano AY, Periaswamy S, et al. Improving Accuracy and Efficiency with Concurrent Use of Artificial Intelligence for Digital Breast Tomosynthesis. Radiol Artif Intell 2019;1:e180096. [Crossref] [PubMed]
  59. Chae EY, Kim HH, Jeong JW, et al. Decrease in interpretation time for both novice and experienced readers using a concurrent computer-aided detection system for digital breast tomosynthesis. Eur Radiol 2019;29:2518-25. [Crossref] [PubMed]
  60. Zhang F, Luo L, Sun X, et al. Cascaded Generative and Discriminative Learning for Microcalcification Detection in Breast Mammograms. 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 2019, pp. 12570-8.
  61. Kim D, Kim ST, Ro Y. Latent feature representation with 3-D multi-view deep convolutional neural network for bilateral analysis in digital breast tomosynthesis. 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Shanghai, China, 2016, pp. 927-931.
  62. Kooi T, Karssemeijer N. Classifying symmetrical differences and temporal change for the detection of malignant masses in mammography using deep neural networks. J Med Imaging (Bellingham) 2017;4:044501. [Crossref] [PubMed]
  63. Loizidou K, Skouroumouni G, Pitris C, et al. Digital subtraction of temporally sequential mammograms for improved detection and classification of microcalcifications. Eur Radiol Exp 2021;5:40. [Crossref] [PubMed]
  64. Yang Z, Cao Z, Zhang Y, et al. MommiNet-v2: Mammographic multi-view mass identification networks. Med Image Anal 2021;73:102204. [Crossref] [PubMed]
  65. Ren Y, Liu X, Ge J, et al. Ipsilateral Lesion Detection Refinement for Tomosynthesis. IEEE Trans Med Imaging 2023;42:3080-90. [Crossref] [PubMed]
  66. Jiang G, Wei J, Xu Y, et al. Synthesis of Mammogram From Digital Breast Tomosynthesis Using Deep Convolutional Neural Network With Gradient Guided cGANs. IEEE Trans Med Imaging 2021;40:2080-91. [Crossref] [PubMed]
  67. Lotter W, Sorensen G, Cox D. A Multi-Scale CNN and Curriculum Learning Strategy for Mammogram Classification. In: Cardoso M, et al. Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support. DLMIA ML-CDS 2017. Lecture Notes in Computer Science.
  68. Al-Masni MA, Al-Antari MA, Park JM, et al. Simultaneous detection and classification of breast masses in digital mammograms via a deep learning YOLO-based CAD system. Comput Methods Programs Biomed 2018;157:85-94. [Crossref] [PubMed]
  69. Zhang X, Zhang Y, Han EY, et al. Classification of Whole Mammogram and Tomosynthesis Images Using Deep Convolutional Neural Networks. IEEE Trans Nanobioscience 2018;17:237-42. [Crossref] [PubMed]
  70. Shi J, Zhou S, Liu X, et al. Stacked Deep Polynomial Network Based Representation Learning for Tumor Classification with Small Ultrasound Image Dataset. Neurocomputing. 2016;194:87-94.
  71. Han S, Kang HK, Jeong JY, et al. A deep learning framework for supporting the classification of breast lesions in ultrasound images. Phys Med Biol 2017;62:7714-28. [Crossref] [PubMed]
  72. Byra M, Galperin M, Ojeda-Fournier H, et al. Breast mass classification in sonography with transfer learning using a deep convolutional neural network and color conversion. Med Phys 2019;46:746-55. [Crossref] [PubMed]
  73. Becker AS, Mueller M, Stoffel E, et al. Classification of breast cancer in ultrasound imaging using a generic deep learning analysis software: a pilot study. Br J Radiol 2018;91:20170576. [Crossref] [PubMed]
  74. Tanaka H, Chiu SW, Watanabe T, et al. Computer-aided diagnosis system for breast ultrasound images using deep learning. Phys Med Biol 2019;64:235013. [Crossref] [PubMed]
  75. Cao Z, Duan L, Yang G, et al. An experimental study on breast lesion detection and classification from ultrasound images using deep learning architectures. BMC Med Imaging 2019;19:51. [Crossref] [PubMed]
  76. Fujioka T, Kubota K, Mori M, et al. Distinction between benign and malignant breast masses at breast ultrasound using deep learning method with convolutional neural network. Jpn J Radiol 2019;37:466-72. [Crossref] [PubMed]
  77. D’Orsi CJ, Mendelson EB, Morris EA. ACR BI-RADS Atlas, Breast Imaging Reporting and Data System. Reston, VA: American College of Radiology. 2013.
  78. Ciritsis A, Rossi C, Eberhard M, et al. Automatic classification of ultrasound breast lesions using a deep convolutional neural network mimicking human decision-making. Eur Radiol 2019;29:5458-68. [Crossref] [PubMed]
  79. Huang Y, Han L, Dou H, et al. Two-stage CNNs for computerized BI-RADS categorization in breast ultrasound images. Biomed Eng Online 2019;18:8. [Crossref] [PubMed]
  80. Wang K, Liang S, Zhong S, et al. Breast ultrasound image segmentation: A coarse-to-fine fusion convolutional neural network. Med Phys 2021;48:4262-78. [Crossref] [PubMed]
  81. Lei Y, He X, Yao J, et al. Breast tumor segmentation in 3D automatic breast ultrasound using Mask scoring R-CNN. Med Phys 2021;48:204-14. [Crossref] [PubMed]
  82. Pan P, Chen H, Li Y, et al. Tumor segmentation in automated whole breast ultrasound using bidirectional LSTM neural network and attention mechanism. Ultrasonics 2021;110:106271. [Crossref] [PubMed]
  83. Choi JS, Han BK, Ko ES, et al. Effect of a Deep Learning Framework-Based Computer-Aided Diagnosis System on the Diagnostic Performance of Radiologists in Differentiating between Malignant and Benign Masses on Breast Ultrasonography. Korean J Radiol 2019;20:749-58. [Crossref] [PubMed]
  84. Park HJ, Kim SM, La Yun B, et al. A computer-aided diagnosis system using artificial intelligence for the diagnosis and characterization of breast masses on ultrasound: Added value for the inexperienced breast radiologist. Medicine (Baltimore) 2019;98:e14146. [Crossref] [PubMed]
  85. Xiao M, Zhao C, Zhu Q, et al. An investigation of the classification accuracy of a deep learning framework-based computer-aided diagnosis system in different pathological types of breast lesions. J Thorac Dis 2019;11:5023-31. [Crossref] [PubMed]
  86. Yap MH, Pons G, Marti J, et al. Automated Breast Ultrasound Lesions Detection Using Convolutional Neural Networks. IEEE J Biomed Health Inform 2018;22:1218-26. [Crossref] [PubMed]
  87. Zhang X, Lin X, Zhang Z, et al. Artificial Intelligence Medical Ultrasound Equipment: Application of Breast Lesions Detection. Ultrason Imaging 2020;42:191-202. [Crossref] [PubMed]
  88. Coronado-Gutiérrez D, Santamaría G, Ganau S, et al. Quantitative Ultrasound Image Analysis of Axillary Lymph Nodes to Diagnose Metastatic Involvement in Breast Cancer. Ultrasound Med Biol 2019;45:2932-41. [Crossref] [PubMed]
  89. Li WB, Du ZC, Liu YJ, et al. Prediction of axillary lymph node metastasis in early breast cancer patients with ultrasonic videos based deep learning. Front Oncol 2023;13:1219838. [Crossref] [PubMed]
  90. Zhou LQ, Wu XL, Huang SY, et al. Lymph Node Metastasis Prediction from Primary Breast Cancer US Images Using Deep Learning. Radiology 2020;294:19-28. [Crossref] [PubMed]
  91. Byra M, Dobruch-Sobczak K, Klimonda Z, et al. Early Prediction of Response to Neoadjuvant Chemotherapy in Breast Cancer Sonography Using Siamese Convolutional Neural Networks. IEEE J Biomed Health Inform 2021;25:797-805. [Crossref] [PubMed]
  92. Dai X, Lei Y, Wang T, et al. Self-supervised learning for accelerated 3D high-resolution ultrasound imaging. Med Phys 2021;48:3916-26. [Crossref] [PubMed]
  93. Fujioka T, Kubota K, Mori M, et al. Virtual Interpolation Images of Tumor Development and Growth on Breast Ultrasound Image Synthesis With Deep Convolutional Generative Adversarial Networks. J Ultrasound Med 2021;40:61-9. [Crossref] [PubMed]
  94. Lehman CD, Schnall MD. Imaging in breast cancer: magnetic resonance imaging. Breast Cancer Res 2005;7:215-9. [Crossref] [PubMed]
  95. Fujioka T, Yashima Y, Oyama J, et al. Deep-learning approach with convolutional neural network for classification of maximum intensity projections of dynamic contrast-enhanced breast magnetic resonance imaging. Magn Reson Imaging 2021;75:1-8. [Crossref] [PubMed]
  96. Truhn D, Schrading S, Haarburger C, et al. Radiomic versus Convolutional Neural Networks Analysis for Classification of Contrast-enhancing Lesions at Multiparametric Breast MRI. Radiology 2019;290:290-7. [Crossref] [PubMed]
  97. Zhou J, Luo LY, Dou Q, et al. Weakly supervised 3D deep learning for breast cancer classification and localization of the lesions in MR images. J Magn Reson Imaging 2019;50:1144-51. Correction appears in J Magn Reson Imaging 2023;58:1315. [Crossref] [PubMed]
  98. Dalmiş MU, Gubern-Mérida A, Vreemann S, et al. Artificial Intelligence-Based Classification of Breast Lesions Imaged With a Multiparametric Breast MRI Protocol With Ultrafast DCE-MRI, T2, and DWI. Invest Radiol 2019;54:325-32. [Crossref] [PubMed]
  99. Ha R, Mutasa S, Karcich J, et al. Predicting Breast Cancer Molecular Subtype with MRI Dataset Utilizing Convolutional Neural Network Algorithm. J Digit Imaging 2019;32:276-82. [Crossref] [PubMed]
  100. Zhu Z, Albadawy E, Saha A, et al. Deep learning for identifying radiogenomic associations in breast cancer. Comput Biol Med 2019;109:85-90. [Crossref] [PubMed]
  101. Zhang Y, Chen JH, Lin Y, et al. Prediction of breast cancer molecular subtypes on DCE-MRI using convolutional neural network with transfer learning between two centers. Eur Radiol 2021;31:2559-67. [Crossref] [PubMed]
  102. Sun R, Meng Z, Hou X, et al. Prediction of breast cancer molecular subtypes using DCE-MRI based on CNNs combined with ensemble learning. Phys Med Biol 2021.
  103. Liu G, Mitra D, Jones EF, et al. Mask-Guided Convolutional Neural Network for Breast Tumor Prognostic Outcome Prediction on 3D DCE-MR Images. J Digit Imaging 2021;34:630-6. [Crossref] [PubMed]
  104. Ha R, Chang P, Karcich J, et al. Axillary Lymph Node Evaluation Utilizing Convolutional Neural Networks Using MRI Dataset. J Digit Imaging 2018;31:851-6. [Crossref] [PubMed]
  105. Ren T, Cattell R, Duanmu H, et al. Convolutional Neural Network Detection of Axillary Lymph Node Metastasis Using Standard Clinical Breast MRI. Clin Breast Cancer 2020;20:e301-8. [Crossref] [PubMed]
  106. Ren T, Lin S, Huang P, et al. Convolutional Neural Network of Multiparametric MRI Accurately Detects Axillary Lymph Node Metastasis in Breast Cancer Patients With Pre Neoadjuvant Chemotherapy. Clin Breast Cancer 2022;22:170-7. [Crossref] [PubMed]
  107. Braman N, El Adoui M, Vulchi M, et al. Deep learning-based prediction of response to HER2-targeted neoadjuvant chemotherapy from pre-treatment dynamic breast MRI: A multi-institutional validation study. 2020. Available online: https://arxiv.org/abs/2001.08570
  108. Huynh B, Antropova N, Giger M. Comparison of breast DCE-MRI contrast time points for predicting response to neoadjuvant chemotherapy using deep convolutional neural network features with transfer learning. Proc. SPIE 10134, Medical Imaging 2017: Computer-Aided Diagnosis, 101340U (3 March 2017).
  109. Qu YH, Zhu HT, Cao K, et al. Prediction of pathological complete response to neoadjuvant chemotherapy in breast cancer using a deep learning (DL) method. Thorac Cancer 2020;11:651-8. [Crossref] [PubMed]
  110. El Adoui M, Drisis S, Benjelloun M. Predict Breast Tumor Response to Chemotherapy Using a 3D Deep Learning Architecture Applied to DCE-MRI Data. In: Rojas I, Valenzuela O, Rojas F, Ortuño F. (eds). Bioinformatics and Biomedical Engineering. IWBBIO 2019; 2019. p. 33-40.
  111. Modanwal G, Vellal A, Mazurowski MA. Normalization of breast MRIs using cycle-consistent generative adversarial networks. Comput Methods Programs Biomed 2021;208:106225. [Crossref] [PubMed]
  112. Mori M, Fujioka T, Katsuta L, et al. Feasibility of new fat suppression for breast MRI using pix2pix. Jpn J Radiol 2020;38:1075-81. [Crossref] [PubMed]
  113. Li Z, Kitajima K, Hirata K, et al. Preliminary study of AI-assisted diagnosis using FDG-PET/CT for axillary lymph node metastasis in patients with breast cancer. EJNMMI Res 2021;11:10. [Crossref] [PubMed]
  114. Papandrianos N, Papageorgiou E, Anagnostis A, et al. A deep-learning approach for diagnosis of metastatic breast cancer in bones from whole-body scans. Appl Sci 2020;10:997.
  115. Weber M, Kersting D, Umutlu L, et al. Just another “Clever Hans”? Neural networks and FDG PET-CT to predict the outcome of patients with breast cancer. Eur J Nucl Med Mol Imaging 2021;48:3141-50. [Crossref] [PubMed]
  116. Choi JH, Kim HA, Kim W, et al. Early prediction of neoadjuvant chemotherapy response for advanced breast cancer using PET/MRI image deep learning. Sci Rep 2020;10:21149. [Crossref] [PubMed]
  117. Mambou SJ, Maresova P, Krejcar O, et al. Breast Cancer Detection Using Infrared Thermal Imaging and a Deep Learning Model. Sensors (Basel) 2018;18:2799. [Crossref] [PubMed]
  118. Kakileti ST, Madhu HJ, Manjunath G, et al. Personalized risk prediction for breast cancer pre-screening using artificial intelligence and thermal radiomics. Artif Intell Med 2020;105:101854. [Crossref] [PubMed]
  119. Akram M, Iqbal M, Daniyal M, et al. Awareness and current knowledge of breast cancer. Biol Res 2017;50:33. [Crossref] [PubMed]
  120. Gradishar WJ, Anderson BO, Abraham J, et al. Breast Cancer, Version 3.2020, NCCN Clinical Practice Guidelines in Oncology. J Natl Compr Canc Netw 2020;18:452-78. [Crossref] [PubMed]
  121. Solanki M, Visscher D. Pathology of breast cancer in the last half century. Hum Pathol 2020;95:137-48. [Crossref] [PubMed]
  122. Cardoso F, Senkus E, Costa A, et al. 4th ESO-ESMO International Consensus Guidelines for Advanced Breast Cancer (ABC 4)†. Ann Oncol 2018;29:1634-57. [Crossref] [PubMed]
  123. Gavrielides MA, Gallas BD, Lenz P, et al. Observer variability in the interpretation of HER2/neu immunohistochemical expression with unaided and computer-aided digital microscopy. Arch Pathol Lab Med 2011;135:233-42. [Crossref] [PubMed]
  124. Gavrielides MA, Conway C, O'Flaherty N, et al. Observer performance in the use of digital and optical microscopy for the interpretation of tissue-based biomarkers. Anal Cell Pathol (Amst) 2014;2014:157308. [Crossref] [PubMed]
  125. Xing F, Xie Y, Su H, et al. Deep Learning in Microscopy Image Analysis: A Survey. IEEE Trans Neural Netw Learn Syst 2018;29:4550-68. [Crossref] [PubMed]
  126. Metter DM, Colgan TJ, Leung ST, et al. Trends in the US and Canadian Pathologist Workforces From 2007 to 2017. JAMA Netw Open 2019;2:e194337. [Crossref] [PubMed]
  127. Cheng CL, Md Nasir ND, Ng GJZ, et al. Artificial intelligence modelling in differentiating core biopsies of fibroadenoma from phyllodes tumor. Lab Invest 2022;102:245-52. [Crossref] [PubMed]
  128. Hayward MK, Weaver VM. Improving DCIS diagnosis and predictive outcome by applying artificial intelligence. Biochim Biophys Acta Rev Cancer 2021;1876:188555. [Crossref] [PubMed]
  129. Silverstein MJ, Lagios MD, Craig PH, et al. A prognostic index for ductal carcinoma in situ of the breast. Cancer 1996;77:2267-74. [Crossref] [PubMed]
  130. Badve S, A'Hern RP, Ward AM, et al. Prediction of local recurrence of ductal carcinoma in situ of the breast using five histological classifications: a comparative study with long follow-up. Hum Pathol 1998;29:915-23. [Crossref] [PubMed]
  131. Kanevsky J, Corban J, Gaster R, et al. Big Data and Machine Learning in Plastic Surgery: A New Frontier in Surgical Innovation. Plast Reconstr Surg 2016;137:890e-7e. [Crossref] [PubMed]
  132. Pusic M, Ansermino JM. Clinical decision support systems. BCMJ 2004;46:236-9.
  133. Seidman AD. Computer-assisted decision support in medical oncology: We need it now; 2016. Available online: http://www.ascopost.com/issues/april-10-2016/computer-assisteddecision-support-in-medical-oncology-we-need-it-now/.
  134. Bouaud J, Pelayo S, Lamy JB, et al. Implementation of an ontological reasoning to support the guideline-based management of primary breast cancer patients in the DESIREE project. Artif Intell Med 2020;108:101922. [Crossref] [PubMed]
  135. Xu F, Sepúlveda MJ, Jiang Z, et al. Artificial Intelligence Treatment Decision Support For Complex Breast Cancer Among Oncologists With Varying Expertise. JCO Clin Cancer Inform 2019;3:1-15. [Crossref] [PubMed]
  136. Xu F, Sepúlveda MJ, Jiang Z, et al. Effect of an Artificial Intelligence Clinical Decision Support System on Treatment Decisions for Complex Breast Cancer. JCO Clin Cancer Inform 2020;4:824-38. [Crossref] [PubMed]
  137. Loftus TJ, Filiberto AC, Li Y, et al. Decision analysis and reinforcement learning in surgical decision-making. Surgery 2020;168:253-66. [Crossref] [PubMed]
  138. Heil J, Kuerer HM, Pfob A, et al. Eliminating the breast cancer surgery paradigm after neoadjuvant systemic therapy: current evidence and future challenges. Ann Oncol 2020;31:61-71. [Crossref] [PubMed]
  139. Giordano SH, Elias AD, Gradishar WJ. NCCN Guidelines Updates: Breast Cancer. J Natl Compr Canc Netw 2018;16:605-10. [Crossref] [PubMed]
  140. Cardoso F, Kyriakides S, Ohno S, et al. Early breast cancer: ESMO Clinical Practice Guidelines for diagnosis, treatment and follow-up†. Ann Oncol 2019;30:1194-220. [Crossref] [PubMed]
  141. Pfob A, Sidey-Gibbons C, Lee HB, et al. Identification of breast cancer patients with pathologic complete response in the breast after neoadjuvant systemic treatment by an intelligent vacuum-assisted biopsy. Eur J Cancer 2021;143:134-46. [Crossref] [PubMed]
  142. Pfob A, Sidey-Gibbons C, Barr RG, et al. The importance of multi-modal imaging and clinical information for humans and AI-based algorithms to classify breast masses (INSPiRED 003): an international, multicenter analysis. Eur Radiol 2022;32:4101-15. [Crossref] [PubMed]
  143. Pesapane F, Agazzi GM, Rotili A, et al. Prediction of the Pathological Response to Neoadjuvant Chemotherapy in Breast Cancer Patients With MRI-Radiomics: A Systematic Review and Meta-analysis. Curr Probl Cancer 2022;46:100883. [Crossref] [PubMed]
  144. Liang X, Yu X, Gao T. Machine learning with magnetic resonance imaging for prediction of response to neoadjuvant chemotherapy in breast cancer: A systematic review and meta-analysis. Eur J Radiol 2022;150:110247. [Crossref] [PubMed]
  145. Pfob A, Sidey-Gibbons C, Rauch G, et al. Intelligent Vacuum-Assisted Biopsy to Identify Breast Cancer Patients With Pathologic Complete Response (ypT0 and ypN0) After Neoadjuvant Systemic Treatment for Omission of Breast and Axillary Surgery. J Clin Oncol 2022;40:1903-15. [Crossref] [PubMed]
  146. Kaidar-Person O, Antunes M, Cardoso JS, et al. Evaluating the ability of an artificial-intelligence cloud-based platform designed to provide information prior to locoregional therapy for breast cancer in improving patient's satisfaction with therapy: The CINDERELLA trial. PLoS One 2023;18:e0289365. [Crossref] [PubMed]
  147. Sidey-Gibbons C, Pfob A, Asaad M, et al. Development of Machine Learning Algorithms for the Prediction of Financial Toxicity in Localized Breast Cancer Following Surgical Treatment. JCO Clin Cancer Inform 2021;5:338-47. [Crossref] [PubMed]
  148. Fisher S, Yasui Y, Dabbs K, et al. Re-excision and survival following breast conserving surgery in early stage breast cancer patients: a population-based study. BMC Health Serv Res 2018;18:94. [Crossref] [PubMed]
  149. Kahlert S, Kolben TM, Schmoeckel E, et al. Prognostic impact of residual disease in simultaneous additional excision specimens after one-step breast conserving therapy with negative final margin status in primary breast cancer. Eur J Surg Oncol 2018;44:1318-23. [Crossref] [PubMed]
  150. Nayyar A, Gallagher KK, McGuire KP. Definition and Management of Positive Margins for Invasive Breast Cancer. Surg Clin North Am 2018;98:761-71. [Crossref] [PubMed]
  151. Baker R, Matousek P, Ronayne KL, et al. Depth profiling of calcifications in breast tissue using picosecond Kerr-gated Raman spectroscopy. Analyst 2007;132:48-53. [Crossref] [PubMed]
  152. Haka A, Shafer-Peltier K, Fitzmaurice M, et al. Distinguishing type II microcalcifications in benign and malignant breast lesions using Raman spectroscopy. Mod Pathol 2002;15:36A.
  153. Haka AS, Shafer-Peltier KE, Fitzmaurice M, et al. Identifying microcalcifications in benign and malignant breast lesions by probing differences in their chemical composition using Raman spectroscopy. Cancer Res 2002;62:5375-80.
  154. Kerssens MM, Matousek P, Rogers K, et al. Towards a safe non-invasive method for evaluating the carbonate substitution levels of hydroxyapatite (HAP) in micro-calcifications found in breast tissue. Analyst 2010;135:3156-61. [Crossref] [PubMed]
  155. Sathyavathi R, Saha A, Soares JS, et al. Raman spectroscopic sensing of carbonate intercalation in breast microcalcifications at stereotactic biopsy. Sci Rep 2015;5:9907. [Crossref] [PubMed]
  156. Chaturvedi D, Balaji SA, Bn VK, et al. Different Phases of Breast Cancer Cells: Raman Study of Immortalized, Transformed, and Invasive Cells. Biosensors (Basel) 2016;6:57. [Crossref] [PubMed]
  157. Kothari R, Jones V, Mena D, et al. Raman spectroscopy and artificial intelligence to predict the Bayesian probability of breast cancer. Sci Rep 2021;11:6482. [Crossref] [PubMed]
  158. Fu MR, Conley YP, Axelrod D, et al. Precision assessment of heterogeneity of lymphedema phenotype, genotypes and risk prediction. Breast 2016;29:231-40. [Crossref] [PubMed]
  159. Fu MR, Wang Y, Li C, et al. Machine learning for detection of lymphedema among breast cancer survivors. Mhealth 2018;4:17. [Crossref] [PubMed]
  160. Lukac S, Dayan D, Fink V, et al. Evaluating ChatGPT as an adjunct for the multidisciplinary tumor board decision-making in primary breast cancer cases. Arch Gynecol Obstet 2023;308:1831-44. [Crossref] [PubMed]
  161. Belmonte R, Monleon S, Bofill N, et al. Long thoracic nerve injury in breast cancer patients treated with axillary lymph node dissection. Support Care Cancer 2015;23:169-75. [Crossref] [PubMed]
  162. Zin T, Maw M, Oo S, et al. How I do it: Simple and effortless approach to identify thoracodorsal nerve on axillary clearance procedure. Ecancermedicalscience 2012;6:255. [Crossref] [PubMed]
  163. Maycock LA, Dillon P, Dixon JM. Morbidity related to intercostobrachial nerve damage following axillary surgery for breast cancer. Breast 1998;7:209-12.
  164. van Egdom LSE, Pusic A, Verhoef C, et al. Machine learning with PROs in breast cancer surgery; caution: Collecting PROs at baseline is crucial. Breast J 2020;26:1213-5. [Crossref] [PubMed]
  165. Myung Y, Jeon S, Heo C, et al. Validating machine learning approaches for prediction of donor related complication in microsurgical breast reconstruction: a retrospective cohort study. Sci Rep 2021;11:5615. [Crossref] [PubMed]
  166. Paice JA, Portenoy R, Lacchetti C, et al. Management of Chronic Pain in Survivors of Adult Cancers: American Society of Clinical Oncology Clinical Practice Guideline. J Clin Oncol 2016;34:3325-45. [Crossref] [PubMed]
  167. Mejdahl MK, Andersen KG, Gärtner R, et al. Persistent pain and sensory disturbances after treatment for breast cancer: six year nationwide follow-up study. BMJ 2013;346:f1865. [Crossref] [PubMed]
  168. Dableh LJ, Yashpal K, Henry JL. Neuropathic pain as a process: reversal of chronification in an animal model. J Pain Res 2011;4:315-23. [Crossref] [PubMed]
  169. Pergolizzi JV, Gharibo C, Ho KY. Treatment Considerations for Cancer Pain: A Global Perspective. Pain Pract 2015;15:778-92. [Crossref] [PubMed]
  170. Lötsch J, Sipilä R, Tasmuth T, et al. Machine-learning-derived classifier predicts absence of persistent pain after breast cancer surgery with high accuracy. Breast Cancer Res Treat 2018;171:399-411. [Crossref] [PubMed]
  171. Sipilä R, Estlander AM, Tasmuth T, et al. Development of a screening instrument for risk factors of persistent pain after breast cancer surgery. Br J Cancer 2012;107:1459-66. [Crossref] [PubMed]
  172. Juwara L, Arora N, Gornitsky M, et al. Identifying predictive factors for neuropathic pain after breast cancer surgery using machine learning. Int J Med Inform 2020;141:104170.
  173. Lou SJ, Hou MF, Chang HT, et al. Machine Learning Algorithms to Predict Recurrence within 10 Years after Breast Cancer Surgery: A Prospective Cohort Study. Cancers (Basel) 2020;12:3817. [Crossref] [PubMed]
  174. Mosayebi A, Mojaradi B, Bonyadi Naeini A, et al. Modeling and comparing data mining algorithms for prediction of recurrence of breast cancer. PLoS One 2020;15:e0237658. [Crossref] [PubMed]
  175. Kim W, Kim KS, Park RW. Nomogram of Naive Bayesian Model for Recurrence Prediction of Breast Cancer. Healthc Inform Res 2016;22:89-94. [Crossref] [PubMed]
  176. Huang SH, Loh JK, Tsai JT, et al. Predictive model for 5-year mortality after breast cancer surgery in Taiwan residents. Chin J Cancer 2017;36:23. [Crossref] [PubMed]
  177. Peng Y, Hu T, Cheng L, et al. Evaluating and Balancing the Risk of Breast Cancer-Specific Death and Other Cause-Specific Death in Elderly Breast Cancer Patients. Front Oncol 2021;11:578880. [Crossref] [PubMed]
  178. Al Jarroudi O, Zaimi A, Brahmi SA, et al. Nottingham Prognostic Index is an Applicable Prognostic Tool in Non-Metastatic Triple-Negative Breast Cancer. Asian Pac J Cancer Prev 2019;20:59-63.
  179. Moncada-Torres A, van Maaren MC, Hendriks MP, et al. Explainable machine learning can outperform Cox regression predictions and provide insights in breast cancer survival. Sci Rep 2021;11:6968. [Crossref] [PubMed]
  180. Panch T, Szolovits P, Atun R. Artificial intelligence, machine learning and health systems. J Glob Health 2018;8:020303. [Crossref] [PubMed]
  181. Shanafelt TD, Gradishar WJ, Kosty M, et al. Burnout and career satisfaction among US oncologists. J Clin Oncol 2014;32:678-86. [Crossref] [PubMed]
  182. Kleiner S, Wallace JE. Oncologist burnout and compassion fatigue: investigating time pressure at work as a predictor and the mediating role of work-family conflict. BMC Health Serv Res 2017;17:639. [Crossref] [PubMed]
  183. Ferreira MF, Savoy JN, Markey MK. Teaching cross-cultural design thinking for healthcare. Breast 2020;50:1-10. [Crossref] [PubMed]
  184. Chassang G. The impact of the EU general data protection regulation on scientific research. Ecancermedicalscience 2017;11:709. [Crossref] [PubMed]
  185. Rieke N, Hancox J, Li W, et al. The future of digital health with federated learning. NPJ Digit Med 2020;3:119. [Crossref] [PubMed]
  186. Zhu VZ, Tuggle CT, Au AF. Promise and Limitations of Big Data Research in Plastic Surgery. Ann Plast Surg 2016;76:453-8. [Crossref] [PubMed]
  187. Roscher R, Bohn B, Duarte M, et al. Explainable Machine Learning for Scientific Insights and Discoveries. IEEE Access 2020;8:42200-16.
  188. Lundberg SM, Nair B, Vavilala MS, et al. Explainable machine-learning predictions for the prevention of hypoxaemia during surgery. Nat Biomed Eng 2018;2:749-60. [Crossref] [PubMed]
  189. Reinisch A, Liese J, Padberg W, et al. Robotic operations in urgent general surgery: a systematic review. J Robot Surg 2023;17:275-90. [Crossref] [PubMed]
  190. Nehme J, Neville JJ, Bahsoun AN. The use of robotics in plastic and reconstructive surgery: A systematic review. JPRAS Open 2017;13:1-10.
  191. Tang A, Tam R, Cadrin-Chênevert A, et al. Canadian Association of Radiologists White Paper on Artificial Intelligence in Radiology. Can Assoc Radiol J 2018;69:120-35. [Crossref] [PubMed]
  192. Coiera E. The Price of Artificial Intelligence. Yearb Med Inform 2019;28:14-5. [Crossref] [PubMed]
Cite this article as: Seth I, Lim B, Joseph K, Gracias D, Xie Y, Ross RJ, Rozen WM. Use of artificial intelligence in breast surgery: a narrative review. Gland Surg 2024;13(3):395-411. doi: 10.21037/gs-23-414
