Capstone Projects

BUS-CAD Completed Capstone Projects

As of May 2023, eight graduate students in the Master of Science in Data Science program have completed capstone projects.

Computer-Aided Diagnosis (CAD) of Breast Cancer from Ultrasound Image

Adam Silberfein, August 2021. Link to pdf file

Abstract: Breast ultrasound has long been one of the most preferred modalities for breast cancer detection due to its relatively low cost, ease of access, and noninvasive nature. Advancements in imaging technology and improvements in technique and standardization have led to corresponding increases in accuracy, but the potential for an additional boost exists from the application of artificial intelligence and machine learning. This paper explores how a quantitative model and its associated machine learning predictions can assist radiologists, who currently use what is primarily a qualitative model to assign a BI-RADS score and characterize breast lesions as benign or malignant. The steps taken involve having an expert radiologist annotate a publicly available data set of 135 breast ultrasound images according to an established rubric, using image processing techniques to convert certain highly predictive features of that rubric to a numerical representation, and then feeding that quantitative representation into a machine learning model to predict malignancy for a validation set of images. The resulting model confirms that not only were lesion shape and circumscription important features for classification, but so was internal echotexture, which is much more difficult to differentiate with the naked eye. The model will be used to classify future larger data sets provided by our client, either in its current form or after retraining on a subset of those new images.
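
The feature-to-classifier workflow described above might look roughly like the sketch below, which trains a classifier on numeric rubric features and reports validation performance. It is a minimal illustration only: the feature names, CSV layout, and choice of a random forest are assumptions, not details taken from the project.

import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Hypothetical table: one row per annotated lesion, with rubric features
# already converted to numeric values by the image-processing step.
df = pd.read_csv("lesion_features.csv")  # assumed columns: shape, circumscription, echotexture, malignant
X = df[["shape", "circumscription", "echotexture"]]
y = df["malignant"]

X_train, X_val, y_train, y_val = train_test_split(X, y, stratify=y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print(classification_report(y_val, clf.predict(X_val)))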

Classification of Breast Lesions using Deep Learning

Justin Hall, December 2021. Link to pdf file

Abstract: Breast ultrasounds are a standard modality used to find cancerous lesions and provide early care for patients. However, this modality is highly operator dependent. Discrepancies between radiologists’ readings of the same image may lead to inadequate patient care and inconsistent outcomes. As breast cancer continues to affect people around the globe, a new approach is needed to detect cancerous lesions. To provide a system for standardized classification of these images, we examine the use of state-of-the-art convolutional neural network architectures to improve patient care. Our analysis showed that while all models achieved similar AUC on our validation dataset of 121 images, DenseNet-201 maximized the F1-score. To provide confidence in our predictions, we then use local interpretable model-agnostic explanations (LIME) to find which image regions are essential to our model. Our model identified lesion areas consistent with those identified by trained radiologists, suggesting that it shows promise in aiding radiologists and improving patient care.
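
The pairing of a pre-trained DenseNet-201 with LIME described above could be wired up roughly as in the sketch below. The two-class head, preprocessing choices, and the randomly generated placeholder image are illustrative assumptions, not the project’s actual code.

import numpy as np
import torch
import torchvision.transforms as T
from torchvision.models import densenet201
from lime import lime_image

# DenseNet-201 with a two-class head (benign vs. malignant); in practice this
# head would be fine-tuned on breast ultrasound images first.
model = densenet201(weights="DEFAULT")
model.classifier = torch.nn.Linear(model.classifier.in_features, 2)
model.eval()

preprocess = T.Compose([T.ToTensor(), T.Resize((224, 224)),
                        T.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])])

def predict(images):
    """LIME passes a batch of HxWx3 arrays; return class probabilities."""
    batch = torch.stack([preprocess(img) for img in images])
    with torch.no_grad():
        return torch.softmax(model(batch), dim=1).numpy()

# Placeholder ultrasound frame; a real image would be loaded from the dataset.
bus_image = np.random.randint(0, 255, (224, 224, 3), dtype=np.uint8)
explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(bus_image, predict, top_labels=1, num_samples=500)
img, mask = explanation.get_image_and_mask(explanation.top_labels[0], positive_only=True)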

AI-Based Breast Cancer Classification Using Ultrasound Images

Suriya Mohan, May 2022. Link to pdf file

Abstract: Breast cancer is the most frequently diagnosed cancer in women, and it is the leading cause of cancer-related deaths in women. Identifying the tumor at an early stage is essential to increase the survival rate of patients. Breast ultrasound imaging combined with biopsy is a clinically proven method of diagnosing tumors. This project utilizes artificial intelligence (AI), specifically deep learning models, to segment and classify the tumor in ultrasound images. A segmentation deep learning model is used to detect the tumor in the ultrasound image, identify the region of interest (ROI), and draw a bounding box around the tumor. The radiologist confirms the ROI. The ROI is then sent to the classification model to categorize the lesion as benign or malignant. The radiologist can use the results from these models to make informed decisions, thereby reducing the manual errors involved in the current process. Combining AI assistance with radiologist expertise can lead to a process that is less prone to errors. The models were trained on an augmented data set of 452 malignant images and 706 benign images. The model achieved an accuracy of 76.2%, a precision of 80%, and an F1 score of 72.7%.
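
The two-stage flow, a segmentation model proposing an ROI that is then cropped and handed to a classifier, is sketched conceptually below. The helper functions, tensor shapes, and the seg_model/cls_model placeholders are assumptions for illustration, not the project’s implementation.

import numpy as np
import torch

def mask_to_bbox(mask):
    """Bounding box (y0, y1, x0, x1) around the predicted lesion mask."""
    ys, xs = np.where(mask > 0.5)
    return ys.min(), ys.max(), xs.min(), xs.max()

def classify_lesion(image, seg_model, cls_model):
    """image: 2-D grayscale ultrasound frame as a NumPy array."""
    with torch.no_grad():
        mask = seg_model(torch.from_numpy(image)[None, None].float())[0, 0].numpy()
        y0, y1, x0, x1 = mask_to_bbox(mask)      # ROI, to be confirmed by the radiologist
        roi = image[y0:y1 + 1, x0:x1 + 1]
        # A real pipeline would resize and normalize the ROI before classification.
        logits = cls_model(torch.from_numpy(roi)[None, None].float())
    return ["benign", "malignant"][int(logits.argmax())]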

Computer-Aided Detection (CAD) System for Breast Ultrasound Lesion Interpretation: An Explainable Deep Learning Approach

Josh Jarvey, August 2022. Link to pdf file

Abstract: According to the World Health Organization, breast cancer is now the most prevalent type of cancer diagnosed around the world. In 2020, it accounted for the deaths of roughly 685,000 women. Breast cancer incidence and mortality rates are expected to continue to rise in the coming decades, primarily in low- to middle-income countries, due in part to the adoption of a more Western lifestyle coupled with misconceptions about the nature and curability of the disease. Therefore, in a partnership between the University of Wisconsin – La Crosse and Mayo Clinic Health Systems, the purpose of this study is to assess the feasibility of utilizing deep learning to aid radiologists with the interpretation of lesions discovered in breast ultrasound (BUS) images during routine clinical screenings. Based on a review of the literature on medical imaging and computer-aided detection (CAD) systems for BUS interpretation, a multitask learning model using a pre-trained state-of-the-art convolutional neural network (CNN) was developed and trained using various image augmentation techniques known to increase performance. The research found that the best model identified in this study performed on par with a trained radiologist in its ability to predict lesion pathology. However, no definitive conclusions could be drawn about the model’s multitask performance, due in part to the limited data available. Further research is needed as more data is made available from the Mayo Clinic, and alternative explainability methods may need to be explored.
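
One common way to build such a multitask model, a shared pre-trained backbone feeding separate task heads trained with a weighted joint loss, is sketched below. The ResNet-50 backbone, the five-way second head, the augmentations, and the loss weights are assumptions for illustration, not the study’s exact design.

import torch
import torch.nn as nn
import torchvision.transforms as T
from torchvision.models import resnet50

# Example augmentations of the kind used to increase performance.
augment = T.Compose([T.RandomHorizontalFlip(), T.RandomRotation(10),
                     T.ColorJitter(brightness=0.2, contrast=0.2)])

class MultiTaskBUSNet(nn.Module):
    def __init__(self):
        super().__init__()
        backbone = resnet50(weights="DEFAULT")
        features = backbone.fc.in_features
        backbone.fc = nn.Identity()                   # shared pre-trained feature extractor
        self.backbone = backbone
        self.pathology_head = nn.Linear(features, 2)  # benign vs. malignant
        self.secondary_head = nn.Linear(features, 5)  # second task, e.g. a lesion descriptor (assumed)

    def forward(self, x):
        z = self.backbone(x)
        return self.pathology_head(z), self.secondary_head(z)

def multitask_loss(outputs, targets, weights=(1.0, 0.5)):
    """Weighted sum of per-task cross-entropies; the weights are tunable assumptions."""
    ce = nn.CrossEntropyLoss()
    return weights[0] * ce(outputs[0], targets[0]) + weights[1] * ce(outputs[1], targets[1])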

Semantic Segmentation for Medical Ultrasound Imaging

Florin Andrei, December 2022. Link to pdf file

Abstract: Breast cancer is currently the most prevalent type of cancer worldwide. According to the World Health Organization, millions of cases are diagnosed each year, and in 2020 it caused approximately 685,000 deaths worldwide. Forecasts indicate that the number of cases and the number of deaths may increase for the foreseeable future. Within a larger joint project by the University of Wisconsin - La Crosse and the Mayo Clinic Health Systems, this study developed deep learning techniques that could be used to improve the diagnostic process for breast cancer. There are two main ways that deep learning technology could be applied: by providing real-time visual cues to radiologists while exploring lesions using ultrasound, and by creating predictions that become inputs for other models in so-called ensemble methods, where multiple models work together to predict characteristics of the lesion. Based on a review of the literature on medical imaging and image segmentation techniques, several semantic segmentation models were created and trained on breast ultrasound imaging datasets. The research found that the best models performed at a level that would be expected from state-of-the-art image segmentation techniques, adjusted for the additional challenges raised by ultrasound imaging. Further work is needed within the parent project to integrate the models trained in this study with software usable by radiologists, and to explore the performance of ensemble methods where multiple models, including models trained in this study, work together to make predictions on ultrasound images that would ultimately lead to better diagnostic results and better patient outcomes.
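
A minimal sketch of training one such semantic segmentation model is shown below. The U-Net from the segmentation_models_pytorch package, the binary cross-entropy loss, and the Dice coefficient as the evaluation metric are common choices assumed for illustration, not necessarily the configurations used in this study.

import torch
import segmentation_models_pytorch as smp

# U-Net with a ResNet encoder, predicting a single-channel lesion mask.
model = smp.Unet(encoder_name="resnet34", in_channels=1, classes=1)

def dice_coefficient(logits, target, eps=1e-6):
    """Overlap metric commonly used for lesion masks (1.0 means perfect overlap)."""
    pred = (torch.sigmoid(logits) > 0.5).float()
    inter = (pred * target).sum()
    return (2 * inter + eps) / (pred.sum() + target.sum() + eps)

def train_step(images, masks, optimizer):
    """images, masks: (N, 1, H, W) float tensors from a breast ultrasound dataset."""
    optimizer.zero_grad()
    logits = model(images)
    loss = torch.nn.functional.binary_cross_entropy_with_logits(logits, masks)
    loss.backward()
    optimizer.step()
    return loss.item(), dice_coefficient(logits, masks).item()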

Computer-Aided Diagnosis (CAD) of Breast Cancer: Methods of Model Explainability

Teresa Bodart, December 2022. Link to pdf file

Abstract: The International Agency for Research on Cancer announced that in 2020 female breast cancer became the most diagnosed cancer worldwide and the most common cause of cancer-related death in women. Still, breast cancer generally has a good prognosis with timely detection and appropriate treatment. Recently, computer-aided diagnosis (CAD) systems have shown promising results in using artificial intelligence (AI) to detect malignant lesions in breast ultrasound (US) imaging. When working with AI in a clinical setting, however, the American College of Radiology advocates for radiologist understanding of the algorithms in use. Accordingly, this study contributes to an ongoing collaboration between the University of Wisconsin-La Crosse and Mayo Clinic Enterprise by investigating three methods of AI explainability for the CAD software in development. Class activation maps, saliency maps, and attention map-enhanced class activation maps were compared to determine the most useful technique for visualizing the regions of the US images used by the models to determine pathology. The evaluation showed that saliency maps are the most promising method for visually explaining breast US classification. However, the small dataset and simplified model architecture used in this study mean that further research is necessary before fully implementing this method within the greater collaboration.
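
For reference, a vanilla gradient saliency map, one common form of the saliency maps compared above, can be computed roughly as in the sketch below; the function signature, tensor shapes, and the assumption of an already-trained classifier are illustrative only.

import torch

def saliency_map(model, image, target_class):
    """Per-pixel magnitude of the gradient of the class score w.r.t. the input image."""
    model.eval()
    image = image.clone().requires_grad_(True)   # shape (1, C, H, W)
    score = model(image)[0, target_class]
    score.backward()
    return image.grad.abs().max(dim=1)[0]        # (1, H, W) heatmap over the input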

Computer-Aided Decision Software for Breast Ultrasound Lesions Business Plan

Elsa Braun, May 2023. Link to pdf file

Abstract: Our company, Ultrasound.AI, is launching a hybrid deep learning algorithm that will collaborate with radiologists to predict the presence of breast cancer from ultrasound imagery. Our algorithm is unique in its ability to reduce the time radiologists spend reviewing a patient’s images, thus improving throughput. Our algorithm can preview all images in a patient study and prioritize them for the radiologist. The Ultrasound.AI algorithm can match the accuracy of radiologists while reducing the need for unwarranted breast biopsies. Our algorithm uniquely offers transparency to the radiologist: a heatmap process allows the radiologist to understand how the algorithm arrived at its breast cancer prediction. With our next update, the Ultrasound.AI algorithm, in collaboration with our partner, Visage Imaging, will further save radiologist time by generating standardized diagnosis and treatment data (a BI-RADS report) for the patient’s electronic medical record.

Organizing Ultrasound Imaging Data for Enhanced Breast Cancer Diagnosis with Deep Learning Models

Pranali Shendekar, May 2023. Link to pdf file

Abstract: This capstone project aimed to develop a software tool to manage and process ultrasound imaging data used to train deep learning models for predicting the presence of cancerous lesions in breast tissue. The project involved collecting, managing, updating, and summarizing study and annotation data in batches as they become available. The study data included ultrasound images, patient information, biopsy results, BI-RADS scores, and metadata about the images and ultrasound equipment. The annotation data included additional labels and visual outlines of the lesions, which were provided retrospectively by trained radiologists.

The software tool was designed to extract information from the metadata and text annotations accompanying each image and add it to the master index file. Additionally, the tool had to be able to copy the images to the correct locations in the collection and ensure that the image and patient IDs were distinct from those of previously added data. The tool also allowed for the possibility of overwriting existing data, adding additional columns to the index file, and incorporating corrections. Furthermore, the tool supported additional columns for annotation data that may be collected later.
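
A rough sketch of this batch-update step, appending new metadata rows to the master index while skipping duplicate image IDs and copying the corresponding images into place, is given below. The file names, columns, and directory layout are hypothetical assumptions, not the tool’s actual schema.

import shutil
from pathlib import Path
import pandas as pd

def add_batch(index_csv, batch_dir, image_root):
    """Append a new batch of study metadata to the master index, skipping duplicate image IDs."""
    index = pd.read_csv(index_csv)
    batch = pd.read_csv(Path(batch_dir) / "metadata.csv")   # assumed: image_id, patient_id, birads, biopsy, ...
    new_rows = batch[~batch["image_id"].isin(index["image_id"])]
    for image_id in new_rows["image_id"]:
        shutil.copy(Path(batch_dir) / f"{image_id}.png", Path(image_root) / f"{image_id}.png")
    index = pd.concat([index, new_rows], ignore_index=True)
    index.to_csv(index_csv, index=False)
    return index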

Python code was also developed to enable simple filtering of the image data and to provide data downloaders for retrieving the image data from the collection. Finally, to demonstrate the efficiency of the database, the dataset was tested with a simple pre-trained ResNet50 deep learning model, and PyTorch data loaders were used to pass data to the model.
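
The filtering and loading step might look roughly like the sketch below, where a small PyTorch Dataset reads rows from the index file, applies a simple filter, and feeds images to a pre-trained ResNet50. The index columns, file layout, and filter criterion are assumptions for illustration only.

import pandas as pd
import torch
from torch.utils.data import Dataset, DataLoader
from torchvision.io import read_image
from torchvision.models import resnet50
import torchvision.transforms as T

class BUSIndexDataset(Dataset):
    """Reads image paths and labels from the master index file (columns assumed)."""
    def __init__(self, index_csv, image_root, birads_min=None):
        df = pd.read_csv(index_csv)
        if birads_min is not None:                  # simple filtering on an index column
            df = df[df["birads"] >= birads_min]
        self.df = df.reset_index(drop=True)
        self.image_root = image_root

    def __len__(self):
        return len(self.df)

    def __getitem__(self, i):
        row = self.df.iloc[i]
        img = read_image(f"{self.image_root}/{row['image_id']}.png").float() / 255
        img = T.Resize((224, 224))(img).expand(3, -1, -1)   # grayscale -> 3 channels
        return img, int(row["malignant"])

loader = DataLoader(BUSIndexDataset("index.csv", "images"), batch_size=16, shuffle=True)
model = resnet50(weights="DEFAULT")                  # pre-trained model used to test the pipeline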

Overall, this project contributed to developing a useful tool that can aid in the early detection of breast cancer, leading to better patient outcomes. The software tool developed in this project can be used in further research studies in medical imaging, deep learning, and clinical settings to support radiologists in diagnosing breast cancer.