2022 Physics Residency Alumna
My clinical work focuses on adaptive and MR-guided radiotherapy with the ViewRay MRIdian system. Through onboard MR imaging, this system allows us to adapt a patient’s radiation plan to their anatomy on the day of treatment and to visualize that anatomy throughout delivery. Beyond my clinical work on the ViewRay system, I have a strong interest in increasing automation throughout the clinic to streamline workflows. My research has primarily focused on patient safety and quality management, including analysis of trends and investigation of workflow inefficiencies.
I also participate in the educational mission of the department, lecturing in our MD resident physics course and teaching our radiation therapy students. In addition to lectures, I have conducted labs for the radiation therapy students, giving them hands-on experience with physics equipment and a closer look at what physicists do to support the clinic.
Residency, University of Wisconsin–Madison, Therapeutic Medical Physics (2022)
PhD, University of Chicago, Medical Physics (2020)
BS, University of Wisconsin–Madison, Nuclear Engineering (2015)
Assistant Professor, Department of Human Oncology (2022)
Selected Honors and Awards
Lawrence H. Lanzl Medical Physics Graduate Fellowship Award – University of Chicago (2019)
American Association of Physicists in Medicine Expanding Horizons Travel Grant (2018)
Boards, Advisory Committees and Professional Organizations
American Association of Physicists in Medicine (AAPM) Member (2017 to present)
Society for Imaging Informatics in Medicine (SIIM) Member (2019 to present)
MR-Guided and Adaptive Radiotherapy, Patient Safety and Quality Management, Automation
The impact of COVID-19 on a high-volume incident learning system: A retrospective analysis. Journal of Applied Clinical Medical Physics
Jacqmin DJ, Crosby SM
2022 Jul;23(7):e13653. doi: 10.1002/acm2.13653. Epub 2022 May 26.
PURPOSE: The purpose of this work was to assess how the coronavirus disease 2019 (COVID-19) pandemic impacted our incident learning system data and communicate the impact of a major exogenous event on radiation oncology clinical practice.
METHODS: Trends in our electronic incident reporting system were analyzed to ascertain the impact of the COVID-19 pandemic, including any direct clinical changes. Incident reports submitted in the 18 months prior to the pandemic (September 14, 2018 to March 13, 2020) and reports submitted during the first 18 months of the pandemic (March 14, 2020 to September 13, 2021) were compared. The incident reports include several data elements that were evaluated for trends between the two time periods, and statistical analysis was performed to compare the proportions of reports.
RESULTS: In the 18 months prior to COVID-19, 192 reports were submitted per 1000 planning tasks (n = 832 total). In the first 18 months of the pandemic, 147 reports per 1000 planning tasks were submitted (n = 601 total), a decrease of 23.4%. Statistical analysis revealed that there were no significant changes among the data elements between the pre- and during COVID-19 time periods. An analysis of the free-text narratives in the reports found that phrases related to pretreatment imaging were common before COVID-19 but not during. Conversely, phrases related to intravenous contrast, consent for computed tomography, and adaptive radiotherapy became common during COVID-19.
CONCLUSIONS: The data elements captured by our incident learning system were stable after the onset of the COVID-19 pandemic, with no statistically significant findings after correction for multiple comparisons. A trend toward fewer reports submitted for low-risk issues was observed. The methods used in the work can be generalized to events with a large-scale impact on the clinic or to monitor an incident learning system to drive future improvement activities.
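The abstract notes that "statistical analysis was performed to compare the proportions of reports" without naming the test. A two-proportion z-test is one standard choice for this kind of comparison; the sketch below is illustrative (the counts shown are the per-1000 rates from the abstract, not the study's actual planning-task denominators):

```python
from math import sqrt, erf

def two_proportion_z(x1, n1, x2, n2):
    """Two-sided two-proportion z-test with a pooled variance estimate.

    x1, x2: event counts; n1, n2: sample sizes.
    Returns (z statistic, two-sided p-value).
    """
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided p-value from the standard normal CDF:
    # Phi(x) = (1 + erf(x / sqrt(2))) / 2
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Illustrative comparison: 192 vs. 147 reports per 1000 planning tasks.
z, p = two_proportion_z(192, 1000, 147, 1000)
```

Note that a per-1000-task comparison like this treats each planning task as an independent trial; the study's multiple-comparison correction mentioned in the conclusions would be applied on top of such per-element tests.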
PMID:35616007 | PMC:PMC9278685 | DOI:10.1002/acm2.13653
Anatomic Point-Based Lung Region with Zone Identification for Radiologist Annotation and Machine Learning for Chest Radiographs. Journal of Digital Imaging
Li F, Armato SG, Engelmann R, Rhines T, Crosby J, Lan L, Giger ML, MacMahon H
2021 Aug;34(4):922-931. doi: 10.1007/s10278-021-00494-7. Epub 2021 Jul 29.
Our objective is to investigate the reliability and usefulness of anatomic point-based lung zone segmentation on chest radiographs (CXRs) as a reference standard framework and to evaluate the accuracy of automated point placement. Two hundred frontal CXRs were presented to two radiologists who identified five anatomic points: two at the lung apices, one at the top of the aortic arch, and two at the costophrenic angles. Of these 1000 anatomic points, 161 (16.1%) were obscured (mostly by pleural effusions). Observer variations were investigated. Eight anatomic zones then were automatically generated from the manually placed anatomic points, and a prototype algorithm was developed using the point-based lung zone segmentation to detect cardiomegaly and levels of diaphragm and pleural effusions. A trained U-Net neural network was used to automatically place these five points within 379 CXRs of an independent database. Intra- and inter-observer variation in mean distance between corresponding anatomic points was larger for obscured points (8.7 mm and 20 mm, respectively) than for visible points (4.3 mm and 7.6 mm, respectively). The computer algorithm using the point-based lung zone segmentation could measure the cardiothoracic ratio and assess diaphragm position and pleural effusion level. The mean distance between corresponding points placed by the radiologist and by the neural network was 6.2 mm. The network identified 95% of the radiologist-indicated points, with only 3% of network-identified points being false positives. In conclusion, a reliable anatomic point-based lung segmentation method for CXRs has been developed with expected utility for establishing reference standards for machine learning applications.
PMID:34327625 | PMC:PMC8455736 | DOI:10.1007/s10278-021-00494-7
Deep convolutional neural networks in the classification of dual-energy thoracic radiographic views for efficient workflow: analysis on over 6500 clinical radiographs. Journal of Medical Imaging (Bellingham, Wash.)
Crosby J, Rhines T, Li F, MacMahon H, Giger M
2020 Jan;7(1):016501. doi: 10.1117/1.JMI.7.1.016501. Epub 2020 Feb 3.
DICOM header information is frequently used to classify medical image types; however, if a header is missing fields or contains incorrect data, its utility is limited. To expedite image classification, we trained convolutional neural networks (CNNs) in two classification tasks for thoracic radiographic views obtained from dual-energy studies: (a) distinguishing between frontal, lateral, soft tissue, and bone images and (b) distinguishing between posteroanterior (PA) and anteroposterior (AP) chest radiographs. CNNs with AlexNet architecture were trained from scratch. For task (a), 1910 manually classified radiographs were used for training, and the network was then tested with an independent set of 3757 images. Frontal radiographs from the two datasets were combined to train a network for task (b), which was tested using an independent set of 1000 radiographs. ROC analysis was performed for each trained CNN with area under the curve (AUC) as a performance metric. Classification between frontal images (AP/PA) and other image types yielded an AUC of 0.997 [95% confidence interval (CI): 0.996, 0.998]. Classification between PA and AP radiographs resulted in an AUC of 0.973 (95% CI: 0.961, 0.981). CNNs were able to rapidly classify thoracic radiographs with high accuracy, thus potentially contributing to effective and efficient workflow.
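The AUC figures above come from ROC analysis of the classifiers' output scores. As a reminder of what that metric measures, the empirical AUC equals the normalized Mann-Whitney U statistic: the fraction of (positive, negative) score pairs the classifier ranks correctly. A minimal sketch (the scores below are hypothetical, not from the study):

```python
def auc_from_scores(pos_scores, neg_scores):
    """Empirical AUC as the normalized Mann-Whitney U statistic:
    the fraction of (positive, negative) score pairs ranked correctly,
    with ties counted as 1/2. Equivalent to the area under the
    empirical ROC curve."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# Hypothetical scores: a perfectly separating classifier scores 1.0.
perfect = auc_from_scores([0.9, 0.8], [0.1, 0.2])  # -> 1.0
```

An AUC near 0.997, as reported for task (a), means the network ranks virtually every frontal image above every non-frontal one.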
PMID:32042858 | PMC:PMC6995870 | DOI:10.1117/1.JMI.7.1.016501
Single-institution report of setup margins of voluntary deep-inspiration breath-hold (DIBH) whole breast radiotherapy implemented with real-time surface imaging. Journal of Applied Clinical Medical Physics
Xiao A, Crosby J, Malin M, Kang H, Washington M, Hasan Y, Chmura SJ, Al-Hallaq HA
2018 Jul;19(4):205-213. doi: 10.1002/acm2.12368. Epub 2018 Jun 22.
PURPOSE: We calculated setup margins for whole breast radiotherapy during voluntary deep-inspiration breath-hold (vDIBH) using real-time surface imaging (SI).
METHODS AND MATERIALS: Patients (n = 58) with a 27-to-31 split between right- and left-sided cancers were analyzed. Treatment beams were gated using AlignRT by registering the whole breast region-of-interest to the surface generated from the simulation CT scan. AlignRT recorded three-dimensional (3D) displacements and the beam-on-state every 0.3 s. Means and standard deviations of the displacements during vDIBH for each fraction were used to calculate setup margins. Intra-DIBH stability and the intrafraction reproducibility were estimated from the medians of the 5th to 95th percentile range of the translations in each breath-hold and fraction, respectively.
RESULTS: A total of 7269 breath-holds were detected over 1305 fractions in which a median dose of 200 cGy was delivered. Each fraction was monitored for 5.95 ± 2.44 min. Calculated setup margins were 4.8 mm (A/P), 4.9 mm (S/I), and 6.4 mm (L/R). The intra-DIBH stability and the intrafraction reproducibility were ≤0.7 mm and ≤2.2 mm, respectively. The isotropic margin according to SI (9.2 mm) was comparable to other institutions' calculations that relied on x-ray imaging and/or spirometry for patients with left-sided cancer (9.8-11.0 mm). Likewise, intra-DIBH variability and intrafraction reproducibility of breast surface measured with SI agreed with spirometry-based positioning to within 1.2 and 0.36 mm, respectively.
CONCLUSIONS: We demonstrated that intra-DIBH variability, intrafraction reproducibility, and setup margins are similar to those reported by peer studies that used spirometry-based positioning.
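The abstract states that margins were computed from the means and standard deviations of the measured displacements but does not name the recipe. A widely used population margin formula (van Herk's) combines the systematic component Σ (the SD of the per-patient mean displacements) and the random component σ (the RMS of the per-patient SDs):

```latex
% Van Herk population margin recipe (illustrative; the study's exact
% recipe is not stated in the abstract above):
%   \Sigma : systematic error (SD of per-patient mean displacements)
%   \sigma : random error (RMS of per-patient SDs)
M = 2.5\,\Sigma + 0.7\,\sigma
```

Applied per axis, such a recipe yields directional CTV-to-PTV margins of the kind reported above for the A/P, S/I, and L/R directions.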
PMID:29935001 | PMC:PMC6036385 | DOI:10.1002/acm2.12368