Dates
Monday, March 21, 2022 - 01:00pm to Monday, March 21, 2022 - 02:00pm
Location
Zoom
Event Description

All are invited to Parmida Ghahremani's doctoral defense on Monday, 03/21/2022, at 1:00 pm on Zoom:

https://stonybrook.zoom.us/j/95932054492?pwd=eGtSRXEzbTJhbEdXR0dZdVNSSGFoUT09

Title: Deep Learning for Segmentation, Classification, and Visualization in Optical Microscopy

Abstract: This dissertation presents deep learning approaches for detecting, segmenting, and classifying objects in optical microscopy images, as well as data collection methods, including crowdsourcing and image synthesis, for assembling training and testing datasets. To this end, we present three main frameworks for three types of optical microscopy data, along with four data synthesis and collection techniques.

First, we introduce NeuroConstruct, a novel end-to-end application for the segmentation, registration, and visualization of brain volumes imaged using wide-field microscopy. NeuroConstruct offers a Segmentation Toolbox that includes various annotation functions and an automatic convolutional neural network-based neurite segmentation module. To visualize neurites in a given volume, NeuroConstruct offers a hybrid rendering technique. For a complete reconstruction of the 3D neurites, we present a Registration Toolbox that provides automatic coarse-to-fine alignment of serially sectioned samples.
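
A minimal, generic sketch of the coarse-to-fine idea: estimate a translation between two serial sections at low resolution, then refine it at full resolution. It uses off-the-shelf phase correlation from scikit-image purely for illustration; the function coarse_to_fine_align and the choice of phase correlation are assumptions for this example, not NeuroConstruct's actual registration pipeline.

# Coarse-to-fine translation alignment of two serial sections (illustrative only).
import numpy as np
from scipy.ndimage import shift as nd_shift
from skimage import data
from skimage.registration import phase_cross_correlation
from skimage.transform import rescale

def coarse_to_fine_align(fixed, moving, coarse_scale=0.25):
    """Estimate a shift on downsampled images, then refine it at full resolution."""
    # Coarse pass: register heavily downsampled copies of the two sections.
    fixed_lo = rescale(fixed, coarse_scale, anti_aliasing=True)
    moving_lo = rescale(moving, coarse_scale, anti_aliasing=True)
    coarse_shift, _, _ = phase_cross_correlation(fixed_lo, moving_lo)
    coarse_shift = coarse_shift / coarse_scale  # convert to full-resolution pixels

    # Fine pass: apply the coarse estimate, then refine with subpixel precision.
    moving_coarse = nd_shift(moving, coarse_shift)
    fine_shift, _, _ = phase_cross_correlation(fixed, moving_coarse, upsample_factor=10)
    total_shift = coarse_shift + fine_shift
    return nd_shift(moving, total_shift), total_shift

# Toy example: misalign a test image by a known offset and recover it.
fixed = data.camera().astype(float)
moving = nd_shift(fixed, (17.0, -9.0))
aligned, estimated = coarse_to_fine_align(fixed, moving)
print("estimated correction:", estimated)  # approximately (-17, 9)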

In recent years, deep convolutional neural networks have shown tremendous success in solving many biomedical tasks. However, developing deep convolutional networks requires access to large quantities of high-quality annotated images. Because image annotation is a tedious task for biomedical experts, recruiting non-expert crowd workers can be an economical and efficient way to build a rich dataset of annotated images. We first present CrowdDeep, a novel technique that uses crowd-annotated data to improve the segmentation accuracy of deep learning models trained with expert annotations on hematoxylin and eosin (H&E) slides.
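
As one hedged illustration of how crowd-sourced and expert annotations might be combined during training, the sketch below folds both into a single optimization step and down-weights the noisier crowd labels. The function mixed_annotation_step, the weighting scheme, and the toy model are hypothetical placeholders, not CrowdDeep's actual method.

# Joint training step over expert- and crowd-annotated H&E patches (illustrative only).
import torch
import torch.nn as nn

def mixed_annotation_step(model, optimizer, expert_batch, crowd_batch, crowd_weight=0.5):
    """Run one optimization step that trusts crowd labels less than expert labels."""
    criterion = nn.BCEWithLogitsLoss()
    model.train()
    optimizer.zero_grad()

    x_e, y_e = expert_batch  # expert-annotated patches and binary masks
    x_c, y_c = crowd_batch   # crowd-annotated patches and (noisier) binary masks

    loss = criterion(model(x_e), y_e) + crowd_weight * criterion(model(x_c), y_c)
    loss.backward()
    optimizer.step()
    return loss.item()

# Toy usage with a 1x1-convolution "segmenter" and random patches.
model = nn.Conv2d(3, 1, kernel_size=1)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
patches = torch.randn(4, 3, 64, 64)
masks = torch.randint(0, 2, (4, 1, 64, 64)).float()
print(mixed_annotation_step(model, optimizer, (patches, masks), (patches, masks)))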

Reporting biomarkers assessed by routine immunohistochemical (IHC) staining of tissue is broadly used in diagnostic pathology laboratories for patient care. To date, clinical reporting is predominantly qualitative or semi-quantitative. By creating a multitask deep learning framework called DeepLIIF, we present a single-step solution to nuclear segmentation and quantitative single-cell IHC scoring. Leveraging a unique de novo dataset of co-registered IHC and multiplex immunofluorescence (mpIF) data generated from the same tissue section, we simultaneously segment and translate low-cost and prevalent IHC slides into more expensive-yet-informative mpIF images. To expand the dataset size, we present a style transfer model that uses an attention and normalization module to transfer the style of an IHC image to paired hematoxylin and mpIF marker images.
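
For intuition about the multitask setup described above, here is a minimal PyTorch stand-in with a shared encoder, one head producing nuclear segmentation logits, and another head translating the IHC input toward an mpIF-like image, trained with a joint loss. The class MultitaskIHCNet, its layer sizes, and the loss terms are illustrative assumptions and do not reproduce the DeepLIIF architecture.

# Shared-encoder multitask model: segmentation head + IHC-to-mpIF translation head (illustrative only).
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultitaskIHCNet(nn.Module):
    def __init__(self, in_channels=3, mpif_channels=3, features=32):
        super().__init__()
        # Shared feature extractor over the IHC input.
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, features, 3, padding=1), nn.ReLU(),
            nn.Conv2d(features, features, 3, padding=1), nn.ReLU(),
        )
        # Head 1: per-pixel nuclear segmentation logits.
        self.seg_head = nn.Conv2d(features, 1, kernel_size=1)
        # Head 2: translation toward an mpIF-style image in [-1, 1].
        self.translate_head = nn.Conv2d(features, mpif_channels, kernel_size=1)

    def forward(self, ihc):
        h = self.encoder(ihc)
        return self.seg_head(h), torch.tanh(self.translate_head(h))

# Joint loss over both tasks on a random batch.
net = MultitaskIHCNet()
ihc = torch.randn(2, 3, 128, 128)
seg_target = torch.randint(0, 2, (2, 1, 128, 128)).float()
mpif_target = torch.rand(2, 3, 128, 128) * 2 - 1
seg_logits, mpif_pred = net(ihc)
loss = F.binary_cross_entropy_with_logits(seg_logits, seg_target) + F.l1_loss(mpif_pred, mpif_target)
print(loss.item())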

Event Title
Parmida Ghahremani, Ph.D. Thesis Defense