Abstract

Biomedical image segmentation is crucial for accurate diagnostics, treatment planning, and biological research. However, biomedical images pose unique challenges due to their diverse modalities, complex biological structures, and the scarcity or high cost associated with creating large annotated datasets. Although convolutional neural networks (CNNs) achieve state-of-the-art accuracy, their effectiveness is often limited by poor generalizability, as they usually require extensive labeled data and domain-specific adaptations. Self-supervised and unsupervised methods offer valuable alternatives in scenarios where annotated data is unavailable or when domain-specific knowledge and cues can be leveraged to guide segmentation. These methods are particularly beneficial in biomedical contexts, as they utilize unlabeled or weakly labeled data to bypass the dependence on extensive annotations. Nonetheless, these approaches frequently encounter challenges related to segmentation accuracy, robustness, and domain-specific variability, limiting their effectiveness in broader clinical and research applications. Recently, foundation models, specifically the Segment Anything Model (SAM), have shown promise in overcoming many of these limitations. Trained on large and diverse datasets, SAM's capability for generalized segmentation via zero-shot learning suggests potential applicability across various biomedical imaging modalities and domains. However, SAM requires fine-tuning and adaptation to achieve reliable performance within the specialized domain of biomedical images. This dissertation addresses critical gaps across these segmentation paradigms. Chapter 1 introduces a supervised segmentation pipeline designed explicitly for comprehensive 3D cell instance segmentation, tracking, and motility classification of Toxoplasma gondii, emphasizing accuracy, robustness, and usability.
Chapters 2 and 3 explore self-supervised and minimally supervised segmentation methods: Chapter 2 proposes a self-supervised approach leveraging motion-derived pseudo-labels for efficient cilia segmentation without manual annotations, while Chapter 3 investigates minimally supervised contrastive learning to enhance generalization and accuracy across biomedical imaging modalities. Lastly, Chapter 4 examines the adaptation of SAM to biomedical segmentation tasks, demonstrating techniques to fine-tune foundation models, enabling their effective use in biomedical domains.
