Abstract

This thesis explores two uses of deep neural networks for quasi-semantic visual tasks in two domains: producing convincing color correction for raw video stills, and discovering images semantically similar to a query image. First, two color-grading methods are compared. Method A, which retrains a classification network, works effectively but is not easily extensible. Method B, which trains a conditional Generative Adversarial Network (cGAN), works extremely well, though it slightly softens images and can introduce artifacts into the corrected output. Of the two methods, the cGAN is chosen as the better option for future research. Second, we repurpose and retrain a classification network to produce a histogram of output classifications, which is used to retrieve images similar to a query image. This method works effectively, discovering images that match or nearly match the query image with a high degree of precision.
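The retrieval idea in the second part can be sketched as follows. This is a minimal illustration, not the thesis's actual pipeline: it assumes each image has already been run through a classifier (e.g., over several crops), yielding a matrix of per-crop class probabilities, and that similarity between the resulting histograms is measured by cosine similarity. All function names and parameters here are hypothetical.

```python
import numpy as np

def classification_histogram(probs: np.ndarray) -> np.ndarray:
    """Collapse per-crop classifier outputs for one image into a single
    normalized histogram of predicted classes.

    probs: array of shape (n_crops, n_classes) with softmax probabilities.
    """
    # Count how often each class wins the argmax across crops,
    # then normalize so the histogram sums to 1.
    counts = np.bincount(probs.argmax(axis=1), minlength=probs.shape[1])
    return counts / counts.sum()

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two histograms (1.0 = identical direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def rank_by_similarity(query_hist: np.ndarray, index: dict) -> list:
    """Rank an index of {image_id: histogram} by similarity to the query."""
    return sorted(index.items(),
                  key=lambda kv: cosine_similarity(query_hist, kv[1]),
                  reverse=True)
```

In use, every database image is reduced to one small histogram ahead of time, so retrieval is a cheap nearest-neighbor search over vectors rather than a comparison of raw pixels.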
