Abstract
Computing infrastructure, data collection, and algorithms for extracting information from images and videos have advanced significantly in recent years. Progress has been especially striking in image captioning and video captioning. However, most advances in video captioning still apply only to short videos.
In this dissertation, we caption long videos using only their keyframes, a small subset of the total video frames. Instead of processing tens of thousands of frames, the system processes only the few frames selected as keyframes. This creates a trade-off between the number of frames processed and the speed of captioning, and our approach lets the user specify that trade-off between execution time and accuracy. We apply this system to generate titles for videos. If reasonably meaningful titles can be generated, search engines could use them as metadata; for example, a query for videos containing a woman in a red dress and sunglasses could be answered from such titles. An additional novel application processes a video and directly generates captions describing a person's activities over a period of time, without being constrained to a particular time or location. The proposed model could serve as assistive technology to foster and facilitate physical activity, potentially helping people manage their activities and reduce the health risks of an inactive lifestyle. Our work could thus serve as a healthcare application used by physicians or the public.
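The user-specified trade-off between execution time and accuracy can be sketched as follows. This is a minimal illustration, not the dissertation's actual method: the function name, the single trade-off parameter, and the uniform-sampling strategy (a stand-in for a real keyframe detector based on shot boundaries or visual change) are all assumptions introduced here.

```python
def select_keyframes(frames, trade_off):
    """Pick a subset of frames to caption.

    trade_off in (0, 1]: higher values keep more frames
    (higher accuracy, slower captioning); lower values keep
    fewer frames (faster captioning, possibly lower accuracy).
    """
    if not 0 < trade_off <= 1:
        raise ValueError("trade_off must be in (0, 1]")
    # Keep roughly trade_off * len(frames) frames, at least one,
    # sampled uniformly across the video. A real keyframe detector
    # would pick frames by visual-change or shot-boundary criteria.
    n_keep = max(1, round(trade_off * len(frames)))
    step = len(frames) / n_keep
    return [frames[int(i * step)] for i in range(n_keep)]

# A 10,000-frame video reduced to 10 frames with trade_off=0.001;
# only these 10 frames would then be passed to the captioner.
frames = list(range(10_000))
keys = select_keyframes(frames, 0.001)
```

With this shape, raising `trade_off` toward 1 smoothly trades execution time for captioning accuracy, which is the knob the approach above exposes to the user.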
We demonstrate that these models and procedures, and the interactions they enable, are a path toward artificial intelligence. Our contribution lies in designing hybrid deep learning architectures that apply to long videos by captioning their keyframes. We consider the technology and methodology developed here as steps toward the applications discussed in this dissertation. The system we developed could also open the door to using Generative Adversarial Networks to generate videos; such a capability does not presently exist, and this work is a step toward that goal.