Abstract
Over the past decade, Ubiquitous Computing has become rapidly more involved in our daily lives. Smartphones, smartwatches, and smart homes are examples of this technological explosion. These smart devices are usually equipped with sensors that can serve numerous purposes. In recent years, there has been rising interest in recognizing human activity using mobile sensors. These Human Activity Recognition (HAR) systems can be offline (delayed feedback) or online (immediate feedback). Much research to date has focused on recognizing activities with repetitive patterns. However, a wide range of activities do not contain repetitive patterns, and these activities have not yet been explored.
In this dissertation, we explore two distinct types of non-repetitive activities: stationary activities and non-stationary activities. Each of these types of activities has different characteristics that pose unique challenges. First, we develop an offline approach utilizing Machine Learning to recognize prayer activity as an example of a stationary activity. From accelerometer data collected from 20 subjects, we extract 90 features from the Time and Frequency domains and compare the performance of eight Machine Learning algorithms.
Second, we propose an offline approach to recognize soccer as an example of a non-stationary activity. We succeed in recognizing five different soccer actions. We utilize the Fast Hadamard Transform in lieu of the Fast Fourier Transform to decrease computational cost. In addition, we show that the recognition task can be achieved using two accelerometer axes instead of three. We achieve an accuracy of 88% when using a single classifier and 90% when combining multiple classifiers. To demonstrate the feasibility of recognizing non-stationary activities in real time, we examine three Time Series classification algorithms: Time Series Forest, Fast Shapelets, and Bag-of-SFA-Symbols, in conjunction with other factors that might affect classification performance. Additionally, we introduce a novel collaborative model based on a majority voting mechanism to further enhance the performance of the system. Our results show that choosing the right parameters can reduce training time drastically without sacrificing accuracy. Our collaborative model outperforms the single model, reaching 84% accuracy while reducing training time by an order of magnitude.
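The computational advantage behind substituting the Fast Hadamard Transform for the Fast Fourier Transform is that the Hadamard transform requires only additions and subtractions, with no complex multiplications. As a minimal illustrative sketch (not the dissertation's implementation), an iterative fast Walsh-Hadamard transform over a signal whose length is a power of two can be written as:

```python
import numpy as np

def fwht(signal):
    """Fast Walsh-Hadamard transform (unnormalized), O(n log n).

    Uses only additions and subtractions, which is the source of its
    cost advantage over the FFT. Input length must be a power of two.
    """
    a = np.asarray(signal, dtype=float).copy()
    n = len(a)
    h = 1
    while h < n:
        # Butterfly stage: combine pairs at distance h with sum/difference.
        for i in range(0, n, h * 2):
            for j in range(i, i + h):
                x, y = a[j], a[j + h]
                a[j], a[j + h] = x + y, x - y
        h *= 2
    return a

# Example on a short accelerometer-like window (hypothetical values):
# fwht([1, 0, 1, 0, 0, 1, 1, 0]) -> [4, 2, 0, -2, 0, 2, 0, 2]
```

In a feature-extraction pipeline, the transform coefficients would replace FFT magnitudes as frequency-domain features for the classifier.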