Hand-Crafted Features with Simple Deep Learning Architectures for Human Activity Recognition
Albadawi, Yaman Sufian
Description
A Master of Science thesis in Computer Engineering by Yaman Sufian Albadawi entitled, “Hand-Crafted Features with Simple Deep Learning Architectures for Human Activity Recognition”, submitted in June 2024. The thesis advisor is Dr. Tamer Shanableh. A soft copy is available (Thesis, Completion Certificate, Approval Signatures, and AUS Archives Consent Form).
Abstract
With the growth of the wearable device market, wearable sensor-based human activity recognition systems have been gaining increasing research interest because of rising demand in many application areas. This research presents a novel sensor-based human activity recognition system that combines a hand-crafted feature extraction technique with a deep learning classifier. In this work, we divide the sensor sequences time-wise into non-overlapping 2D segments. We then compute statistical features from each 2D segment using two approaches: the first computes features from the raw sensor readings, while the second applies time-series differencing to the sensor readings prior to feature computation. Applying time-series differencing to the 2D segments helps capture the underlying structure and dynamics of the sensor readings across time. We also experiment with two feature selection methods, stepwise regression and SelectKBest, to retain useful features and build a more representative model from the extracted features. In addition, we investigate the effect of adding a one-dimensional convolutional layer and an attention layer to the deep learning network on model performance. We experiment with different numbers of 2D segments per sensor reading sequence and report results with and without the different components of the proposed system. The proposed feature extraction method is also integrated with an existing transformer designed for human activity recognition. All of these arrangements are tested with different deep learning architectures. Several experiments are performed on four benchmark datasets: mHealth, USC-HAD, UCI-HAR, and DSA. The experimental results show that the proposed system achieves higher recognition rates and F1-scores than those reported in the most recent studies, with recognition rates of 99.17%, 81.07%, 99.44%, and 94.03% on the four datasets, respectively.
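To make the pipeline described in the abstract concrete, the sketch below illustrates the general idea of segmenting a sensor sequence into non-overlapping 2D segments, computing per-segment statistics from raw and time-differenced readings, and applying SelectKBest feature selection. It is a minimal illustration only, not the thesis implementation: the specific statistical features (mean, standard deviation, min, max, median), first-order differencing, four segments, and k = 50 are assumptions chosen for demonstration.

```python
# Minimal sketch of the abstract's feature-extraction idea (not the thesis code).
# Assumed choices: feature set, first-order differencing, n_segments=4, k=50.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif


def segment_features(sequence, n_segments=4, use_differencing=False):
    """Split a (timesteps, channels) sensor sequence into non-overlapping
    2D segments along time and compute per-segment statistics."""
    if use_differencing:
        # First-order differencing emphasizes the dynamics of the readings over time.
        sequence = np.diff(sequence, axis=0)
    segments = np.array_split(sequence, n_segments, axis=0)
    feats = []
    for seg in segments:
        feats.extend([
            seg.mean(axis=0), seg.std(axis=0),
            seg.min(axis=0), seg.max(axis=0),
            np.median(seg, axis=0),
        ])
    return np.concatenate(feats)


# Toy data: 100 windows of 128 timesteps x 9 sensor channels, 6 activity classes.
rng = np.random.default_rng(0)
X_raw = rng.normal(size=(100, 128, 9))
y = rng.integers(0, 6, size=100)

# Concatenate features computed from raw and from differenced readings.
X_feat = np.stack([
    np.concatenate([
        segment_features(w, use_differencing=False),
        segment_features(w, use_differencing=True),
    ])
    for w in X_raw
])

# SelectKBest keeps the k most informative features before classification.
selector = SelectKBest(f_classif, k=50)
X_sel = selector.fit_transform(X_feat, y)
print(X_feat.shape, "->", X_sel.shape)
```

The selected feature vectors would then be passed to a classifier; in the thesis this is a simple deep learning architecture, optionally with a one-dimensional convolutional layer and an attention layer, which the sketch above deliberately omits.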