Abstracts – Browse Results



Cai, J, Yang, L, Zhang, Y, Li, S and Cai, H (2021) Multitask Learning Method for Detecting the Visual Focus of Attention of Construction Workers. Journal of Construction Engineering and Management, 147(7).

  • Type: Journal Article
  • ISBN/ISSN: 0733-9364
  • URL: https://doi.org/10.1061/(ASCE)CO.1943-7862.0002071
  • Abstract:
    The visual focus of attention (VFOA) of construction workers is a critical cue for recognizing entity interactions, which in turn facilitates the interpretation of workers’ intentions, the prediction of movements, and the comprehension of the jobsite context. The increasing use of construction surveillance cameras provides a cost-efficient way to estimate workers’ VFOA from information-rich images. However, the low resolution of these images poses a great challenge to detecting the facial features and gaze directions. Recognizing that body and head orientations provide strong hints to infer workers’ VFOA, this study proposes to represent the VFOA as a collection of body orientations, body poses, head yaws, and head pitches and designs a convolutional neural network (CNN)-based multitask learning (MTL) framework to automatically estimate workers’ VFOA using low-resolution construction images. The framework is composed of two modules. In the first module, a Faster regional CNN (R-CNN) object detector is used to detect and extract workers’ full-body images, and the resulting full-body images serve as a single input to the CNN-MTL model in the second module. In the second module, the VFOA estimation is formulated as a multitask image classification problem where four classification tasks—body orientation, body pose, head yaw, and head pitch—are jointly learned by the newly designed CNN-MTL model. Construction videos were used to train and test the proposed framework. The results show that the proposed CNN-MTL model achieves an accuracy of 0.91, 0.95, 0.86, and 0.83 in body orientation, body pose, head yaw, and head pitch classification, respectively. Compared with the conventional single-task learning, the MTL method reduces training time by almost 50% without compromising accuracy.
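The joint objective described above — four classification tasks learned simultaneously on shared features — can be sketched as a sum of per-task cross-entropy losses over a shared representation. The sketch below uses NumPy linear heads as stand-ins for the CNN-MTL model; the feature dimension, the per-task class counts, and all function names are assumptions for illustration, not values from the paper.

```python
import numpy as np

# Hypothetical class counts per task (the abstract does not list them).
TASKS = {"body_orientation": 8, "body_pose": 3, "head_yaw": 8, "head_pitch": 3}

FEATURE_DIM = 64  # assumed size of the shared backbone's feature vector
rng = np.random.default_rng(0)

# Stand-in for the shared CNN backbone applied to detected full-body crops.
def shared_features(batch_size):
    return rng.standard_normal((batch_size, FEATURE_DIM))

# One linear classification head per task; in MTL these are trained jointly.
heads = {t: 0.01 * rng.standard_normal((FEATURE_DIM, k)) for t, k in TASKS.items()}

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def multitask_loss(feats, labels):
    """Joint MTL objective: sum of the four tasks' cross-entropy losses."""
    total = 0.0
    for task, W in heads.items():
        probs = softmax(feats @ W)
        y = labels[task]
        total += -np.log(probs[np.arange(len(y)), y]).mean()
    return total

feats = shared_features(4)
labels = {t: rng.integers(0, k, size=4) for t, k in TASKS.items()}
loss = multitask_loss(feats, labels)
```

Training one backbone against this summed loss is what lets the four tasks share computation, which is consistent with the reported ~50% reduction in training time versus four single-task models.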