ContrastSense

ContrastSense: Domain-invariant Contrastive Learning for In-the-wild Wearable Sensing



Abstract


Existing wearable sensing models often struggle with domain shifts and class label scarcity. Contrastive learning is a promising technique for addressing class label scarcity; however, it captures domain-related features and suffers from low-quality negatives. To address both problems, we propose ContrastSense, a domain-invariant contrastive learning scheme for a realistic wearable sensing scenario where domain shifts and class label scarcity are present simultaneously. To capture domain-invariant information, ContrastSense exploits unlabeled data and domain labels specifying user IDs or devices to minimize the discrepancy across domains. To improve the quality of negatives, time and domain labels are leveraged to select samples and refine negatives. In addition, ContrastSense introduces a parameter-wise penalty to preserve domain-invariant knowledge during fine-tuning and further maintain model robustness. Extensive experiments show that ContrastSense outperforms state-of-the-art baselines by 8.9% on human activity recognition with inertial measurement units and 5.6% on gesture recognition with electromyography under domain shifts across users. Moreover, under other kinds of domain shifts across devices, on-body positions, and datasets, ContrastSense achieves consistent improvements over the best baselines.
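
To give a concrete sense of the negative-selection idea described above, below is a minimal, illustrative PyTorch sketch of an InfoNCE-style contrastive loss in which candidate negatives that share the anchor's domain label (e.g., the same user or device), or that lie close in time to the anchor, are masked out before the softmax. The function name, the masking rule, and the `time_margin` parameter are illustrative assumptions, not the paper's exact formulation; please refer to the publication for the actual loss and negative-selection strategy.

```python
import torch
import torch.nn.functional as F

def domain_aware_info_nce(z, domain_ids, timestamps, temperature=0.1, time_margin=5.0):
    """InfoNCE over a batch of embeddings z of shape (2N, D), where z[2i] and
    z[2i+1] are two augmented views of the same sample. Candidate negatives that
    share the anchor's domain label, or that are temporally close to the anchor,
    are masked out before the softmax (illustrative rule, not the paper's exact one)."""
    z = F.normalize(z, dim=1)
    sim = z @ z.t() / temperature                       # (2N, 2N) scaled cosine similarities

    n = z.size(0)
    self_mask = torch.eye(n, dtype=torch.bool, device=z.device)

    # The positive for index 2i is 2i+1 and vice versa.
    pos_index = torch.arange(n, device=z.device) ^ 1

    # Drop negatives from the same domain or close in time to the anchor.
    same_domain = domain_ids.unsqueeze(0) == domain_ids.unsqueeze(1)
    close_in_time = (timestamps.unsqueeze(0) - timestamps.unsqueeze(1)).abs() < time_margin
    invalid_neg = (same_domain | close_in_time) & ~self_mask
    invalid_neg[torch.arange(n, device=z.device), pos_index] = False  # always keep the positive

    sim = sim.masked_fill(self_mask | invalid_neg, float('-inf'))
    return F.cross_entropy(sim, pos_index)
```

In this sketch, `domain_ids` and `timestamps` are length-2N tensors aligned with the two augmented views of each sample; in practice they would come from the dataset's user/device and recording-time annotations rather than class labels, so no class supervision is needed for pre-training.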


Publications


ContrastSense: Domain-invariant Contrastive Learning for In-the-wild Wearable Sensing
Gaole Dai, Huatao Xu, Hyungjun Yoon, Mo Li, Rui Tan, and Sung-Ju Lee
Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies (ACM IMWUT) 2024.
PDF



People


Gaole Dai

Nanyang Technological University

Huatao Xu

Hong Kong University of Science and Technology

Hyungjun Yoon

KAIST

Mo Li

Hong Kong University of Science and Technology

Rui Tan

Nanyang Technological University

Sung-Ju Lee

KAIST