Predicting Mind-Wandering with Facial Videos in Online Lectures


The importance of online education has been brought to the forefront by the COVID-19 pandemic. Understanding students' attentional states is crucial for lecturers, but this is more difficult in online settings than in physical classrooms. Existing methods that gauge online students' attentional states typically require specialized sensors such as eye-trackers and thus are not easily deployable to every student in real-world settings. To tackle this problem, we utilize facial video from students' webcams to predict attentional states in online lectures. We conducted an in-the-wild experiment with 37 participants, resulting in a dataset of 15 hours of facial recordings of lecture-taking students with 1,100 corresponding attentional state probes. We present PAFE (Predicting Attention with Facial Expression), a facial-video-based framework for attentional state prediction that focuses on vision-based representations of traditional physiological mind-wandering features related to partial drowsiness, emotion, and gaze. Our model requires only a single camera and outperforms gaze-only baselines.
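As one illustration of the kind of vision-based drowsiness feature the abstract mentions (this is a generic sketch, not PAFE's exact implementation), the widely used eye aspect ratio (EAR) can be computed from six eye-contour landmarks produced by any facial landmark detector:

```python
import math

def eye_aspect_ratio(eye):
    """Compute the eye aspect ratio (EAR) from six (x, y) landmarks.

    Landmarks are ordered p1..p6: p1/p4 are the horizontal eye
    corners, and (p2, p6) and (p3, p5) are upper/lower eyelid pairs.
    EAR drops toward zero as the eye closes, making it a common
    proxy for blink and drowsiness detection.
    """
    # Sum of the two vertical eyelid distances
    vertical = math.dist(eye[1], eye[5]) + math.dist(eye[2], eye[4])
    # Horizontal corner-to-corner distance
    horizontal = math.dist(eye[0], eye[3])
    return vertical / (2.0 * horizontal)

# Example with hypothetical landmark coordinates
open_eye = [(0, 0), (2, 1), (4, 1), (6, 0), (4, -1), (2, -1)]
closed_eye = [(0, 0), (2, 0.1), (4, 0.1), (6, 0), (4, -0.1), (2, -0.1)]
print(eye_aspect_ratio(open_eye), eye_aspect_ratio(closed_eye))
```

In practice, the landmark coordinates would come from a face-landmark model run on each webcam frame; a sustained low EAR over consecutive frames is a typical signal of partial drowsiness.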


Predicting Mind-Wandering with Facial Videos in Online Lectures
Taeckyung Lee, Dain Kim, Sooyoung Park, Dongwhi Kim, and Sung-Ju Lee
International Workshop on Computer Vision for Physiological Measurement (CVPM '22)

Demo: Real-Time Attention State Visualization of Online Classes
Taeckyung Lee, Hye-Young Chung, Sooyoung Park, Dongwhi Kim, and Sung-Ju Lee
Proceedings of ACM MobiSys 2022 (Demo).


Taeckyung Lee


Dain Kim


Hye-Young Chung

Hanyang University

Sooyoung Park


Dongwhi Kim


Sung-Ju Lee



Please be aware that, as the dataset contains privacy-sensitive data, we currently offer dataset access primarily for academic research purposes. You may still request access if you wish to use the dataset for purposes other than academic research; in that case, however, your request may be subject to a more rigorous review, and we may ask for additional information.
For the same reason, we will not consider requests submitted from personal email addresses such as Gmail. This policy protects the privacy of the participants who contributed to our dataset, so please submit your request from a verifiable academic or company email address. Providing detailed information about yourself and your purpose in requesting our dataset will also help us decide whether to grant or deny access.
Please send us your full name and your justification for the dataset inquiry via email to the first author (taeckyung (at)

Source Code

Data Collection Tool


Taeckyung Lee, Dain Kim, Sooyoung Park, Dongwhi Kim, Sung-Ju Lee; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2022, pp. 2104-2113