Abstract: Xia Zhou, 29 May 2017

From IMC wiki

The ability to sense what we do and how we behave is crucial for detecting diseases, diagnosing early symptoms of health issues, and fostering healthier lifestyles. Existing sensing technologies, however, have significant drawbacks: they are intrusive, requiring us to constantly carry or wear sensing devices (e.g., Apple Watch, Fitbit); they present serious privacy risks by capturing raw images; or they are vulnerable to electromagnetic interference.

In this talk, I will present a radically different approach to unobtrusive human sensing, which exploits the ubiquitous light around us as a sensing medium that senses and responds to what we do, without requiring any on-body devices or cameras. I will first present our efforts on reconstructing a 3D human skeleton in real time using purely the light around us. Empowered by Visible Light Communication (VLC), our system uses the shadows a human body creates by blocking light to reconstruct the 3D skeleton. I will then present our ongoing efforts on fine-grained sensing with light, including reconstructing 3D hand gestures and tracking user gaze. I will conclude with future directions.
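The geometric intuition behind shadow-based reconstruction can be illustrated with a small sketch. This is not the system described in the talk, only a hedged toy example of the underlying principle: a blocked body point lies on the ray from a light source to the shadow it casts, so with two (or more) known light positions and their corresponding shadow points, the body point can be recovered by intersecting the rays. The function name, coordinates, and two-light setup below are all illustrative assumptions.

```python
import math

# Small vector helpers on 3-tuples (illustrative, not from the talk).
def sub(u, v): return tuple(a - b for a, b in zip(u, v))
def dot(u, v): return sum(a * b for a, b in zip(u, v))
def add_scaled(p, t, d): return tuple(a + t * b for a, b in zip(p, d))

def triangulate_blocked_point(light1, shadow1, light2, shadow2):
    """Estimate the 3D point that blocked both lights.

    Each (light, shadow) pair defines a ray; the blocked point is
    approximated as the midpoint of the common perpendicular of the
    two rays (they intersect exactly in the noise-free case).
    """
    d1 = sub(shadow1, light1)          # direction of ray 1
    d2 = sub(shadow2, light2)          # direction of ray 2
    w0 = sub(light1, light2)
    # Standard closest-points-between-two-lines formulas.
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w0), dot(d2, w0)
    denom = a * c - b * b
    if abs(denom) < 1e-12:
        raise ValueError("rays are parallel; cannot triangulate")
    t1 = (b * e - c * d) / denom
    t2 = (a * e - b * d) / denom
    q1 = add_scaled(light1, t1, d1)    # closest point on ray 1
    q2 = add_scaled(light2, t2, d2)    # closest point on ray 2
    return tuple((u + v) / 2 for u, v in zip(q1, q2))

# Two ceiling lights at z = 3 m cast shadows of a body point at
# (1, 1, 1) onto the floor (z = 0); the rays recover that point.
p = triangulate_blocked_point((0, 0, 3), (1.5, 1.5, 0),
                              (4, 0, 3), (-0.5, 1.5, 0))
```

A real system must additionally associate shadows with the lights that cast them (which is where VLC-modulated light, carrying a per-light identifier, plausibly helps) and handle soft, overlapping shadows, none of which this sketch attempts.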


Bio: Xia Zhou is an Assistant Professor in the Department of Computer Science at Dartmouth College. She received her PhD from UC Santa Barbara in 2013. Her general research interests are in mobile systems and wireless networking. Her work has won awards at top conferences such as MobiCom, MobiSys, UbiComp, and SIGMETRICS. She received a Sloan Research Fellowship (2017), an NSF CAREER Award (2016), and a Google Faculty Research Award (2014).