The ambient light sensors typically employed in smart devices for adjusting screen brightness could capture images of user interactions, posing a unique privacy threat, according to researchers at MIT's robotics program.
The academic research team developed a computational imaging algorithm to illustrate the potential risk, highlighting the previously overlooked capability of these sensors to covertly record user gestures.
Unlike cameras, the sensors do not require native or third-party applications to seek permission for their use, making them vulnerable to exploitation.
The researchers demonstrated that ambient light sensors can clandestinely capture users' touch interactions, such as scrolling and swiping, even during video playback.
The process relies on an inversion technique: the algorithm collects the low-bitrate variations in light reaching the sensor as the user's hand blocks different parts of the screen, then reconstructs an image of the interaction from those readings.
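To make the idea concrete, below is a minimal, hypothetical sketch of that kind of inversion in Python. It is not the MIT team's code: it assumes the screen displays a sequence of known probing patterns while a single sensor reports one brightness value per pattern, and it uses plain least squares in place of the researchers' reconstruction algorithm.

```python
# Minimal sketch of the inversion idea, not the MIT team's actual code.
# Assumption: the screen shows known probing patterns while one ambient
# light sensor reports a single brightness value per pattern; a hand
# over the screen attenuates some pixels' contributions.
import numpy as np

rng = np.random.default_rng(0)

H = W = 16                    # toy screen resolution (hypothetical)
n_pixels = H * W
n_frames = 2 * n_pixels       # number of probing patterns / sensor samples

# Known probing patterns shown on the screen (one row per frame).
patterns = rng.integers(0, 2, size=(n_frames, n_pixels)).astype(float)

# Ground-truth occlusion mask: 1 = unblocked, 0 = blocked by the hand.
mask = np.ones((H, W))
mask[4:12, 6:10] = 0.0        # a hand-like blob blocking part of the screen
x_true = mask.ravel()

# Each sensor reading is the total light reaching the sensor, plus noise.
readings = patterns @ x_true + rng.normal(0, 0.5, size=n_frames)

# Inversion: solve patterns @ x = readings for x (least squares stands
# in for the paper's regularized reconstruction).
x_hat, *_ = np.linalg.lstsq(patterns, readings, rcond=None)
recovered = x_hat.reshape(H, W)

print("mean reconstruction error:", np.abs(recovered - mask).mean())
```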
Yang Liu, a PhD candidate in MIT's Department of Electrical Engineering and Computer Science (EECS) and at CSAIL, explains that these sensors could pose an imaging privacy threat by providing that information to hackers monitoring smart devices.
"The ambient light sensor needs an adequate level of light intensity for a successful recovery of a hand interaction image," he explains. "The permission-free and always-on nature of ambient light sensors posing such imaging capability impact privacy as people are not aware that non-imaging devices could have such potential risk."
Ambient Smartphone Sensors: Additional Security Concerns
He adds that one potential security implication, besides eavesdropping on touch gestures, is revealing partial facial information.
"One additional piece of information is color," he explains. "Most smart devices today are equipped with multi-channel ambient light sensors for automatic color temperature adjustmen — this directly contributes to color image recovery for imaging privacy threats."
The consumer electronics trend toward larger, brighter screens also expands this threat surface by making the imaging privacy threat more acute.
"Additional artificial intelligence- and [large language model] LLM-powered computational imaging developments might also make imaging with as few as one bit of information per measurement possible, and completely change our current 'optimistic' privacy conclusions," Liu cautions.
A Solution: Restricting Information Rates
Liu explains that software-side mitigation measures would help restrict the permission and information rate of ambient light sensors.
"Specifically, for operating system providers, they should add permission controls to those 'innocent' sensors, at a similar or slightly lower level than cameras," he says.
To balance sensor functionality with the potential privacy risk, Liu says the speed of ambient light sensors should be further reduced to 1-5 Hz and the quantization level to 10-50 lux.
"This would reduce the information rate by to two to three orders of magnitude and any imaging privacy threats would be unlikely," he says.
IoT Cyber Threats Snowball
From the perspective of Bud Broomhead, CEO at Viakoo, the discovery is not cause for great alarm; he notes that capturing one frame of hand gestures every 3.3 minutes, the rate achieved in MIT's testing, gives a threat actor virtually no incentive to perform a very sophisticated and time-consuming exploit.
"However, it is a reminder that all digitally connected devices can have exploitable vulnerabilities and need attention to their security," he says. "It's reminiscent of when security researchers find new ways to attack air-gapped systems through mechanisms like blinking lights on the NIC card [PDF] — interesting in theory but not a threat to most people."
John Bambenek, president at Bambenek Consulting, says this should be a reminder for consumers and businesses to check their devices and apps for what information is being collected and how it's being used.
"We only recently got the transparency tools to even check that," he says. "Researchers and academics will hopefully continue to do this kind of work to figure out where the gaps are between the transparency tools and what is possible."
He points out that attackers and other malicious individuals are constantly looking for ways to target users, and that these less obvious cyberattack paths could be attractive to some.
"Unfortunately, that also includes tech companies who have a voracious appetite for data to feed their new AI algorithms," Bambenek says.
The threat extends beyond cameras to the patterns made by physical gestures. A team of researchers at Cornell University recently published research detailing an AI model, trained on smartphone typing records, that achieved 95% accuracy in stealing passwords.
As researchers discover additional flaws in IoT devices and operating systems, all of which are connected through increasingly complex networks, there has been a renewed emphasis on secure-by-design principles to ensure that defense is integrated more deeply into software.