Scientists have developed a new artificial eye inspired by the adaptive vision of animals, a concept popularized in science fiction films such as The Terminator. The technology uses a liquid metal pupil that automatically changes shape and size in response to light, potentially helping robots, autonomous vehicles and advanced machines see more clearly in rapidly changing environments.
Researchers from the University of North Carolina at Chapel Hill, Westlake University and other institutions introduced the concept in a study published in the journal Science Robotics. Their goal was to address a common challenge of modern image processing systems: cameras and sensors often have problems when lighting conditions suddenly change, for example when transitioning from darkness to bright sunlight.
Unlike biological eyes, many computer vision systems rely heavily on software processing to compensate for overexposure or low light.
These methods can be slow, energy intensive and sometimes unreliable. Instead, the new system is directly inspired by nature by recreating the pupillary light reflex, the automatic process that allows human and animal pupils to instantly adapt to changing lighting conditions.
At the center of the technology is a liquid metal pupil made of eutectic gallium-indium (EGaIn). This material is embedded in flexible microchannels and is controlled by electrochemical signals. When bright light hits the artificial retina, it produces electrical impulses that trigger the liquid metal to contract, reducing the amount of light entering the system. As the environment becomes darker, the pupil dilates again to capture more light.
The researchers also designed the system so that the pupil can change its shape, not just its size. In addition to circular pupils like those found in humans, the device can recreate shapes seen in animals such as cats, frogs, sheep or squid, which can help adapt vision systems to different environments.
The artificial eye consists of three main components.
First, there is a hemispherical artificial retina consisting of light-sensitive photodetectors arranged in a curved structure. Second, there are liquid metal “neurons” that convert light signals into electrical impulses. Third, there is the adaptive liquid metal pupil, which adjusts the aperture based on these signals. Together, these elements form a closed-loop system that mimics how biological eyes regulate light exposure.
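The closed loop described above behaves like a simple feedback controller: the retina reports how much light is getting through, and the pupil contracts or dilates to pull that reading toward a comfortable level. The following toy simulation sketches that idea; all names, constants and the proportional-control rule are illustrative assumptions, not details from the study.

```python
def retina_signal(light_in: float) -> float:
    """Photodetector response, clamped to a normalized range [0, 1]."""
    return min(max(light_in, 0.0), 1.0)

def update_aperture(aperture: float, signal: float,
                    target: float = 0.5, gain: float = 0.4) -> float:
    """Contract the pupil when the retina signal exceeds the target,
    dilate it when the signal falls below (simple proportional control)."""
    aperture -= gain * (signal - target)
    return min(max(aperture, 0.05), 1.0)  # physical limits of the pupil

def simulate(ambient: float, steps: int = 50) -> float:
    """Run the feedback loop in a scene of constant ambient brightness."""
    aperture = 1.0  # start fully dilated
    for _ in range(steps):
        light_in = ambient * aperture  # light admitted through the pupil
        aperture = update_aperture(aperture, retina_signal(light_in))
    return aperture

print(simulate(ambient=2.0))  # bright scene: pupil settles small
print(simulate(ambient=0.6))  # dim scene: pupil settles wide
```

In this toy model the aperture settles wherever the admitted light equals the target, so a bright scene yields a small pupil and a dim scene a wide one, echoing the contraction and dilation the article describes.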
Initial tests suggest that the approach could significantly improve computer vision. In one experiment, image recognition accuracy in bright light increased from about 68 percent to over 83 percent when the adaptive pupil system was activated.
This improvement matters because vision is a critical capability for emerging technologies such as robots, drones and self-driving cars. These systems must operate in unpredictable real-world conditions where lighting can change quickly – for example, from dark tunnels to bright daylight.
A hardware-based solution like the liquid metal pupil could reduce the need for complex image processing algorithms while improving speed and energy efficiency. This makes the technology particularly promising for mobile systems where power consumption and processing speed are crucial.
The potential applications go beyond robotics and autonomous vehicles.
Researchers say the technology could also improve security cameras, medical imaging devices, drones and neuromorphic computing systems that attempt to replicate biological brain functions.
The artificial eye is currently still a proof-of-concept prototype, but the team is already working on refining the design. Future work will focus on miniaturizing the liquid metal actuators and photodetectors, improving energy efficiency, and integrating the system into real devices.
Researchers also plan to expand the system with additional sensing capabilities, including color and multispectral imaging, and possibly combine it with tactile or motion sensors to create machines with more comprehensive sensing.
If these developments are successful, the liquid metal pupil could represent an important step towards machines that see the world more like humans – and animals – do, allowing robots and vehicles to navigate complex environments with far greater awareness of their surroundings.