AI systems have long been treated like sealed black boxes, particularly in areas such as facial recognition and autonomous driving. New research suggests that seal is not as tight as assumed.
A team led by KAIST has shown that AI models can be reverse-engineered remotely from the electromagnetic emissions their hardware gives off during normal operation, without any direct access to the machine. The attack does not break in; it listens.
Using a small antenna, the researchers recorded faint electromagnetic traces from GPUs and reconstructed the structure of the model running on them. It sounds like a heist, but the results are repeatable and the security implications are immediate.
This is how the side channel works
The system, called ModelSpy, collects the electromagnetic output generated as GPUs process AI workloads. These traces are faint, but they follow patterns tied to the model's architecture.
By analyzing these patterns, the team recovered key architectural details, including how layers are structured and which parameters are used. In tests, core model structures were identified with up to 97.6 percent accuracy.
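The paper's actual pipeline is not reproduced here, but the core idea, matching emission patterns against known architectural signatures, can be sketched roughly. The snippet below is a hypothetical illustration: it reduces a recorded trace to a coarse spectral fingerprint and picks the candidate architecture whose reference fingerprint correlates best. The function names, feature choice, and candidate list are assumptions for illustration, not the researchers' method.

```python
import numpy as np

# Hypothetical illustration only; ModelSpy's real pipeline is not public here.
# Idea: reduce an EM trace to a spectral "fingerprint", then pick the
# candidate architecture whose reference fingerprint correlates best.

def spectral_fingerprint(trace: np.ndarray, n_bins: int = 64) -> np.ndarray:
    """Coarse magnitude spectrum of a 1-D electromagnetic trace."""
    spectrum = np.abs(np.fft.rfft(trace))
    # Average the spectrum into fixed-size bins so traces of different
    # lengths become comparable feature vectors.
    bins = np.array_split(spectrum, n_bins)
    return np.array([b.mean() for b in bins])

def classify_architecture(trace: np.ndarray,
                          references: dict) -> str:
    """Return the candidate whose reference fingerprint matches best."""
    fp = spectral_fingerprint(trace)
    fp = (fp - fp.mean()) / (fp.std() + 1e-9)        # normalize
    best_name, best_score = None, -np.inf
    for name, ref in references.items():
        ref_n = (ref - ref.mean()) / (ref.std() + 1e-9)
        score = float(np.dot(fp, ref_n)) / len(fp)   # correlation score
        if score > best_score:
            best_name, best_score = name, score
    return best_name

# Toy usage with synthetic traces standing in for antenna recordings.
rng = np.random.default_rng(0)
refs = {
    "candidate_A": spectral_fingerprint(np.sin(np.linspace(0, 200, 10_000))),
    "candidate_B": spectral_fingerprint(np.sign(np.sin(np.linspace(0, 60, 10_000)))),
}
unknown = np.sin(np.linspace(0, 200, 10_000)) + 0.1 * rng.normal(size=10_000)
print(classify_architecture(unknown, refs))  # expected: "candidate_A"
```

In reality the recovered detail goes well beyond picking one label from a short list, but the principle is the same: the emission pattern acts as a signature of the computation that produced it.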
The setup is what makes this unsettling. The antenna fits in a pocket and requires no physical access to the machine. It worked from up to six meters away, even through walls, and across multiple GPU types. The computation itself becomes a side channel, revealing the system's design without any conventional breach.
Why this changes AI security
This takes AI security into unfamiliar territory. Most defenses focus on software exploits or network access. ModelSpy instead targets the physical byproducts of computation.
Even isolated systems could leak sensitive information if hardware emissions are not controlled. For companies, a model's architecture often represents key intellectual property, so the leak is a direct business risk.
The paper describes this as a cyber-physical challenge: defending AI now covers both digital protections and the physical environment around the hardware, raising the bar for what protection actually means.
What the defense looks like now
The team also outlined ways to reduce risk, including injecting electromagnetic noise and altering how computations are executed so that the emitted patterns are harder to interpret. A rough sketch of the second idea follows below.
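The snippet below is a hypothetical sketch of that execution-level masking, not the countermeasure the team implemented. It interleaves randomly sized dummy computations between real operations so that the spacing and shape of emission bursts no longer map cleanly onto the model's structure. The function names and the specific masking strategy are assumptions for illustration.

```python
import random
import numpy as np

# Hypothetical sketch of execution-level masking, not the authors' defense.
# Two ideas from the article: make the emission pattern noisier and make the
# shape of the computation harder to map back onto the model architecture.

def dummy_work(rng: random.Random) -> None:
    """Burn a random, data-independent amount of compute to blur the trace."""
    size = rng.choice([32, 64, 128])
    a = np.ones((size, size))
    _ = a @ a  # result is discarded; only the emission side effect matters

def run_layers_masked(layers, x, seed: int = 0):
    """Apply layers in order, interleaving random dummy operations.

    `layers` is a list of callables (here, lambdas wrapping matmuls). The
    real layers keep their order, while dummy work is injected between them
    so the gaps between "real" emission bursts vary from run to run.
    """
    rng = random.Random(seed)
    for layer in layers:
        for _ in range(rng.randint(0, 2)):
            dummy_work(rng)
        x = layer(x)
    return x

# Toy usage: three "layers" implemented as matrix multiplications.
w1, w2, w3 = (np.random.rand(8, 8) for _ in range(3))
layers = [lambda v, w=w: v @ w for w in (w1, w2, w3)]
out = run_layers_masked(layers, np.random.rand(8))
print(out.shape)  # (8,)
```

The trade-off is obvious even in this toy form: every dummy operation costs real compute, which is part of why such defenses are not free to deploy.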
These countermeasures point to a broader shift. Securing AI may require hardware-level changes, not just software updates, which complicates deployment for industries already tied to existing systems.
The research was recognized at a major security conference, a sign of how seriously this threat is being taken. The next exposure may not involve a break-in at all, only careful observation of what systems inadvertently reveal.