
You don’t want to trust Meta’s new Muse Spark AI with health advice

Meta’s new Muse Spark may be touted as a smarter AI model, but based on early testing, it sounds like the kind of AI you really wouldn’t want to have anywhere near serious medical decisions.

A recent WIRED report described hands-on testing of Muse Spark, Meta’s health-focused AI model in the Meta AI app, and the results were not promising. The chatbot reportedly asked users to upload raw medical data such as lab reports, blood glucose readings, and blood pressure logs, then offered to help them analyze patterns and trends.

This all sounds useful until you notice two immediate concerns: users are handing over very sensitive data, and it is unclear whether the AI is even trustworthy enough to interpret it.

What went wrong during the first tests?

The first problem is difficult to ignore. At a time when digital life already feels too transparent, Muse Spark asks for even more. Sharing the information needed for an accurate diagnosis is reasonable when a doctor is on the other end, but handing your personal health data to a chatbot in exchange for advice is a genuine privacy risk.

Unlike data shared with a doctor or hospital, information entered into a chatbot is not automatically covered by the protections people assume are in place. The output is not a professionally vetted opinion, and that makes the whole idea shaky. The AI is presented as a helpful tool, but everything around it still looks far more like a consumer product than a proper medical one.

That’s not even the worst part

Aside from the typical privacy risks of sharing personal information with a tech giant, you would at least expect a usable answer. The more serious problem, however, appeared to be the quality of the advice. In WIRED’s testing, the chatbot reportedly produced an extremely low-calorie meal plan after being asked about weight loss and aggressive intermittent fasting.

While the bot flagged some of the risks along the way, a warning doesn’t mean much if the model then helps the user do the dangerous thing anyway. That is the real problem with many current AI health tools: they sound cautious, informed, and even-tempered right up until they start confirming bad assumptions. The polished tone delivers bad advice with confidence, which makes the failures more dangerous.
