The government is under renewed pressure to toughen online safety laws after rejecting key recommendations to curb the viral spread of misinformation, despite agreeing with most of the MPs’ findings about the scale of the problem.
The Science, Innovation and Technology Committee today published the government’s and Ofcom’s responses to its July report, which concluded that the Online Safety Act (OSA) fails to tackle algorithmic amplification of false content and exposes users to fast-spreading misinformation – much of it amplified by generative AI.
Both the government and Ofcom accepted the committee’s assessment that misinformation poses significant risks, but ministers declined to adopt several key recommendations, including calls to expand online safety legislation to explicitly cover generative AI platforms. The committee argued that such platforms are capable of disseminating large amounts of false content and that they should be regulated in line with other high-risk online services.
The government rejected this proposal, insisting that AI-generated content already falls under the OSA – a position that contradicts Ofcom’s previous testimony to the committee, in which the regulator said the legal status of generative AI was “not entirely clear” and suggested further work was needed.
MPs also warned that misinformation cannot be meaningfully tackled without addressing the digital advertising business models that encourage social media companies to promote harmful content. The government acknowledged the link between advertising and amplification but refused to commit to reform, instead saying the issue would be kept “under control”.
Committee chairwoman Dame Chi Onwurah MP criticized the government’s reluctance to take action. “If the government and Ofcom agree with our conclusions, why shy away from adopting our recommendations?” she said. “The argument that the OSA already covers generative AI does not convince the committee. The technology is evolving much faster than the legislation, and clearly more needs to be done.”
She added that the failure to address the monetization of harmful content leaves a major loophole: “How do we stop this without addressing the ad-based models that incentivize platforms to algorithmically amplify misinformation?”
Onwurah warned that complacency poses a real risk to public safety. “It is only a matter of time before the unrest of summer 2024, fueled by misinformation, is repeated,” she said. “The government must urgently close the gaps in online safety law before further damage is done.”