EU Turbocharges Censorship of Conservatives with AI Regulation

By Stefano Gennarini, J.D.


NEW YORK, July 18 (C-Fam) New EU “safety and security” standards will require tech companies to censor and moderate content on general-purpose artificial intelligence (AI) models to prevent “hate” and “discrimination.”

The EU Commission General-Purpose AI Code of Practice requires AI developers and tech companies to ensure that general-purpose AI models are “safe,” including through censorship of content that is “hateful, radicalizing, or false.”

The new regulation has the potential to turbocharge censorship and social control across the tech industry. Together with the notorious EU Digital Services Act, it is expected to lead to new self-imposed automated AI censorship tools on all major technology platforms.

A section of the newly published standards highlights “harmful manipulation” as a specific major risk and appears to define it by reference to populist political narratives against EU transgender policies and mass immigration programs.

“Harmful manipulation” is defined in the regulation as “the strategic distortion of human behavior or beliefs by targeting large populations or high-stakes decision-makers through persuasion, deception, or personalized targeting.” This, the regulation explains, “could undermine democratic processes and fundamental rights, including exploitation based on protected characteristics.” In the EU context, “protected characteristics” are widely understood to include categories such as migration status and sexual orientation and gender identity.

The regulation requires tech companies to first identify a broad set of potential “systemic risks” under the categories of public health, safety, public security, fundamental rights, and society as a whole. Other specific dangers identified in the regulation include “misalignment with human values (e.g. disregard for fundamental rights)” and “discriminatory bias.” Once such risks are identified, AI developers and tech companies must analyze and mitigate them, including by “monitoring and filtering the model’s inputs and/or outputs.”

The standards are a “voluntary tool” designed to show that tech companies comply with the EU’s legislation on artificial intelligence, known as the AI Act. Even though they are only voluntary, companies that adopt the standards will be deemed to comply with the AI Act. “This will reduce their administrative burden and give them more legal certainty than if they proved compliance through other methods,” the EU Commission says.

The standards are predominantly prospective. They are geared less toward addressing existing problems in AI models than toward ensuring that future models comply by design, so that their output conforms to the standards from the outset.

The new regulation comes in addition to the already exhaustive censorship measures tech companies are required to adopt under the EU Digital Services Act. The EU Commission has long pushed large tech companies to censor through the Code of Conduct on Disinformation. When it was first adopted in 2018 it was only voluntary, but it became binding under the EU Digital Services Act earlier this year.

The EU Digital Services Act requires large online platforms to censor content in line with the priorities of the EU Commission. The disinformation code expressly requires tech companies both to censor content and to promote official EU propaganda through content moderation, demonetization, fact-checking, and counter-information.

U.S. Vice President J.D. Vance criticized the censorship rules of the EU Digital Services Act at an AI Summit in Paris in February this year.

“We feel strongly that AI must remain free from ideological bias,” he said, “and that American AI will not be co-opted into a tool for authoritarian censorship.” He also warned about the EU AI Act’s possible chilling effect on innovation.