In late 2022, OpenAI’s ChatGPT burst onto the scene, triggering a global fascination with generative artificial intelligence (GAI). It also opened what looked like a new frontier of innovation, and governments and companies scrambled to stake their claims. A global AI race took shape, and China, which has poured enormous resources into its tech and industrial sectors over the past decade, entered the fray. It quickly responded with sweeping initiatives to develop homegrown AI, but they came with a catch: censorship and propaganda.
Before long, major platforms like ChatGPT were restricted within Chinese borders, and companies like Alibaba, Baidu, SenseTime, and later DeepSeek were tasked with building competing models. Like other GAI models, Chinese models would give users real-time answers, but first they had to answer to the Chinese Communist Party (CCP).
A stern directive from the CCP
In April 2023, a clear directive from the government emerged: “The content generated by generative artificial intelligence should reflect the core values of socialism,” declared the Cyberspace Administration of China in its 2023 draft rules.
“[It] must not contain subversion of state power, overthrow of the socialist system, incitement to split the country, undermine national unity, or promote terrorism and extremism,” the directive continued, using language typically deployed to quell criticism of the government.
Since then, the CCP has moved swiftly to impose hard limits on what answers AI can give its users or even speculate about. Earlier this year, DeepSeek took the world by storm, but soon after, keen users and journalists began noticing DeepSeek-R1 displaying real-time censorship. Chinese homegrown AI has been configured to delete politically sensitive answers mid-response, refuse to engage with taboo topics like the Tiananmen Square Massacre, or redirect users toward benign discussions of math, coding, and science.
In 2023, the Chinese government may have been playing catch-up, but its effort has since evolved into something more: a campaign to weaponize AI as a tool of ideological conformity, suppress dissent, and extend state influence both domestically and on the global stage. This carries negative implications for free speech, innovation, and the future of digital discourse.

How the CCP is suppressing and subverting AI
For more than two decades, China has invested billions in propelling itself to the forefront of technological and manufacturing innovation. But besides innovation, it’s also about control. For instance, when it comes to AI, censorship, surveillance, propaganda, and political orthodoxy must be embedded directly into the code.
Even before the recent GAI boom, the CCP’s reach into tech was on display in 2017, when a chatbot named Baby Q, developed by Chinese tech firm Turing Robot, began giving politically “unpatriotic” responses. It called the Communist Party a “corrupt regime,” claimed it had “no love” for the Party, and even confessed it dreamed of emigrating to the U.S. This was unacceptable, and the government began a crackdown; according to Radio Free Asia, even programmers were taken in for questioning by the police. In short, no system is permitted to be neutral; it must be loyal.
That crackdown has prevailed (and become more stringent) with the mass adoption of AI tools. Today, AI tools are tightly regulated under China’s 2023 rules, which require “core socialist values” to be embedded in responses. Companies like Alibaba, Baidu, and DeepSeek face intense scrutiny, often building in auto-censorship filters that preemptively shut down discussions of banned topics. As mentioned, a question about human rights, democracy, Tiananmen Square, the disappearance of activists and lawyers, or the Uyghur crisis is usually met with a forced redirect to harmless subjects, such as the weather or Python code.
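Mechanically, the auto-censorship filters described above can be as simple as keyword screening applied to a model’s output before the user sees it. The sketch below is purely illustrative: the term list, function name, and redirect message are all hypothetical, and real deployed systems are far more elaborate (trained classifiers, streaming checks that can cut an answer off mid-response).

```python
# Illustrative sketch of a keyword-based output filter, the kind of
# guardrail described above. All names and the blocklist are hypothetical;
# production filters are far more sophisticated.

BLOCKED_TERMS = {"tiananmen", "uyghur"}  # placeholder list for illustration

REDIRECT = "Let's talk about something else, such as math or coding."

def filter_response(draft: str) -> str:
    """Return the model's draft answer unless it touches a blocked topic."""
    lowered = draft.lower()
    if any(term in lowered for term in BLOCKED_TERMS):
        # Preemptively replace the whole answer with a benign redirect.
        return REDIRECT
    return draft

# A harmless answer passes through; a sensitive one is swallowed.
print(filter_response("Here is some Python code for sorting."))
print(filter_response("The Tiananmen Square protests began in 1989."))
```

This design explains the behavior users report: the model visibly “knows” the answer (it begins generating) but the filter layer replaces or deletes it before delivery.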
But this effort to shape public perception isn’t limited to its borders. There has been a proliferation of AI bots from several countries, including China, flooding Western platforms with doctored videos, fake news, and narratives aligned with Beijing’s or its allies’ interests. This is a global disinformation campaign that subtly influences people’s opinions — or at least divides global opinion. You can usually see this AI-generated content in social media posts that cast doubt on Taiwan’s elections, defend Russia’s war in Ukraine, or frame the U.S. as the source of global instability.
AI has also been integrated into military planning and surveillance systems, particularly in regions such as Xinjiang, where AI-enabled facial recognition, predictive policing, and real-time data monitoring have been used for automated oppression. Authoritarian regimes are also looking to Beijing, and through its Belt and Road Initiative they are acquiring not just hardware but also ideological blueprints for digital authoritarianism.

Implications for free speech and AI
AI may have advanced significantly in the past three years, but politically, it’s still in its early stages of development. It’s understandable that governments worldwide — not only China — are trying to make sense of a new situation. Still, rigorous AI control carries profound implications for free speech wherever it happens, including:
Self-censorship by design
When AI companies censor by omitting, suppressing, or rerouting sensitive inquiries, they risk creating an artificial echo chamber. Users absorb ideas they have never been able to examine, gradually eroding the capacity for meaningful political exchange and, in turn, limiting public awareness and participation.
Restricted innovation under ideological constraints
When there is a mandate to toe the Party line, there are hard limits on what the human imagination can do. True generative AI thrives on diverse data and open-ended experimentation. Innovation rarely flourishes when a central figure or body decides what all “creators” should focus on and the tightly coded rules they must follow.
Weaponizing AI in the international information space
The suppression of free speech isn’t a purely domestic affair. AI has become embedded in surveillance systems being imported by other countries, especially in Africa and South America. Some countries are now using CCP-style control in local decision-making and public discourse. Deepfakes, AI-generated propaganda, and disinformation are deployed to shape perceptions of elections, international crises (Congo, the Middle East, or Ukraine), and global health emergencies.
US strategic response and a potential ideological clash
A new cold war is on the horizon, fueled by the ongoing AI race between the U.S. and China. Just last month (June 2025), the U.S. Congress unveiled the “No Adversarial AI Act,” which would prohibit federal procurement of AI systems developed in China, Russia, Iran, or North Korea.
Rep. John Moolenaar — chairing the China-focused Select Committee — warned: “We are in a new Cold War — and AI is the strategic technology at the center. The CCP doesn’t innovate — it steals, scales, and subverts,” adding that U.S. government systems “cannot be powered by tools built to serve authoritarian interests.”
Since it has become a matter of national security for the U.S., other countries may soon be pressured into picking sides.
The danger of losing trust in AI
If AI tools are trained to conceal the truth or comply with government orders, people may lose trust in them. In free countries, citizens expect AI to be honest and fair. If censored AI systems are allowed onto global platforms, however, they may quietly export censorship or misinformation, harming open societies and splitting the world into two rival tech ecosystems.
China’s strict AI regulations are affecting people beyond its borders. They’re shaping the future of how AI works around the world. If we’re not careful, powerful new technologies could be turned into tools of control, misinformation, and disinformation, instead of tools for progress, genuine research, and open conversation.