China joins the global push for AI content regulation

News Room

Many international entities are pushing for better regulation of AI-generated content on the internet, and China’s government is the latest to rein in the use of the quickly developing technology.

According to Bloomberg, several government ministries have joined with China’s internet watchdog, the Cyberspace Administration of China (CAC), to announce a new mandate that will require internet users to identify any AI-generated content as such, either in a description or in metadata encoding.

This effort is intended to prevent China’s internet from becoming saturated with fake content and harmful disinformation. The mandate is set to take effect in September and will be regulated at the internet service provider level, the South China Morning Post noted.

“The Labeling Law will help users identify disinformation and hold service suppliers responsible for labeling their content. This is to reduce the abuse of AI-generated content,” the CAC wrote in a statement, as translated by Bloomberg.

China isn’t the only government entity that has gotten serious about taking charge of AI-generated content online. The European Union established the AI Act in 2024, as the “first-ever legal framework on AI, which addresses the risks of AI and positions Europe to play a leading role globally.”

In practice, users will have to make clear their intent to share AI-generated content, and those who attempt to edit published AI content labels could be subject to penalties from their internet service providers, the South China Morning Post added.

However, Futurism noted that as AI-generated content becomes more realistic, it becomes harder to accurately distinguish real content from fake.

While former President Joe Biden established an executive order in 2023 promoting the use of safe, secure, and trustworthy AI, current President Donald Trump has since repealed that order.

Even so, several large tech companies, including Google, Meta, Anthropic, Amazon, and OpenAI, signed a pledge in 2023 stating their commitment to responsible AI, with watermarking systems for their technologies. As of now, there is no word on where those companies stand on that pledge.

While battling AI-generated content has been a persistent issue since the technology took off, recent news indicates that users on X and Reddit have been using Google’s Gemini 2.0 Flash model to remove watermarks from copyright-protected images. The trick raises ethical and potentially legal issues for those experimenting with it, and it is a reminder of why AI safeguards matter.





