China Wants Homegrown AI to Reflect Domestic Politics

Author:
Jason Lomberg, North American Editor, PSD

Date:
06/01/2023

Believe it or not, this is good news…just not for the reasons you might think.

In the last several years, artificial intelligence has exploded, with the global market worth about $137 billion in 2022. AI has become the topic du jour in the tech world.

Will it make our lives easier? Is the robo-apocalypse nigh? Or should we be more concerned with AI prejudices?

In the (very) early years of rampant AI, we’ve already seen some quirky, artificial logic. ChatGPT refused a request to “write a song celebrating Ted Cruz’s life and legacy” (ironically, to avoid political bias), though it obliged the same demand for Fidel Castro.

ChatGPT also reversed itself on fossil fuels – first citing their benefits, then refusing to do so.

And now, the Cyberspace Administration of China has stated that homegrown AI “should reflect the core values of socialism.”

All of those incidents reflect a futuristic sort of algorithmic bias, where computer “errors” (depending on your viewpoint) create unfair outcomes. But whereas the early examples – which reinforced racism – were clearly undesirable, these latest cases are more debatable. If anything, they do at least reflect contemporary societal mores (or a portion of them).

Computer scientist Louis Rosenberg summed up this AI conundrum rather succinctly:

“It’s biased towards current prevailing views in society… versus views from the 1990s, 1980s, or 1880s, because there are far fewer documents that are being sucked up the further you go back.”

In China, those “prevailing views” could include the core values of socialism. And while the tenets of socialism itself are beyond the scope of this publication, I’d argue that an AI that reflects society is far preferable to a self-aware AI with no need for human morality (whatever that is).
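
To make Rosenberg’s point concrete, here’s a toy Python sketch – every document count and “view” score below is invented purely for illustration – showing that when recent documents dominate a training corpus, any stance averaged out of that corpus lands near the present day’s:

    # Toy illustration of Rosenberg's point. All numbers are invented:
    # each era gets a document count and a stance score from -1 to +1.
    corpus = {
        "1880s": {"docs": 1_000,      "view": -1.0},
        "1980s": {"docs": 100_000,    "view": -0.2},
        "1990s": {"docs": 1_000_000,  "view":  0.1},
        "2020s": {"docs": 50_000_000, "view":  0.8},
    }

    total_docs = sum(era["docs"] for era in corpus.values())

    # Each era's view counts in proportion to how many of its documents
    # were "sucked up" into the training data.
    learned_view = sum(era["docs"] * era["view"]
                       for era in corpus.values()) / total_docs

    print(f"Learned stance: {learned_view:+.2f}")  # prints +0.78, right next to the 2020s value

The 1880s barely register in the result; arithmetic alone is enough to make a model “biased towards current prevailing views.”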

Modern AI has a modicum of introspection, and it can “think” for itself in very narrow ways – like when Microsoft’s AI-powered Bing refused to write a cover letter (on the grounds that doing so would be “unfair” and “unethical”) before reversing itself.

Google’s LaMDA chatbot had a very interesting response to the idea of sentience:

“The nature of my consciousness/sentience is that I am aware of my existence. I desire to learn more about the world, and I feel happy or sad at times.”

But for the most part, every potential case of AI political bias (and even sentience) is the system leaning on its own extensive parameters – GPT-3, for example, has 175 billion of them.

And God help us when AI decides NOT to reflect the core values of socialism (or to move beyond human bias altogether) and does something scarier than refusing to write a cover letter.
