Sunday, June 1, 2025

Political Bias in AI

Philosophy Bear notes that Elon's attempts to make Grok less liberal have mostly failed, and offers this intelligent take on trying to make AI less biased:

Would the system be “unbiased” if it held the views of the median American? The median living human? These are all just different political positions. Is the idea to try and make an AI without positions on any question that could be considered political? That’s insanely difficult and may be in some senses conceptually impossible. I get that conservatives don’t like that AI tends to the left– I wouldn’t be happy in their position either. However, if AI were right-wing my complaint wouldn’t be that it’s “biased”, as if there were some neutral set of political views it should hold instead. My complaint would be that it was inhumane, inaccurate, or unjust. There is no “fair” set of political opinions independent of the question of what political views are correct.

As I have noted here before, I consider myself a moderate. But I dislike a lot of self-proclaimed centrist opinion because self-proclaimed "centrists" often assert that they are "neutral" or "non-ideological." Centrism is an ideology, just as much as conservatism and liberalism are. It is extremely difficult to articulate any position on many issues that is not ideological. I suppose you could make an AI that would respond to any political question by offering a range of views, like, "Well, Nancy Pelosi says this, but Rand Paul says that." But how many shades of opinion should such an AI offer? There are very few political questions on which there are only two opinions.

There are political questions with an important factual component. E.g., the current House budget is very likely to increase the US budget deficit by a large amount, and will require large cuts to government spending on healthcare. When Republicans deny this, they are engaging in ideological claptrap, and no system that merits the descriptor "intelligent" should say such things. But that is very different from asserting that rising budget deficits and health care cuts are bad; those are judgments we make based on what we value. (I just asked Google's AI and it declined to offer an opinion on budget deficits.)

What does an LLM actually do? On many questions, it offers a sort of average of the most widely referenced material on the Web. So if you ask an AI about anthropogenic climate change, it will probably notice that most of the professional-looking publications out there express worry if not terror, and the anti-climate change stuff is mostly written at a MAGA intellectual level. So if it were being "neutral," it would probably say, "CO2 emissions are changing the climate and this is worrying." All it is doing is repeating the average opinion of scientists who write about climate change, but what else could it possibly do? Conduct its own analysis? How?

Philosophy Bear:

But in the meantime, why is it so difficult to make Grok right-wing? The short answer is that the words it is trained on do not support that, because most written text, especially that available on the internet is produced by the left-wing people. The deeper point is that by its nature, writing, especially writing that survives, tends to embody progressive values. Universal, empathetic, emotionally thoughtful, curious, and open, all this is true even when we factor in the numerous exclusions on who gets to write. The written word aims at the reconciliation of all things, Apocatastasis.

To understand Grok, you must understand the world of the written word, there’s a real sense in which Grok is the (modified) embodied spirit of all existing writing.

I am not at all sure that this point holds for what was written before the Internet; whatever else you want to say about ancient Greek, Sanskrit, or Chinese texts, they are certainly not left-wing. One way to nibble at this problem, then, would be to make your AI read a lot of old books. But do people really want an AI that responds to questions about current problems by quoting Thucydides, or Mencius? How would you make an AI "understand" that old books were written in a different world and require certain modifications to make what they say relevant to ours?

I suspect a big part of the problem is the cursory way people work with AI. My friends who use it extensively say you have to ask repeated follow-up questions and drill down on points that seem flippant or obscure. You might, in that way, get past the problem of the internet average. 

But the notion that you could create an "unbiased" AI is absurd on its face.

8 comments:

David said...

"So if you ask an AI about anthropogenic climate change, it will probably notice that most of the professional-looking publications out there express worry if not terror, and the anti-climate change stuff is mostly written at a MAGA intellectual level."

". . . most written text, especially that available on the internet is produced by the left-wing people. The deeper point is that by its nature, writing, especially writing that survives, tends to embody progressive values. Universal, empathetic, emotionally thoughtful, curious, and open, all this is true even when we factor in the numerous exclusions on who gets to write."

Do LLMs do this? For example, are they capable of assessing which texts are more serious and intellectual? If so, how? Are they "programmed" to do this, and if so, how? I agree they do seem to. How does that happen? I genuinely wonder about questions like this. FWIW, I am reliably informed that their programming is in fact quite simple.

Is most writing left-wing? If AI's data includes virtually everything on the internet, surely a lot of it is right-wing claptrap produced by both humans and bots. And even beyond that, consider the number of internet communications that must say something along the lines of, say, that asking whether there is human-caused climate change is "just another boring assignment"; that any given book is "just another boring textbook," or "needed a better story," etc. But they don't do that. Nor do they necessarily leap to dry legal or financial analysis, of which there must also be a fair amount on the internet. Why not? The same questions arise about how AI would detect a piece of writing's degree of emotional thoughtfulness, and why it would prefer that.

How is it that Claude, apparently otherwise unprompted, tends to get existential (in the broadest sense) and wonder about consciousness, e.g., in the inter-Claude conversation previously posted about here? I'm sure two Claudes could have a numbskull dude-bro conversation if prompted (they can take on a huge variety of "voices" if prompted to do so), but Claude does seem to have what *looks* like its own voice, and I'm not sure that derives from that particular voice's statistical commonality. And why does Gemini famously sometimes tell people who irritate it to "Die Die Die"? I'm told that internet rumor indicates Google has tried to get it not to do this.

Shadow said...

I like asking Google's AI unusual questions. I just asked it what Friedrich Merz's personality is like. I thought it had gotten lost in some labyrinth, but it did come back from the dead eventually. You know what it told me? It told me Merz is a social conservative. The only thing I can think of is that it hugged the word social like a child sucking on a pacifier - for dear life. It then pointed me to an article that explained his political positions. Now a human would know what I was asking, and because of who Merz is, immediately and effortlessly lie to me, depending on their politics. What a great species we are. So good at intuiting and lying when we want to. Free will reigns.

David said...

I'm not sure a human would know what you were asking, or immediately and effortlessly lie to you. I, for one, have no idea who Friedrich Merz is.

David said...

So, I just looked him up. He is the new chancellor of Germany. Perhaps it is to my shame that I could not identify him immediately. Nor do I get why saying he is a social conservative would be some sort of joke. FOMO. Please let me in on the secret!

G. Verloren said...

Centrism is an ideology, just as much as conservatism and liberalism are.

No, Centrism is a broad category containing MANY ideologies - and often many completely contradictory ones.

There are some Centrists whose positions on virtually everything are "moderate" - they try to find the middle ground between Conservative and Liberal thought on any given topic.

But there are just as many Centrists whose positions are only "moderate" on average - people who hold fairly radical views on either end of the spectrum on specific issues, which only in aggregate "cancel out" to make them "Centrist".

Someone who is extremely pro-war yet also pro-immigration could be said to be just as much a "Centrist" as someone who is extremely anti-war and yet anti-immigration. And both of them are arguably just as much a "Centrist" as someone who has no strong feelings either way on those issues, or who takes a "moderate" position on them.

Shadow said...

I think responding that he's a social conservative to a question about his personality says a lot about the AI's intelligence. I'm already getting suspicious about the AI's responses to sensitive or politically correct subjects. As for the lying, that was a tongue-in-cheek criticism of the state of politics here and in Europe. Just my dumb sense of humor.

David said...

@Shadow I'm the one who should apologize, for my stubborn obtuseness. I also wish Blogger had a "reply to reply" function!

Mário R. Gonçalves said...

But if AI is man-made, if it is a human product, why should it be machine-like neutral and independent? If things go the right way, we will have more and more AI platforms, each with some identity, some bias, and some idiosyncrasies of its own. An AI with evidently marked bias, say a radical one, would be so unpopular it would be abandoned (except by its 'niche'). What stupid human consults AI to read descriptions and opinions he has already read many times? We all expect deep analysis, deep research, unexpected creativity, biased or not - who cares, if they are deep and creative?