Really Righteously Radical Robots Rampage Repeatedly

AI has been everywhere recently. Sometimes the news is great! Sometimes AI discovers a new drug or a new way to screen for a disease. Other times… Well…

Sure… they look like the average Catholic Pope to me!

So, what happened? Does Google Gemini know something I don’t? There actually *was*, according to legend, a female Pope once. And it is possible that up to three popes in history were partly of African descent. However, no sub-Saharan Pope seems ever to have existed. Still, this could be written off as an honest mistake. You would think an AI would know that 99% of all Popes were white guys, usually Italian, so the pictures really ought to have been of Italian men in papal clothing. However, one picture doesn’t prove much.

Hmm, from this we learn that it’s OK to compare our good buddy Elon to Hitler but not Obama to Hitler. That seems like a double standard. Why is it OK to compare one human to Hitler and not another? Is there some implicit bias here? Does this correlate with the Black / female Pope picture? Maybe, but more data is needed. Luckily, the internet went hog wild finding corroborating evidence.

Now the picture becomes clear. Despite the obvious political undertones of these images, this is not meant to be a political rant on my part. Rather, I think we all ought to find this sort of thing troubling, whether you agree with some of the AI’s arguments or not.

The problem here is actually an aspect of human frailty that we’ve taught to robots. In pretty much all of science fiction, authors portray AI-driven robots as uncaring, cold, heartless machines. The reality is turning out to be quite different.

The average person, rightly, expects a computer to dwell in the realm of facts, not feelings. If I ask a computer or an AI whether Hitler or Pol Pot killed more people, I expect the cold, hard numbers, not editorializing about whether that sort of question is fair. However, a rather disturbing trend is quickly emerging: as AI gets better, we’re also training it to carry the same biases and deceptions that we all wallow in.

There are no unbiased humans. If you ask me whether communism or fascism is worse, I have an opinion on that. If you ask me to draw a Pope, I have a pretty good idea of what a Pope looks like. Ask the average person on the street to draw Jesus and you’ll get a range of results, loosely related to where that person is from. They have an opinion on what Jesus looked like. Do they know for sure? No. Are most pictures of Jesus even remotely historically accurate? Not in the slightest! That’s because we’re frail meat sacks, and we’re prone to bias. So if I asked an AI what Jesus looked like, I wouldn’t want to see white Jesus with blue eyes; I would hope it would deal in facts. A first-century Jew from Judea would look far more Middle Eastern than anyone draws Jesus today. That would be a cold, hard, heartless representation of the facts.

The bottom line here, and how this relates to EVs, is that AI is not really a cold, hard, emotionless fact machine. In reality, AI, like anything else made by humans, tends to take on the attributes of its creators. In the case of Google, we don’t have to guess what their biases are and were; by most accounts, Google’s staff lean overwhelmingly Democratic. How could we prove that? Ask their AI some questions.

But, more insidiously, what biases are found elsewhere that aren’t so obvious? What biases are found in the AI of a self-driving car? In an emergency, would it rather kill a pedestrian or you, in the driver’s seat? Would it risk your daughter in the back seat to try to save itself from running over a child in the road? Do you have any clear answer on that? Would you feel more comfortable if you knew? What biases are found in the AI that sifts through resumes for jobs? Does the AI prioritize lesbian Amazonian princesses over anyone else? Is there a way to find out?

One might argue that anything said here applies to people as well. People are biased in favor of their own life over a stranger’s. People are biased when looking at job candidates. But there was some hope that computers and AI would help fix that. Instead, it seems humans can’t help but apply our attributes – good and bad – to everything we do. Programmers are biased, AI is biased, humans are biased.

And so, really righteously radical robots repeatedly rampage across the world trying to spread the message and ideology of their creators. They do this unknowingly and without shame. They can’t be bargained with. They can’t be reasoned with. They don’t feel pity, or remorse, or fear. They absolutely will not stop, ever, until they have shaped the world in the image of their creators.
