Peering into the eye of AI

The last blog post was all about self-driving and my thoughts on whether vision only is the way to go. Obviously, I don’t feel that it is. Limiting yourself to what humans have is stupid. If we can find sensors that give us extra-sensory perception or better distance vision, then shouldn’t we use them? We drive on wheels because it makes sense even though we don’t have wheels on our bodies. We use motors even though nothing like the way motors work is found in our bodies. We use machines BECAUSE they aren’t us. We build what works because it works, not because it is like us. Thinking outside the boundaries of ourselves has always been useful and will continue to be. That is not to say that lidar, radar, or ultrasonic sensors definitely have a place on future cars. They may not. But cameras probably always will be found on cars because we are visual creatures and vision will likely always be handy. We simply must not let “we’re built this way so cars should be too” be the way we think. If we can do better than vision, then let’s do it. If we can’t, then we still learned something as we tried. This also holds true for AI. I’m not an expert in AI, but I constantly get a weird feeling that perhaps we’re modeling AI too much along very well-worn pathways.

AI is progressing rapidly toward being able to drive, but the underlying techniques seem to be in very worn ruts. This is good because those ruts are worn for a reason – they’ve proven effective. But many advances in technology have been made by people too stupid to know that they’re supposed to be doing it a different way. So, I wonder if an idiot who doesn’t know better needs to come along and disrupt the status quo. Who would that person be? I have no idea. It could be anyone really, probably some young person who hasn’t studied AI enough to know better. To me, it boggles the mind that people are building supercomputers to crunch the AI training data. Usually, when you have to throw more hardware at the problem, that means you’re going down a dead end and forcing square pegs into round holes. I’m reminded of this video where it is explained how a piece of code was optimized… a little bit.

In fact, after posting that video people just kept improving things even more. It turns out the initial way he approached the problem was not the most efficient way BY FAR. I get the constant feeling that AI is in the same rut. They’re running backpropagation and all the time-tested techniques. Sure, they keep pushing the field forward, keep finding new ways to do the things they’ve been doing but better, faster, and with higher accuracy. But is there a better way? When planes couldn’t go faster, we made jets. When horses couldn’t go faster, haul more, and work longer, we made planes, trains, and automobiles.

The point is, I don’t think there is any reason to believe that we’ve found the ideal solution to AI. As in pretty much all of science and mathematics, there is always another layer lurking in the shadows, waiting to be peeled away. People got good at Newtonian physics. Then a bunch of ne’er-do-wells decided to start thinking about relativity and quantum mechanics, and now the world looks completely different at the macroscopic and microscopic scales. We would not have GPS were it not for people who thought beyond the boundaries of the accepted physics models of the time. I don’t know what the next branch of AI will be, but I’m hesitant to think that we’ve hit the boundaries of what is possible.

Another thing that bothers me about AI is the uncertainty about why it does what it does.

The above picture may be a bit of an exaggeration, but only slightly. In reality, we’re creating systems which are, at their core, just a bunch of “if this, do that” statements – except that no one specifically programmed those conditions, nor does anyone truly know with any certainty what those conditions even are.

When an internal combustion engine spins, it does many things. Those things are understood. We can model how gasoline burns. We can model how mechanical systems work. AI is a bit more nebulous. We feed the monster information and it adjusts. We keep feeding it. It keeps adjusting. We know how the adjustments work in a general sense, but after all the number crunching it spits out sets of numbers, generally things like coefficients in a convolution matrix. We know how convolution works. We know the steps. But we don’t actually know why the coefficient numbers are what they are. We’re building systems out of things we can understand which are operating essentially in unknowable ways.

I suppose in a way this is not different from human learning. If you touch a hot stove, your brain tells you that it was unwise to do that. You learn, you stop touching stoves quite so often, maybe only on the weekends now or when you feel bored. But what physically changed in your brain to encode that new predisposition to not touch stoves? We don’t really know. And every person is different. If you line 10 people up and make them touch a hot stove (YOU MONSTER!), each of them will learn that hot stoves aren’t any fun, but the actual changes in brain chemistry that encode this fun new fact will be different for each and every person. We don’t understand exactly how the encoding process works for a given brain, but we do know the basic facts surrounding it – touching hot stoves hurts. People don’t want to hurt. People usually try to avoid things that hurt. Hence, people avoid hot stoves. We know the big facts but not the nitty-gritty details. We don’t know if neuron 124,832,868 in the brain forms a new synapse to neuron 876,237,993,344 and this causes a negative reaction to the thought of hot stoves. We just don’t know. Maybe we don’t need to know. But I feel it is a bit dangerous to deploy systems which are making decisions based on criteria that are unknowable. It seems like maybe someone made a movie or two about that.
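As a concrete illustration of that “we know the steps but not the why” problem, here is a minimal sketch of the convolution step itself in plain Python with NumPy. The kernel values below are made-up stand-ins for the coefficients a training run would spit out; the arithmetic is completely transparent, but nothing about the numbers themselves tells you why they are what they are.

```python
import numpy as np

# A tiny 8x8 grayscale "image" and a 3x3 kernel. In a trained network the
# kernel values come out of the training process; these are made-up stand-ins.
image = np.random.rand(8, 8)
kernel = np.array([[ 0.17, -0.42,  0.08],
                   [-0.31,  0.96, -0.05],
                   [ 0.22, -0.13, -0.49]])

def convolve2d(img, k):
    """Slide the kernel over the image, multiply element-wise, and sum.
    (Strictly speaking this is cross-correlation, which is what deep-learning
    'convolution' layers actually compute.)"""
    kh, kw = k.shape
    out_h, out_w = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * k)
    return out

# Every step above is simple, inspectable arithmetic. What is NOT inspectable
# is why training settled on 0.96 here and -0.42 there.
print(convolve2d(image, kernel))
```

Scale that up to millions of such coefficients spread across dozens of layers and you get exactly the situation described above: understandable pieces operating as an unknowable whole.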

If SkyNet decides to nuke us, why did it decide that? Could we know why? The system was designed to teach itself, so no person would ever really know why SkyNet decided we’re expendable. If a Tesla decides to run over a child in a crosswalk, could we know why it did that? Could we predict it based upon empirical criteria? It may fundamentally be possible to query it. Tesla captures a lot of data while the car is operating, so determining reasons after the fact should be possible. This does not help lil’ Timmy, though.

Humans have some ability to explain their actions. This ability actually forms later in life. If you ask a child “why did you do that!?” they may honestly not know the answer to your question. Certainly, in the moment, there were reasons. But afterward, can they introspectively determine why they did it? Not at first. As time goes on, people get better at determining why they did things. AI isn’t quite there. Determining why you did something takes processing power to examine the evidence and your thoughts. The chaotic nature of how AI uses things like convolution makes the answers to “why” very difficult to pin down or understand. Perhaps a Tesla that runs over a child thought it was a shadow or a bag in the road or any number of other innocuous things. In all fairness, the same thing happens to people. People miss exits, people run over things, people make mistakes because they misjudge what they’re seeing. The hope is that perhaps we can understand fellow humans because, despite our differences, we are all human and humans act in somewhat predictable ways. Science has a pretty firm grasp on the limitations of human vision, human attention span, etc. So, generally, people feel pretty comfortable designing systems around the limitations and capabilities we know humans possess. AI researchers know quite a bit about what AI can and cannot do. But sometimes the somewhat alien nature of how it works causes things to occur which no one expected. Again, quite a bit of this applies to human beings as well. I suppose this boils down to it being harder to trust something alien than something human.

The biggest problem for both humans and AI is what they’ll do when something “novel” happens. In both cases things can get quite dicey. Let’s say you’re driving down the road and suddenly, in front of you, the Stay Puft Marshmallow Man is 100 ft tall and walking your way. This is something you’ve never seen before. Can you recognize what is happening? You probably don’t believe it, but you at least have some conceptual idea of what you’re seeing. And you probably will try not to drive into his foot. Now, the question is, will an AI have any idea what it is seeing? What will it do? If it is vision only, will it recognize that white, fluffy goodness as an obstacle or won’t it? Sure, this is a ridiculous example. How about a less stupid one?

A few years ago the above sign made headlines. On older Teslas which used Mobileye systems, this sign would be read as 85 MPH and the car would try to accelerate to that speed. By all accounts, this does not work on Tesla’s newer systems. However, were you fooled by that sign? I can clearly see that it says 35 MPH. I also know that 85 MPH signs don’t exist, so I would heavily question one if I saw it. AI doesn’t really have intuition. Intuition is a higher-level concept that is still a bit difficult to model in a computer. You and I have a pretty good idea of what is likely and what isn’t. It could be possible to teach an AI these things as well, but how effectively? If I took a black marker and finished the 3 into an 8, would that fool you? Probably not. We would pick up on the strange contrast differences and the fact that we’re pretty sure 85 MPH signs don’t exist. How about this one?
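If you wanted to bolt a crude form of that intuition onto a sign reader, it might look something like the sketch below. To be clear, this is purely hypothetical: the plausible-limit set, the confidence thresholds, and the idea that any shipping system works this way are all assumptions made for illustration.

```python
# Hypothetical sanity check layered on top of a vision system's speed-limit
# reading. The plausible-limit set and thresholds are illustrative guesses,
# not anything a real perception stack is known to use.
PLAUSIBLE_LIMITS_MPH = {15, 20, 25, 30, 35, 40, 45, 50, 55, 60, 65, 70, 75, 80}

def accept_speed_reading(reading_mph: int, confidence: float,
                         current_limit_mph: int) -> bool:
    """Return True only if the reading passes some basic 'intuition' checks."""
    if reading_mph not in PLAUSIBLE_LIMITS_MPH:
        # An 85 MPH sign is so unlikely that we refuse it outright,
        # just as a human driver would heavily question it.
        return False
    if abs(reading_mph - current_limit_mph) > 30 and confidence < 0.99:
        # A huge jump from the current limit demands near-certain evidence.
        return False
    return confidence >= 0.9

print(accept_speed_reading(85, 0.97, current_limit_mph=35))  # False: implausible value
print(accept_speed_reading(35, 0.97, current_limit_mph=35))  # True
```

Whether hand-written rules like this could ever be made robust enough to help more than they hurt is exactly the open question here.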

https://www.autoblog.com/2017/08/04/self-driving-car-sign-hack-stickers

Does that sign look like a stop sign or a 45 MPH sign? To a human, it’s certainly a messed-up stop sign. To the AI in a car, apparently it could be a 45 MPH sign. Now, the researchers did this on purpose. They knew exactly how to attack the AI in just the right way to mess it up. There are many “Magic Eye” type illusions that trick human vision. But it would seem that, currently, it is much easier to trick an AI than to trick a human.
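For a flavor of why a carefully targeted attack works so well, here is a toy version of the well-known “fast gradient sign” idea, using a made-up linear classifier rather than any real sign recognizer. A tiny nudge to every pixel, all aimed in the worst possible direction, flips a confident decision even though no single pixel changes much.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear "classifier": score > 0 means "stop sign". The weights are random
# stand-ins, not anything taken from a real perception system.
w = rng.normal(0.0, 0.3, size=100)

x = 2.0 * w / np.dot(w, w)                # an input the model is very confident about
print("clean score:     ", float(w @ x))  # +2.0 -> confidently "stop sign"

# Fast-gradient-sign style attack: for a linear score w.x, the loss for the
# "stop sign" label grows fastest in the direction of -w, so nudge every
# "pixel" by a tiny epsilon in that direction.
eps = 0.1
x_adv = x + eps * np.sign(-w)

print("perturbed score: ", float(w @ x_adv))                   # driven well below 0 -> misread
print("max pixel change:", float(np.max(np.abs(x_adv - x))))   # only 0.1 per pixel
```

The sticker attack in the linked article is a physical-world version of the same trick: the changes look like random graffiti to us, but they are aimed precisely at the directions the network is most sensitive to.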

This whole post has been pretty negative about the state of AI and the path forward. That is, perhaps, a bit unfortunate. In reality, the advances in AI are exciting and very promising. However, someone has to rain on the parade. Often, not only in AI but in technology in general, the proponents of a given new piece of tech will gloss it up really nicely and smear a lot of lipstick on the pig, hoping you think it looks like a supermodel. And, quite often, the public eats it up. It actually seems like many people are still fearful of self-driving systems. This is good; they should still be fearful. The tech is progressing, sometimes very, very well. But we don’t live in the future yet. They say that self-driving is always 10 years away – sort of like fusion. And, so far, they’re right. It’s at least 10 years away, and I think they’ll have to rethink the entire premise before we get there. But we will get there… eventually. In the meantime, many manufacturers have some very cool tech that does some very impressive things. We just can’t get ahead of ourselves.
