How To Fool A Computer

I’m sure that some of my readers have been wondering: is it possible to fool a computer? That is, to thwart the mission and purpose of the computer? The answer is yes, it is certainly possible, and sometimes advantageous, to “fool” a computer. According to an article in the UK Guardian online entitled “The rebel group stopping self-driving [electric] cars in their tracks – one cone at a time,” it’s fairly easy to do.

There’s a small anti-car activist group in the city of San Francisco called Safe Street Rebel which has decided to curtail any self-driving taxi activity in the city. And just how do they do that? By short-circuiting the taxi batteries? By downloading a destructive algorithm into the self-driving taxis’ computers? By generating some sort of “force field” that confuses the computer?

No, none of the above. The “rebels” simply place an orange traffic cone – a bright orange plastic cone intended for traffic control and for marking parking areas, readily available at most home improvement stores – on the hood of the car, and that car is immediately disabled.

Among other things, that’s a strong argument against the over-reliance on computers that is so common nowadays. Much like the sensors on Tesla cars that run into emergency vehicles because they don’t recognize and respond to them correctly, the sensors on the self-driving taxis haven’t been designed to deal with those types of inputs. But they have been programmed to shut down anytime they encounter something they don’t recognize, and that’s what’s happening here.

If a computer gets one or more inputs that it hasn’t been programmed to deal with, it either does nothing or does something wrong. That’s why the programmers programmed the car to shut down: doing nothing is better than doing something wrong. That raises the question: if something as simple as an orange traffic cone can completely disable a self-driving taxi, what other “holes” are there in the computer program? And what if one of those “holes” allows something to happen that threatens passenger or pedestrian safety?
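The “shut down rather than guess” logic described above is a classic fail-safe default. Here’s a minimal sketch of the pattern in Python – all names and categories are hypothetical, invented for illustration; this is not the actual taxi software:

```python
# Fail-safe default: any input the program wasn't written to handle
# falls through to the safest possible action. Names are hypothetical.

def plan_action(detected_object: str) -> str:
    """Return a driving action for a detected object."""
    if detected_object == "pedestrian":
        return "yield"
    if detected_object in {"vehicle", "cyclist"}:
        return "maintain_distance"
    if detected_object == "road_debris":
        return "steer_around"
    # Unrecognized input (say, a traffic cone on the hood):
    # doing nothing is judged safer than doing something wrong.
    return "safe_stop"

print(plan_action("pedestrian"))     # yield
print(plan_action("cone_on_hood"))   # safe_stop
```

Note that the safe default is the *last* branch, so every input the programmers never imagined ends up there – which is exactly why one cone can park the whole car.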

There are two potential problems here: one is that the sensor(s) are not sending correct data to the computer, and the other is that the computer is not processing that data correctly. One can have the greatest computer program ever, but if it’s getting bad data from the sensors, that program is worthless. The same goes for good sensors paired with poor programming. Both the sensors and the computer program “reading” them have to be working right to get a reliable product.

I’m not sure what the sensors are “reading” – whether they detect heat and motion like a motion sensor, light intensity, or various colors – but whatever they are sensing, it must work every time, all of the time, in all conditions: bright light, fog, haze, darkness, etc. Sensors must be able to withstand temperature extremes and physical vibration, as well as aging (the passage of time), without degradation of output integrity. Anything less than that makes them useless.

In this case I’m guessing that the sensors are responding to various light inputs, and the programmers didn’t foresee the need to respond to those kinds of inputs. So either the sensors are not capable of “resolving” that data, or the computer program is not capable of “extracting” that data in a way that would result in the correct response by the car.

So with this self-driving product, it’s either “back to the drawing board” or on to the ash heap of history.