There’s been a lot of hype in the media recently about deliberate attempts to cause neural-network-based systems to misclassify images using adversarial examples. This has expanded into the use of such adversarial examples to fool driverless cars, which in turn led to the design of stickers that could be applied to a regular road sign, causing a driverless car to misinterpret it. What was once a stop sign might now read as a speed limit, or what was once a 30mph limit might now read as a 60.
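For a feel of how such adversarial examples are typically generated, here’s a minimal sketch of the Fast Gradient Sign Method (FGSM) against a toy logistic classifier. The model, weights and “input” are stand-ins I’ve made up for illustration, not anything from a real sign classifier:

```python
import numpy as np

# Toy linear classifier: sigmoid(w @ x). All values are illustrative.
rng = np.random.default_rng(0)
w = rng.normal(size=16)   # fixed "trained" weights
x = rng.normal(size=16)   # the "clean" input (stand-in for an image)
label = 1.0               # true label in {0, 1}

def predict(x):
    return 1.0 / (1.0 + np.exp(-w @ x))  # probability of class 1

# For cross-entropy loss, the gradient w.r.t. the input is (p - y) * w.
grad = (predict(x) - label) * w

# FGSM: take a small step in the sign of the gradient, raising the loss.
eps = 0.5
x_adv = x + eps * np.sign(grad)

print(predict(x), predict(x_adv))  # the adversarial score drops toward 0
```

The point is that the perturbation is bounded (each pixel moves by at most `eps`), so it can look like noise to a human while still flipping the classifier’s decision.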
Some time ago I’d seen an article on an interesting advertising campaign: a child abuse charity had created a sign which, through lenticular printing, displays a different message when viewed from adult height vs. child height. I came across this article again recently, and it got me thinking. Where are cameras positioned on driverless cars? A human operator is going to be reasonably high up. Through the use of lenticular printing, could adversarial-example-hosting signs be designed to appear normal to a human operator viewing them front on, but malicious from a camera’s point of view? I suppose it would depend on the camera location. If the camera were at bumper height, like a parking camera, it should be quite viable. Dash-cams are, as the name suggests, dashboard-based, making them only a little lower than the human operator. Is this difference sufficient?
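A back-of-envelope calculation gives a feel for the angles involved. All the heights and the 10m viewing distance below are assumed illustrative values, not measurements from any real vehicle:

```python
import math

SIGN_HEIGHT = 2.0  # metres to the centre of the sign face (assumed)
DISTANCE = 10.0    # metres from the sign along the road (assumed)

def view_angle(viewer_height):
    """Vertical angle in degrees from the viewer up to the sign centre."""
    return math.degrees(math.atan2(SIGN_HEIGHT - viewer_height, DISTANCE))

for name, h in [("driver eye", 1.2), ("dash-cam", 1.0), ("bumper cam", 0.5)]:
    print(f"{name:>10}: {view_angle(h):5.2f} deg")
```

With these numbers, the driver and a dash-cam differ by only about a degree, while a bumper-height camera sits roughly four degrees lower, so a bumper camera looks much more plausible as a lenticular target than a dash-cam.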
What about other sensors? Could sensors such as RADAR on driverless cars be manipulated more covertly, on the assumption that the sensors see incoming surfaces from a fairly narrow range of angles? Could you place materials designed to refract RADAR waves at a low height, where they would only interfere with cars?
Food for thought!