Turtle

Photo: Labsix

As we become increasingly reliant on smart surveillance cameras, robots, and other AI-driven gadgets, the ability to detect objects and recognize specific faces has proved very useful. But how easy are these mechanisms to fool? That is what a group of researchers set out to discover. A team of students has shown how a neural network can be reliably and repeatedly tricked into misidentifying an object.

These are called “adversarial images”: pictures designed to mislead this kind of intelligent computer program. They fool the AI not with the image itself but with a specific pattern embedded in, or overlaid on, the image; it can be added as an almost invisible layer on top of an existing picture. But these doctored images do not always work. Zooming, cropping, changes of perspective, and other adjustments can regularly break the adversarial pattern, and the trick stops working. The students wanted to find a way of creating adversarial images that fool the AI every time.
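To make the idea of an adversarial “overlay” concrete, here is a minimal sketch of how such a perturbation can be computed against an image classifier. It uses the widely known fast gradient sign method and a standard pretrained model purely as stand-ins; this is not the specific technique or model discussed in the article.

```python
import torch
import torch.nn.functional as F
import torchvision.models as models

# Stand-in classifier (assumption): any pretrained ImageNet model works for this sketch.
# (Input normalization is omitted for brevity.)
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()

def adversarial_overlay(image, true_label, epsilon=0.01):
    """Add a barely visible perturbation (the 'pattern') that raises the loss
    for the true label, pushing the classifier toward a wrong answer.
    image: tensor of shape (1, 3, 224, 224) with values in [0, 1];
    true_label: tensor of shape (1,) holding the correct class index."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    # The overlay is just the sign of the loss gradient, scaled to stay nearly invisible.
    overlay = epsilon * image.grad.sign()
    return (image + overlay).clamp(0, 1).detach()
```

As the article notes, a single-view perturbation like this is brittle: simple zooming, cropping, or a change of viewpoint often breaks it, which is exactly the limitation the students set out to overcome.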

The MIT-based team developed an algorithm that generates adversarial images which reliably fool AI and which can be applied both to two-dimensional pictures and to 3D-printed objects. Regardless of the angle from which the object is viewed, the images keep deceiving the AI. The team, for example, got Google’s image-recognition system to mistake a 3D-printed turtle for a rifle. You can read the full paper on their results at arXiv.org.
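The core idea behind making the attack survive changes of angle and scale is to optimize the perturbation against many randomly transformed views of the image at once, rather than a single view. The sketch below illustrates that idea in 2D; the transformation set, target class, step sizes, and the stand-in pretrained model are all assumptions for illustration, not the team’s exact algorithm.

```python
import torch
import torch.nn.functional as F
import torchvision.models as models
import torchvision.transforms as T

# Stand-in classifier and transformation set (assumptions for the sketch).
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()
random_view = T.Compose([
    T.RandomRotation(30),                        # random viewing angle
    T.RandomResizedCrop(224, scale=(0.7, 1.0)),  # random zoom/crop
])

def robust_adversarial(image, target_label, steps=100, lr=0.01, n_views=8):
    """Optimize a small perturbation so the classifier outputs target_label
    across many random views of the image, not just one fixed view."""
    delta = torch.zeros_like(image, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        loss = 0.0
        for _ in range(n_views):
            # Apply a random transformation before scoring, so the attack
            # must work across angles and scales, not just the original view.
            view = random_view((image + delta).clamp(0, 1))
            loss = loss + F.cross_entropy(model(view), target_label)
        opt.zero_grad()
        (loss / n_views).backward()
        opt.step()
        delta.data.clamp_(-0.05, 0.05)  # keep the perturbation subtle
    return (image + delta).clamp(0, 1).detach()
```

For a physical object like the turtle, the same averaging-over-views principle is applied to renderings of the textured 3D model under many poses and lighting conditions; the 2D loop above only shows the general idea.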

This is important because the problem is not limited to Google: it affects all neural networks. By finding out how these systems can be fooled (and showing that it can be done in relatively easy and reliable ways), researchers can devise new ways of validating AI systems and making them more robust. And if we do not fix these problems now, they could lead to much bigger ones in the future.
