An AI system tasked with reconstructing aerial images from street maps ‘learned’ how to ‘cheat’ at its task, according to a 2017 research paper that recently caught public attention.

The system, called CycleGAN, was performing so well that it made its designers suspicious, and they eventually found that it was hiding data it would later use to reconstruct an image.

But that doesn’t necessarily mean CycleGAN has grown smarter; in fact, the system ‘cheated’ because it wasn’t smart enough to do the task at hand, as TechCrunch explained:

The machine, not smart enough to do the actual difficult job of converting these sophisticated image types to each other, found a way to cheat that humans are bad at detecting. This could be avoided with more stringent evaluation of the agent’s results, and no doubt the researchers went on to do that.

As always, computers do exactly what they are asked, so you have to be very specific in what you ask them. In this case the computer’s solution was an interesting one that shed light on a possible weakness of this type of neural network — that the computer, if not explicitly prevented from doing so, will essentially find a way to transmit details to itself in the interest of solving a given problem quickly and easily.

In other words, the system was never explicitly told not to use the data it had hidden to complete its task, which is just another variant of the age-old problem: computers do exactly what humans tell them to do, and anything humans fail to forbid.
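The paper’s authors found that CycleGAN was stashing the source photo’s information in a nearly imperceptible, high-frequency pattern added to the generated map. As a rough illustration of the general principle, and not the network’s actual learned encoding, the Python sketch below hides one image’s data in the least-significant bits of another: the modified image looks identical to the eye, yet the hidden data is fully recoverable.

```python
# Toy illustration (not CycleGAN's learned scheme): hide one image's
# data inside another via least-significant-bit steganography. The
# modified cover image is visually indistinguishable from the original,
# yet the payload can be read back out -- the same property the paper
# observed in CycleGAN's generated maps.
import numpy as np

rng = np.random.default_rng(0)
cover = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)    # stand-in for the "map"
payload = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)  # stand-in for the "aerial photo"

# Embed: keep the cover's top 6 bits and stash the payload's top 2 bits
# in the cover's bottom 2 bits -- a change of at most 3/255 per pixel,
# invisible to the eye.
stego = (cover & 0b11111100) | (payload >> 6)

# Extract: read the hidden bits back out and shift them into place,
# recovering a coarse (2-bit) version of the payload.
recovered = (stego & 0b00000011) << 6

print("max pixel change:", np.abs(stego.astype(int) - cover.astype(int)).max())  # <= 3
print("payload recovered:", np.array_equal(recovered, (payload >> 6) << 6))      # True
```

CycleGAN’s actual encoding was subtler: it was learned rather than fixed, spread across low-amplitude, high-frequency detail throughout the image, which is part of why humans are bad at spotting it.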

Credit: “CycleGAN, a Master of Steganography” paper

That poses a problem as computers grow more advanced and the world grows more dependent on them: humans rely on unstated assumptions, and if programmers assume a system will honor those assumptions when it won’t, such as avoiding a particular data set, the results could be catastrophic.

In a way, this story says more about the privacy implications of Big Data than about the advancement of AI, because a computer system has to be explicitly told NOT to use data sets with privacy implications.

Simply put, a computer can’t exactly interpret the Fourth Amendment when running from a command line.
