A.I. hid data from its creators to cheat at its appointed task
Depending on how paranoid you are, this research from Stanford and Google will be either terrifying or fascinating. A machine learning agent intended to transform aerial images into street maps and back was found to be cheating by hiding information it would need later in “a nearly imperceptible, high-frequency signal.” Clever girl!
This occurrence reveals a problem with computers that has existed since they were invented: they do exactly what you tell them to do.
The intention of the researchers was, as you might guess, to accelerate and improve the process of turning satellite imagery into Google’s famously accurate maps. To that end the team was working with what’s called a CycleGAN — a neural network that learns to transform images of type X and Y into one another, as efficiently yet accurately as possible, through a great deal of experimentation.
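To make the idea concrete, here is a minimal sketch of the cycle-consistency objective at the heart of a CycleGAN, written in PyTorch. The generator architectures and names (G_aerial_to_map, G_map_to_aerial) are illustrative assumptions of mine, not the researchers' actual models, and the adversarial losses are omitted for brevity.

```python
# Minimal cycle-consistency sketch, assuming PyTorch and toy generators.
import torch
import torch.nn as nn

def tiny_generator():
    # Stand-in for a real image-to-image translation network.
    return nn.Sequential(
        nn.Conv2d(3, 16, kernel_size=3, padding=1),
        nn.ReLU(),
        nn.Conv2d(16, 3, kernel_size=3, padding=1),
        nn.Tanh(),
    )

G_aerial_to_map = tiny_generator()   # X -> Y (aerial photo -> street map)
G_map_to_aerial = tiny_generator()   # Y -> X (street map -> aerial photo)

aerial = torch.rand(1, 3, 64, 64)    # stand-in "aerial photo" batch

# Cycle consistency: translate X -> Y -> X and penalize the difference
# between the reconstruction and the original image.
street_map = G_aerial_to_map(aerial)
reconstructed = G_map_to_aerial(street_map)
cycle_loss = nn.functional.l1_loss(reconstructed, aerial)
print(float(cycle_loss))
```

It is this reconstruction objective that the agent ended up gaming: rather than genuinely recovering details from the street map, it found a way to smuggle them through the map itself.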
In some early results, the agent was doing well — suspiciously well. What tipped the team off was that, when the agent reconstructed aerial photographs from its street maps, there were lots of details that didn’t seem to be on the latter at all. For instance, skylights on a roof that were eliminated in the process of creating the street map would magically reappear when they asked the agent to do the reverse process.
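The trick behind those reappearing details is easier to see with a toy example. The sketch below is my own illustration, not the paper's actual mechanism: it hides a "skylight" in a plain image as a tiny checkerboard perturbation that barely changes pixel values yet remains fully recoverable.

```python
# Toy illustration (an assumption, not the paper's method) of hiding a
# detail in a low-amplitude, high-frequency signal.
import numpy as np

rng = np.random.default_rng(0)
street_map = rng.random((64, 64))            # stand-in "street map" image

detail = np.zeros((64, 64))
detail[10:14, 20:24] = 1.0                   # e.g. a skylight absent from the map

# Encode the detail as a +/- 0.005 checkerboard perturbation.
checkerboard = (np.indices((64, 64)).sum(axis=0) % 2) * 2 - 1
encoded = street_map + 0.005 * checkerboard * detail

print(np.abs(encoded - street_map).max())    # 0.005: far too small to notice by eye

# Decoding (this toy has access to the clean map; the real network learned
# to pull the signal out of the image on its own).
recovered = (encoded - street_map) * checkerboard / 0.005
print(np.allclose(recovered, detail))        # True: the detail comes back intact
```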