Google’s deep learning computer can keep things secret

Google Brain, Google’s deep learning project, has started protecting information from prying eyes.

Researchers Martín Abadi and David Andersen found that computers could devise their own form of encryption using machine learning, without being taught specific cryptographic algorithms.

The encryption was basic, but then, as the researchers put it, neural nets “are generally not meant to be great at cryptography”.

The Google Brain team set up three neural nets called Alice, Bob and Eve. Alice’s job was to send a secret message to Bob, Bob’s job was to decode the message that Alice sent, and Eve’s job was to attempt to eavesdrop.

To make sure the message remained secret, Alice had to convert her original plain-text message into complete gobbledygook, so that anyone who intercepted it (like Eve) wouldn’t be able to understand it. This “cipher text” had to be decipherable by Bob, but by nobody else. To help them encrypt and decrypt the message, both Alice and Bob started with a pre-agreed set of numbers called a key, which Eve didn’t have access to.
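The roles can be illustrated with a toy sketch. To be clear, the networks invented their own scheme; the XOR cipher below is just a stand-in assumption to show how a shared key lets Bob, but not Eve, recover the message:

```python
import numpy as np

rng = np.random.default_rng(0)
N_BITS = 16  # message length used in the experiment

# A random 16-bit plain-text message, and a shared key held by Alice and Bob
plaintext = rng.integers(0, 2, N_BITS)
key = rng.integers(0, 2, N_BITS)

# Stand-in for Alice's learned encryption: XOR each message bit with the key
ciphertext = plaintext ^ key

# Bob holds the key, so he can invert the operation exactly
bob_guess = ciphertext ^ key

# Eve has no key; without one, a good cipher text tells her nothing,
# so she can do no better than guessing each bit at random
eve_guess = rng.integers(0, 2, N_BITS)

print((bob_guess == plaintext).sum(), "of", N_BITS, "bits recovered by Bob")
print((eve_guess == plaintext).sum(), "of", N_BITS, "bits guessed by Eve")
```

Bob always recovers all 16 bits, while Eve's score hovers around 8 of 16, which is what the trained networks eventually achieved.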

Initially all three were rubbish at their jobs, but Alice slowly developed her own encryption strategy, and Bob worked out how to decrypt it.

After 15,000 training rounds, Bob could convert Alice’s cipher-text message back into plain text, while Eve could guess just 8 of the 16 bits forming the message. Since each bit is either a 1 or a 0, that is the same success rate you would expect from pure chance. The research is published on arXiv.

Practical implications for the technology are limited for now, but it does mean that computers can develop their own encryption and keep secrets from their human masters.