Google’s ‘DeepMind’ AI platform can now learn without human input

“These models can learn from examples like neural networks, but they can also store complex data like computers,” wrote DeepMind researchers Alex Graves and Greg Wayne.

Much like the brain, the neural network uses an interconnected series of nodes to activate the specific centers needed to complete a task. In this case, the AI optimizes those nodes to find the quickest path to the desired outcome. Over time, it uses the data it has acquired to become more efficient at finding the correct answer.

Two examples given by the DeepMind team help clarify the process:

- After being told about the relationships in a family tree, the DNC was able to figure out additional connections on its own, all while optimizing its memory so it could find the information more quickly in future searches.

- Given the basics of the London Underground public transportation system, the DNC immediately went to work finding additional routes and working out the complicated relationships between routes on its own.

Instead of having to learn every possible outcome to find a solution, DeepMind can derive an answer from prior experience, unearthing it from its internal memory rather than from outside conditioning and programming. This process is much like how DeepMind was able to beat a human champion at Go, a game with millions of potential moves and an enormous number of possible combinations.

Depending on your point of view, this could be a serious turn of events for ever-smarter AI that might one day be capable of thinking and learning as humans do.

Or it might be time to start making plans for survival post-Skynet.
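The "neural network that can also store data" idea rests on a differentiable memory that is read by content rather than by address. The toy sketch below is illustrative only, not DeepMind's actual DNC code: the function name `content_read`, the sharpness parameter `beta`, and the sample matrix are all invented for this example. It shows the basic read step such a memory uses: a query vector retrieves a softmax-weighted blend of the stored rows most similar to it, so the whole lookup stays differentiable and trainable.

```python
import numpy as np

def content_read(memory, key, beta=1.0):
    """Read from a memory matrix by content similarity.

    memory : (rows, width) array of stored vectors
    key    : (width,) query vector
    beta   : sharpness of the attention over rows
    Returns a weighted average of the rows most similar to the key.
    """
    # Cosine similarity between the key and every memory row.
    norms = np.linalg.norm(memory, axis=1) * np.linalg.norm(key) + 1e-8
    sims = memory @ key / norms
    # A softmax turns similarities into smooth read weights.
    w = np.exp(beta * sims)
    w /= w.sum()
    return w @ memory

# Three stored two-dimensional vectors; the query matches the first row best,
# so the read result is dominated by it.
memory = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
out = content_read(memory, np.array([1.0, 0.0]), beta=5.0)
```

Because every step is a smooth function, gradients can flow through the read operation, which is what lets the network learn what to store and when to look it up.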
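The DNC learns routes like these from examples rather than from a hand-written algorithm. As a point of comparison, the same question can be answered conventionally with a breadth-first search over an explicit graph; the station list below is an illustrative fragment invented for this sketch, not real Underground data.

```python
from collections import deque

# Toy fragment of a transit graph; adjacency is illustrative only.
LINES = {
    "Oxford Circus": ["Bond Street", "Tottenham Court Road", "Green Park"],
    "Bond Street": ["Oxford Circus", "Baker Street"],
    "Tottenham Court Road": ["Oxford Circus", "Holborn"],
    "Green Park": ["Oxford Circus", "Victoria"],
    "Baker Street": ["Bond Street"],
    "Holborn": ["Tottenham Court Road"],
    "Victoria": ["Green Park"],
}

def shortest_route(graph, start, goal):
    """Breadth-first search: returns the fewest-stops path, or None."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

route = shortest_route(LINES, "Baker Street", "Victoria")
# -> Baker Street, Bond Street, Oxford Circus, Green Park, Victoria
```

The contrast is the point: here the graph and the search procedure are both supplied by the programmer, whereas the DNC was given only examples and worked out the route structure in its own memory.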