Takuma Miyazono began his racing career at age 4 when his father brought home the motorsport game Gran Turismo 4. In 2020, sixteen years later, Miyazono became the Gran Turismo world champion, winning a ‘triple crown’ of esports motor racing events. However, he had never competed against a driver like GT Sophy, an artificial intelligence developed by Sony and Polyphony Digital, the studio that created Gran Turismo.
‘Sophy is very fast, with lap times better than expected for the best drivers,’ he says. ‘But watching Sophy, there were certain moves that I only believed were possible after seeing them.’
In contrast to board games such as chess, where AI systems are already expert, Gran Turismo requires non-stop judgement and swift reflexes. The game is very complex and demands challenging driving maneuvers: pushing a virtual car to its limits, battling friction and aerodynamics, and overtaking an opponent without blocking their line and incurring a penalty.
‘Outracing human drivers so skillfully in a head-to-head competition represents a landmark achievement for AI,’ said Chris Gerdes, a Stanford professor who studies autonomous driving.
According to Gerdes, the techniques used to develop GT Sophy could be used in the development of autonomous cars. ‘GT Sophy’s success on the track suggests that neural networks might one day have a larger role in the software of automated vehicles than they do today,’ he writes.
In 2020, Sony announced that it would develop a prototype electric car with advanced driver-assistance features, but it currently has no plans to use GT Sophy in its automotive efforts.
GT Sophy also shows how essential simulated environments have become for real-world AI systems, which use them to generate training data for their algorithms. Waymo, the self-driving car company, says its vehicles have traveled the equivalent of 20 million miles in simulations.
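To make the idea concrete, here is a minimal sketch of how a simulator can act as a cheap data generator for a learning algorithm. Everything in it is hypothetical toy code, not Sony's or Waymo's actual systems: a one-dimensional ‘car’ is stepped forward and each step is logged as a (state, action, reward) tuple of the kind a reinforcement-learning algorithm would train on.

```python
import random

def simulate_step(position, speed, action):
    """Toy car dynamics: action in {-1, 0, +1} nudges the speed."""
    speed = max(0.0, speed + 0.1 * action)
    position += speed
    # Reward forward progress, penalize excessive speed
    # (a stand-in for crashing or incurring a penalty).
    reward = speed if speed <= 1.0 else -1.0
    return position, speed, reward

def collect_episode(steps=100):
    """Roll out one simulated episode and log training tuples."""
    data, position, speed = [], 0.0, 0.0
    for _ in range(steps):
        action = random.choice([-1, 0, 1])
        state = (position, speed)
        position, speed, reward = simulate_step(position, speed, action)
        data.append((state, action, reward))
    return data

episodes = [collect_episode() for _ in range(10)]
print(len(episodes), len(episodes[0]))  # -> 10 100
```

Because the simulator is cheap to run, millions of such episodes can be generated, which is what makes the ‘20 million simulated miles’ approach practical.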
‘The use of machine learning and autonomous control for racing is exciting,’ says Avinash Balachandran, Senior Manager for Human Centric Driving Research at the Toyota Research Institute. He says Toyota is researching ‘human amplification, in which technologies that leverage expert learnings from motorsport can one day improve active safety systems.’
Bruno Castro da Silva, a professor at the University of Massachusetts Amherst, called GT Sophy ‘an impressive achievement’ and a necessary step in training AI systems for autonomous vehicles. However, da Silva believes the jump from Gran Turismo to the real world will be difficult: it is challenging for reinforcement learning algorithms to account for the long-term implications of their decisions, and difficult to guarantee the reliability of such algorithms.
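The long-term-consequences problem da Silva describes is often framed through the discounted return used in reinforcement learning: a reward received k steps in the future is weighted by gamma**k, so unless the discount factor gamma is close to 1, distant outcomes barely register in the agent's objective. A minimal illustration with hypothetical numbers:

```python
def discounted_return(rewards, gamma):
    """Sum of rewards, each weighted by gamma**k for a reward k steps ahead."""
    return sum(r * gamma**k for k, r in enumerate(rewards))

# A large payoff 50 steps in the future (e.g. the outcome of an
# overtaking decision made much earlier in the lap):
rewards = [0.0] * 50 + [100.0]

print(round(discounted_return(rewards, 0.99), 1))  # -> 60.5
print(round(discounted_return(rewards, 0.5), 1))   # -> 0.0
```

With gamma = 0.5 the far-off payoff is effectively invisible to the agent, which is one reason tuning these algorithms for long-horizon, safety-critical driving is hard.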
‘Safety guarantees are paramount if we wish such AI systems to be deployed in real life,’ says da Silva. ‘A lack of safety guarantees is one of the main reasons why machine learning-based robots are not yet widely used in factories and warehouses.’
The GT Sophy algorithm may also be useful for other kinds of machines, such as drones and robots that assist humans, says Hiroaki Kitano, CEO of Sony AI. ‘This can be applied to any physical system that interacts with a human,’ says Kitano.
GT Sophy mastered Gran Turismo after hours of practice, and it is the first AI capable of beating professional esports drivers in a realistic, high-speed racing game.
Kazunori Yamauchi, the creator of Gran Turismo and a real-life race car driver, says GT Sophy’s ability to drive without incurring penalties is the most impressive part to him. He says the technology will be added to future versions of the game, and predicts it will help both new and expert drivers improve their skills.
‘Sophy takes some racing lines that a human driver would never think of,’ he says. ‘I think a lot of the textbooks regarding driving skills will be rewritten.’
By Marvellous Iwendi.
Source: Wired