Artificial Intelligence advanced greatly in the 2010s as a result of progress in deep learning, a branch of machine learning that trains multi-layered neural networks on large quantities of data. In a lecture delivered at ACM, Prof Celestine Iwendi emphasized how useful AI tools have become in major disease areas such as cancer, neurological disorders and the new COVID-19 virus. Today, AI is no longer just an area of scientific research; it is a core component of many everyday applications.
However, despite a decade of advances, it is clear that deep learning is not the final answer to the challenge of creating human-level AI.
What is needed to push AI to the next level has become a hot topic in the AI community. It was the focus of an online discussion titled ‘AI debate 2: Moving AI forward: An interdisciplinary approach’, held by Montreal.AI and attended by scientists from a wide range of disciplines.
Cohost of the debate, cognitive scientist Gary Marcus, discussed the main challenges of deep learning, including exorbitant data requirements, limited capacity to transfer knowledge to other domains, opacity and a lack of knowledge representation.
Marcus, a critic of deep learning approaches, published a paper in early 2020 in which he proposed an approach that combines learning algorithms with rules-based software.
Other speakers also pointed to hybrid artificial intelligence as a potential solution to the problems of deep learning.
‘One of the key questions is to identify the building blocks of AI and how to make AI more trustworthy, explainable and interpretable’, said computer scientist Luis Lamb.
A coauthor of the book ‘Neural-Symbolic Cognitive Reasoning’, Lamb suggested a foundational approach for neural-symbolic AI based on both logical formalization and machine learning.
‘We use logic and knowledge representation to represent the reasoning process that is integrated with machine learning systems so that we can also effectively reform neural learning using deep learning machinery’, said Lamb.
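To make the hybrid idea concrete, the following is a minimal, hypothetical sketch in Python; it is not taken from the debate or from Lamb's work. A stand-in ‘neural’ component produces soft label scores, and a small set of hand-written rules acts as the symbolic layer, vetoing labels that contradict known facts before the final prediction is made. All labels, rules and numbers are invented for illustration.

```python
# Hypothetical neural-symbolic sketch (illustrative only, not Lamb's system).
# Step 1: a learned model produces soft scores over candidate labels.
# Step 2: hand-written logical rules veto labels that contradict known facts.
# Step 3: the surviving scores are renormalized into a final prediction.

def neural_scores(features):
    """Stand-in for a trained neural network; scores are hard-coded for the demo."""
    return {"wolf": 0.48, "husky": 0.42, "snowmobile": 0.10}

def symbolic_filter(scores, facts):
    """Keep a label only if it is consistent with the supplied background facts."""
    rules = {
        # Rule: wild animals are implausible if the scene is a domestic backyard.
        "wolf": lambda f: f.get("scene") != "backyard",
        # Rule: vehicles require a manufactured object to be present in the scene.
        "snowmobile": lambda f: f.get("manufactured_object", False),
    }
    kept = {label: s for label, s in scores.items()
            if rules.get(label, lambda f: True)(facts)}
    total = sum(kept.values()) or 1.0
    return {label: s / total for label, s in kept.items()}

facts = {"scene": "backyard", "manufactured_object": False}
print(symbolic_filter(neural_scores(None), facts))
# -> {'husky': 1.0}: the rules remove the labels the facts rule out
```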
Fei-Fei Li, a Professor of Computer Science at Stanford University and former Chief AI Scientist at Google Cloud, observed that in the history of evolution, one of the primary catalysts for the advancement of intelligence in human beings has been vision. Likewise, image classification and computer vision triggered the deep learning revolution of the past decade. Li is the creator of ImageNet, a large-scale dataset of labeled images used to train and evaluate computer vision systems.
‘As scientists, we ask ourselves, what is the next north star? There are more than one. I have been extremely inspired by evolution and development’, Li said.
She pointed out that intelligence in animals and humans comes from active perception of and interaction with the world, something current AI systems lack. Instead, they rely on data that has been collected and labeled by humans.
‘There is a fundamentally critical loop between perception and actuation that drives learning, understanding, planning and reasoning. And this loop can be better realized when our AI agent can be embodied, can dial between explorative and exploitative actions, is multi-modal, can multi-task, generalizable and oftentimes, sociable’, she said.
Li is presently working on building interactive agents that use perception and actuation to comprehend the world.
OpenAI researcher Ken Stanley discussed the lessons he has learnt from evolution. ‘There are properties of evolution in nature that are just so profoundly powerful and are not explained algorithmically yet because we cannot create phenomena like what has been created in nature. Those are properties we should continue to chase and understand, and those are properties not only in evolution but also in ourselves’, Stanley said.
Computer scientist Richard Sutton noted that most work done on AI lacks a ‘computational theory’, a term coined by neuroscientist David Marr, who is best known for his work on vision. A computational theory defines what goal an information-processing system seeks and why it seeks that goal.
‘In neuroscience, we are missing a high-level understanding of the goal and the purposes of the overall mind. It is also true in artificial intelligence, perhaps more surprisingly in AI. There’s very little computational theory in Marr’s sense in AI’, Sutton said. He added that textbooks often define AI simply as ‘getting machines to do what people do’, and that most current conversations in AI, including the debate between neural networks and symbolic systems, are ‘about how you achieve something, as if we understood already what it is we are trying to do’.
‘Reinforcement learning is the first computational theory of intelligence’, Sutton said, referring to the branch of AI in which agents are given the basic rules of an environment and left to figure out how to maximize their reward.
‘Reinforcement learning is explicit about the goals, about the whats and the whys. In reinforcement learning, the goal is to maximize an arbitrary reward signal. To this end, the agent has to compute a policy, a value function and a generative model’, said Sutton.
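For readers unfamiliar with Sutton's terms, the sketch below is a minimal tabular Q-learning example in Python: the reward defines the goal (the ‘what’ and ‘why’), the Q-table is the learned value function, and acting greedily on it yields the policy. The toy corridor environment and all hyperparameters are invented purely for illustration and are not from the debate.

```python
import random

# Toy corridor: states 0..4, start at state 0, reward only for reaching state 4.
# The agent's goal is to maximize cumulative reward; the Q-table is its learned
# value function, and acting greedily on that table is its policy.
N_STATES, ACTIONS = 5, (-1, +1)          # actions: move left or move right
alpha, gamma, epsilon = 0.1, 0.9, 0.1    # illustrative hyperparameters
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Environment dynamics: clamp to the corridor, reward 1.0 at the far end."""
    nxt = max(0, min(N_STATES - 1, state + action))
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

for episode in range(500):
    state, done = 0, False
    while not done:
        # Epsilon-greedy policy derived from the current value estimates.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        nxt, reward, done = step(state, action)
        best_next = max(Q[(nxt, a)] for a in ACTIONS)
        # Q-learning update: move the estimate toward reward + discounted future value.
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state = nxt

policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES)}
print(policy)  # the learned policy should move right (+1) in every non-terminal state
```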
He added that a computational theory of intelligence needs further development, and that reinforcement learning is currently the most promising candidate, although other candidates should still be explored.
Sutton is a pioneer of reinforcement learning and the author of a seminal textbook on the topic. DeepMind, where he works, researches ‘deep reinforcement learning’, a variation of the technique that integrates neural networks into classic reinforcement learning methods. In recent years, DeepMind has used deep reinforcement learning to master games such as Go, chess and StarCraft 2.
Reinforcement learning is similar to the learning mechanisms in animal and human brains, but it suffers from many of the same issues as deep learning. Reinforcement learning models need extensive training to learn even the simplest tasks and are restricted to the narrow domain they are trained on. Developing deep reinforcement learning models currently requires costly compute resources, which limits this area of research to wealthy companies such as Google, which owns DeepMind, and Microsoft, a major backer of OpenAI.
Computer scientist and Turing Award winner Judea Pearl, renowned for his work on Bayesian networks and causal inference, pointed out that AI systems need world knowledge and common sense to make the most of the data they are fed.
‘I believe we should build systems which have a combination of knowledge of the world together with data’, Pearl said, adding that AI systems based only on collecting and blindly processing large quantities of data are more likely to fail.
According to Pearl, knowledge does not emerge from data. Instead, we use the innate structures in our brains to interact with the world, and we use data to learn about and make sense of that world, much as newborns do, who learn a great deal without being instructed.
‘That kind of structure must be implemented externally to the data. Even if we succeed by some miracle to learn that structure from data, we still need to have it in the form that is communicable with human beings’, said Pearl.
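One way to read Pearl's point is that the structure is supplied as prior knowledge while the data only fills in the numbers. The hypothetical Python sketch below (not Pearl's own tooling or notation) hard-codes a simple causal structure, in which rain and a sprinkler both influence wet grass, and uses a handful of invented observations to estimate the conditional probability table for that structure.

```python
# Hypothetical "knowledge plus data" sketch (illustrative only).
# The structure (wet_grass depends on rain and sprinkler) is supplied by hand;
# the data merely fills in the conditional probabilities for that structure.

from collections import Counter

# Invented observations: tuples of (rain, sprinkler, wet_grass).
data = [
    (1, 0, 1), (1, 0, 1), (1, 1, 1), (0, 1, 1),
    (0, 1, 1), (0, 0, 0), (0, 0, 0), (0, 1, 0),
]

# Estimate P(wet_grass = 1 | rain, sprinkler) for each parent configuration seen.
counts, wets = Counter(), Counter()
for rain, sprinkler, wet in data:
    counts[(rain, sprinkler)] += 1
    wets[(rain, sprinkler)] += wet

p_wet = {parents: wets[parents] / counts[parents] for parents in counts}
print(p_wet)  # conditional probability table learned from data, structure given by hand
```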
University of Washington professor Yejin Choi reiterated the importance of common sense and the problems its absence poses for AI systems, whose job is to map input data to outcomes.
‘We know how to solve a dataset without solving the underlying task with deep learning today. That’s due to the significant difference between AI and human intelligence, especially knowledge of the world. And common sense is one of the fundamental missing pieces’, said Choi.
Choi also noted that the space of reasoning is infinite, and that reasoning is a generative task that differs from the categorization tasks today’s deep learning algorithms and evaluation benchmarks are suited to. ‘We never enumerate very much. We just reason on the fly, and this is going to be one of the key fundamental, intellectual challenges that we can think about going forward’, Choi said.
To reach common sense and reasoning in AI, Choi proposes a wide range of parallel research areas, including the combination of symbolic and neural representations, the integration of knowledge into reasoning, and the construction of benchmarks that go beyond categorization.
Although the full path to common sense is not yet clear, Choi stated: ‘One thing for sure is that we cannot just get there by making the tallest building in the world taller. Therefore, GPT-4, -5 or -6 may not cut it’.
By Marvellous Iwendi.
Source: VB