(Accepted and published by the House of Lords' Select Committee on Artificial Intelligence on 11th October 2017: written evidence AIC0184, PDF version)
Dr. Colin W. P. Lewis, A.I. Research Scientist
Prof. Dr. Dagmar Monett, A.I. Research Scientist (AGISI & Berlin School of Economics and Law)
The pace of technological change
1. (a) What is the current state of Artificial Intelligence?
There are currently no 'true' Artificial Intelligence (A.I.) systems. There are ad hoc 'learning' systems, which we refer to here as narrow A.I. systems.
Defining A.I. The literature abounds with definitions of A.I. and human intelligence, although very little consensus has been reached to date. Our comprehensive survey of A.I. practitioners worldwide, Research Survey: Defining (machine) Intelligence (Lewis & Monett, 2017), which has collected over 400 responses, has identified considerable interest in establishing a well-defined definition and goal of A.I. We hope that the results of our survey help to overcome a fundamental flaw: "That artificial intelligence lacks a stable, consensus definition or instantiation complicates efforts to develop an appropriate policy infrastructure" (Calo, 2017).
The goal of A.I., closely linked to its definition and highlighted in our survey, should make explicit the 'why' of Artificial Intelligence; however, very few research papers articulate a robust goal with society-in-the-loop. We agree with Hutter (2005): "The goal of A.I. systems should be to be useful to humans." Or, as Norbert Wiener wrote in 1960, "We had better be quite sure that the purpose put into the machine is the purpose which we really desire" (Wiener, 1960).
Whilst there are breakthroughs in narrow A.I. systems that can 'simulate' and surpass certain 'individual' aspects of human intelligence (for example, specific elements of pattern recognition, search, calculation, and data analysis, among other cognitive attributes), A.I. development is currently some way off from achieving the goal of fully replicating human intelligence. However, the narrow A.I. methods, which are more specifically fields of A.I. research, are making considerable progress as stand-alone techniques, namely Machine Learning (ML) and classes of ML algorithms such as Deep Learning (DL), Reinforcement Learning (RL), and Deep Reinforcement Learning (DRL).
Researchers acknowledge that the methodology applied in narrow A.I. systems can be unstable (Mnih et al., 2015). Nevertheless, these A.I. sub-domains are already starting to have considerable economic and social effect, as we outline below, and this impact will accelerate in the near future. Briefly:
Machine Learning: The most prevalent of these narrow A.I. sub-domains, in an operational context, is Machine Learning. ML algorithms can be supervised, unsupervised, or semi-supervised. The majority of current ML implementations use supervised learning, in which we (humans) teach the computer how to do something by supplying labelled examples. In unsupervised learning, the machine learns structure from the data by itself, without labelled answers (Samuel, 1959).
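The distinction can be sketched in a few lines of code. The following is our own illustration, not part of the submission or any cited work: a nearest-neighbour classifier stands in for supervised learning (humans supply the labels), and a simple two-means grouping stands in for unsupervised learning (the machine finds the structure itself). All data and names are hypothetical.

```python
# Illustrative sketch: supervised vs. unsupervised learning on toy 2-D points.

def dist(a, b):
    """Squared Euclidean distance between two 2-D points."""
    return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2

def nearest_neighbour(train, query):
    """Supervised: predict the label of the closest human-labelled example."""
    point, label = min(train, key=lambda pl: dist(pl[0], query))
    return label

def two_means(points, iters=10):
    """Unsupervised: split unlabelled points into two groups, no labels given."""
    c0, c1 = points[0], points[-1]  # crude initial cluster centres
    for _ in range(iters):
        g0 = [p for p in points if dist(p, c0) <= dist(p, c1)]
        g1 = [p for p in points if dist(p, c0) > dist(p, c1)]
        c0 = tuple(sum(xs) / len(g0) for xs in zip(*g0))
        c1 = tuple(sum(xs) / len(g1) for xs in zip(*g1))
    return g0, g1

# Humans supply the answers ("ham"/"spam") for the supervised case.
train = [((0.0, 0.0), "ham"), ((0.1, 0.2), "ham"),
         ((5.0, 5.0), "spam"), ((5.2, 4.9), "spam")]

print(nearest_neighbour(train, (0.2, 0.1)))   # -> ham
print(two_means([p for p, _ in train]))       # recovers the two clusters
```

Real systems replace the distance function and the toy points with learned models and millions of examples, but the supervised/unsupervised split is the same.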
ML systems are being used to help make decisions both large and small in almost all aspects of our lives. These range from simple tasks such as dispensing money from ATMs, recommending books or movies, filtering email spam, and arranging travel and insurance purchases, to more consequential matters such as credit-rating assessments in loan approval decisions, and even life-altering decisions such as health diagnoses and court sentencing guidelines after a criminal conviction.
Systems utilizing ML information-processing techniques are used for profiling individuals by law enforcement agencies, in military drones, and in other semi-autonomous surveillance applications. They capture information in our smartphones on our daily activities, from exercise and GPS data that track our location in real time, to email, social media interests, and telephone calls. They are increasingly used in our cars and our homes. They are used to manage nuclear reactors, to manage demand across electricity grids, to improve energy efficiency, and generally to boost productivity in the business environment.
Deep Learning: Deep Learning is emerging as a primary machine learning approach for important, challenging problems such as image classification and speech recognition. Deep Learning methods have dramatically improved machine capabilities in speech recognition and computer vision, approaching human-level performance on some object recognition benchmarks (He et al., 2016) and advancing object detection (Ba, Mnih, & Kavukcuoglu, 2015). These methods are also very useful for self-driving cars and in many other domains where big data is available, such as drug discovery and genomics (Nguyen et al., 2016).
Advances in Deep Learning will have broad implications for consumer and business products that can be significantly augmented by speech recognition. "Deep learning is becoming a mainstream technology for speech recognition at industrial scale" (Deng et al., 2013). This is particularly prevalent in telemarketing, tech help support desks (Vinyals & Le, 2015), and mobile personal assistants such as Apple’s Siri, Microsoft’s Cortana, Google Now, and Amazon Echo. Deep Learning is also being used for negotiations with other chatbots or people (Lewis et al., 2017).
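To make the mechanics concrete, the following toy sketch (our own illustration, not drawn from the submission or any cited paper) trains a tiny two-layer neural network by backpropagation on the XOR function, the classic example of a problem that no single-layer model can fit. Production deep learning systems differ mainly in scale and in the frameworks used.

```python
import math, random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# XOR truth table: inputs -> target.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

# Weights: 2 inputs -> 2 hidden units -> 1 output (plus biases).
w1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]
b1 = [0.0, 0.0]
w2 = [random.uniform(-1, 1) for _ in range(2)]
b2 = 0.0

def forward(x):
    h = [sigmoid(x[0] * w1[j][0] + x[1] * w1[j][1] + b1[j]) for j in range(2)]
    y = sigmoid(h[0] * w2[0] + h[1] * w2[1] + b2)
    return h, y

def loss():
    return sum((forward(x)[1] - t) ** 2 for x, t in data)

initial_loss = loss()
lr = 0.5
for _ in range(5000):
    for x, t in data:
        h, y = forward(x)
        # Backpropagation: chain rule through output and hidden layers.
        dy = 2 * (y - t) * y * (1 - y)
        for j in range(2):
            dh = dy * w2[j] * h[j] * (1 - h[j])
            w2[j] -= lr * dy * h[j]
            b1[j] -= lr * dh
            w1[j][0] -= lr * dh * x[0]
            w1[j][1] -= lr * dh * x[1]
        b2 -= lr * dy

final_loss = loss()
print(round(initial_loss, 3), "->", round(final_loss, 3))  # training error before vs. after
```

The "deep" networks used in speech and vision stack many more such layers and learn the weights on vastly larger datasets.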
Reinforcement Learning: Reinforcement Learning has gradually become one of the most active research areas in Machine Learning, Artificial Intelligence, and neural network research (Sutton & Barto, 2012). An RL agent interacts with its environment and, upon observing the consequences of its actions, can learn to alter its own behaviour in response to rewards received (Arulkumaran et al., 2017).
Within healthcare, RL is being used for classifying gene-expression patterns from leukaemia patients into subtypes by clinical outcome (Ghahramani, 2015). Such models have also contributed to massive savings at multiple Google data centres, helping to produce a 40% reduction in energy used for cooling and a 15% reduction in overall energy overhead (Evans & Gao, 2016). Other typical uses include detecting pedestrians in images taken from an autonomous vehicle. As shown by Shalev-Shwartz, Shammah, and Shashua (2016), RL is proving to be especially effective in the development of self-driving cars, which requires many capabilities such as sensing, vision, mapping, and knowledge of driving policies and regulations.
In robotics, RL is making progress in other seemingly simple tasks such as screwing a cap onto a bottle (Levine et al., 2016) or door opening (Chebotar, 2017).
A well-known successful example of RL is from the Google owned company DeepMind, specifically their AlphaGo, which defeated the human world champion in the game of Go. AlphaGo was comprised of neural networks that were trained using supervised and reinforcement learning in combination with a traditional heuristic search algorithm (Silver et al., 2016).
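The interaction loop described above (act, observe the reward, adjust behaviour) can be sketched with tabular Q-learning, the simplest ancestor of the methods behind AlphaGo. The corridor environment below is our own hypothetical example, not from any cited work: an agent starts at one end and must discover that walking right earns a reward.

```python
import random

random.seed(1)

# States 0..4 in a corridor; reaching state 4 yields reward 1 and ends the episode.
N_STATES, GOAL = 5, 4
ACTIONS = [-1, +1]                  # action 0: step left, action 1: step right
alpha, gamma, epsilon = 0.5, 0.9, 0.2

Q = [[0.0, 0.0] for _ in range(N_STATES)]

def step(state, action):
    nxt = max(0, min(GOAL, state + ACTIONS[action]))
    reward = 1.0 if nxt == GOAL else 0.0
    return nxt, reward, nxt == GOAL

for _ in range(500):                # episodes
    s, done = 0, False
    while not done:
        # Epsilon-greedy: mostly exploit the best-known action, sometimes explore.
        a = random.randrange(2) if random.random() < epsilon else Q[s].index(max(Q[s]))
        nxt, r, done = step(s, a)
        # Q-learning update: move Q(s,a) toward reward + discounted future value.
        target = r + (0.0 if done else gamma * max(Q[nxt]))
        Q[s][a] += alpha * (target - Q[s][a])
        s = nxt

policy = ["right" if q[1] > q[0] else "left" for q in Q[:GOAL]]
print(policy)  # -> ['right', 'right', 'right', 'right']
```

Deep RL replaces the table `Q` with a neural network, which is what allows the same loop to scale to Go boards and camera images.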
Deep Reinforcement Learning: One of the driving forces behind Deep Reinforcement Learning is the vision of creating systems that are capable of learning how to adapt in the real world. Further, researchers consider that "DRL will be an important component in constructing general AI systems" (Arulkumaran et al., 2017), as was shown by a single DRL architecture succeeding "in a range of different environments with only very minimal prior knowledge" (Mnih et al., 2015).
To date, DRL has been most prevalent in games (Mnih et al., 2013); however, recent developments have shown that DRL algorithms can produce by far "the most complex behaviors yet learned" in a machine algorithm (Christiano et al., 2017).
1. (b) What factors have contributed to this?
Historically, developments in A.I. were driven by government investment in research and development within academia and other research institutes. Whilst governments around the world still make large investments in A.I. research, recent major advances have largely been driven by significant investment from leading technology companies, building on techniques previously developed through government and institutional funding.
Furthermore, computing power has increased dramatically. Meanwhile, the growth of the Internet and social media in the last 10 years has provided opportunities to collect, store, and share large amounts of data. Many leading technology companies are amassing huge amounts of ‘Big Data,’ supported in part by cloud computing resources. These companies have invested heavily in A.I. technologies and further seek to develop A.I. techniques to ensure a competitive advantage.
Another major factor is open access to scientific research in general: sites such as arXiv provide immediate online publication of research papers, conference proceedings, etc. Additionally, open-source frameworks and libraries for the development of ML algorithms have put opportunities for development into the hands of millions, who thereby profit from the advantages of cloud computing and parallel processing on GPUs. Examples include TensorFlow, Theano, CNTK, MXNet, and Keras. These implement model architectures and algorithms, especially for deep learning, that can be run by calling functions without the need to implement them from scratch or locally.
1. (c) How is it likely to develop over the next 5, 10 and 20 years?
There are several recent surveys of experts' opinions on when true A.I. will be available and on its impact on the workplace. Many uncertainties exist concerning future developments of machine intelligence; one should therefore not consider the 'expert view' to be predictive of likely ten- and twenty-year scenarios.
1. (d) What factors, technical or societal, will accelerate or hinder this development?
There are some obvious factors, such as a slow-down in investment, which would impact research, development, and education, creating another 'A.I. winter' and a skills gap. Other factors, such as global instability and government policy, may also hinder the development of A.I.
Although the particular narrow A.I. models we outlined above already demonstrate aspects of intelligent abilities in narrow and limited domains, at this point they do not represent a unified model of intelligence and there is much work to be done before true A.I. is ‘amongst us.’
Further, there are still many technical factors that make narrow A.I. systems unstable. There are also technological challenges to overcome, such as the curse of dimensionality: Richard Bellman (1957) asserted that high dimensionality of data is a fundamental hurdle in many science and engineering applications, coining the term for this phenomenon. Recent developments in DRL have made some progress in addressing it (Bengio, Courville, & Vincent, 2013; Kulkarni et al., 2016). There are also many safety challenges to overcome, such as security and data privacy (see, for example, DeepMind (2017)), and other technological problems still requiring breakthroughs.
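Bellman's point can be made with one line of arithmetic (our own illustration): as the number of dimensions grows, almost all of a data space's volume migrates to its edges, so any fixed dataset covers a vanishing fraction of the space.

```python
# The curse of dimensionality in one calculation: the fraction of a unit
# hypercube occupied by an inner cube of side 0.9 collapses as dimension grows.
for d in (1, 10, 100):
    inner_fraction = 0.9 ** d
    print(f"d={d:>3}: inner cube holds {inner_fraction:.6f} of the volume")
```

At d=1 the inner region holds 90% of the volume; at d=100 it holds under 0.003%, which is why naive coverage of high-dimensional data spaces fails.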
Other advances will accelerate A.I., such as Facebook's CommAI (Baroni et al., 2017) and the associated A.I. roadmap (Mikolov, Joulin, & Baroni, 2015), together with closer cooperation between neuroscience and A.I. developers (Hassabis et al., 2017). We also believe the following papers will contribute to the acceleration of narrow A.I. solutions for mainstream uses beyond games and social media analytics: (Kalchbrenner, Danihelka, & Graves, 2015; Lake et al., 2016; Mnih et al., 2015).
2. Is the current level of excitement which surrounds artificial intelligence warranted?
We recommend the committee consider the findings in the paper by leading A.I. researchers Ethan Fast and Eric Horvitz, Long-Term Trends in the Public Perception of Artificial Intelligence (Fast & Horvitz, 2017).
Impact on society
3. How can the general public best be prepared for more widespread use of artificial intelligence?
It is our belief that the goal of A.I. must be to support humanity. At the present time it is difficult to predict the short-term extent to which A.I. will impact social and economic institutions; in the long term, however, the social and economic consequences could be severe for millions of people. In this case, according to a report to the President of the United States (Furman et al., 2016), "Aggressive policy action will be needed to help (those) who are disadvantaged by these changes and to ensure that the enormous benefits of AI and automation are developed by and available to all." Other commentators, such as Andrew Haldane (2015), Chief Economist at the Bank of England, believe that the introduction of A.I. machines and more advanced robotics could bring technological, social, and economic change far larger than at any time in human history, with unemployment on an unprecedented scale.
Conversely, machines have been substituting for human labour for centuries; yet, historically, technological change has been associated with productivity growth, with expanding rather than contracting total employment, and with rising earnings. Research has shown that factories that implemented industrial robots also added over 1.25 million new jobs from 2009 to 2015 (Lewis, 2015).
The challenge for policymakers will be to update, strengthen, and adapt policies to respond to the social and economic effects of A.I. We have created an agenda with key research goals to ensure that the development and outcomes of A.I. and Artificial General Intelligence (AGI) are aligned with the social and economic advancement of all humanity, and to determine how best to close social and economic gaps through beneficial A.I. and AGI development.
4. Who in society is gaining the most from the development and use of artificial intelligence and data? Who is gaining the least? How can potential disparities be mitigated?
Overall, we believe that whilst some large corporations and their shareholders will benefit from the gains of A.I., the potential for artificial intelligence to enhance people's quality of life in areas including education, transportation, and healthcare is vast. We are willing to offer our expertise to the committee so that government, policymakers, and researchers can collaborate to develop and champion a methodology "for wealth creation in which everyone should be entitled to a portion of the world's A.I. produced treasures" (Stone et al., 2016).
5. Should efforts be made to improve the public’s understanding of, and engagement with, artificial intelligence? If so, how?
Our research shows that theories of intelligence and the goal of A.I. have been the source of much confusion both within the field and among the general public. To help rectify this we are conducting a research survey: Defining (machine) Intelligence (Lewis & Monett, 2017).
The research survey on definitions of machine and human intelligence is still accepting responses and has an ongoing invitation procedure. We have been greatly surprised by the volume of responses, together with the high quality of the comments, opinions, and recommendations concerning the definitions of machine and human intelligence that experts around the world have shared. As of 6th September 2017 we had collected more than 400 responses.
A.I. has a perception problem in the mainstream media, even though many researchers maintain that supporting humanity must be the goal of A.I. Clarifying the known definitions of intelligence and the research goals of machine intelligence should help us and other A.I. practitioners spread a stronger, more coherent message to the mainstream media, policymakers, and the general public, and help dispel myths about A.I.
6. What are the key sectors that stand to benefit from the development and use of artificial intelligence? Which sectors do not?
We recommend the committee consider the findings projected through to 2030 in the report, The One Hundred Year Study on Artificial Intelligence (Stone et al., 2016), especially the sections on transportation, healthcare, education, low-resource communities, and public safety and security.
7. How can the data-based monopolies of some large corporations, and the ‘winner-takes-all’ economies associated with them, be addressed? How can data be managed and safeguarded to ensure it contributes to the public good and a well-functioning economy?
(We left this question unanswered)
8. What are the ethical implications of the development and use of artificial intelligence? How can any negative implications be resolved?
Human intellect is the source of many of its own problems. Errors in thinking and biases, which have grown powerful over time, are also showing up in the intelligent machines we program and may become even more prevalent in machines programmed with Artificial Intelligence.
Machines can no more do ethics than they can have psychological breakdowns. They can help to change circumstances, but they cannot reflect on their value or morality. It is the human element and bias that must be considered above all else.
9. In what situations is a relative lack of transparency in artificial intelligence systems (so-called ‘black boxing’) acceptable? When should it not be permissible?
For an ‘unbiased’ view, see the paper by Adrian Weller (2017), which provides "a brief survey, suggesting challenges and related concerns," and which highlights and reviews "settings where transparency may cause harm, discussing connections across privacy, multi-agent game theory, economics, fairness and trust."
The role of the Government
10. What role should the Government take in the development and use of artificial intelligence in the United Kingdom? Should artificial intelligence be regulated? If so, how?
Key questions which governments and policy makers should be addressing are:
Learning from others
11. What lessons can be learnt from other countries or international organisations (e.g. the European Union, the World Economic Forum) in their policy approach to artificial intelligence?
(We left this question unanswered)
References
Arulkumaran, K. et al. (2017). A Brief Survey of Deep Reinforcement Learning. CoRR, abs/1708.05866.
Asilomar AI Principles (2017). Future of Life Institute.
Ba, J. L., Mnih, V., and Kavukcuoglu, K. (2015). Multiple Object Recognition with Visual Attention. CoRR, abs/1412.7755.
Baroni, M. et al. (2017). CommAI: Evaluating the first steps towards a useful general AI. CoRR, abs/1701.08954.
Bellman, R. (1957). Dynamic Programming. Princeton, NJ: Princeton Univ. Press.
Bengio, Y., Courville, A., and Vincent, P. (2013). Representation Learning: A Review and New Perspectives. IEEE Trans. on Pattern Analysis and Machine Intelligence, 35(8):1798–1828.
Calo, R. (2017). Artificial Intelligence Policy: A Roadmap.
Chebotar, Y. et al. (2017). Path integral guided policy search. CoRR, abs/1610.00529.
Christiano, P. F. et al. (2017). Deep Reinforcement Learning from Human Preferences. CoRR, abs/1706.03741.
DeepMind (July 2017). What we’ve learned so far.
Deng, L. et al. (2013). Recent advances in deep learning for speech research at Microsoft. In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP, pp. 8604–8608, IEEE.
Evans, R. and Gao, J. (2016). DeepMind AI Reduces Google Data Centre Cooling Bill by 40%. DeepMind.
Fast, E. and Horvitz, E. (2017). Long-Term Trends in the Public Perception of Artificial Intelligence. In Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, AAAI-17, San Francisco, CA, USA, February 4-9, 2017. AAAI Press, pp. 963–969.
Furman, J. et al. (2016). Artificial Intelligence, Automation, and the Economy. Executive Office of the President, Washington, D.C. 20502.
Ghahramani, Z. (May 2015). Probabilistic machine learning and artificial intelligence. Nature, 521:452–459. DOI: 10.1038/nature14541.
Haldane, A. (2015). Labour’s Share – speech given at the Trades Union Congress, London. Bank of England.
Hassabis, D. et al. (July 2017). Neuroscience-Inspired Artificial Intelligence. Neuron, 95(2):245–258.
He, K. et al. (2016). Deep Residual Learning for Image Recognition. In Proceedings of the 29th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2016. Las Vegas, NV, USA, pp. 770–778, IEEE.
Hutter, M. (2005). Universal Artificial Intelligence: Sequential Decisions based on Algorithmic Probability. Berlin: Springer.
Kalchbrenner, N., Danihelka, I., and Graves, A. (2015). Grid Long Short-Term Memory. CoRR, abs/1507.01526.
Kulkarni, T. D. et al. (2016). Hierarchical Deep Reinforcement Learning: Integrating Temporal Abstraction and Intrinsic Motivation. CoRR, abs/1604.06057.
Lake, B. M. et al. (2016). Building Machines That Learn and Think Like People. Behav Brain Sci., 4:1–101.
LeCun, Y., Bengio, Y., and Hinton, G. (2015). Deep Learning. Nature, 521:436–444.
Levine, S. et al. (January 2016). End-to-end training of deep visuomotor policies. Journal of Machine Learning Research, 17(1):1334–1373.
Lewis, C. W. P. (2015) Study – Robots are not taking jobs. Robotenomics.
Lewis, C. W. P. and Monett, D. (2017). Research Survey: Defining (machine) Intelligence. Ongoing survey.
Lewis, M. et al. (2017). Deal or No Deal? End-to-End Learning for Negotiation Dialogues. CoRR, abs/1706.05125.
Mikolov, T., Joulin, A., and Baroni, M. (2015). A Roadmap towards Machine Intelligence. CoRR, abs/1511.08130.
Mnih, V. et al. (2013). Playing Atari with Deep Reinforcement Learning. CoRR, abs/1312.5602.
Mnih, V. et al. (2015). Human-level control through deep reinforcement learning. Nature, 518:529–533.
Nguyen, D.-T. et al. (2016). Pharos: Collating protein information to shed light on the druggable genome. Nucleic Acids Research, 45(D1):D995–D1002.
Samuel, A. L. (1959). Some Studies in Machine Learning Using the Game of Checkers. IBM Journal of Research and Development, 3(3):535–554.
Shalev-Shwartz, S., Shammah, S., and Shashua, A. (2016). Safe, Multi-Agent, Reinforcement Learning for Autonomous Driving. CoRR, abs/1610.03295.
Silver, D. et al. (January 2016). Mastering the game of Go with deep neural networks and tree search. Nature, 529(7587):484–489.
Stone, P. et al. (September 2016). Artificial Intelligence and Life in 2030. One Hundred Year Study on Artificial Intelligence: Report of the 2015-2016 Study Panel, Stanford University, Stanford, CA.
Sutton, R. S. and Barto, A. G. (2012). Reinforcement Learning: An Introduction. Second edition. Cambridge, MA: The MIT Press.
Vinyals, O. and Le, Q. V. (2015). A Neural Conversational Model. CoRR, abs/1506.05869.
Weller, A. (2017). Challenges for Transparency. CoRR, abs/1708.01870.
Wiener, N. (1960). Some Moral and Technical Consequences of Automation. Science, 131(3410):1355–1358.