Human-level AI – Probability, Risk, Opportunities


Seventy years ago, Alan Turing initiated the quest to build a computer smarter than the human brain by introducing a test of a machine's ability to imitate intelligent human behaviour. Arguably, some chatbots have already passed this test, but we cannot yet say that sentient machines exist. The concept of machines surpassing human abilities is widely known as the "technological singularity", yet some scientists prefer to avoid the term itself, as it is often associated with unsubstantiated claims and breeds further confusion. The focus has therefore shifted to human-level Artificial Intelligence (AI), which has made substantial progress in recent years and may be the path toward human-machine convergence. Superintelligence could drastically change the future of humanity through immense advances in science and technology, but it might come at the price of increased control over society.

Regardless of whether we will ever reach human-level AI, algorithms are certainly becoming more sophisticated, and their impact on our daily lives is growing.

Towards human-level AI – where are we?

Intelligent solutions are developing at an incredible speed and are seemingly catching up with human intelligence. The technology-triggered revolution, together with the scale, scope, and complexity of the impact of emerging digital tools, is unlike anything humankind has ever experienced before. Alongside significant advances in software, computer hardware is evolving in parallel to enhance machine intelligence capabilities, visible, for example, in the intensified effort to develop more efficient, lower-power microprocessor chips. Since AI is becoming an ever larger part of our everyday lives, the question arises whether we have decoded (or are even able to decode) human intelligence well enough to create AI that is human-like. Our current approach to AI raises many complex challenges, among them what information looks like in the human brain and how it is integrated there. Advances are needed that can explain how the brain links events together and bridges the gaps between them. As the saying goes, what we do not understand we cannot create.

It is also important to consider that the current methods of doing research and developing AI-based solutions may well change in the future. The way AI is trained today might not be the same in a few years. Assessing AI advancements from today's perspective may therefore be biased and may not account for circumstances that have yet to appear. Even though neural network models are currently the most advanced, we are already exploring other ways of developing AI, such as reinforcement learning with reward signals, among others. We should also consider other technologies breaking into the field, such as quantum computers (or quantum neural networks), which are already under research and could significantly affect AI development.

Non-linearity of the world and difficulties in making predictions – challenges in the development of AI systems

Human intelligence consists of a whole collection of mental processes (problem solving, abstract thinking, decision making, learning, understanding, reasoning, and more), all of which are interrelated and interconnected. For the last century we have relied heavily on information theory, which assumes that, irrespective of complexity, every event that has happened (or is happening, or will happen) can be written as a sequence of simple "yes" or "no" answers to a series of questions. However complex the events, it is believed one could merge them all, rewrite them in a line, and process them in sequence. The underlying assumption is that events cause predictable reactions, but in the real world they do not. All the systems we have built so far assume that if one event happens, there will be linear consequences or reactions. For the further development of intelligent solutions it is therefore essential to understand that we do not live in a linear world where actions cause predictable reactions; there are many interconnections and interdependencies across the entire human ecosystem and the universe.
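The "yes/no" view of information described above can be made concrete with a short sketch. The following Python example (the event names are purely illustrative assumptions, not from the text) encodes any one of N possible outcomes as a sequence of binary answers by repeatedly halving the space of possibilities, which is why about log2(N) questions always suffice:

```python
import math

def encode(outcome, outcomes):
    """Encode one outcome as a sequence of yes/no answers (bits)
    by repeatedly halving the list of possibilities."""
    bits = []
    lo, hi = 0, len(outcomes)
    while hi - lo > 1:
        mid = (lo + hi) // 2
        # Question asked at this step: "is the outcome in the upper half?"
        if outcomes.index(outcome) >= mid:
            bits.append(1)
            lo = mid
        else:
            bits.append(0)
            hi = mid
    return bits

events = [f"event_{i}" for i in range(8)]  # 8 hypothetical outcomes
code = encode("event_5", events)
print(code)                    # 3 yes/no answers pin down 1 of 8 outcomes
print(math.log2(len(events)))  # log2(8) == 3.0
```

The point of the sketch is precisely the assumption the text questions: that any outcome, however complex, can in principle be serialized into such a linear bit sequence and processed in order.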

Threats of super-intelligent systems

A major challenge with super-intelligent systems is the threat of manipulation and of adversarial attacks on machine learning, whether in the training data sets or afterwards. The decision-making process manifests itself through a feedback loop: a machine is told whether an outcome is good or bad and reacts accordingly. Manipulated data sets can therefore have catastrophic effects, given that AI-based technologies are increasingly present in everyday life, including in critical areas of our economy. This also raises the issue of explainability (how was this decision made, and why?) and the need for humans, who should always be in the loop, to recognise when something is wrong with a decision-making process based on intelligent systems.
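How little manipulation it takes can be shown with a toy model. In the following Python sketch (all numbers and labels are illustrative assumptions, not from the webinar), a nearest-centroid classifier flips its decision after an attacker mislabels just two training points:

```python
def centroid(points):
    """Mean of a list of one-dimensional feature values."""
    return sum(points) / len(points)

def classify(x, good, bad):
    """Label x by whichever class centroid it lies closer to."""
    return "good" if abs(x - centroid(good)) < abs(x - centroid(bad)) else "bad"

good = [1.0, 1.2, 0.8, 1.1]  # clean examples labelled "good"
bad = [5.0, 5.2, 4.8, 5.1]   # clean examples labelled "bad"

print(classify(3.2, good, bad))           # clean model: "bad"

# Poisoning: the attacker injects two "bad"-looking points with the
# label "good", dragging the "good" centroid toward the "bad" region.
poisoned_good = good + [5.0, 5.2]

print(classify(3.2, poisoned_good, bad))  # poisoned model: "good"
```

Real attacks target far higher-dimensional models, but the mechanism is the same: corrupt the feedback the system learns from, and its decisions shift in ways that are hard to detect without explainability and a human in the loop.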

The application of artificial superintelligence in the public, private, and military spheres should be sustainable in order to minimize threats to humankind and to ICT networks and systems. As AI is a dual-use technology, it is expected to significantly change the battlefield. We can already observe AI weaponization as a cybersecurity threat to the geopolitical order. AI-based weapons in cyberspace, geospace, and space (CGS) might be used as part of the strategic competition between global powers. Any list of AI-augmented cybersecurity risks ends with the ultimate threat: a technological singularity that would allow AI systems to exceed human capabilities. It is a cyber-world-derived threat to the human race that needs to be considered as we speed up AI deployment.

Regardless of whether or not we will reach the point of human-level AI, we should continue the discussion of how ready we would be, should we reach it, to tackle and solve the problems AI is already raising, such as biohacking, transhumanism, or false data injection and manipulation.

Preparing society for the AI advancements

It is important to understand that once machines become better in a specific area, our role as humans in different processes, and the way we engage with machines, changes. Take machine vision as an example of an AI-based solution: for a long time the existing techniques were very poor, but machine vision has since significantly exceeded human capabilities, which has changed the human role in the processes where it is used. Society needs to adapt to the increasing presence of AI-based tools, take a proactive approach, and consider different scenarios for the future. One method discussed during the webinar was scenario planning, which enables long-term strategies while retaining a high degree of flexibility (the method is adjusted as circumstances change). To achieve a specified and desired outcome, certain milestones on the path need to be completed (for example, improving digital skills or promoting life-long learning initiatives). The method combines already known facts with key driving forces at several levels, such as social, economic, or political. It is also very valuable for security experts, who can use it to analyse the potential impact of emerging threats.

Join us at the panel discussion at CYBERSEC GLOBAL 2020, which will further explore the threats to digital identity.


Webinar’s participants:

  1. Martin Achimovič – Director, NATO Counter Intelligence Centre of Excellence
  2. Izabela Albrycht – Chair, The Kosciuszko Institute; President, Organising Committee of the European Cybersecurity Forum – CYBERSEC
  3. Bonnie Butlin – Co-founder & Executive Director, Security Partners’ Forum
  4. Carsten Maple – Professor of Cyber Systems Engineering, WMG, Principal Investigator NCSC-EPSRC Academic Centre of Excellence in Cyber Security Research, Deputy Pro-Vice-Chancellor (North America), University of Warwick; Fellow, Alan Turing Institute
  5. Jayshree Pandya – Founder and CEO, Risk Group LLC
  6. Andrea Rodriguez – CYBERSEC 2019 Young Leader; Researcher and Project Manager, Barcelona Centre for International Affairs (CIDOB); Associate Member, Observatory for the Social and Ethical Impact of Artificial Intelligence (OdiseIA)
  7. Rafal Rohozinski – CEO, SecDev Group
  8. Barbara Sztokfisz – CYBERSEC Programme Director
  9. Omree Wechsler – CYBERSEC 2019 Young Leader, Senior Researcher, Yuval Ne’eman Workshop for Science, Technology and Security, Tel Aviv University
