Hacking humans – threats to digital identity


Introduction

The human race has been on a journey of self-exploration for a very long time, whether through scientific research in disciplines like biology and medicine or through a cognitive approach that investigates human nature from a psychological and philosophical point of view. The expedition to the core of humanity has evolved, and what was previously built up in the minds of the greatest thinkers is now very often outsourced to big-data algorithms. Biometric processing enables digital analysis of people’s appearance, movements and the emotions shown through facial expressions and behaviours. Physiological data hidden under our skin (visible, for instance, in body temperature or heartbeat) can reveal real emotions, and genetic testing helps discover less visible characteristics like ancestry or predisposition to illness. The potential is remarkable: thanks to the uniqueness of these identifiers, solutions built on them could bring great improvements, for example in medical research, and prevent identity theft. At the same time, as our lives become more and more digital, the technology exposes us to greater manipulation, deception, surveillance, control and even a gradual loss of autonomy in decision-making, all of which can lead to one thing – the hacking of a human being.

Over the past few months those risks have become even more evident as digital tools are deployed to fight the coronavirus pandemic. These include contact-tracing and tracking solutions, most often mobile and biometric applications issued by governments all over the world. Various bodies, such as the OECD and the European Data Protection Board, have been highlighting the importance of protecting data and fundamental rights and have issued recommendations for introducing new solutions in these challenging times. The ongoing health crisis has only accelerated the digital transformation this year and has forced us to become even more dependent on the digital world.

The plethora of risks associated with the increased digital presence of human beings, as well as with enhancing the brain power and physical capabilities of our bodies (for example through microchips), should be addressed through public and political debate. Participants of this webinar tried to answer an existential question: how do we ensure that, by moving our lives more and more into the digital world and making our bodies more digital or digitally integrated, we do not lose what constitutes the essence of our humanity – autonomy, freedom of will, and the right to a subjective perception of the world?

Minimizing the risk of exploitation and the hacking of our identity

We live in a time when our privacy is very often traded for convenience and higher productivity. The omnipresence of technological tools that aim to make our lives easier comes with a price that we pay every day but may not be fully aware of. The growing capabilities of data-powered tools might result in adversarial use of the technology, leading to the hacking of our identity. Biometric identities, currently used for security purposes, could become a powerful weapon in the hands of hostile actors once stolen – identities might be duplicated, or authentication systems cheated (for example by generating a face that can pass as someone else’s) and used for criminal activities. However, we are not able to slow down the march of technology. We can come up with regulations and laws, but the technology is evolving too quickly, and there will always be countries around the world that use it to their advantage. There is a need for a global and systemic conversation: how should technologies such as facial recognition be used, what are the dangers, and what are the chances they will be manipulated in the future with deepfakes?

A proactive approach and the development of new tools for securing our digital presence are needed. Regulations and policies, though very helpful, are not the only solution; innovation as a response to adversarial activities might prove the most effective one. Establishing trusted and secure digital identities (along with control over our privacy) is also critical for the further development of the digital world. At the same time, educating society and raising awareness of how emerging technologies work and what their consequences are should become a priority for policymakers and technology leaders. To solve these challenges, we first need to define and understand them.

 

Challenges and solutions to the trustworthiness of technology – a facial recognition case study

Recent progress in facial recognition technology means we can now observe many different use cases around the world, including live facial recognition – some already crossing the line into surveillance, others effectively testing how willing society is to coexist with such technology on a daily basis. First of all, the technology raises data security concerns: how long is the data kept, where does it reside, and who has access to it? Another challenge is false-negative and false-positive outcomes. Based on trials in London, 30% of criminals will not be detected by facial recognition systems even if their picture is in the database (false negatives), while roughly one in a thousand people scanned will trigger a false positive. Opponents also claim that facial recognition systems are biased and inaccurate for images of black and minority ethnic people because of insufficient training sets.
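
To see why these two error rates matter at scale, here is a rough back-of-the-envelope sketch in Python using the rates quoted above. The crowd size and the number of watchlisted people in it are illustrative assumptions, not figures from the webinar.

```python
# Rough sketch of how the reported error rates play out at scale.
# FALSE_NEGATIVE_RATE and FALSE_POSITIVE_RATE come from the London
# trial figures quoted above; crowd_size and on_watchlist are
# illustrative assumptions.

FALSE_NEGATIVE_RATE = 0.30   # 30% of watchlisted faces are missed
FALSE_POSITIVE_RATE = 0.001  # one false alert per thousand scans

crowd_size = 10_000          # assumed: faces scanned in a day
on_watchlist = 10            # assumed: watchlisted people in that crowd

true_alerts = on_watchlist * (1 - FALSE_NEGATIVE_RATE)
false_alerts = (crowd_size - on_watchlist) * FALSE_POSITIVE_RATE
precision = true_alerts / (true_alerts + false_alerts)

print(f"Expected true alerts:  {true_alerts:.0f}")   # ~7
print(f"Expected false alerts: {false_alerts:.0f}")  # ~10
print(f"Share of alerts pointing at the right person: {precision:.0%}")  # ~41%
```

Under these assumptions, more than half of all alerts would point at innocent passers-by – a base-rate effect that helps explain the public discomfort described below.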

Public perception also puts the large-scale use of this technology in question. More than 50% of society want the government to impose restrictions on its use by the police. Normally people would be very keen to support the police in matters of national security, for example in countering terrorism, but here that is not the case: people feel it invades their privacy. Nearly half (46%) wanted the right to opt out of facial recognition, which brings a challenge in itself – how can cameras on the streets be avoided? And of course, if there were an option to opt out, criminals would also take advantage of it and avoid cameras wherever possible. Among ethnic minorities the percentage of people wanting to opt out was higher (56%), because of the fear of being unfairly targeted. The key question that arises is: does the technology really deliver enough benefit to warrant the discomfort and distrust of large parts of society?

Trustworthiness should be a key priority in building digital identity systems, especially those based on biometric data. Trustworthy systems can be defined as secure, confidential, privacy-preserving, ethical, transparent, fair (unbiased), resilient, robust, repeatable, accurate and explainable. These attributes are easy to list but hard to bring to life: they demand advanced security techniques as well as consensus and international cooperation among various stakeholders.

 

Data protection and ownership

The amount of data society currently generates is enormous and growing every day. Over the past few years we have observed a continuous surge in the deployment of new solutions based on biometric data processing (for authentication or border protection), neuro-measurements, genetic testing and nanotechnology. The number of entities ready to invest in these technologies is growing largely because of the unique character of the data powering them: it is fairly easy to single out a person from a crowd, which sparks debate over personal data protection, big-data ethics and privacy. It is important to underline that not every piece of data is crucial to deliver or sell a product. A clear line needs to be drawn on exactly how much data is required in different use cases – something that might be accomplished both by regulation and by awareness raising (people will simply be unwilling to share their data once they are aware of the privacy and security concerns).

An important question also arises: how can we prevent or stop potential abuse of the data by companies and by governments? One solution discussed theoretically during the webinar was to shift control over privacy away from those who monetize it and towards those who actually produce it. It is worth considering systems in which the data gathered on individuals (both in the online world and in physical space) is subject to property rights. Individuals could then sell or lease their data at a price they choose, rather than at a price imposed in advance (which, in the majority of cases today, is merely free access to services).
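
As an illustration of the data-minimization idea above, here is a minimal sketch of a whitelist-based filter that releases only the fields a given use case actually needs. The field names and use cases are hypothetical examples, not anything specified in the webinar.

```python
# Minimal sketch of whitelist-based data minimization.
# All field names and use cases below are hypothetical illustrations.

REQUIRED_FIELDS = {
    "age_verification": {"date_of_birth"},
    "shipping": {"name", "address"},
}

def minimize(record: dict, use_case: str) -> dict:
    """Return only the fields the given use case actually needs."""
    allowed = REQUIRED_FIELDS.get(use_case, set())
    return {k: v for k, v in record.items() if k in allowed}

user = {
    "name": "Jane Doe",
    "address": "1 Example St",
    "date_of_birth": "1990-01-01",
    "face_template": "<biometric blob>",  # never needed for shipping
}

print(minimize(user, "shipping"))
# {'name': 'Jane Doe', 'address': '1 Example St'}
```

The design choice is deliberate: a field is shared only if it is explicitly whitelisted for a purpose, so anything new – such as a biometric template – stays private by default.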

 

Join us at the panel discussion at CYBERSEC GLOBAL 2020, which will further explore the threats to digital identity.

More information.

Webinar participants:

  1. Bonnie Butlin – Co-founder & Executive Director, Security Partners’ Forum
  2. Michael Earle – Cyber Threat Intelligence Lead, CyberQ Group
  3. Hervé Le Guyader – Deputy for Strategy and International Partnerships, Ecole Nationale Supérieure de Cognitique (ENSC); Vice-chair, Architecture and Intelligent Information Systems, NATO Information Systems Technology Panel; Member, NATO High-Level Group of Experts, Alliance Future Surveillance and Control (AFSC)
  4. Carsten Maple – Professor of Cyber Systems Engineering, WMG; Principal Investigator, NCSC-EPSRC Academic Centre of Excellence in Cyber Security Research; Deputy Pro-Vice-Chancellor (North America), University of Warwick; Fellow, Alan Turing Institute
  5. Christopher Painter – President, The Global Forum on Cyber Expertise; Commissioner, Global Commission on the Stability of Cyberspace; Former Coordinator for Cyber Issues, U.S. State Department
  6. Jayshree Pandya – Founder and CEO, Risk Group LLC
  7. Andrea Rodriguez – CYBERSEC 2019 Young Leader; Researcher and Project Manager, Barcelona Centre for International Affairs (CIDOB); Associate Member, Observatory for the Social and Ethical Impact of Artificial Intelligence (OdiseIA)
  8. Rafal Rohozinski – CEO, SecDev Group
  9. Barbara Sztokfisz – CYBERSEC Programme Director
  10. Paul Timmers – Research Associate, Oxford University; Former Director, Sustainable & Secure Society Directorate, DG CONNECT, European Commission
  11. Omree Wechsler – CYBERSEC 2019 Young Leader; Senior Researcher, Yuval Ne’eman Workshop for Science, Technology and Security, Tel Aviv University

 

 
