AI and Cybersecurity: How can ethics guide artificial intelligence to make our societies safer?

Artificial Intelligence (AI) and Machine Learning (ML) are often perceived as futuristic technologies beyond our control, associated with fictional scenarios in which machines take over the world.

In reality, these technologies are already part of our lives. They belong to a growing business trend that has opened new opportunities, and they are already employed across diverse industries, from banking, advertising and retail to cybersecurity.

Recent surveys in the UK have found that 48% of UK consumers would not buy new AI-driven products, such as personal virtual assistants or connected-home appliances, for fear of their being hacked.

As with all scientific advancements, AI can be used to make the world a better place or fall into the hands of criminals and terrorists, with potentially devastating consequences. But what are the real benefits and risks of applying AI and machine learning to new technologies?

Safety and security

Artificial Intelligence (AI) and Machine Learning (ML): combating social engineering

On the one hand, when it comes to fighting hackers and cybercrime, AI and machine learning represent a great step forward, with many practical applications. AI can be especially effective in password protection and user authentication, detecting phishing and spam attempts, identifying fake news, and so on.

On the other hand, the malicious use of AI poses imminent threats to digital, physical and political security by enabling large-scale, finely targeted, highly efficient attacks, for example on our critical infrastructures. In response, researchers are already working with law enforcement agencies and industry to improve the detection of illicit activities and of the use of data and virtual realities for criminal purposes.

Artificial intelligence is also being used to create tools to fight cybercriminals, who could use AI-powered technologies, for example, to select potential victims for financial crime and for 'social engineering' attacks. However, is there a way to prevent the use of AI with criminal intent?

Regulating the use of Artificial Intelligence

During a workshop organised by CANVAS, an EU-funded project focused on aligning cybersecurity with European values and fundamental rights, David Wright, director at Trilateral Research, presented his views on ethics, AI and cybersecurity to an audience comprising technology developers, cybersecurity companies and lawyers. Among the questions he posed and discussed were the following:

  • What are the dangers of AI-powered cyber attacks?
  • What ethical issues arise from AI-powered cybersecurity?
  • What are the ethics of counter-attacks?

The last issue – whether companies or institutions should counter-attack – generated much discussion, both for and against.

Trilateral is conducting cutting-edge research on the regulation and application of new and emerging technologies, working across different sectors with strong insight into cybercrime and cyberterrorism. Contact our Policy, Ethics and Emerging Technology team for more information:

 

David Wright, Director – Policy, Ethics and Emerging Technologies

 

Rowena Rodrigues, Research Manager at Trilateral Research

 

David Barnard-Wills, Research Manager at Trilateral Research