
AI for good – fostering the ethical use of artificial intelligence

Smart Information Systems (SIS) hold great promise and raise significant concerns. The application of artificial intelligence and machine learning via deep neural networks, based on big data analytics, forms the backbone of an envisioned smart world in which these systems will power smart technologies, drive smart cities and transform our lives – hopefully for the better.

Yet alongside the enthusiasm for what is sometimes called the “fourth industrial revolution”, these technologies raise a number of concerns with regard to their ethical consequences and their implications for human rights.

In a Digital Trends article, David Wright, director of Trilateral Research, expresses his concerns regarding the ethical use of artificial intelligence, looking at the case of “deepfakes” – AI-manipulated videos in which people’s likenesses are repurposed in once unimaginable ways.

“I think people should be deeply concerned about deepfake technology. It will continue to evolve and become even more difficult to distinguish between what is real and what isn’t. Porn sites will continue to exploit celebrities — voices and images — with deepfake technologies. Cyber gangs will inevitably use deepfake technology for ultra-sophisticated spear phishing. We can expect right-wing politicos and their henchmen to use it to deceive voters and undermine the reputations of their opponents. They will be aided and abetted by foreign powers interfering in electoral processes.”

There are numerous initiatives that aim to find ways to identify and address the potential negative impact of emerging technologies. What is currently missing in this debate is an integration of empirical insights, their theoretical evaluation and the translation of such knowledge into practical outcomes.

To contribute to this debate, the EU-funded SHERPA project is analysing the ethical dimensions of smart information systems, working with a broad range of stakeholders to clarify and represent the ethical, human rights and security issues these systems raise.

The project collects existing approaches and develops novel ways to address these issues, and advocates the solutions that are most socially acceptable, desirable and sustainable.

Forum on Ethics and Human Rights in Smart Information Systems

On 19 August 2019, David Wright presented two of the future scenarios developed for the SHERPA project at the Ethics and Human Rights in Smart Information Systems forum of the IEEE Smart World Congress (19-23/08/2019, Leicester, UK).

During his talks, David Wright focused on what the landscape of information warfare and mimicking technologies might look like six years from now.

As part of the SHERPA project, Trilateral has developed five scenarios on technologies that mimic people, information warfare, driverless cars, predictive policing and learning buddy robots.

Each scenario introduces the technologies and applications that may be available in 2025, illustrates how they may be used, examines their ethical, legal, social and economic impacts, and offers recommendations for reaching a desired future and avoiding an undesired one.

IEEE forum 2019

Chaired by the SHERPA project coordinator Bernd Stahl, the forum brought together a multidisciplinary community of scholars to contribute to the current high-level discussion of ethical and human rights issues of smart information systems.

The session included presentations by invited speakers, SHERPA partners, members of the SHERPA Stakeholder Board and academics selected through the Call for Papers:

  • Invited Talk: The Challenge of Practical Ethics, Declan Brady
  • Technofixing the Future: Ethical Side Effects of Using AI and Big Data to Meet the SDGs, Mark Ryan; Laurence Brooks; Tilimbe Jiya; Kevin Macnish; Bernd Stahl; Josephina Antoniou
  • Ethics and Design in the Smart Bikeshare Domain, Robert Bradshaw
  • What If We Had Fair – People-Centred – Data Economy Ecosystems? Jani Simo Sakari Koskinen; Sari Knaapi-Junnila; Minna Rantanen
  • Embedding Private Standards in AI and Mitigating Artificial Intelligence Risks, Martijn Scheltema
  • Creating Companions for Senior Citizens with Technologies That Mimic People, David Wright
  • AI Management: An Exploratory Survey of the Influence of GDPR and FAT Principles, Chiara Addis; Maria Kutar
  • Automated Automobiles in Society, Olli Heimo; Kai Kimppa; Antti Hakkala
  • AI and Information Warfare in 2025, David Wright
  • Internet Filtering: Solution to Harmful and Illegal Content? Marie Eneman

The forum was co-organized by three EU projects engaged in the ethics and human rights of AI – SIENNA, SHERPA and PANELFIT – as well as the UK Observatory for Responsible Research and Innovation in ICT (ORBIT), as part of the IEEE Smart World Congress.

PANELFIT, SHERPA and SIENNA are working together with stakeholders to improve the ethical, human rights and legal frameworks for information and communication technologies (ICT), big data analytics, artificial intelligence (AI) and robotics.

The three projects aim to improve existing ethical and legal frameworks in a structured way and will deliver specific and complementary guidance for software developers, industry, policymakers, researchers and citizens.

Read about our work in SHERPA and SIENNA.

 

For more information about our work in this research area please contact our team.

David Wright, Director - Policy, Ethics and Emerging Technologies

Tally Hatzakis, Senior Research Analyst at Trilateral Research

Rowena Rodrigues, Research Manager at Trilateral Research
