04 Dec Artificial intelligence and Big Data – use responsibly
The development of powerful new technologies based on Artificial Intelligence and Big Data opens up new opportunities, but it also gives rise to many concerns about unintended consequences and the possible misuse of these technologies to harm, rather than improve, our society.
We started by looking at issues concerning privacy and data protection, but we soon realised that these are far from the only ones.
Alongside our research partners, we have identified ethical concerns about fairness, justice, hidden biases in big data, equality, public security, duty of care to vulnerable members of society, transparency, and many more.
There is a need to understand these issues comprehensively and to find mechanisms for addressing them that involve stakeholders, including not only decision-makers but also civil society, to ensure that the benefits of these technologies outweigh their possible disadvantages.
While this is certainly an ambitious endeavour, an approach based on responsible research and innovation (RRI) offers a promising and realistic way of dealing with these challenges.
Regulating AI systems
Artificial intelligence (AI) and big data analytics are the key technological drivers of what we call “smart information systems” (SIS). Examples of such technologies include Google’s search engine and translation tools, AI algorithms used in Facebook and other social media, Amazon’s Alexa home assistant, healthcare and surgery robots, personal fitness applications, virtual and augmented reality, and many more.
Although privacy and data protection are certainly the most prominent, numerous other issues surrounding these technologies are frequently discussed. There is much debate about accountability and transparency in machine learning and in the development of AI algorithms. For example, many experts have noted that algorithms reflect the biases and mindsets of their creators, even when those biases were not intended.
The potential misuse of these powerful technologies becomes even more evident when we consider the use of algorithms to manipulate consumer behaviour through targeted advertising, or to interfere in our democratic processes by manipulating public opinion and voter intentions, as the Cambridge Analytica scandal demonstrated.
As a result, academics and parliamentarians have called for the regulation of AI systems, because current ethical and legal principles are deemed insufficient in this area.
How can we address the ethical issues arising from AI?
Recently, Irish researchers have proposed an "Ethics by Design" methodology for AI research projects, while American researchers have proposed a way of detecting bias in black-box models. Others believe that students of AI, computer science, and data science, as well as AI practitioners, should receive training in ethics and security.
However, what seems to be missing is a way of aligning the various remedies and ensuring that they are joined up and create synergies. The six keys to Responsible Research and Innovation:

- public engagement
- science education
- gender equality
- open access
- ethics
- governance

offer a way to safeguard ethical principles and human rights in the development and deployment of such technologies.
Although a relatively novel concept that has gained prominence since about 2010, Responsible Research and Innovation is an attempt to rethink research and innovation governance with a view to ensuring that processes as well as outcomes are acceptable, desirable, and sustainable.
David Wright and Bernd Stahl's aim in writing Ethics and Privacy in AI and Big Data: Implementing Responsible Research and Innovation is to raise awareness of the need to integrate the various viewpoints and concerns regarding the advantages and downsides of Smart Information Systems, and to show how Responsible Research and Innovation offers a perspective that allows for such an integration.
Responsible Research and Innovation implies that societal actors (researchers, citizens, policy makers, business, third sector organisations, etc.) work together during the whole research and innovation process in order to better align both the process and its outcomes with the values, needs and expectations of society.
The SHERPA project adopts this approach. In collaboration with stakeholders, the project is investigating, analysing and synthesising our understanding of the ways in which Smart Information Systems impact ethics and human rights issues.
It will develop novel ways of understanding and addressing the challenges of Smart Information Systems, evaluate these with stakeholders, and advocate for the most desirable and sustainable solutions.
To contribute to this debate, Trilateral Research is mapping the ethical and human rights challenges of Smart Information Systems through case studies and future scenarios, as well as through a Delphi study that will involve more than 60 European experts in a two-step survey to explore regulatory options for the future.
Please feel free to contact us for more information on this research area:
David Wright, Director - Policy, Ethics and Emerging Technologies
Tally Hatzakis, Senior Research Analyst at Trilateral Research