AI mimicking people: ethics and human rights issues

What will the future look like if AI mimics people?

Artificial intelligence and big data analytics have the potential to add greatly to the benefits of information and communication technologies, but they can also have undesirable impacts on ethics and human rights.

Privacy and data protection are the most obvious issues, but they are far from the only ones. Concerns range from questions of fairness and hidden biases in big data all the way to the possibility of truly autonomous machines that may harm people. A key topic of debate is the social consequences that these technologies may have on the future of work and employment.

Regulating information and communication technologies

The use of hardware, software, and applications to perform big data analytics and to mimic human cognitive capabilities has pushed progress in information and communication technologies in recent years. Given their significant social and economic impact, these technologies require regulatory supervision and ethical and social assessment.

Trilateral Research, as part of the SHERPA project, is addressing these issues by involving stakeholders, including civil society, to ensure that the benefits of these technologies outweigh their disadvantages.

AI that mimics people

On 3 July 2018, Trilateral Research organised the first SHERPA scenario planning workshop, focussed on “AI that mimics people”, which took place at the Brussels office of Innovate UK. 22 representatives from 8 European countries, representing 17 different organisations (from academia, industry, civil society, standards bodies, and ethics committees), attended the workshop, along with policy officer Albena Kuyumdzhieva and other representatives of DG Research.

The workshop was the first activity in a series of planned scenario iterations with a wider network of stakeholders, which will enable the consortium to unravel the drivers of technology innovation and their societal impact.

AI that mimics people future scenario

The Impact of Future Scenarios

Five scenarios will explore emerging smart information systems that are likely to be implemented and socially relevant in the medium term. Each scenario will examine the key factors affecting ethical, legal, social and economic aspects of everyday life in 2025, and will describe how technology drivers can be organised, evaluated and prioritised to identify indicators that provide early warning of how these technologies will develop and inform policy. The scenarios focus on:


  • AI that mimics people (3 July 2018, Brussels, Belgium)
  • AI in education (17-18 September 2018, Brussels, Belgium)
  • AI in defence (17-18 September 2018, Brussels, Belgium)
  • AI in law enforcement (25-26 September 2018, Enschede, Netherlands)
  • AI in transport (25-26 September 2018, Enschede, Netherlands)

SHERPA aims to reflect on advanced technologies that are sufficiently grounded in reality to help policymakers and other stakeholders address the issues raised, by offering actionable information.

Read more about the SHERPA project here.

If you would like to be involved in one of our future scenario planning workshops and be part of the consultation process, click here, or feel free to contact our team.

Tally Hatzakis, Senior Research Analyst at Trilateral Research

Rowena Rodrigues, Research Manager at Trilateral Research

The SHERPA project, Shaping the ethical dimensions of smart information systems (SIS) – a European perspective, is an EU-funded project under the European Union’s H2020 research and innovation programme, grant agreement No. 786641.