Ethical impacts from AI “unimaginable”, says EU think tank


AI: carries risks as well as benefits

Artificial intelligence (AI) software poses risks to society including tracking and identifying individuals, ‘scoring’ people without their knowledge, and powering lethal autonomous weapons systems, an influential EU group has warned.

The risks were outlined by the European Commission’s high-level expert group on AI when it published its ethics guidelines for trustworthy AI this week.

At the same time, it launched a pilot project to test the guidance in practice.

With the growing use of AI in legal practice, many of the issues raised will resonate with lawyers.

Academic lawyers sat on the group, including experts from the universities of Birmingham and Oxford.

Several years in the making, the guidelines are the final version of proposals made in draft at the beginning of the year, which urged that AI be both human-centric and trustworthy.

The EU’s ambition is to boost spending on AI to €20bn (£17bn) annually over the next decade. The bloc is currently behind Asia and North America in private investment in AI.

For AI to be trustworthy and thereby gain public acceptance, the group recommended that it have three components: it should be lawful, complying with all applicable laws and regulations; it should be ethical; and it should be robust from a technical and social perspective, so that it does not cause unintentional harm.

Those developing and using AI should bear in mind that while the technology could bring benefits, it could also impact negatively on “democracy, the rule of law and distributive justice, or on the human mind itself”.

The experts continued: “AI is a technology that is both transformative and disruptive, and its evolution over the last several years has been facilitated by the availability of enormous amounts of digital data, major technological advances in computational power and storage capacity, as well as significant scientific and engineering innovation in AI methods and tools.

“AI systems will continue to impact society and citizens in ways that we cannot yet imagine.”

Noteworthy risks included face recognition technology, the use of involuntary biometric data – such as “lie detection [or] personality assessment through micro expressions” – and automatic identification, all of which raise legal and ethical concerns.

They also highlighted “citizen scoring in violation of fundamental rights”. Any such system must be transparent and fair, with mechanisms that allow discriminatory scores to be challenged and rectified.

“This is particularly important in situations where an asymmetry of power exists between the parties,” they added.

The final risk highlighted was lethal autonomous weapon systems, such as “learning machines with cognitive skills to decide whom, when and where to fight without human intervention”.

They concluded: “it is important to build AI systems that are worthy of trust, since human beings will only be able to confidently and fully reap its benefits when the technology, including the processes and people behind the technology, are trustworthy.”
