The EU’s AI Act
The first official EU Parliament gathering to discuss the AI Act showed broad agreement on administrative procedures and regulatory standards. Drafted by the Commission at the Parliament’s request in 2020, the Act proposes to regulate AI systems with the dual aim of boosting innovation in the EU and improving security, using a risk-based approach with four tiers (unacceptable, high, limited, and minimal risk). Applications deemed an ‘unacceptable risk’ would be prohibited outright. The discussion is being led by two key Parliament committees: the Committee on the Internal Market and Consumer Protection and the Committee on Civil Liberties, Justice and Home Affairs.
So far, discussion has focused on facial recognition software: the Greens and the Socialists & Democrats groups are backing a ban, as are 50 European interest groups. These groups oppose mass biometric surveillance because of its documented racial bias, and they point to the frequent abuse of such AI by authoritarian regimes to track minorities and dissidents. Realistically, the regulation is still in its early stages, but Spain is pushing for adoption in 2023, with the aim of giving European startups an edge in AI innovation.
While approval of the AI Act is pending, further regulations are being proposed and adopted. The Digital Services Act was published in the Official Journal of the European Union in October, after being agreed and approved by the Council and Parliament. It establishes responsibility and accountability, in the form of fees and audits, for providers of ‘intermediary’ services (social media, online marketplaces, and search engines), to combat the sale of illegal products and services, the spread of illegal content, and targeted advertising aimed at minors.
The Digital Markets Act was likewise published in the Official Journal in October. It is the first regulation of its kind worldwide, aiming to ensure fair and contestable markets in the digital sector. (With it already in force, here’s a list of dos and don’ts.)
The EU Commission published a proposed Cyber Resilience Act on 15 September, with the aim of ensuring safe hardware and software products for EU citizens and reducing the cost of cybercrime, currently estimated at €5.5 trillion worldwide. The idea is to give manufacturers a direct incentive to invest in secure design, rather than relying solely on the reputational benefit of offering ‘safe’ products. The draft is now in the hands of the Parliament and Council for revision.
An AI Liability Directive was also proposed by the EU Commission in September, as a complement to the AI Act, to make AI companies accountable for damage and discrimination caused by their systems. The draft still needs approval from EU governments and the European Parliament.
Further articles
- Europe’s General Court has ruled that Google must pay a €4 billion fine for antitrust violations, after it was found to have abused the dominance of its Android system Euronews
- The UK and the Netherlands are following suit, claiming up to €15 billion in damages over Google’s adtech Euronews
- A third of scientists working on AI say it could cause catastrophe on the scale of nuclear war New Scientist
- Unfair AI: How FTC intervention can overcome the limitations of discrimination law SSRN
- US President Biden unveiled a new AI Bill of Rights; it’s about time Tech Review
- A Danish AI-led political party, the Synthetic Party, is running in the next election to represent non-voters Vice