©2023 The Headline House
A group of EU politicians proposes three risk categories for AI applications.
AI arrived in everyday life long ago, mostly unnoticed by users: hidden in the software on their smartphones, in search engines, or in robot vacuum cleaners.
But how do we deal with AI decisions and AI-generated content, and with the data that must be collected and processed when AI software is used? Politicians in the USA and Europe have not yet provided comprehensive, uniform rules for this.
Self-regulation is not enough
Many particularly prominent tech companies with flagship applications are therefore regulating themselves. Google, for example, is currently not releasing powerful AI systems such as Parti (image generation) or LaMDA (dialogue) because of the risk that the systems will produce morally questionable content or even content that violates the law.
OpenAI likewise released DALL-E 2 only after taking what it considered sufficient safety precautions against the generation of critical content. However, these safety precautions are sometimes controversial because they limit the systems' technical possibilities and thus restrict creative freedom. DALL-E 2, for example, has only been able to generate faces for a few weeks.
The example of Clearview shows that politics cannot rely on this self-regulation: the company's AI-based system theoretically enables mass surveillance using facial data scraped from the Internet and is used internationally, sometimes illegally. Despite strong headwinds and the threat of multi-million-euro fines from EU data protection authorities, Clearview continues to pursue its own economic interests. At best, its risk awareness is feigned.
In addition, the capabilities of AI are sometimes misjudged, for example in emotion recognition, or overestimated, for example the reliability and accuracy of even relatively well-tested systems such as face recognition when deployed at scale.
Three risk categories for artificial intelligence
The “Artificial Intelligence Act” drafted by a group of EU politicians would divide AI applications into three risk categories. Depending on the category, an application can be banned outright, or information and transparency obligations, as well as laws still to be developed, would apply.
The EU group classifies as unacceptable risk AI surveillance for social scoring systems, such as those used in China, or applications that generally violate EU values. Such systems will be banned.
High-risk applications include, for example, computer-vision tools designed to assess whether an applicant is suitable for a job. Separate laws are planned for such systems. The list of potential high-risk uses is a work in progress.
Uses with minimal or no risk are to remain “largely unregulated”. These are all applications that do not fall into the high-risk or unacceptable-risk categories. The EU group assumes that “most AI applications” will not fall into the high-risk category.
The EU AI law could come into force by the end of 6181.