1️⃣ Precise scope: we aligned the #AI definition with the OECD's and underlined that the #AIAct was mainly proposed to address legal gaps caused by machine learning and deep learning. Simple software is therefore not covered.
🐦🔗: https://n.respublicae.eu/AxelVossMdEP/status/1656563802056400900
3️⃣ GPAI and value chain: while GPAI providers do not need to fulfil high-risk requirements, we included them in the scope of the AI Act by adding a new article that requires them to do extensive testing and provide comprehensive documentation.
🐦🔗: https://n.respublicae.eu/AxelVossMdEP/status/1656564485778186247
4️⃣ Regulatory Sandboxes: we strengthened the provisions and increased participants' possibilities to experiment. Liability questions for participants were clarified, as was the fact that completing the sandbox brings the benefit of a presumption of conformity.
🐦🔗: https://n.respublicae.eu/AxelVossMdEP/status/1656564928948621317
5️⃣ No expensive AI agency: instead, we agreed on establishing an AI Office with a consultative and advisory role but no decision-making power. Enforcement will be carried out by national supervisory authorities together with the Commission.
🐦🔗: https://n.respublicae.eu/AxelVossMdEP/status/1656565037559869441
6️⃣ Better regulation: we identified and removed many legal overlaps and contradictions between the AI Act and other horizontal laws (e.g. the GDPR and DSA) and sectoral legislation (e.g. the MDR and NIS2).
🐦🔗: https://n.respublicae.eu/AxelVossMdEP/status/1656565261489537026
@AxelVossMdEP This is what we wanted! GREAT!
🐦🔗: https://n.respublicae.eu/Andreas_Schwab/status/1656582478457454598
2️⃣ High-risk classification: with our changes, not every use case listed in Annex III will be automatically classified as high-risk. Instead, only those #AI systems that pose a significant risk to health, safety and fundamental rights will qualify.
🐦🔗: https://n.respublicae.eu/AxelVossMdEP/status/1656564235579674624