RT by @MarietjeSchaake: Together with Jaan Tallinn, founding engineer of Skype and Kazaa, I wrote in the EU startup magazine Sifted that EU startups cannot currently trust general-purpose AI model developers. For startups to meet safety standards under the EU AI Act, they will need information and safety guarantees from those developers. They are not getting them: https://sifted.eu/articles/mistral-aleph-alpha-and-big-techs-lobbying-on-ai-safety-will-hurt-startups.
As negotiations on the AI Act unfold, there have been efforts to replace hard regulation of general-purpose AI models with soft codes of conduct. France, Germany and Italy recently wrote a joint non-paper on the regulation of such models, advocating for "mandatory self-regulation through codes of conduct". This may benefit big tech companies, and perhaps the French Mistral AI and the German Aleph Alpha, but it will force European startups, such as downstream developers and deployers of general-purpose AI models, to pick up the tab for legal and compliance costs.
Please share within your European startup networks to help them understand the situation better and to ensure that the original developers of general-purpose AI models do bear legal responsibility.
🐦🔗: https://nitter.cz/RistoUuk/status/1729784598412812524#m
[2023-11-29 08:49 UTC]