Are you boosting your business with AI? If the answer is YES, the AI Act calls upon you to develop an action plan to meet the obligations it sets out.
The AI Act is the very first comprehensive legal framework for AI globally. In this article, we delve into the essentials of the AI Act and equip you with practical action points to keep your business compliant.
Understanding the AI Act
The AI regulation, often referred to as the AI Act, establishes a uniform legal framework for the development and use of specific AI systems in the European Union. While many AI systems offer significant benefits with minimal risk, others require careful handling to mitigate potential negative outcomes.
The primary aim of this regulation is to promote trustworthy AI within the European Union and beyond by ensuring that AI systems respect fundamental rights, safety, and ethical principles, and by addressing the risks posed by highly powerful and impactful AI models. These rules also apply to businesses based outside the European Union, provided that they offer services within EU territory.
Using a risk-based approach, the AI Act assigns obligations to providers and deployers based on the risk level associated with their AI systems. The regulatory framework categorizes AI systems into four risk tiers: minimal risk, limited risk, high risk, and unacceptable risk.
Do you use chatbots, like ChatGPT, in your company’s operations? A lack of transparency in AI usage falls under ‘limited risk’ in the AI Act. For these systems, it’s essential to ensure that users know when they’re interacting with a machine, allowing them to make informed decisions. Providers will also have to ensure that AI-generated content is identifiable. Additionally, AI-generated text published to inform the public on matters of public interest must be explicitly labelled as artificially generated, and the same applies to AI-generated or manipulated audio and video content, such as deep fakes.
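To make this transparency requirement more tangible for developers, here is a minimal sketch of how a chatbot integration might disclose the machine interaction and label its output. Everything in it (the function names, the `generate_reply` placeholder, and the wording of the notices) is our own illustrative assumption, not wording or an API prescribed by the AI Act or by any particular chatbot library.

```python
# Minimal, illustrative sketch of the transparency obligations described above.
# The helper names (generate_reply, AI_DISCLOSURE) are hypothetical, not taken
# from any specific library; adapt them to your own chatbot stack.

AI_DISCLOSURE = "You are chatting with an AI assistant, not a human."
AI_CONTENT_LABEL = "[This text was generated by an AI system.]"


def generate_reply(user_message: str) -> str:
    """Placeholder for the call to your chatbot/LLM backend."""
    return f"Here is an answer to: {user_message}"


def chatbot_response(user_message: str, first_turn: bool) -> str:
    """Return a reply that discloses the machine interaction and labels AI output."""
    reply = generate_reply(user_message)
    labelled = f"{reply}\n\n{AI_CONTENT_LABEL}"
    # Tell users up front that they are interacting with a machine.
    if first_turn:
        return f"{AI_DISCLOSURE}\n\n{labelled}"
    return labelled


if __name__ == "__main__":
    print(chatbot_response("What are your opening hours?", first_turn=True))
```

In practice, the exact form and placement of such disclosures will depend on your product and on the implementing guidance that accompanies the Act.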
At the other end of the scale, AI systems considered a clear threat to the safety, livelihoods and rights of people fall under ‘unacceptable risk’ and will be banned, from social scoring by governments to toys using voice assistance that encourages dangerous behaviour.
Enforcement
In February 2024, the European Commission established the European AI Office, a new EU-level regulatory body tasked with monitoring, overseeing, and enforcing the AI Act’s requirements across the 27 EU member states.
Compliance is crucial to avoid fines, which for the most serious violations can reach up to 35 million euros or 7 percent of the entity’s total worldwide annual turnover, whichever is higher.
Call to Action
If AI is integral to your company’s operations, take the time to carefully evaluate your systems, assess their outputs, and determine the associated risk levels. Make sure you understand and comply with the relevant requirements, including any additional national regulations and sector-specific guidelines. Avoid unlawful practices to prevent fines and minimize harm to others.
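As a starting point for such an evaluation, the sketch below shows one possible way to keep a simple inventory of your AI systems together with the risk tier you have assigned to each of them. The example systems and the tiers attached to them are illustrative assumptions only, not legal assessments.

```python
# Minimal, illustrative sketch of an internal AI system inventory using the
# four AI Act risk tiers mentioned above. The example entries are assumptions
# for illustration; base real classifications on a proper legal assessment.

from dataclasses import dataclass
from enum import Enum


class RiskTier(Enum):
    MINIMAL = "minimal risk"
    LIMITED = "limited risk"
    HIGH = "high risk"
    UNACCEPTABLE = "unacceptable risk"


@dataclass
class AISystem:
    name: str
    purpose: str
    tier: RiskTier


# Hypothetical inventory entries; replace with your company's actual systems.
inventory = [
    AISystem("Website chatbot", "customer support", RiskTier.LIMITED),
    AISystem("Spam filter", "email filtering", RiskTier.MINIMAL),
]

for system in inventory:
    print(f"{system.name} ({system.purpose}): {system.tier.value}")
```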
Let us know in the comments i) which AI systems your company uses and ii) for what purpose. We might discuss the risks of using those systems (as well as tips to minimize those risks) in an upcoming article!