AI and data privacy regulation
Regulation of technology in general, and AI in particular, is a major topic at national, international, and global level. The European AI Act came into force on Aug 1st, 2024, and is widely considered the world's most comprehensive AI regulation. It takes a risk-based approach, defining prohibited and high-risk use cases and attaching a significant set of compliance requirements to them. The AI Act has received both praise and criticism, and whichever way you look at it, there is a bumpy implementation path ahead. However, the regulation is in force, so companies need to start planning for the required compliance measures.
Here are the most important enforcement deadlines you need to be aware of [1].
Feb 2nd 2025:
Prohibited use cases. This means that from this date on, the use cases defined as prohibited are in fact prohibited within the EU.
Requirements on AI literacy. From this date, companies developing or providing AI solutions to the market are required to make sure that all relevant personnel receive sufficient training on AI.
Aug 2nd 2025:
If you launch a general-purpose AI model after this date, there is a set of compliance requirements regarding technical documentation, copyright, training data, compute power, and cyber security. If you were quick enough to launch your model before this date, you have until Aug 2nd, 2027, to ensure compliance.
Aug 2nd 2026:
This is the deadline for compliance for all use cases, with a couple of exceptions (product risks under CE requirements and AI components in large-scale systems). It is particularly important to be aware of the rules regarding high-risk use cases, including transparency and risk management.
Aug 2nd 2027 / Dec 31st 2030:
No more exceptions, full enforcement.
How to comply?
A typical question we get from leaders is when and how to approach the compliance beast. Here is a very rough checklist. For details, please refer to legal resources. We recommend checking out The AI Act Explorer [2].
- Are you using or planning to use AI for use cases defined as prohibited? [3] Just do not. It is that simple. Even if you managed to find a way around the legal requirements, the media and your customers will rip your brand apart if it becomes known.
- Are you using or planning to use AI for use cases defined as high-risk? [4] In these cases, you need to start working on compliance requirements:
- You need to have a risk management system and relevant procedures in place. Since AI is a key component in achieving the business strategy, the board of directors and the C-suite executives must take an active part in the risk assessments. This means that your company needs significant AI literacy at both board and C-suite level.
- Data governance is essential: which data are used for training, how were these data collected and processed, what types of biases are present in your data, and how are data privacy and security ensured? This means that good old data engineering and information management are back in fashion (a minimal provenance-record sketch follows this checklist).
- Technical documentation must be created, maintained, and made available to your customers.
- Shit always happens. In addition to preventing and mitigating incidents, you also need to keep meticulous records of them and be prepared to share these with the relevant authorities upon request.
- The black-box design approach is a no-go, and you cannot hide behind "all AI is a black box" - it is not.
- There must be human oversight of the process in which the high-risk AI is applied.
- And just a personal non-legal tip from the authors: if the intent of the use case is to make yourself richer at the cost of the rights of the individual, you should think twice.
- If your AI use cases do not fall under the definitions of prohibited or high-risk, you are probably in the clear with respect to the AI Act. However, you still need to keep good old GDPR in mind. A quick reminder regarding algorithmic processing:
- An individual has the right to be informed about data collection and what the data are used for. This means that if you want to collect more information about your customers or your employees for use in an AI application, you have to inform them correctly about what and why, and obtain their consent in advance.
- An individual has the right to have erroneous data rectified. This may give you a headache if the data are used to train an AI model: simply correcting the data will not automatically update the model. You may in fact have to retrain your entire model if the extent of the correction is sufficiently large (see the bookkeeping sketch after this list). The moral is to make sure that your data collection processes are accurate and robust and not prone to human sloppiness.
- An individual has the right to be forgotten. Keep in mind that data fed to an AI model for training purposes are never forgotten unless you retrain the model on a new data set.
- An individual has the right to restrict and object to automated decision-making. Again, this is about consent and human oversight. In the race to automate and apply AI agents, be aware that customers and employees have the legal right to refuse automated decision-making (a simple routing sketch follows this list).
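To make the data governance point above more concrete, here is a minimal sketch of a dataset provenance record. It is written in Python, and every name in it (DatasetRecord, governance_gaps, the individual fields) is our own illustrative choice - neither the AI Act nor GDPR prescribes any particular format.

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical provenance record for a training dataset. The fields mirror the
# governance questions above: where the data came from, how they were
# processed, which biases are known, and what the legal basis is.
@dataclass
class DatasetRecord:
    name: str
    source: str                      # source system or collection process
    collected_on: date
    processing_steps: list[str] = field(default_factory=list)
    known_biases: list[str] = field(default_factory=list)
    contains_personal_data: bool = False
    legal_basis: str | None = None   # e.g. "consent" or "contract" under GDPR

    def governance_gaps(self) -> list[str]:
        """Return obvious documentation gaps to fix before training on the data."""
        gaps = []
        if not self.processing_steps:
            gaps.append("no documented processing steps")
        if self.contains_personal_data and self.legal_basis is None:
            gaps.append("personal data without a documented legal basis")
        return gaps

record = DatasetRecord("churn-2024", "CRM export", date(2024, 5, 1),
                       contains_personal_data=True)
print(record.governance_gaps())  # both gaps are flagged
```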
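The rectification and erasure points also deserve a concrete illustration. The sketch below keeps count of how much of a training set has been corrected or erased since the model was last trained, and flags when retraining is due. The 5% threshold is purely our own assumption - neither GDPR nor the AI Act sets a number, so calibrate it to your own risk appetite.

```python
# Illustrative bookkeeping for rectification and erasure requests against a
# training set. The retraining threshold is an assumed parameter, not a rule
# taken from GDPR or the AI Act.
class TrainingSetLedger:
    def __init__(self, total_records: int, retrain_threshold: float = 0.05):
        self.total_records = total_records
        self.retrain_threshold = retrain_threshold
        self.changed_since_training = 0  # rectified or erased records

    def rectify(self, n: int = 1) -> None:
        # Correcting the source data does not update an already-trained model.
        self.changed_since_training += n

    def erase(self, n: int = 1) -> None:
        # Erased records stay baked into the current model until retraining.
        self.changed_since_training += n
        self.total_records -= n

    def retraining_due(self) -> bool:
        return self.changed_since_training / self.total_records >= self.retrain_threshold

ledger = TrainingSetLedger(total_records=100_000)
ledger.erase(n=6_000)
print(ledger.retraining_due())  # True: the trained model no longer matches the data
```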
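Finally, a hypothetical gate illustrating the point on automated decision-making: if the data subject has objected, the case is routed to a human reviewer instead of the model. Again, all names are illustrative, and a real implementation would plug into your case-handling workflow.

```python
from dataclasses import dataclass

# Hypothetical gate in front of an automated decision. The objection flag would
# come from your consent or preference records in practice.
@dataclass
class Case:
    case_id: str
    objects_to_automated_decisions: bool

def queue_for_human_review(case: Case) -> str:
    # Placeholder: in practice this would enqueue the case in a review workflow.
    return f"case {case.case_id}: routed to human reviewer"

def decide(case: Case, model) -> str:
    if case.objects_to_automated_decisions:
        return queue_for_human_review(case)
    return model(case)

# A trivial stand-in model for illustration.
print(decide(Case("A-17", True), lambda c: "approved"))
```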
Selected AI Regulations Outside the EU
Norway, the home market of the authors of this Outlook, is an EEA member state, which means that the AI Act will be incorporated at some point, although it is currently unknown when. Meanwhile, the guardrails of good old data privacy (GDPR) still apply.
As of January 2025, federal AI regulation in the US is up in the air. POTUS has revoked the executive order from former President Biden while at the same time ramping up investments. State-level regulation, on the other hand, is an evolving patchwork covering AI, data privacy, and cyber security. Although POTUS and financial backers in Silicon Valley call for an absolute minimum of regulation, there are strong movements with different perspectives on the need for regulation.
China has a comprehensive set of regulations around data privacy, AI, and cyber security. From a European perspective, we often tend to believe that China does not regulate the use of data, but that is not the case. The Chinese data privacy law has similarities with the European GDPR, and the Chinese AI regulations place significant focus on transparency of training data, copyright, and deep fakes.
For more details on global AI regulations, see https://www.techieray.com/GlobalAIRegulationTracker