The EU Artificial Intelligence Act (AI Act) regulates the development and use of artificial intelligence, with the aim of ensuring ethical and safe AI solutions. It classifies AI systems by risk level and imposes strict requirements on high-risk systems, such as AI-powered medical software.
The AI Act applies to a wide range of stakeholders involved in developing, providing, or using AI systems. Its impact on an individual company depends on the risk classification of the AI systems in question and on whether the company uses AI internally or provides AI systems as a service. Companies must determine which AI systems may be used and for what purposes, what data may be fed into them, and ensure that personnel are trained in AI usage. Transparency requirements mandate that AI-generated content be clearly labeled so users can recognize AI-produced material.
As AI evolves rapidly, the legal issues surrounding its use are becoming increasingly complex. We offer expert guidance and support in the following areas:
- Implementation and compliance of AI systems: we assist clients in evaluating risk levels and addressing questions related to the adoption of AI.
- Drafting guidelines and strategies: we develop internal guidelines and strategies for the use of AI tools.
- AI-related data protection issues: when personal data is processed using AI, compliance with the General Data Protection Regulation (GDPR) is required. We conduct Data Protection Impact Assessments (DPIA) for AI, as mandated by the GDPR.
- Contracts and licenses: we draft and review agreements related to the development and implementation of AI technologies.
- Training: we provide training for employees on AI usage and the associated legal frameworks.
We help our clients leverage AI safely and efficiently, ensuring compliance with EU regulations.
Our team
Saara Ryhtä, Kimmo Oila, Erika Leinonen, Kimmo Suominen, Marko Moilanen, Mikko Koskinen and Robin Eklund.