The EU Artificial Intelligence Act (2024/1689) seeks to ensure that AI systems are safe, transparent and subject to human oversight.
The AI Act requires providers to manage risks, ensure data traceability, clearly inform users and apply safeguards against bias and discrimination.
Depending on the risk category—unacceptable, high, limited, or minimal—it imposes proportional obligations, with an emphasis on accountability and the protection of fundamental rights.
Our AI agents perform only informational and advisory functions; they do not make automated decisions that have legal or similarly significant effects on individuals.
By their nature, our services fall primarily into the "minimal" risk category under the regulation; in a few cases they may fall into the "limited" risk category, and in no case do they fall into the higher-risk categories.
In addition, the providers of the language models we use—OpenAI, Anthropic and Google—have published commitments on transparency, technical documentation and responsible-use policies that align with the requirements of Regulation (EU) 2024/1689.
These providers are actively working to mitigate biases and hallucinations, although they acknowledge that such risks cannot be eliminated entirely.