2025 marks the rise of autonomous artificial intelligence agents. These systems are transforming productivity by executing business tasks independently, but they also raise legal questions about liability when things go wrong. The EU Artificial Intelligence Act (Regulation (EU) 2024/1689) introduces a regulatory framework that companies must understand to mitigate legal risk.
What Are Autonomous AI Agents?
Autonomous AI agents go beyond basic automation. Workflow tools such as n8n, Make.com, and Zapier have evolved into systems capable of making complex decisions without human intervention. These agents can draft contracts, process payments, manage business communications, or handle sensitive data using AI-driven logic.
The key feature distinguishing them from traditional automation is their ability to adapt and make independent decisions, which may qualify them as high-risk systems under Article 6 of the Regulation.
Obligation for Effective Human Oversight
AI systems operate under predefined criteria, but they can still hallucinate or cause harm. Article 26.2 requires companies to assign oversight to individuals with the training, competence, and authority to intervene. Oversight must be effective and proportionate to the system's level of autonomy: businesses must implement mechanisms to monitor system behavior, detect anomalies, and reverse decisions when necessary.
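As a purely illustrative sketch of what such a mechanism could look like in practice, the Python fragment below routes any flagged output to a human reviewer before it takes effect. The `looks_anomalous` and `escalate` hooks are hypothetical placeholders for whatever checks and escalation process a deployer defines; the Regulation itself prescribes no particular implementation.

```python
import logging
from typing import Callable

logger = logging.getLogger("ai_oversight")

def supervised_step(agent_output: str,
                    looks_anomalous: Callable[[str], bool],
                    escalate: Callable[[str], str]) -> str:
    """Run one agent step under human oversight.

    `looks_anomalous` is any deployer-defined check (schema validation,
    confidence threshold, keyword screen); `escalate` hands the output
    to a trained reviewer who can amend or reject it.
    """
    if looks_anomalous(agent_output):
        logger.warning("Agent output flagged for human review")
        return escalate(agent_output)  # human decision replaces the raw output
    return agent_output
```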
Types of Business Liabilities
Liability Due to Lack of Human Review
When an agent generates critical content like contracts or legal documents without subsequent review, the company assumes responsibility for errors. The Regulation does not exempt companies from liability when relying solely on automated outputs.
Systems that perform irreversible actions, such as wire transfers or database modifications, carry heightened liability exposure. Article 14.4 requires that the individuals responsible for oversight be able to refuse, override, or reverse decisions made by the system.
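Again purely as an illustration, a deployer-side approval gate for irreversible actions might look like the following sketch. The action types, function names, and refusal logic are assumptions made for the example, not terms taken from the Regulation.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class PendingAction:
    kind: str      # e.g. "wire_transfer" or "db_update"
    details: dict

# Hypothetical set of action types the deployer treats as irreversible
IRREVERSIBLE = {"wire_transfer", "db_update", "contract_dispatch"}

def execute_with_human_gate(action: PendingAction,
                            run: Callable[[dict], None],
                            human_approves: Callable[[PendingAction], bool]) -> bool:
    """Irreversible actions execute only after explicit human approval;
    the reviewer's refusal drops the action entirely."""
    if action.kind in IRREVERSIBLE and not human_approves(action):
        return False  # human refused or overrode the agent's decision
    run(action.details)
    return True
```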
Cybersecurity Vulnerabilities
Article 15.5 sets cybersecurity obligations requiring systems to be resistant to external attacks or manipulation. Companies are liable for damages caused by unpatched vulnerabilities, including data poisoning, model tampering, or adversarial attacks.
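One deliberately simple example of such a measure, offered only as a sketch: verifying at load time that a deployed model artifact matches the hash recorded at release, so tampering is detected before the system acts. Real deployments would layer many further controls (access management, input screening, adversarial testing); the function name and hash-at-release workflow here are assumptions for illustration.

```python
import hashlib
from pathlib import Path

def model_is_untampered(model_path: Path, expected_sha256: str) -> bool:
    """Compare the deployed model artifact's hash against the value
    recorded at release time; a mismatch signals possible tampering."""
    digest = hashlib.sha256(model_path.read_bytes()).hexdigest()
    return digest == expected_sha256
```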
Allocation of Responsibility
Article 25 clarifies the responsibility chain: developers are liable for design flaws, lack of warnings, poor technical documentation, or non-compliance with safety standards.
Business users are responsible for misuse, lack of oversight, failure to implement recommended security measures, and unauthorized modifications of the system.
Compliance Recommendations
Companies should implement clear human oversight protocols and keep detailed logs of automated decisions, as Article 12 requires. Human intervention must remain possible at all times, especially for high-stakes decisions, and technical documentation must be kept up to date.
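A minimal sketch of such a decision log follows, assuming a simple append-only JSON Lines file. The field names are illustrative: Article 12 requires that high-risk systems automatically record events, but it prescribes no particular format.

```python
import json
import time
from pathlib import Path

LOG_PATH = Path("agent_decisions.jsonl")  # hypothetical append-only log file

def log_decision(agent_id: str, action: str, inputs: dict,
                 output: str, reviewer: str | None) -> None:
    """Append one traceable record per automated decision: what the
    agent did, on what inputs, and who (if anyone) reviewed it."""
    record = {
        "timestamp": time.time(),
        "agent_id": agent_id,
        "action": action,
        "inputs": inputs,
        "output": output,
        "human_reviewer": reviewer,  # None means no human reviewed this step
    }
    with LOG_PATH.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record, ensure_ascii=False) + "\n")
```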
Our legal team’s experience in technology law has shown that prevention and compliance are key to avoiding future liability. It’s essential for companies to assess AI system risks and maintain continuous supervision.
Does your company use autonomous AI agents? At Navas & Cusí, our legal team has extensive experience in tech law. If you’re looking for legal experts in emerging technologies, we can help assess your compliance level and design effective oversight protocols under the European AI Act. We offer appointments in Madrid, Barcelona, Brussels, or by video call.
Frequently Asked Questions on AI Agents and EU Regulation
What legal responsibilities does a company have when using AI agents?
The company must supervise and validate the agent's outputs. Without effective human oversight, it may be held liable for errors, security vulnerabilities, or harmful automated decisions.
How are AI agents different from traditional automation tools?
Autonomous AI agents don’t just automate tasks—they also make complex decisions without direct human input. This makes them potentially high-risk under the EU Regulation 2024/1689.
What does the Regulation say about human oversight?
Article 26.2 mandates assigning trained and authorized individuals to supervise and, if needed, override or reverse decisions. Passive monitoring isn’t enough—active oversight is required.
What happens if the AI agent sends a wrong contract?
The company remains liable. Using AI does not waive due diligence obligations. Human review is advised, especially for legal, financial, or contractual outputs.
Who is responsible for cybersecurity vulnerabilities in AI agents?
Article 15.5 holds companies accountable for system resilience. If harm arises from known but unpatched flaws, the company may be held liable.
How is responsibility shared between developers and business users?
Developers are liable for design or documentation issues. Business users are responsible for misuse, lack of oversight, or unauthorized changes. Article 25 defines both roles clearly.