
AI regulation in Latin America (LATAM): Brazil leads

Posted May 22, 2025 · 4 min read
LATAM AI regulation: current rules & what's coming next

AI is gaining momentum across Latin America, transforming industries from banking and healthcare to agriculture and public services. Yet regulatory readiness hasn’t kept pace. As AI adoption accelerates, governments face growing pressure to define legal boundaries, protect citizens’ rights, and create business-friendly innovation environments.

Apart from Brazil, most Latin American countries have yet to establish formal AI governance frameworks. Nations such as Chile, Mexico, Argentina, and Colombia are in the early stages of drafting national strategies but have not enacted binding laws. Others remain largely inactive on the regulatory front.

Much of the region is still wrestling with the implications of existing data protection regimes, like Brazil’s LGPD and Mexico’s Federal Law on Protection of Personal Data. These frameworks consume legal and institutional bandwidth, often delaying progress on AI-specific legislation. As a result, current efforts are fragmented and mostly focused on sectoral oversight, particularly in finance, healthcare, and public services.

However, Brazil has broken new ground by introducing Latin America’s first national AI law. Its framework could serve as a blueprint or at least a motivator for other countries in the region to follow suit.

This article explores where Latin America stands on AI regulation today, with a detailed look at Brazil’s AI Bill and what it signals for the region’s regulatory future.

Brazil

Brazil is leading the charge in Latin America with the region’s first AI law. After the Senate approved it in December 2024, Bill No. 2338/2023 (aka the AI Bill) is set to become the country’s national AI framework, centered on safeguarding fundamental rights and preventing AI-driven discrimination.

Key provisions 

Similar to the European AI Act, Brazil’s AI Bill establishes a tiered, risk-based model for AI systems (a rough triage sketch follows the list below):

  • Excessive risk AI systems (e.g., government-run social scoring systems, mass public surveillance apps, and predictive policing tools) are prohibited outright.
  • High-risk AI systems (e.g., AI-based hiring tools, clinical diagnostic support systems, credit scoring apps) are subject to strict regulations and oversight.
  • Other AI systems (e.g., AI chatbots, recommendation engines, or personalization algorithms) only face basic transparency and accountability obligations.
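
As a first-pass illustration only, the sketch below maps a system description onto the three tiers, using the example use cases above as keyword buckets. The tier names and keywords are drawn from this article, not from the Bill’s legal text, and a real classification requires legal analysis.

```python
from enum import Enum

class RiskTier(Enum):
    EXCESSIVE = "excessive"  # prohibited outright
    HIGH = "high"            # strict regulation and oversight
    OTHER = "other"          # basic transparency and accountability duties

# Keyword buckets taken only from the examples listed above; purely illustrative.
EXCESSIVE_USES = ("social scoring", "mass surveillance", "predictive policing")
HIGH_RISK_USES = ("hiring", "clinical diagnostic", "credit scoring")

def triage_use_case(description: str) -> RiskTier:
    """Rough first-pass mapping of a system description onto the Bill's tiers."""
    text = description.lower()
    if any(term in text for term in EXCESSIVE_USES):
        return RiskTier.EXCESSIVE
    if any(term in text for term in HIGH_RISK_USES):
        return RiskTier.HIGH
    return RiskTier.OTHER

print(triage_use_case("AI-based hiring screener"))       # RiskTier.HIGH
print(triage_use_case("Product recommendation engine"))  # RiskTier.OTHER
```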

AI systems deployed in sensitive domains, such as healthcare, education, employment, and public services, must comply with expanded requirements to ensure safety, fairness, and human rights protections:

  • Risks must be identified, managed, and mitigated throughout the AI model lifecycle, with a focus on safety and anti-discrimination.
  • Users must be informed about the adverse impacts an AI system can have on their rights or well-being.
  • All AI decisions must be explainable, and when applicable, these explanations should be provided to end-users.
  • Human supervision and intervention mechanisms must be integrated, along with an option to override automated decisions (see the sketch after this list).
  • Proactive steps must be taken to prevent and correct biases in AI outputs, particularly when personal or sensitive data is involved.
  • Technical documentation about model training, data sources, system functioning, and risk management measures must be kept.
  • Users must be given mechanisms to challenge automated decisions and seek redress if their rights are negatively impacted.
  • All AI system components must include protection against cyberattacks, technical failures, and adversarial manipulation.
  • Companies must provide evidence of compliance upon request from the regulatory authorities.
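
To make several of these obligations concrete at once, below is a minimal Python sketch of a per-decision record covering explainability, technical documentation, and a human override hook. The structure, field names, and example values are hypothetical, not something the Bill prescribes.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class AIDecisionRecord:
    """One record per automated decision: ties the output to a documented model
    version, keeps a plain-language explanation, and leaves room for a human
    reviewer to overturn the result."""
    model_name: str
    model_version: str                 # links back to training/data documentation
    input_summary: dict                # redacted or summarized inputs (mind LGPD)
    automated_output: str
    explanation: str                   # rationale that can be shown to the end user
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())
    human_override: Optional[str] = None
    override_reason: Optional[str] = None

    def apply_human_override(self, reviewer: str, new_output: str, reason: str) -> None:
        """Human supervision hook: a reviewer replaces the automated decision."""
        self.human_override = new_output
        self.override_reason = f"{reviewer}: {reason}"

# Example: a credit-scoring decision later overturned on manual review.
record = AIDecisionRecord(
    model_name="credit-scoring",
    model_version="2.4.1",
    input_summary={"income_band": "B", "credit_history_months": 18},
    automated_output="declined",
    explanation="Short credit history weighed heavily in the score.",
)
record.apply_human_override("analyst-042", "approved", "Additional payment history found on review.")
```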

The government is in the process of setting up a new authority to oversee these AI regulations. It will develop further technical standards, handle complaints, monitor high-risk AI deployments, conduct audits, and enforce penalties for violations.

Penalties for non-compliance

The regulation has not yet been enacted into law, so no penalties are currently in place. But if the Bill is approved in its current version, the new regulator will be able to impose fines of up to R$50 million per violation or 2% of a company’s Brazilian revenue, whichever is higher.

Implementation timeline

The AI Bill still has to be approved by the Chamber of Deputies (Brazil’s lower house) and then signed into law by the President. The vote is expected in mid-to-late 2025. The final law will likely include a phased implementation period, meaning full enforcement could start sometime in 2026, but there is no definite date yet.

What’s next for AI regulation in Latin America?

Brazil’s AI Bill represents a turning point for the whole region. As the first Latin American nation to formalize AI governance, Brazil is setting a precedent that other governments may soon feel compelled to follow. Whether through national legislation or sectoral rules, more regulatory momentum is expected across Latin America in the coming years.

For businesses operating in the region, the message is clear: don’t wait. Whether AI laws are already passed or still on the horizon, aligning with global best practices (transparency, explainability, human oversight, and bias mitigation) can help organizations innovate and build systems that are future-proof, both technically and legally.


How to prepare for AI compliance in LATAM

  • Audit your AI systems for explainability, human oversight, and transparency
  • Review local data protection laws (LGPD, Mexican Data Protection Law)
  • Map system risk levels to Brazil’s upcoming categories (excessive, high, other)
  • Implement MLOps tools with versioning, data lineage, and security controls (see the sketch after this list)
  • Monitor updates from Brazil’s new regulatory authority and regulatory signals from neighboring countries
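
For the MLOps item above, here is a minimal sketch of what versioning and data lineage logging could look like with the open-source MLflow tracking client, one option among many. The run name, tags, paths, and metric values are hypothetical.

```python
import mlflow  # pip install mlflow; any experiment tracker with lineage metadata works similarly

# Log versioning and lineage metadata alongside a training run so that
# "where did this model and its data come from" is answerable on request.
with mlflow.start_run(run_name="credit-scoring-v2.4.1"):           # hypothetical run name
    mlflow.set_tag("data_source", "s3://datasets/credit/2025-04")  # hypothetical dataset path
    mlflow.set_tag("preprocessing_commit", "a1b2c3d")              # hypothetical code revision
    mlflow.log_param("model_family", "gradient_boosting")
    mlflow.log_param("training_rows", 250_000)
    mlflow.log_metric("auc", 0.87)
    # Keep a structured compliance note next to the run artifacts.
    mlflow.log_dict(
        {"risk_tier": "high", "bias_audit_date": "2025-05-10", "human_oversight": True},
        "compliance/risk_assessment.json",
    )
```

Whatever tooling you use, the point is that lineage and risk metadata live next to the model artifacts, so evidence can be produced when a regulator asks for it.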

Takeaways 

As AI capabilities expand, so do the guardrails. Risk-based frameworks like the EU AI Act, South Korea’s AI Basic Act, and Brazil’s new AI Bill impose heavy compliance obligations on high-risk and unacceptable-risk AI systems. Fines can hit up to 7% of global revenue, and in countries without unified laws, sectoral and state rules can be just as costly.

With the new era of accountability in AI governance upon us, your best strategy is a head start. Analyze existing systems against upcoming or voluntary regulations to understand where you stand and prioritize areas for improvement. For AI products still at the conceptual stage, consider alternative algorithms that offer better explainability, an area covered by Xenoss AI consulting.

Think through the implementation of data labeling requirements, proper disclosures, and human oversight mechanisms to avoid costly reworks later on. And focus on building strong internal governance, including MLOps workspaces with data logs and model version control, cybersecurity protocols, and streamlined data lineage, to operate with built-in compliance.