Client Alerts

CFM publishes resolution on the use of AI in medicine

March 9th, 2026

On February 27, 2026, Brazil’s Federal Council of Medicine (“CFM”) published CFM Resolution No. 2,454/26, which provides for the use of artificial intelligence (“AI”) in medicine. The resolution will take effect 180 days after publication.

According to the CFM, the resolution is the result of an 18-month debate conducted by a working group convened to discuss how to integrate AI into medical practice. The initiative responds to the growing adoption of AI-based tools in healthcare and aims to ensure compliance with the principles of beneficence, non‑maleficence (“do no harm”), medical autonomy, justice, and patient‑centered care.

The new regulatory framework, however, is not limited to clinical practice. The resolution significantly affects healthcare institutions, technology companies, software developers, research sites, and other stakeholders across the digital health ecosystem, requiring a structured approach to governance, data protection, and intangible asset management.

Below are the key aspects of the CFM’s new regulation.

 

Scope and general principles

CFM Resolution No. 2,454/26 establishes parameters for research, development, governance, auditing, monitoring, training, and responsible use of solutions that employ AI models, systems, and applications in medicine. The goal is to foster technological development and efficiency in medical services, while safeguarding patients’ fundamental rights.

Governance of these solutions must respect the autonomy of physicians and institutions, allowing the deployment of technologies tailored to local contexts, provided they meet auditing, transparency, and monitoring criteria proportionate to the associated risk.

The resolution also stresses that AI systems must be auditable and monitorable in a practical and accessible manner, while preserving trade secrets. Transparency must be ensured through scientific indicators, demonstrating accuracy, effectiveness, and safety.

 

Doctor-patient relationship and duty to inform

The use of AI must not undermine the doctor–patient relationship, active listening, empathy, confidentiality, or respect for human dignity.

Patients must be clearly and accessibly informed whenever AI models, systems, or applications are used as a material aid in their care. The resolution also prohibits delegating the communication of diagnoses, prognoses, or therapeutic decisions to AI without human mediation, thus preserving the physician’s ultimate authority.

Notably, the rule ensures the patient’s right to refuse the use of such technologies in their care.

 

Physicians’ rights and duties

The resolution sets out physicians’ rights and duties when using AI. Highlights include:

Rights:

  • Using AI tools as support instruments in medical practice, clinical decision-making, health management, scientific research, and continuing medical education;
  • Having access to clear, transparent, and comprehensible information on the functioning, purposes, limitations, risks, and degree of scientific evidence of the systems used;
  • Refusing to use systems lacking adequate scientific validation or appropriate regulatory certification;
  • Preserving their professional autonomy and not being compelled to follow automated recommendations uncritically;
  • Being protected against undue liability for failures attributable exclusively to AI, provided that the physician can demonstrate diligent, critical, and ethical use of the tool.

Duties:

  • Using AI exclusively as a support tool, while retaining ultimate responsibility for clinical, diagnostic, therapeutic, and prognostic decisions;
  • Exercising critical judgment on the recommendations provided by AI;
  • Staying up to date on the capabilities, limitations, risks, and known biases of the systems used;
  • Using only solutions that comply with applicable ethical, technical, legal, and regulatory standards;
  • Documenting the use of AI as a medical decision support tool in the patient’s medical record;
  • Reporting any material failures or risks to the competent authorities.

 

AI governance and accountability

Under the new resolution, medical institutions that develop or contract their own AI solutions must implement internal governance procedures focused on safety, quality, and ethical compliance.

CFM Resolution No. 2,454/26 also prohibits communicating diagnoses, prognoses, or therapeutic decisions to patients via AI systems, reinforcing the central role of human mediation in doctor–patient relationships and the need to document technology use in medical records.

Institutions adopting their own systems must establish an AI and Telemedicine Committee, under medical coordination and reporting to the technical directorate, to ensure ethical and supervised use of these tools. Regional Medical Councils are responsible for overseeing and enforcing compliance within their jurisdictions.

Within this context, CFM Resolution No. 2,454/26 introduces a risk assessment and classification logic for AI systems – ranging from low to unacceptable – as a core element for defining governance, supervision, and control measures applicable to each solution. These elements are now part of the compliance and risk management agenda for organizations that use or develop AI-based solutions in healthcare.

 

Risk classification and categorization

Medical institutions that develop or use AI solutions must conduct a preliminary assessment to determine the risk level of the tool, considering, among other factors:

  • Potential impact on fundamental rights and patient health;
  • Severity of the use context;
  • System complexity and degree of autonomy;
  • Intended and potential purposes;
  • Level of human intervention in the outputs; and
  • Volume and sensitivity of data used.

Based on these criteria, systems will be categorized into low, medium, high, or unacceptable risk. This classification must be disclosed to the user.
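To illustrate how an institution might operationalize this preliminary assessment, the sketch below maps criterion ratings to the four risk tiers named in the resolution. It is a hypothetical model, not part of the resolution itself: the criterion names, the 0–3 rating scale, and the conservative "highest criterion wins" rule are all illustrative assumptions, since CFM Resolution No. 2,454/26 does not prescribe a scoring formula.

```python
from enum import Enum

class RiskLevel(Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"
    UNACCEPTABLE = "unacceptable"

# Assessment criteria drawn from the resolution's list of factors.
# Names and the 0 (negligible) to 3 (critical) scale are illustrative only.
CRITERIA = (
    "impact_on_rights_and_health",
    "context_severity",
    "complexity_and_autonomy",
    "purpose_scope",
    "human_oversight_gap",
    "data_volume_and_sensitivity",
)

def classify(scores: dict) -> RiskLevel:
    """Map the worst-rated criterion to a risk tier.

    Uses a conservative rule: the single highest criterion score
    determines the overall classification, so one critical factor
    cannot be averaged away by low scores elsewhere.
    """
    worst = max(scores[c] for c in CRITERIA)
    if worst >= 3:
        return RiskLevel.UNACCEPTABLE
    if worst == 2:
        return RiskLevel.HIGH
    if worst == 1:
        return RiskLevel.MEDIUM
    return RiskLevel.LOW
```

For example, a diagnostic-support tool rated 2 on data sensitivity would classify as high risk under this rule even if every other criterion were rated 0. An institution adopting such a model would still need to document the assessment and disclose the resulting classification to users, as the resolution requires.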

 

Privacy, transparency, and health data

Data used to develop, train, validate, and implement AI systems must strictly comply with the Brazilian General Data Protection Law (LGPD – Law No. 13,709/2018) and specific information security standards for healthcare.

CFM Resolution No. 2,454/26 requires the adoption of technical and administrative safeguards aligned with market practice and commensurate with the criticality of the data processed, to prevent destruction, loss, alteration, unauthorized access, and leakage of sensitive information. Data sharing should occur only when strictly necessary and supported by an appropriate legal basis.

In practice, the use of AI in healthcare now requires:

  • Appropriate legal bases for processing health data;
  • Technical and organizational measures for information security; and
  • Transparency, traceability, and accountability policies in the use of algorithms.

The resolution also reinforces a preventive governance approach in which privacy, information security, and risk mitigation must be considered from design through deployment, directly affecting contracts, operational workflows, and innovation strategies in the sector.

 

Intellectual property, trade secrets, and technological innovation

Another noteworthy matter concerns balancing regulatory transparency with protecting intellectual property assets.

CFM Resolution No. 2,454/26 stipulates that AI models, systems, and applications must be auditable and monitorable, while also safeguarding industrial and trade secrets involved in developing these technologies.

This scenario calls for special attention to:

  • Allocation of ownership for solutions developed in clinical or research environments;
  • Contractual structures for licensing, data sharing, and co‑development.

The resolution encourages cooperative and interoperable models of technological development, emphasizing the importance of clear contractual arrangements that balance innovation, data sharing, and the preservation of intangible assets.

 

Conclusion

CFM Resolution No. 2,454/26 marks a notable development in regulatory maturity for AI use in medicine, shifting the debate from whether it should be used to how it must be governed.

By requiring clear governance structures, transparency, data protection, and preservation of intangible assets, the resolution imposes an integrated view that aligns medical ethics, technological innovation, and legal certainty.

For organizations in the digital health ecosystem, both challenges and opportunities lie in converting these guidelines into strategic advantages, by revisiting internal structures, contracts, and innovation policies according to this new paradigm.

This sector‑specific regulatory development is occurring amid a broader AI regulation movement in Brazil, as evidenced by ongoing discussions regarding a general AI legal framework, such as Bill No. 2,338/2023, currently pending before the Brazilian Congress. Any legislative advances in this field may significantly affect sectoral regulations already in force, including in healthcare, emphasizing the importance of a dynamic, forward‑looking approach to compliance.

 

Demarest’s Life Sciences; Data, Privacy and Technology; and Intellectual Property, Technology and Innovation teams are available to provide further clarification.