Understanding Vietnam’s First Law On Artificial Intelligence

18/12/2025 17:00

On 10 December 2025, the National Assembly of Vietnam officially enacted the Law on Artificial Intelligence (Law on AI), which will come into force on 1 March 2026. The Law on AI is expected to establish a breakthrough legal framework for artificial intelligence (AI), with the aim of creating a favourable legal environment that promotes innovation and enhances national competitiveness while managing risks and protecting national interests, human rights and digital sovereignty. While the final promulgated version of the Law on AI has not yet been officially released, this legal update highlights notable provisions based on the latest draft Law on AI submitted for approval.

1. Scope of application

The Law on AI applies to both domestic and foreign agencies, organisations, and individuals that research, develop, provide, deploy, or use AI systems in Vietnam, excluding AI activities conducted solely for national defence, security, or cryptographic purposes.

2. AI system classification

The Law on AI introduces a risk-based regulatory framework that classifies AI systems into three (3) categories:

(i) High risk: AI systems that may cause significant harm to the life, health, legitimate rights and interests of individuals and organisations, national interests, public interests, and national security.

(ii) Medium risk: AI systems that may pose a risk of misleading, influencing or manipulating users because users cannot recognise that the interacting party is an AI system or that the content is generated by an AI system.

(iii) Low risk: AI systems not covered by category (i) or (ii) above.

Providers are responsible for classifying AI systems. AI systems classified as medium or high risk must have a classification dossier, and the classification results must be notified to the Ministry of Science and Technology (MOST) via the National AI One-Stop Portal prior to deployment. Deployers may rely on the classification results provided by providers and are responsible for ensuring the safety and integrity of the system during use. However, where modifications, integrations, or functional changes introduce new risks or increase existing risks, deployers must coordinate with the provider to reclassify the system. If the risk level cannot be determined, providers may request guidance from the MOST on classification based on the technical dossier.

3. Transparency requirements

In principle, providers and deployers of AI systems must ensure transparency throughout the system’s provision and deployment. Providers must label AI-generated audio, images, or video in a machine-readable format. Deployers must notify the public when AI-generated or AI-edited content could mislead as to the authenticity of events or individuals. Content that simulates or replicates the appearance or voice of real people, or actual events, must be clearly labelled. The specific forms and methods of notification and labelling will be further detailed by the Government.

4. AI management

Based on the risk classification, the Law on AI establishes regulatory requirements corresponding to each level of risk.

(i) High-risk AI systems

Before deployment, high-risk AI systems must undergo conformity assessment. Conformity assessment under the Law on AI is conducted as follows:

  • High-risk AI systems included in the list of systems requiring conformity certification prior to deployment: the assessment must be carried out by a conformity assessment body that is registered or recognised under the law.
  • Other high-risk AI systems: providers may self-assess conformity or engage a conformity assessment body that is registered or recognised under the law.

The results of the conformity assessment are a prerequisite for the deployment of high-risk AI systems. The list of high-risk AI systems, including those requiring conformity certification prior to deployment, will be further specified by the Prime Minister.

In addition, providers and deployers of high-risk AI systems are required to comply with strict obligations. Notably, foreign providers offering high-risk AI systems in Vietnam must have a legal contact point in Vietnam. For systems subject to mandatory conformity certification prior to deployment, a commercial presence or an authorised representative in Vietnam must be established or appointed.

(ii) Medium-risk AI systems

Providers and deployers of medium-risk AI systems are subject to the following responsibilities:

  • Transparency requirements; and
  • Accountability upon request by relevant State authorities: providers shall be accountable for the system’s intended use, its operational principles at the functional description level, the main input data types, and risk and safety management measures; deployers shall be accountable for the system’s operation, risk management, incident handling, and the protection of the legitimate rights and interests of individuals and organisations.

In addition, system users must comply with requirements relating to notifications and labelling.

(iii) Low-risk AI systems

Management requirements for low-risk AI systems are relatively light. Providers and deployers are only required to ensure accountability upon request by relevant State authorities. Users may operate and use low-risk AI systems for lawful purposes and bear personal responsibility under the law for such use.

5. Liability for damages

Where a high-risk AI system is managed, operated, and used in accordance with applicable regulations but nevertheless causes damage, the deployer shall bear liability to compensate the affected party. After compensation, the deployer may seek reimbursement from the provider, developer, or other relevant parties, subject to the contractual arrangements between them.

However, the liability for compensation may be exempted where:

(i) the damage is caused entirely by the intentional fault of the injured party; or

(ii) the damage arises from force majeure or emergency circumstances, unless otherwise provided by law.

Where damage results from unlawful intrusion, takeover, or interference by a third party, such third party is liable for compensation. However, if the deployer or provider is at fault in allowing the system to be intruded upon or unlawfully interfered with, they shall bear joint liability for compensating the affected individuals.

6. Transitional provisions

For AI systems deployed before the Law on AI comes into force, providers and deployers are required to ensure compliance with the Law on AI within the following timeframes:

(i) Eighteen (18) months from the effective date of the Law on AI for AI systems operating in the healthcare, education, and financial sectors; and

(ii) Twelve (12) months from the effective date of the Law on AI for AI systems not falling within category (i) above.

During the applicable transitional period, deployed AI systems may continue to operate, unless they are temporarily suspended or terminated at the request of relevant State authorities upon determining that the system poses a risk of causing serious damage.

The Law on AI represents a significant step in shaping Vietnam’s regulatory approach to the development and deployment of AI technologies. It signals a clear policy direction toward fostering innovation while strengthening governance, risk management, and accountability. Businesses, technology developers, and other stakeholders are therefore advised to closely monitor upcoming developments, assess the potential impact on their operations, and begin preparing for compliance as the Law on AI moves toward implementation.



This material provides only a summary of the subject matter covered, without the assumption of a duty of care by Frasers Law Company.
The summary is not intended to be nor should it be relied on as a substitute for legal or other professional advice.

© Copyright in this article is owned by Frasers Law Company