On November 21, Minister of Science and Technology Nguyen Manh Hung presented the draft Law on Artificial Intelligence to Vietnam’s National Assembly. The draft legislation is guided by the dual goals of managing risks while fostering development and innovation in the field.
According to Minister Hung, the law is designed to promote human-centered AI development and positions the state as a key actor in regulating, coordinating, and enabling the growth of artificial intelligence technologies.
The law proposes a risk-based regulatory framework to ensure that the development, application, and use of AI are safe, transparent, accountable, and controllable. It outlines four levels of risk: unacceptable, high, medium, and low.

AI providers must classify their systems before release and are responsible for the accuracy of the classification. For systems deemed medium or high risk, the provider must notify the Ministry of Science and Technology.
Systems classified as “unacceptable” will be banned from development, distribution, implementation, or use in any form. This includes systems used for legally prohibited activities, deceptive deepfakes, manipulation that causes serious harm, exploitation of vulnerable groups (such as children or the elderly), or the creation of falsified content that endangers national security.
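The tiered obligations described above can be summarized in a small sketch. This is illustrative only: the class and property names are invented for clarity and do not come from the draft law.

```python
from enum import Enum

class RiskLevel(Enum):
    """The four risk tiers in the draft law (labels are illustrative)."""
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    MEDIUM = "medium"
    LOW = "low"

    @property
    def banned(self) -> bool:
        # Unacceptable-risk systems may not be developed, distributed,
        # implemented, or used in any form.
        return self is RiskLevel.UNACCEPTABLE

    @property
    def requires_notification(self) -> bool:
        # Providers of medium- and high-risk systems must notify the
        # Ministry of Science and Technology before release.
        return self in (RiskLevel.MEDIUM, RiskLevel.HIGH)
```

Under this reading, a provider self-classifies its system, is responsible for the accuracy of that classification, and the tier then determines whether notification, or an outright ban, applies.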
Organizations or individuals violating the law could face disciplinary action, administrative penalties, or criminal prosecution. If harm is caused, compensation must be paid under civil law.
In severe cases, fines may reach 2% of the organization’s revenue from the previous year. Repeat offenders could be fined up to 2% of their global revenue. The maximum administrative fine would be 2 billion VND (approximately $82,000) for organizations, and 1 billion VND (around $41,000) for individuals.
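The fine arithmetic above can be sketched as a simple calculation. Note that the draft, as reported, does not spell out how the 2% revenue-based fine interacts with the absolute maximums, so treating the 2-billion-VND and 1-billion-VND figures as hard caps on the percentage-based amount is an assumption made here for illustration.

```python
# Caps reported in the draft: 2 billion VND for organizations,
# 1 billion VND for individuals.
VND_CAP_ORGANIZATION = 2_000_000_000
VND_CAP_INDIVIDUAL = 1_000_000_000

def administrative_fine(revenue_base_vnd: int, is_organization: bool = True) -> int:
    """Estimate the maximum administrative fine under the draft's figures.

    Severe cases: up to 2% of the previous year's revenue; for repeat
    offenders the base is global revenue (the caller passes whichever
    revenue base applies). ASSUMPTION: the percentage-based amount is
    capped at the absolute maximum, which the reported draft does not
    state explicitly.
    """
    rate = 0.02  # 2% in both the severe and repeat-offender cases
    cap = VND_CAP_ORGANIZATION if is_organization else VND_CAP_INDIVIDUAL
    return min(int(revenue_base_vnd * rate), cap)

# Example: 2% of 500 billion VND would be 10 billion VND,
# so under this assumption the 2-billion-VND cap binds.
print(administrative_fine(500_000_000_000))
```

For a smaller firm, say 50 billion VND in prior-year revenue, 2% is 1 billion VND, which sits below the organizational cap and would apply in full.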
Importantly, the law defines damage caused by high-risk AI systems as damage from “sources of extreme danger,” meaning that providers and operators may be liable for compensation even in the absence of fault, except in cases outlined under civil law exemptions.
A key provision requires the labeling of AI-generated or AI-modified content that contains fabricated elements or simulated people or events that could mislead viewers into believing the content is real. The requirement also applies to AI-generated content used in media, advertising, propaganda, or public information.
The draft law includes incentives to encourage research, investment, and high-quality workforce development. It also provides a national AI ethics framework to ensure that systems are developed and used for human benefit, without harm or bias, and in line with humanistic values.
No obstacles for AI research
Presenting the review report on the draft law, Nguyen Thanh Hai, Chair of the Committee for Science, Technology, and Environment, expressed broad support for the draft’s main policies.
She proposed adding core principles to ensure AI-related data is accurate, complete, clean, real-time, and shared across systems. This would prevent data fragmentation and bottlenecks that hinder research and development.
The committee also emphasized the need for mandatory cybersecurity and data protection rules to defend national AI infrastructure from potential control breaches or data leaks.
Noting that AI can make the same errors as humans, Hai pointed to the legal complexity of assigning responsibility to AI, which lacks legal personhood. This could give rise to disputes over administrative, civil, or criminal liability.
The committee called for clarification of accountability among the parties involved, particularly foreign providers offering cross-border AI services, and for differentiation among intentional violations, negligence, and technical limitations beyond foreseeable control.
Tran Thuong