Minister of Science and Technology Nguyen Manh Hung.

Hung said the draft law is built on the principle of managing risks while promoting AI development and innovation.

The law “ensures the development of AI for humans, placing humans at the center,” with the State playing the leading role in management, coordination, and development planning for AI.

The draft law establishes a risk-based management mechanism to ensure that the development, application, and use of AI are safe, transparent, controllable, and accountable.

The Government aims for flexible and effective AI development and encourages innovation. To that end, the draft law classifies AI systems into four risk levels: unacceptable (the highest level), high, medium, and low.

Providers must classify their systems before circulation and are responsible for the classification results. For medium- and high-risk systems, providers must notify the Ministry of Science and Technology (MST).

Systems with unacceptable risk are prohibited from being developed, provided, deployed, or used under any circumstances.

The prohibited list includes systems used for illegal purposes, systems that use falsification to deceive or manipulate and thereby cause serious harm, systems that exploit the vulnerabilities of at-risk groups (such as children and the elderly), and systems that create fabricated content that severely threatens national security.

Under the draft law, organizations and individuals that violate regulations will face disciplinary action, administrative penalties, or criminal prosecution; if causing damage, they must compensate in accordance with civil law.

For serious violations, the maximum fine can reach 2 percent of the organization’s revenue from the previous year. In the case of repeated violations, the maximum penalty is 2 percent of global revenue from the previous year.

The maximum administrative fine is VND2 billion for organizations and VND1 billion for individuals.

A key provision states that damage caused by high-risk AI systems is considered caused by a source of extreme danger. Accordingly, providers and deployers of such systems must compensate even when they are not at fault, except in cases eligible for exemption under the Civil Code.

The draft law requires labeling for two categories of content: AI-generated or AI-edited content that involves falsification, simulates real people or real events, and may cause viewers, listeners, or readers to mistake it for real; and AI-generated content used for communication, advertising, propaganda, or publicly provided information.

Regarding incentives, the Government proposes various policies to support research, investment, high-quality human resource training, and to enable enterprises, organizations, and individuals to participate in the development and application of AI.

The draft law also establishes a national AI ethics framework to ensure that AI systems are developed and used for humans, do no harm, avoid bias, and uphold humanistic values.

Avoiding bottlenecks in AI R&D

Presenting the appraisal report, Chair of the NA’s Committee for Science, Technology and Environment Nguyen Thanh Hai said the appraisal body agrees with the major policies in the draft law.

The appraisal committee recommends adding core principles to ensure data quality for AI, such as ensuring that data are accurate, sufficient, clean, real-time, and standardized for shared use; establishing mechanisms for interconnected and shared data to avoid fragmentation and prevent bottlenecks in AI research and development.

The Committee for Science, Technology and Environment also recommends establishing mandatory principles for cybersecurity, data protection, and defense measures for national AI infrastructure to prevent risks such as system hijacking or data leakage.

According to the appraisal body, AI can perform actions and make errors similar to those of humans. Meanwhile, legal responsibility for AI itself remains a complex and debated issue, making it difficult to define liability in the traditional sense. When incidents occur, disputes over administrative, civil, and criminal responsibility may arise.

The committee recommends adding principles to distinguish responsibilities among stakeholders, including foreign providers offering cross-border AI services, and to differentiate between intentional violations, unintentional violations, and errors caused by unpredictable technical limitations.

Du Lam