
Artificial Intelligence Regulations: A Comparative Analysis of the US, China and EU Legal Frameworks

  • Oliver Donnelly
  • Mar 11
  • 24 min read

INTRODUCTION

 

Artificial Intelligence (AI) has been developed and implemented across the public and private sectors at an unprecedented rate within the past ten years. States and private enterprises view AI as having significant potential to maximise the efficiency of current systems and ‘advance global prosperity, security and social good’.[1] However, AI’s unpredictable outcomes and potential risk to the environment, human rights and social cohesion present a unique issue for State lawmakers.[2] Hence, the mitigation of AI risk is dependent upon developing effective legal and policy frameworks and efficiently responding to its increasingly international nature.[3]

 

The global commodification of AI has increased international accessibility and, with it, the potential for cross-jurisdictional legal problems. This encourages an analysis of different States' regulations and whether any current State's laws could inform a cohesive compliance framework in the international legal sphere. The United States (US), China and the European Union (EU) are the dominant bodies in developing, implementing and regulating AI across the public and private sectors; however, their approaches differ in their underlying value orientations.

 

This essay will therefore critically analyse the legislative regulations over AI in China, the US and EU, comparing their legislative strengths and weaknesses, before discussing how these approaches could best inform international AI frameworks and assist developing countries in their regulation of AI.

 

CHINA

 

Aims

 

China has largely adopted a state-driven model of AI regulation. China’s New Generation Artificial Intelligence Development Plan (AIDP) (2017) outlined China’s goal of upgrading the economic implementation of AI whilst minimising the risks of changing employment structures and violating personal privacy.[4] The plan intends for China to be the world-leading innovation centre for AI by 2030, with staged legislative development focusing on ‘ordered innovation’, local experimental governance, and embedded ethical protocols.[5] China’s AI laws intend to be ‘vertical and iterative’, with the overriding goal to ‘shape technology so that it serves the Chinese government’s agenda.’[6] Since 2017, China has established multiple bodies and a series of regulations at the local and national levels targeting both internal AI development and extraterritorial AI interactions.

 

Primarily, the Cyberspace Administration of China (CAC) has had an active role in issuing provisions for AI regulation and fostering internal AI development. The China Association of Artificial Intelligence (CAAI), established in 2018, develops ethical guidelines for AI by establishing regulatory principles that serve human values and inform future AI laws.[7] For example, the Code of Ethics for New Generation Artificial Intelligence (2021) ensures that AI development protects ‘privacy and security’ and ensures ‘honesty and fairness’.[8] In fostering positive AI regulation, China intends to create a policy environment that establishes its technological dominance whilst implementing AI to address issues facing Chinese people.[9]

 

Method

 

China intends to assert global dominance over AI innovation, mandate company compliance with State laws and encourage AI implementation across the private and public sectors. The purpose of China’s AI regulation focuses on ‘international competitiveness, economic growth and social governance’,[10] balanced with ‘the protection of individuals, adherence to core socialist value, and participation in rulemaking’.[11] Laws such as the LAL[12] require companies to adhere to state-based training models and digital technologies governed by State acts, emphasising the use of AI development for the benefit of, and its integration into, the State.[13]

 

China’s decentralised governance structure grants local and provincial authorities wide discretion in interpreting and enforcing national directives.[14] Chinese AI regulations for recommendation algorithms (2021), deep synthesis (2022) and generative AI (2023) share the common purpose of supporting positive principles such as ‘upholding mainstream value orientations’, adherence to ‘the correct political direction’ and ensuring ‘truth, accuracy’ and ‘objectivity’.[15] China’s approach is characterised by strong State interventionism paired with long-term planning; however, its established regulations prioritise consumer rights and the management of societal risks.[16] This multi-layered governance model encompasses AI’s technological trinity: computing power via the ‘East Data West Computing’ infrastructure, algorithms through ministry-level classification rules, and data through the triad of the Cybersecurity, Data Security and Personal Information Protection Laws.[17]

 

Outcome

 

China has restricted access to foreign AI through the Interim Measures for the Administration of Generative Artificial Intelligence Services (2023) (GenAI Measures), which bar external providers that fail to comply with those rules.[18] However, Article 6(1) of the GenAI Measures encourages ‘participation in the formulation of international rules related to generative AI’ to ensure China remains a ‘norm-shaper’ in the future of AI development.[19] Regulatory deviation nonetheless exists between China’s economic zones, with AI development in Shenzhen and Shanghai receiving special mobilisation and support compared to other regions.[20]

 

The 2023 interim measures for managing generative AI systems held Chinese developers responsible for all generated content, emphasising transparency and collaboration and requiring developers to prevent discrimination and protect personal data.[21] China’s fault-tolerant zones in Shanghai and Shenzhen allow high-risk AI trials under government oversight, whilst algorithmic accountability measures mandate human intervention interfaces.[22] However, China’s 2020 and 2021 crackdowns on technology companies in pursuit of ‘common prosperity’ demonstrate a potential undercutting of market innovation, and extensive localism and fraudulent companies remain rife within China compared to the EU due to more lenient regulations and a greater willingness to invest.[23]

 

THE UNITED STATES

 

Aims

 

The United States (US) pursues a ‘security-first’ model of AI regulation, emphasising domestic deregulation and extraterritorial restriction. The US was amongst the earliest to administer policy targeting AI, issuing reports affecting federal agencies and private markets since 2016.[24] Drawing from the National Security Commission on Artificial Intelligence (NSCAI), US regulation aims to exert control over all aspects of AI development, particularly semiconductor supply chains, to ensure technological superiority.[25] However, the US lacks any comprehensive national law specifically governing AI, relying on federal policies and practices to regulate agency-specific AI use.

 

US tech giants have contributed to developing ethical guidelines, pledging voluntary commitments, and collaborating with federal agencies to promote these aims and ensure AI deregulation.[26] US AI regulation is predicated on establishing federal guidelines and encouraging self-regulation, characterising US AI laws as market-driven, sector-specific and vertically focused.[27] As the NSCAI asserted, AI constitutes ‘the future core of military and economic power’, manifesting in concrete policy outcomes from algorithmic targeting of semiconductor supply chains to the extraterritorial application of data governance regimes.[28]

 

Method

 

Whilst the Biden administration implemented some policies and orders to protect citizens’ rights, the primary purpose of the US model is to ensure military security and dominance over the process of AI development.[29] In 2023, 91% of AI-related legislative proposals in the 118th Congress contained explicit national security justifications.[30] The Creating Helpful Incentives to Produce Semiconductors (CHIPS) Act is the most cohesive federal AI law in the US.[31] It follows the Foreign Investment Risk Review Modernization Act (FIRRMA) (2021), which reinforces domestic supply chains of AI manufacturing essentials to ensure ‘technological sovereignty’ and uphold the ‘national security framework’.[32]

 

The CHIPS Act intends to promote AI research and attract manufacturing capacity through targeted subsidies, federal grants and tax cuts directed at the private sector and public-private partnerships.[33] Additionally, Article 103 of the CHIPS Act coerces US allies into a tech blockade, such as banning the export of ASML lithography equipment to China.[34] Hence, US policies targeting AI ensure control over AI development and undermine the capacity of states, such as China, to develop their own AI. Executive Order (EO) 13859, ‘Maintaining American Leadership in Artificial Intelligence’ (2019), emphasised the US’ intention to pursue dominance in investment, access and training in AI models, whilst minimising vulnerabilities and reducing barriers to use.[35]

 

Subsequent EOs, particularly the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (White House 2023), provided some positive rights for workers, such as the ‘best practices commitment’.[36] However, the US Department of the Treasury’s report on ‘Nonbank Financials, Fintech and Innovation’ (2018), which has actively influenced orders since, primarily focuses on AI integration and development for the benefit of ‘market providers’.[37] For example, the One Big Beautiful Bill Act proposed in 2025 codifies the deregulation of AI by prohibiting states from enacting or enforcing AI laws which restrict the advancement of US AI development.[38] Evidently, federal legislation has emphasised development, dominance and security, prioritising US innovation over individual rights.

 

Outcome

 

US States have introduced extensive AI legislation to fill the gap left by federal regulation, tailoring laws to their own necessities and thereby creating different legal frameworks between jurisdictions. In 2023, 400 AI laws were introduced across US States, with the 2024 total sextupling this amount,[39] compared to the 30 bills and 36 hearings focusing on AI in the federal US House and Senate in 2023.[40] States have readily created acts targeting specific issues, such as Illinois’ AI Video Interview Act (2020),[41] Colorado’s SB24-205 protecting workers’ rights and addressing data discrimination,[42] and California’s AI Reporting Bill requiring maintained records of data used by AI in delivering products or services to the public.[43]

 

Federal legislation has been slow to examine these laws and translate equivalent protections to the national level, largely because the US Government ‘has not taken a clear regulatory approach beyond President Biden’s AI Bill of Rights and general pronouncements of support for innovation while balancing risks to citizens’.[44] Due to the size of some US state economies, these AI laws have acquired an element of extraterritorial effect, such as Senate Bill 1047 requiring AI products to be tested before release.[45] However, State laws have largely focused on data privacy and disclosure considerations, resulting in a ‘diversity of regulations across states’ creating a ‘fragmented regulatory environment’.[46] Hence, US States have regulated AI according to their specific environments and in response to the lack of federal legislation, resulting in a variable scope of AI regulation between States.

 

THE EUROPEAN UNION

 

Aims

 

The EU’s primary objective under the Charter of Fundamental Rights of the European Union[47] is to guarantee the indivisible and universal value of human dignity and to guarantee a single digital market for all member countries.[48] This informs the EU’s ‘human-centric’ approach to AI regulation, which focuses on overseeing the impact AI has on human agency over purchasing and political decisions.[49] This approach is best demonstrated by the EU Artificial Intelligence Act (2024) (‘AI Act’), which is structured as a risk-prevention model.

 

The primary objectives of the AI Act are advancing responsible innovation, ensuring accountability of AI systems and mandating the application of law to ensure ‘fundamental rights, safety and ethical standards, with a risk-tiered regulation’.[50] Specifically, Article 114 of the TFEU informs Recital 1 of the AI Act, which mandates that the EU must oversee digital marketisation, such as AI development, as an extension of its capacity to govern the functioning of the internal market.[51] Hence, the AI Act was intentionally created as an extension of existing digital harmonisation across the EU, with horizontal regulatory capacities. The AI Act has been questioned for stifling innovation;[52] however, Article 53(1) of the AI Act promotes some innovation by allowing AI regulatory ‘sandboxes’ within a controlled environment.[53] Hence, the EU AI Act promotes economic harmonisation between States through cohesive hard law and an emphasis on the accountability of AI developers.

 

Method

 

Although the AI Act is the most recent and comprehensive legal instrument adopted by the EU to date, the Act should still be considered within the broader regulatory framework. The General Data Protection Regulation (GDPR)[54] established foundational data protection principles that inform subsequent AI regulation. The Civil Liability Regime for Artificial Intelligence (2020) emphasised transparency in determining civil liability in AI systems and accountability measures for AI developers, deployers and operators.[55] The EU’s proposed Artificial Intelligence Act (2021) affirmed its ambition to ‘build a proportionate and risk-based European regulatory approach’, reinforcing the EU’s rights-based approach.[56]

 

The purpose of the AI Act (2024) is to ensure harm prevention through a ‘precautionary and promotive role for civil liability’, complemented by earlier laws emphasising transparent measures to evaluate AI safety, accuracy and performance.[57] The AI Act categorises AI systems and models based on risk (unacceptable, high-risk, limited-risk and minimal) and further classifies them as prohibited, high-risk and general-purpose AI systems.[58] Due to the rights-based approach of the AI Act, scholars such as Johanna Chamberlain have drawn parallels between the AI Act and tort law.[59] This is furthered by the recent proposal of a revised Product Liability Directive (PLD), comprising a strict liability regime benefiting consumers who have suffered harm from a defective AI product and expanding on the fault-based liability regime for damage established by the AI Liability Directive (AILD).[60] Analysed as tort law, the AI Act emphasises accountability like previous EU laws and allows for quantifiable measures in cases of breach.

 

Outcome

 

In separating the levels of risk, the EU treats contraventions of the EU Charter’s values as graver than other threats to EU interests, though the assessment of risk is determined on a case-by-case basis and is largely subjective.[61] Within every risk category, the benefits of the AI system are weighed against its risks to EU fundamental rights, encouraging a fault-liability understanding. However, the issue of liability is not explicitly addressed in the AI Act.[62]

 

Despite being a comprehensive singular legal instrument for AI regulation, the AI Act lacks clarity in its language and has not been extensively tested to date. Terms such as ‘putting into service’[63] and ‘users of AI systems located within the Union’[64] are not clearly defined.[65] For example, the AI Act requires ‘sufficient care’ from AI developers but does not explicitly define this responsibility.[66] Article 2(1)(c) also extends the scope of the Act to providers and users located in third countries so long as they produce output used in the EU.[67] The lack of clarity in the Act’s terms denotes a level of responsibility for AI developers but fails to define those responsibilities or to engage any international doctrines or legal principles to protect citizens against conflicting foreign AI use.[68] Additionally, compliance costs remain significant, with high-risk systems requiring €400k compared to China’s average of €85k.[69]

 

COMPARISON OF LAWS

 

Regulatory Philosophy and Implementation

 

The Chinese and US Governments strongly emphasise innovation as the driving force of their respective AI approaches and clearly outline the role of both the government and the private sector in achieving this goal. Both states exhibit large-scale cooperation with AI developers: China has appointed key AI developers as ‘national champions’, and the US provides funding through the CHIPS Act. Both have also seen deviation between sub-national and federal policies. The US’ lack of federal legislation has largely afforded a non-interventionist model, enabling California and New York to self-regulate their AI development, creating ‘technically complex and politically charged’ AI regulations distinct from US federal orders.[70] In China, national AI-related documents have been adapted to inform local policies; however, local governments still emphasise prioritising application and innovation in order to foster regional competition.[71]

 

The primary difference between Chinese and US AI regulation is the changing approach of the US Government since 2016 compared to the more consistent, staggered regulations of the Chinese Government. US AI policy can best be described as having ‘three distinct phases of policy’ corresponding with successive administrations.[72] For example, Biden’s administration largely emphasised a ‘value contest with China’,[73] establishing no national hard laws and only principles such as the EO targeting AI safety,[74] compared to the narrower approach emphasised at the end of the Obama administration.[75] Comparatively, China’s consistent milestone goals were established in the New Generation AI Development Plan and continue to be pursued through five-year goals.[76] China therefore has more cohesive and enforceable AI regulatory frameworks than the US due to its top-down and long-sighted approach.

 

Rights Protection and Market Integration

 

The key difference between the US and EU approaches to AI legislation is that the AI Act provides a legally binding framework, contrasted against the US’ ‘decentralized, sector-specific regulatory strategy’.[77] As a result, EU institutions have adopted a significantly more restrictive approach to the use and implementation of LLMs and generative AI, whereas the US has fostered more significant investment and eased such restrictions, whilst adopting a heavily restrictive focus on extraterritoriality. Both the EU and US prioritise improving ‘economic, social and ethical outcomes’, and both bodies’ AI initiatives have had to overcome conflict between federal (or EU-level), state and local concerns.[78]

 

However, the EU’s approach is predicated on its being an intergovernmental economic organisation, compared to the United States’ position as a politically and constitutionally motivated common law system. This has had an inverse effect, as the AI Act has created a system of hard and soft law compared to the US’ lack of ‘enforceable regulation’ to date.[79] Moreover, the EU’s rights-based approach views individuals’ rights through a lens of collective wellbeing, whereas the US market-driven model emphasises ‘neoclassical, laissez-faire ideas of freedom’.[80] The US and EU approaches differ most markedly in the regulation of generative AI and copyright law.[81] EU Directive 2019/790 determines that data used to train an AI model may only be used non-commercially, whereas the US acknowledges AI ‘as a sui generis database right holder’.[82] The US currently lacks legislation directly targeting ownership of AI-generated works and relies on weakly applicable fair use doctrines, such as 17 US Code § 107, which cannot adequately govern copyrighted content used for training AI models.[83]

 

State Intervention and Consumer Protection

 

China’s AI approach focuses on innovation-first development and a ‘common prosperity’ model, whereas the EU’s approach emphasises ethical outcomes and the protection of fundamental rights.[84] Both China and the EU have attempted to prevent the internal monopolisation of AI development, such as through China’s Regulation on the Management of Algorithmic Recommendations (2022).[85] Moreover, both establish governmental bodies to oversee AI regulation. For example, the AI Act established the AI Office to oversee its enforcement and implementation.[86] Comparably, China’s Ministry of Industry and Information Technology (MIIT) and the Communist Party’s Central Science and Technology Commission (CSTC) significantly influence policy and make recommendations for future regulations.[87]

 

Both the EU and China have created regulations for the benefit of the people; however, the EU places greater emphasis on individual rights compared to China’s emphasis on ‘harmony’ and ‘cooperation’. The EU’s GDPR provides much stricter privacy protections than China’s Personal Information Protection Law (2021) (PIPL),[88] which specifically excludes measures that ‘impede state organs’ fulfilment of their statutory duties and responsibility’.[89] The PIPL prioritises consumer protections and is unlikely to extend to comprehensive citizen protections, whereas the EU has significant judicial review in place to protect such rights.[90] Moreover, the PIPL and the Cyberspace Administration of China’s regulations prohibit ‘unreasonable differentiation’ in consumer traits, empowering consumers but not citizens.[91] Therefore, both China and the EU prioritise fairness and justice, although the AI Act explicitly bans systems posing a risk of discriminatory outcomes.

 

China’s major laws regulating different types of AI, such as the Rules for Deep Synthesis Algorithms (2023)[92] and the Rules on Generative AI (2023),[93] are more specific and comprehensive than the more general ethical regulations of the EU.[94] However, despite having more AI regulatory frameworks, China has fewer operational compliance guidelines than the EU, resulting in operational paradoxes.[95] Inherently, the EU’s focus on harm reduction attracts criticism for stifling innovation, compared to China’s success in spurring companies to adopt and deploy AI technologies. However, this emphasis limits domestic competition by allowing national champions to develop standards that shape wider industry rules to suit their needs, rather than fostering an open innovation environment.[96]

 

APPLICATIONS TO NATIONAL & INTERNATIONAL LAW

 

As AI continues to develop and global accessibility to AI products increases, China, the US and the EU will play the largest roles in shaping national and international frameworks for AI regulation. Since the Bletchley Declaration on AI Safety (2023), there has been a growing desire for international collaboration in AI regulation.[97] Organisations such as the OECD and UN have since facilitated discussions on developing value-based frameworks centring on human rights and corporate responsibility.[98] To achieve positive outcomes in AI development, such as accountability and transparency, international regulations must be translatable into the domestic sphere; the US, Chinese and EU approaches should therefore be analysed for such translatability.

 

The EU’s GDPR has been adopted as the international standard for data-related regulation by both nations and multinational companies due to its principles-focused structure.[99] Article 28(d) of the AI Act already emulates the OECD AI Principles (2019), demonstrating cohesion between existing EU laws and international standards.[100] The EU approach to AI has been described as creating a ‘Brussels Effect’, whereby the EU has been viewed as the de facto standard-setter for AI acts since the GDPR.[101] For example, GDPR principles recently influenced AI regulation in Brazil and Chile’s LPPD.[102] Chile’s revised data protection law (21.719) replaced Law 19.628, the first national data protection framework in Latin America, with amendments emulating GDPR protections such as extraterritoriality, broader definitions of personal data, and rules on cross-border data transfers.[103]

 

However, this ‘Brussels Effect’ sets dangerous precedents for Global South nations.[104] Extraterritorial enforcement forces Southern regulators to adopt ill-fitting standards, and resource drain occurs because ENISA-style technical audits require infrastructure Southern states lack.[105] The EU’s compliance burdens paradoxically increase discrimination incidents despite stringent rules, with Spanish SME cases demonstrating 287-page compliance documents for startup AI compared to 40 pages in China.[106] For Global South nations, this reveals a critical imperative to reject regulatory mimicry of Western frameworks that inherently prioritise Northern interests.

 

Comparatively, China’s state-based model has allowed it to develop AI regulations which are somewhat better tailored to national adoption and offer context-sensitive scaffolding approaches.[107] China’s trialling of AI in its medical services is representative of one area where Chinese laws have outperformed the EU, with implementation regulations, such as those surrounding the 2016 IBM ‘Watson for Oncology’ deployment, providing legislative implementation standards for other countries.[108] China’s state-based model has also influenced countries, such as Japan with its AI Governance Act, to create more preventative regulation of AI, making it easier to limit breaches of personal information through state-enforced services.[109]

 

China’s emphasis on security reviews of AI, its division of administrative bodies to oversee AI enforcement, and its centralised system of AI regulation make it more appealing for individual States.[110] Many EU requirements for comprehensive risk management under Article 50 of the AI Act emulate Chinese state-based regulations for transparency in generative AI.[111] China’s approach thus offers sector-based regulations for domains like healthcare, establishing wider sandbox zones, with greater oversight than the EU, for iterative policy testing before developing sovereign frameworks aligned with developmental needs.[112] The Chinese model measures success by an innovation-security elasticity which sustains technological development and safeguards collective dignity.[113]

 

To date, there has been a significant lack of independent global cooperation with the multinational corporations that develop AI. Despite the US ‘lagging behind with legislative or regulatory initiatives’,[114] only the EU and US have explicitly dealt with research and digital negotiations with specific companies.[115] However, the US model provides limited translatability to developing nations seeking comprehensive frameworks due to its deregulated approach, security-first orientation and fragmented state-level regulations.[116] As evidenced by the CHIPS Act’s extraterritorial restrictions and the proposed federal pre-emption of state laws, the US approach is misaligned with the current development and needs of AI regulation in the Global South.

 

CONCLUSION

 

Collectively, there is a growing desire among nations to establish international frameworks to safeguard individual protections and foster innovation. The tripartite model presented here reveals that each body’s frameworks prioritise specific interests, with the EU imposing notable resource drains, the US offering fragmented state-level regulations, and China providing unique sectoral rules and sandbox zones which could be unachievable for developing States. However, to maximise the effectiveness of such legal frameworks, sovereign states must adopt their own policies and legislation to ensure accountability for AI risks.

 

On the established basis, developing nations should adapt elements of China’s phased implementation methodology and fault-tolerant zones whilst incorporating EU principles of accountability and transparency, rather than wholesale adoption of a single model. The effectiveness of AI governance should therefore prioritise flexibility and context-sensitivity over rigid harmonisation, enabling diverse regulatory approaches which reflect varying developmental stages, institutional capacities, and societal values across the global community.


Oliver is [author bio].


Disclaimer: The opinions expressed in this piece are those of the authors, and do not necessarily reflect the views and opinions of Protocol Policy Lab.

[1] Lee Tiedrich, ‘Editorial The Grand Challenge: Creating Frameworks Unlocking AI’s Benefits and Mitigating Harms’ (2024) 1(1) AIRe 1, 2.

[2] Margot Kaminski, 'Regulating the Risks of AI' (2023) 103(5) Boston University Law Review 1347, 1351.

[3] Ibid.

[4] Luca Belli, ‘Computer Law & Security Review: The International Journal of Technology Law and Practice’  (2024) 55(1) Computer Law & Security Review 1, 4.

[5] George G. Zheng, 'Ordered Innovation China’s Approach to AI Regulation and its Global South Implications' (Lecture Slides, Shanghai Jiao Tong University, 10 September 2025) sl 26.

[6] Matt Sheehan, ‘China’s AI Regulations and How They Get Made’ (2023) 1(1) Carnegie Endowment for International Peace 1, 15.

[7] Irina Filipova, ‘Legal Regulation of Artificial Intelligence: Experience of China’ (2024) 2(1) Journal of Digital Technologies and Law 46, 50.

[8] Ibid.

[9] Ibid 56.

[10] Huw Roberts et al., ‘The Chinese Approach to Artificial Intelligence: An Analysis of Policy, Ethics, and Regulation’ (2021) 36(1) AI & Society 72, 78.

[11] Yan Wang, ‘Do not go gentle into that good night: The European Union’s and China’s different approaches to the extraterritorial application of artificial intelligence laws and regulations’ (2024) 53(1) Computer Law & Security Review 1, 10.

[12]《中华人民共和国涉外民事关系法律适用法》 [The Law of the People’s Republic of China on Choice of Law for Foreign-related Civil Relationships] (People’s Republic of China) National People's Congress, Order No. 36, 28 October 2010.

[13] Heidi L. Frostestad, ‘AI Regulation in a ChatGPT Era’ (2024) 32(1) Indiana Journal of Global Legal Studies 1, 11.

[14] Jinghan Zeng, ‘Artificial intelligence and China's authoritarian governance’ (2020) 96(6) International Affairs 1, 8.

[15] Matt Sheehan, ‘China’s AI Regulations and How They Get Made’ (2023) 1(1) Carnegie Endowment for International Peace 1, 4.

[16] Robert Donoghue, ‘AI regulation, development and governance: the case of China’ (2025) 4(2) Global Political Economy 243, 244.

[17] George G. Zheng, 'Ordered Innovation China’s Approach to AI Regulation and its Global South Implications' (Lecture Slides, Shanghai Jiao Tong University, 10 September 2025) sl 27.

[18] Yan Wang, ‘Do not go gentle into that good night: The European Union’s and China’s different approaches to the extraterritorial application of artificial intelligence laws and regulations’ (2024) 53(1) Computer Law & Security Review 1, 10.

[19] Ibid 11.

[20] Irina Filipova, ‘Legal Regulation of Artificial Intelligence: Experience of China’ (2024) 2(1) Journal of Digital Technologies and Law 46, 52.

[21] Maulen Alimkhanov, ‘Comparative Analysis of International AI Regulatory Approaches: The United States, European Union, Canada, China, Kazakhstan, Russia’ (Research Paper, UC Berkeley School of Law, 2024) 1, 4.

[22] George G. Zheng, 'Ordered Innovation China’s Approach to AI Regulation and its Global South Implications' (Lecture Slides, Shanghai Jiao Tong University, 10 September 2025) sl 31.

[23] Yan Wang, ‘Do not go gentle into that good night: The European Union’s and China’s different approaches to the extraterritorial application of artificial intelligence laws and regulations’ (2024) 53(1) Computer Law & Security Review 1, 11.

[24] Justin B. Bullock, ‘Artificial Intelligence, Discretion, and Bureaucracy’ (2019) 49(7) The American Review of Public Administration 751, 752.

[25] George G. Zheng, 'Ordered Innovation: China’s Approach to AI Regulation and its Global South Implications' (Lecture Slides, Shanghai Jiao Tong University, 10 September 2025) sl 50.

[26] Tatevik Davtyan, ‘The U.S. Approach to AI Regulation: Federal Laws, Policies, and Strategies Explained’ (2025) 16(2) Journal of Law, Technology & The Internet 223, 254.

[27] Ibid 227.

[28] George G. Zheng, 'Ordered Innovation: China’s Approach to AI Regulation and its Global South Implications' (Lecture Slides, Shanghai Jiao Tong University, 10 September 2025) sl 50.

[29] Adam Litwin et al., ‘A Forum on Workplace AI Regulation Around the World’ (2024) 77(5) ILR Review 799, 809.

[30] George G. Zheng, 'Ordered Innovation: China’s Approach to AI Regulation and its Global South Implications' (Lecture Slides, Shanghai Jiao Tong University, 10 September 2025) sl 50.

[31] CHIPS and Science Act, Pub L No 117-167, 117th Congress (HR 4346, 2022).

[32] George G. Zheng, 'Ordered Innovation: China’s Approach to AI Regulation and its Global South Implications' (Lecture Slides, Shanghai Jiao Tong University, 10 September 2025) sl 49.

[33] Adam Litwin et al., ‘A Forum on Workplace AI Regulation Around the World’ (2024) 77(5) ILR Review 799, 811.

[34] George G. Zheng, 'Ordered Innovation: China’s Approach to AI Regulation and its Global South Implications' (Lecture Slides, Shanghai Jiao Tong University, 10 September 2025) sl 53.

[35] Ibid.

[36] Executive Order 14110 on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (30 October 2023) s 6.

[37] Thomas Wischmeyer and Timo Rademacher, Regulating Artificial Intelligence (Springer, 1st ed, 2020) 264.

[38] George G. Zheng, 'Ordered Innovation: China’s Approach to AI Regulation and its Global South Implications' (Lecture Slides, Shanghai Jiao Tong University, 10 September 2025) sl 49.

[39] Grant Gross, ‘The Complex Patchwork of US AI Regulation Has Already Arrived’, CIO (Website, 5 April 2024) <http://ezproxy.lib.uts.edu.au/login?url=https://www.proquest.com/trade-journals/complex-patchwork-us-ai-regulation-has-already/docview/3033828220/se-2>.

[40] Heidi L. Frostestad, ‘AI Regulation in a ChatGPT Era’ (2024) 32(1) Indiana Journal of Global Legal Studies 1, 9.

[41] Public Act 101-0260 (‘AI Video Interview Act’) Illinois General Assembly (2020).

[42] SB24-205 (‘Consumer Protections for Artificial Intelligence’) Colorado General Assembly (2024).

[43] SB53 (‘California’s AI Reporting Bill’) California Senate (2025).

[44] Ibid.

[45] Grant Gross, ‘The Complex Patchwork of US AI Regulation Has Already Arrived’, CIO (Website, 5 April 2024) <http://ezproxy.lib.uts.edu.au/login?url=https://www.proquest.com/trade-journals/complex-patchwork-us-ai-regulation-has-already/docview/3033828220/se-2>.

[46] Ibid.

[47] Charter of Fundamental Rights of the European Union, opened for signature 7 December 2000, [2000] OJ C 364/1 (entered into force 1 December 2009).

[48] Ainhoa López, ‘The systematics of the European Artificial Intelligence Act in the context of the fundamental rights of the Union: the myth of the digital constitutionalism’ (2024) Deusto Journal of Human Rights 73, 82.

[49] European Commission, ‘Study to Support an Impact Assessment of Regulatory Requirements for Artificial Intelligence in Europe: Final Report’ (Publications Office of the European Union, 2021) 31.

[50] George G. Zheng, 'Ordered Innovation: China’s Approach to AI Regulation and its Global South Implications' (Lecture Slides, Shanghai Jiao Tong University, 10 September 2025) sl 43.

[51] Ainhoa López, ‘The systematics of the European Artificial Intelligence Act in the context of the fundamental rights of the Union: the myth of the digital constitutionalism’ (2024) Deusto Journal of Human Rights 73, 85.

[52] Dan Svantesson, ‘The European Union Artificial Intelligence Act: Potential implications for Australia’ (2021) 47(1) Alternative Law Journal 4, 6.

[53] Ibid.

[54] Regulation (EU) 2016/679 (‘General Data Protection Regulation’) [2016] OJ L 119/1 (entered into force 25 May 2018);

Regulation (EU) 2024/1689 (‘Artificial Intelligence Act’) [2024] OJ L (entered into force 1 August 2024) arts 5, 6, 59.

[55] Rafaella Nogaroli, Medical Liability and Artificial Intelligence: Brazilian and European Legal Approaches (Springer, 1st ed) 203.

[56] Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions, Fostering a European Approach to Artificial Intelligence (Brussels, 21 April 2021, COM(2021) 205 final) (‘Communication’) 2.

[57] Rafaella Nogaroli, Medical Liability and Artificial Intelligence: Brazilian and European Legal Approaches (Springer, 1st ed) 253.

[58] Regulation (EU) 2024/1689 (‘Artificial Intelligence Act’) [2024] OJ L (entered into force 1 August 2024) arts 5, 6, 59.

[59] Johanna Chamberlain, 'The Risk-Based Approach of the European Union’s Proposed Artificial Intelligence Regulation: Some Comments from a Tort Law Perspective' (2023) 14(1) European Journal of Risk Regulation 1, 2.

[60] Cristina Frattone, ‘Anatomy of a Fall: On the Anticipated Withdrawal of the AI Liability Directive Proposal’, Verfassungsblog: On Matters Constitutional (6 May 2025) <https://verfassungsblog.de/anatomy-of-a-fall-aiact-aild-pld/>.

[61] Johanna Chamberlain, 'The Risk-Based Approach of the European Union’s Proposed Artificial Intelligence Regulation: Some Comments from a Tort Law Perspective' (2023) 14(1) European Journal of Risk Regulation 1, 7.

[62] Ibid 2.

[63] Regulation (EU) 2024/1689 (‘Artificial Intelligence Act’) [2024] OJ L (entered into force 1 August 2024) art 3(11).

[64] Ibid art 2(1)(b).

[65] Dan Svantesson, ‘The European Union Artificial Intelligence Act: Potential implications for Australia’ (2021) 47(1) Alternative Law Journal 4, 6.

[66] Cailean Osborne, ‘What Open Source Developers Need to Know about the EU AI Act’, The Linux Foundation Blog (3 April 2025) <https://linuxfoundation.eu/newsroom/ai-act-explainer>.

[67] Dan Svantesson, ‘The European Union Artificial Intelligence Act: Potential implications for Australia’ (2021) 47(1) Alternative Law Journal 4, 6.

[68] Ibid 5.

[69] George G. Zheng, 'Ordered Innovation: China’s Approach to AI Regulation and its Global South Implications' (Lecture Slides, Shanghai Jiao Tong University, 10 September 2025) sl 46.

[70] Max Gulker and Marc Scribner, ‘A moratorium on state laws targeting AI would safeguard innovation and interstate commerce’, Reason Foundation (7 August 2025) <https://reason.org/commentary/a-moratorium-on-state-laws-targeting-ai-would-safeguard-innovation-and-interstate-commerce/>.

[71] Emmie Hine and Luciano Floridi, ‘Artificial intelligence with American values and Chinese characteristics: a comparative analysis of American and Chinese governmental AI policies’ (2024) 39(1) AI & Society 257, 260.

[72] Ibid.

[73] Ibid 262.

[74] Ibid 258.

[75] Ibid 260.

[76] Ibid 259.

[77] Tatevik Davtyan, ‘The U.S. Approach to AI Regulation: Federal Laws, Policies, and Strategies Explained’ (2025) 16(2) Journal of Law, Technology & The Internet 223, 225.

[78] Huw Roberts et al., ‘Achieving a ‘Good AI Society’: Comparing the Aims and Progress of the EU and the US’ (2021) 27(6) Science and Engineering Ethics 1, 4.

[79] Ibid.

[80] Ibid.

[81] Satish Kumar and Akansha Yadav, ‘Recent trends in major world jurisdictions regarding copyright law and works generated by artificial intelligence: A comparative analysis of the European Union, the United States, and Japan’ (2024) 3220(1) AIP Conference Proceedings 1, 2.

[82] Ibid 10.

[83] Adil S. Al-Busaidi et al., ‘Redefining boundaries in innovation and knowledge domains: Investigating the impact of generative artificial intelligence on copyright and intellectual property rights’ (2024) 9(4) Journal of Innovation & Knowledge 1, 2.

[84] Huw Roberts et al., ‘Governing artificial intelligence in China and the European Union: Comparing aims and promoting ethical outcomes’ (2022) 39(2) The Information Society 79, 85.

[85]《互联网信息服务算法推荐管理规定》[Regulation on the Management of Algorithmic Recommendations] (People’s Republic of China) Cyberspace Administration of China, Order No 9, 31 December 2021.

[86] Matt Sheehan, ‘China’s AI Regulations and How They Get Made’ (2023) 1(1) Carnegie Endowment for International Peace 1, 10.

[87] Ibid.

[88]《中华人民共和国个人信息保护法》[Personal Information Protection Law] (People’s Republic of China) National People’s Congress of the People’s Republic of China, Order No 91, 20 August 2021.

[89] Graham Greenleaf, ‘China’s Completed Personal Information Protection Law: Rights Plus Cyber-security’ (2021) 172(1) University of New South Wales Law Research Series 91, 91.

[90] Ibid 92.

[91] Ibid.

[92]《互联网信息服务深度合成管理规定》[Administrative Provisions on Deep Synthesis of Internet Information Services] (People’s Republic of China) Cyberspace Administration of China, Order No 12, 10 January 2023.

[93]《生成式人工智能服务管理暂行办法》[Interim Measures for the Administration of Generative Artificial Intelligence Services] (People’s Republic of China) Cyberspace Administration of China, Order No 15, 15 August 2023.

[94] George G. Zheng, 'Ordered Innovation: China’s Approach to AI Regulation and its Global South Implications' (Lecture Slides, Shanghai Jiao Tong University, 10 September 2025) sl 34.

[95] Harry Qu, Tobias Bräutigam and James Gong, ‘Preparing for compliance: Key differences between EU, Chinese AI regulations’, IAPP (5 February 2025) <https://iapp.org/news/a/preparing-for-compliance-key-differences-between-eu-chinese-ai-regulations>.

[96] Huw Roberts et al., ‘Governing artificial intelligence in China and the European Union: Comparing aims and promoting ethical outcomes’ (2022) 39(2) The Information Society 79, 85.

[97] Joshua Meltzer and Paul Triolo, ‘The Bletchley Park process could be a building block for global cooperation on AI safety’, Brookings (4 October 2024) <https://www.brookings.edu/articles/the-bletchley-park-process-could-be-a-building-block-for-global-cooperation-on-ai-safety/>.

[98] Heidi L. Frostestad, ‘AI Regulation in a ChatGPT Era’ (2024) 32(1) Indiana Journal of Global Legal Studies 1, 6.

[99] Gerard Buckley, Tristan Caulfield and Ingolf Becker, 'How might the GDPR evolve? A question of politics, pace and punishment' (2024) 54(1) Computer Law & Security Review 1, 4.

[100] Kai Zenner, ‘A law for foundation models: the EU AI Act can improve regulation for fairer competition’, OECD.AI Policy Observatory (20 July 2023) <https://oecd.ai/en/wonk/foundation-models-eu-ai-act-fairer-competition>.

[101] Huw Roberts et al., ‘Governing artificial intelligence in China and the European Union: Comparing aims and promoting ethical outcomes’ (2022) 39(2) The Information Society 79, 86.

[102] Maria Badillo, ‘Chile’s New Data Protection Law: Context, Overview, and Key Takeaways’, Future of Privacy Forum (27 February 2025) <https://fpf.org/blog/chiles-new-data-protection-law-context-overview-and-key-takeaways/>.

[103] Ibid.

[104] Ben Crum, ‘Brussels effect or experimentalism? The EU AI Act and global standard-setting’ (2025) 14(3) Internet Policy Review 1, 18.

[105] George G. Zheng, 'Ordered Innovation: China’s Approach to AI Regulation and its Global South Implications' (Lecture Slides, Shanghai Jiao Tong University, 10 September 2025) sl 47.

[106] Ibid sl 40.

[107] Ibid sl 26.

[108] Huw Roberts et al., ‘Governing artificial intelligence in China and the European Union: Comparing aims and promoting ethical outcomes’ (2022) 39(2) The Information Society 79, 83.

[109] ‘Report from the Expert Group on How AI Principles Should Be Implemented’, AI Governance in Japan (9 July 2021) <https://www.meti.go.jp/shingikai/mono_info_service/ai_shakai_jisso/pdf/20210709_8.pdf>.

[110] Harry Qu, Tobias Bräutigam and James Gong, ‘Preparing for compliance: Key differences between EU, Chinese AI regulations’, IAPP (5 February 2025) <https://iapp.org/news/a/preparing-for-compliance-key-differences-between-eu-chinese-ai-regulations>.

[111] Ibid.

[112] George G. Zheng, 'Ordered Innovation: China’s Approach to AI Regulation and its Global South Implications' (Lecture Slides, Shanghai Jiao Tong University, 10 September 2025) sl 26.

[113] Ibid.

[114] Heidi L. Frostestad, ‘AI Regulation in a ChatGPT Era’ (2024) 32(1) Indiana Journal of Global Legal Studies 1, 4.

[115] Ibid.

[116] George G. Zheng, 'Ordered Innovation: China’s Approach to AI Regulation and its Global South Implications' (Lecture Slides, Shanghai Jiao Tong University, 10 September 2025) sl 26.

 
 
 
