Abstract: The central challenge in AI governance today is striking a balance between controlling the dangers of AI and fostering AI innovation. To ease the conflict between regulation and innovation, regulators in a number of nations have progressively extended the regulatory sandbox, first implemented in the financial sector, to AI governance. The AI regulatory sandbox is a new and feasible route for AI governance in China: it helps to manage the risks of technology application while avoiding the inhibition of AI innovation. It keeps innovators' space for trial and error within the regulatory purview while offering a controlled setting for developing and testing novel AI that has not yet been put on the market. By providing full-cycle governance of AI under the principles of agility and inclusive prudence, the regulatory sandbox offers an alternative to conventional top-down hard regulation, ex-post regulation, and strict regulation. However, the current system also has inherent limitations and practical obstacles that must be overcome through more rational and effective institutional design. To realize its positive impact on AI governance, the AI regulatory sandbox system should build and improve the access and exit mechanism, the coordination mechanism between the sandbox and personal information protection, and the mechanisms of exemption, disclosure, and communication.
Keywords: artificial intelligence governance; regulatory sandbox; agile regulation; inclusive and prudent regulation; access and exit mechanism; coordination mechanism
CLC: TP18; D923; D922.17    Document Code: A    Article ID: 2096-9783(2024)05-0136-13
1 Formulation of the Problem
Since the detrimental effects of AI applications came to light, the number of regulatory initiatives aimed at AI has grown quickly. Governments have been exploring governance routes for AI in an attempt to control and reduce the risks posed by its application. China has introduced the Interim Measures for the Administration of Generative Artificial Intelligence Services, further improving the regulatory system for generative AI. The EU has passed the Artificial Intelligence Act (AI Act) and reached agreement on the rules for AI regulation. The United States, meanwhile, has published the Guidance for Regulation of Artificial Intelligence Applications as well as principles for the regulation of artificial intelligence. Researchers interested in AI governance have likewise put forward regulatory measures to govern AI, such as "transparency and openness of artificial intelligence", "algorithmic auditing and risk assessment", and "artificial intelligence filing and registration". This global AI governance dynamic has been described as a "race to AI regulation"[1].
However, considering the huge social benefits generated by the application of AI, regulators should also support AI innovation and encourage AI competition among enterprises while preventing technological risks. In other words, efficient AI governance has the dual objectives of preventing risks and supporting innovation, which makes technology governance more complex and challenging for regulators. The concern is that regulatory intervention could stifle AI innovation if artificial intelligence is subjected to overly stringent and restrictive rules before the technology has had time to mature. Especially for disruptive AI innovation, an overly rigid and strict governance approach makes it impossible to forecast and appropriately handle the challenges, and may instead have a chilling effect on innovation. Yet once the systemic risks of AI have become more visible and recognizable, it may be too late for regulators to implement effective risk-control tools to govern potentially out-of-control AI systems. This governance dilemma is the famous Collingridge Dilemma: "when a technology is relatively easy to control, its effects are difficult to fully anticipate; when the need to control the technology becomes strong, such control has become very expensive, difficult, and time-consuming"[2]. Therefore, in order to accommodate ongoing advancements in AI, regulators must continue to modify their regulatory regimes and choose more flexible, agile, and resilient governance. Otherwise, the two main objectives of efficient AI governance will not be achieved.
Regulatory sandboxes, with their flexible, agile, and experimental approach, can be a valuable tool for identifying and controlling technological risks while also supporting innovation, offering a promising new route for AI governance. A regulatory sandbox provides a safe space in which AI innovations can be tested, limiting the geographic and demographic scope of risks. With the help of a regulatory sandbox, regulators can also more thoroughly comprehend and recognize the potential risks of AI and take proportionate risk-control measures to protect the legitimate rights and interests of the public. In practice, several nations, including China, have already taken action and recently established AI regulatory sandboxes in an effort to strike a balance between regulation and innovation.
For example, China has issued the Notice on the Trial Implementation of the Sandbox Regulatory System for Automobile Safety and the Notice on Carrying out Pilot Work on Access and Road Traffic of Intelligent Connected Vehicles to initiate pilot work on sandbox regulation of autonomous driving systems. The European Commission, for its part, requires member state authorities under the AI Act to establish regulatory sandboxes aimed at testing artificial intelligence before it is put on the market. While regulatory sandboxes have long been seen as a regulatory tool for fintech, there is little literature examining the application and institutional composition of such sandboxes in the context of AI governance. Although a few foreign scholars have discussed the regulatory sandbox in response to the EU AI Act1, Chinese literature studying the AI regulatory sandbox system remains scarce. Some literature analyzes the regulatory sandbox for intelligent connected vehicles, but has not yet discussed it from the perspective of AI[3]. Therefore, this paper focuses on the connotation and significance, the underlying regulatory concepts, and the institutional construction of the AI regulatory sandbox, with a view to exploring a new governance route that balances AI regulation and innovation.
2 The Connotation and Development Status of the Regulatory Sandbox for Artificial Intelligence
2.1 The Connotation of the Regulatory Sandbox for Artificial Intelligence
Regulatory sandboxes are legal experiments that provide a "safe space"2 in which innovators can test new technologies, products, and services for a specific period of time in order to promote innovation. The term "sandbox" originally comes from computer security, where it denotes a virtualization technique for running applications in a restricted environment: programs whose sources are untrustworthy, destructive, or of indeterminate intent are given a testing environment in which the code access granted to the application is restricted[4]. In the context of technology regulation, the regulatory sandbox is a regulatory tool for testing new technologies and products in an artificially created regulatory environment. It is designed to balance the tension between technology regulation and technological innovation: it provides a testing environment controlled by the regulator while limiting the reach of technological risks, which is why it is called a "safe space".
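To make the computing origin of the term concrete, the following is a minimal sketch, assuming a POSIX system and Python's standard library, of running an untrusted program under restricted resources. The specific limits and the script name are illustrative assumptions only; real sandboxes add far stronger isolation (filesystem, network, and system-call restrictions).

```python
import resource
import subprocess

def run_sandboxed(cmd, cpu_seconds=2, mem_bytes=256 * 2**20):
    """Run an untrusted command under CPU-time and memory limits (POSIX only)."""
    def limit_resources():
        # Applied in the child process just before it executes the command.
        resource.setrlimit(resource.RLIMIT_CPU, (cpu_seconds, cpu_seconds))
        resource.setrlimit(resource.RLIMIT_AS, (mem_bytes, mem_bytes))

    result = subprocess.run(
        cmd,
        preexec_fn=limit_resources,  # install the limits in the child
        capture_output=True,
        text=True,
        timeout=cpu_seconds + 1,     # wall-clock backstop
    )
    return result.stdout

# Hypothetical usage: test a script of unknown intent inside the restricted environment.
# print(run_sandboxed(["python3", "untrusted_script.py"]))
```

The regulatory analogue is direct: the program (the innovation) runs, but only within limits set and observed by the controlling party.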
Although initially created as a regulatory tool for fintech, regulatory sandboxes are increasingly being used to govern other new technologies and products, such as artificial intelligence. The regulatory sandbox was born in 2014 in the context of the UK's fintech policy, and is therefore generally considered a "safe space" for certain financial entities designed to promote innovation in fintech or the financial industry[5]. In fact, regulatory sandboxes are not limited to the financial sector; they are widely used in sectors such as health, legal services, aviation, transportation, and logistics, as well as for emerging technologies such as AI and blockchain. The European Council has issued Conclusions on Regulatory Sandboxes and Experimentation Clauses as tools for an innovation-friendly, future-proof and resilient regulatory framework, which emphasized that regulatory sandboxes can provide important growth opportunities for all businesses, especially SMEs and start-ups, in industry, services, and other sectors[6]. This indicates that regulatory sandboxes can serve not only fintech companies but also companies in other industries, such as those that develop and provide AI.
The regulatory sandbox for AI is a controlled environment, or safe space, dedicated to AI, especially innovative and high-risk AI, in which AI systems can be tested and validated. The EU AI Act describes the regulatory sandbox as a measure to support innovation. Under Article 3(55), an AI regulatory sandbox means a controlled framework set up by a competent authority which offers providers or prospective providers of AI systems the possibility to develop, train, validate, and test, where appropriate in real-world conditions, an innovative AI system, pursuant to a sandbox plan for a limited time under regulatory supervision[7]. In addition, the AI regulatory sandbox temporarily exempts, relaxes, or does not enforce certain regulatory rules to reduce the regulatory burden on innovators. The regulatory rules in an AI regulatory sandbox are more lenient than the existing AI regulatory regime, benefiting innovators who enter the sandbox and accelerating the development and diffusion of new AI. The OECD's 2023 report on regulatory sandboxes in artificial intelligence emphasizes the need for AI regulatory sandboxes to be evidence-based and to impose no new unnecessary burdens on market participants, while reducing existing unnecessary burdens[8]. Sandboxes allow regulators to temporarily impose a different regulatory regime on a small group of firms. Companies that enter the regulatory sandbox are given the opportunity to test new types of AI without fully complying with existing regulations, thereby reducing the cost of innovation. For instance, the EU AI Act provides an exemption from administrative fines: if AI providers observe the specific sandbox plan and terms and follow in good faith the guidance given by the national competent authority, no administrative fines shall be imposed for infringements of the AI Act.
Another goal of the regulatory sandbox is to allow regulators themselves to better learn about and understand AI: it can both promote the development of AI and provide regulators with relevant technical information. In fact, the key driver behind the Collingridge Dilemma is the information asymmetry between regulators and the regulated. Compared with the regulated technology companies, regulators often lack an information advantage and are prone to rushing out stringent regulatory measures without understanding what is new. The regulatory sandbox creates a window for regulators to take a closer look at AI technology. The sandbox generates usable empirical data and provides regulators with the knowledge to make better regulatory decisions. Through the regulatory sandbox, regulators also engage in iterative learning and make rapid regulatory adjustments as trial results are generated[9]. This enables regulators to develop timely regulatory rules to accommodate and monitor AI innovation.
2.2 The Practical Development and Operational Logic of the Regulatory Sandbox for Artificial Intelligence
Regulators in China, other nations, and international organizations have used regulatory sandboxes for AI governance to reconcile the conflict between regulation and innovation. Overall, the number of AI regulatory sandboxes is steadily rising and their development is progressing well. Nonetheless, since these sandboxes are still in their infancy, there are not yet many AI regulatory sandboxes at the national level in China. These initial cases will nonetheless provide valuable experience and inspiration for how to build and improve the AI regulatory sandbox system moving forward.
In the early stage of its development, the regulatory sandbox for AI governance did not arise as a stand-alone tool but was endogenous to regulatory sandboxes for data governance. At that stage, regulators had not yet realized that the regulatory sandbox could serve as a new governance route in its own right. But given the close relationship between AI development and data processing, regulators discovered its positive effect on AI governance when setting up regulatory sandboxes to promote personal data protection and innovation. Through its Personal Data Regulatory Sandbox, launched in 2019, the UK Information Commissioner's Office (ICO) has boosted the development of AI applications by companies participating in the sandbox trials[10]. The sandbox was originally intended to strengthen data protection and to support companies using personal data to develop new products and services in the public interest. Since AI is a data-driven technology, the sandbox can also support the innovation of AI trained with data. Thus, through the regulatory sandbox for data governance, regulators initially explored a new route for AI governance. As regulators gained governance experience, specialized regulatory sandboxes for AI emerged. In 2020, the Norwegian Data Protection Authority (Datatilsynet) introduced an AI regulatory sandbox designed to promote ethical, privacy-friendly, and responsible AI innovation. Inspired by the ICO's sandbox, companies participating in the Norwegian regulatory sandbox are guided to develop AI products and services that comply with data protection law, accord with social ethics, and respect individuals' fundamental rights[11].
Moreover, the AI regulatory sandbox has been codified in statute law, with specialized AI regulators responsible for creating sandboxes and observing AI experiments. Following the introduction and progression of the draft EU AI Act, Spain issued Royal Decrees 729/2023 and 817/2023 in 2023, which respectively established the Spanish Agency for the Supervision of Artificial Intelligence (AESIA) and created a regulatory sandbox for assessing the compliance of AI with the proposed AI Act of the European Parliament and of the Council[12]. This was the first EU regulatory sandbox aimed at experimenting with AI. The European Parliament officially passed the Artificial Intelligence Act on March 13, 2024. EU member states must clearly provide for AI regulatory sandboxes in their domestic laws and ensure that the sandboxes start operating 24 months after the AI Act comes into effect. At this point, the AI regulatory sandbox, as a new governance route, has acquired a statutory basis. Countries outside the EU are also likely to prescribe AI regulatory sandboxes in order to balance AI development and risk control. Although China has not yet included the AI regulatory sandbox in formal legal provisions, it has actively explored AI regulatory sandboxes in the automotive sector and issued the Notice on the Trial Implementation of the Sandbox Regulatory System for Automobile Safety.
Taking the regulatory sandbox created by AESIA as an example, the operation of an AI regulatory sandbox passes through five main stages. After the Spanish decree authorizes AESIA, the AI regulator, to set up AI regulatory sandboxes, new AI developers or providers must first apply to participate in sandbox testing. Secondly, the regulator evaluates the applications, determines which AI systems require sandbox testing, and screens the sandbox participants. Once the participants are determined, the sandbox enters the preparation phase, in which the regulator sets the parameters for the formal testing phase and the requirements for participants. The regulatory sandbox then officially begins the testing phase, during which participants must actively communicate and interact with the regulator: the regulator provides guidance to ensure participants comply with AI governance rules, while participants submit self-assessments of their compliance and report at any time events that may be considered illegal[13]. The final stage is evaluation, in which the experimental results of the AI in the sandbox are assessed. If the test is successful, the AI can enter the market; if the test fails, participants should trace back the causes, optimize the AI, and test it again.
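Purely as an illustration, the five stages described above can be modeled as a simple state machine. The sketch below is a hypothetical rendering of that lifecycle; the stage names and transition rules are assumptions paraphrased from the AESIA description, not an official specification.

```python
from enum import Enum, auto

class SandboxStage(Enum):
    APPLICATION = auto()   # developer or provider applies to participate
    EVALUATION = auto()    # regulator screens applications and selects participants
    PREPARATION = auto()   # regulator sets test parameters and participant requirements
    TESTING = auto()       # supervised trials with continuous two-way communication
    ASSESSMENT = auto()    # trial results are evaluated

def next_stage(stage, passed=True):
    """Advance the lifecycle; a failed assessment loops back to testing."""
    if stage is SandboxStage.ASSESSMENT:
        # Success: exit the sandbox (None) toward the market; failure: retest.
        return None if passed else SandboxStage.TESTING
    order = list(SandboxStage)
    return order[order.index(stage) + 1]

# Walk through a successful trial from application to market exit.
stage = SandboxStage.APPLICATION
while stage is not None:
    print(stage.name)
    stage = next_stage(stage)
```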
To summarize, the AI regulatory sandbox is evolving into a new governance route, and AI regulators in various countries are exploring its application in AI governance. China, too, is taking an active role in innovating regulatory tools to promote the deployment of new AI applications through regulatory sandboxes, pilots, and model applications[14]. This regulatory innovation accelerates the commercialization of new AI technologies.
3 The Significance and Issues of the Regulatory Sandbox for Artificial Intelligence Governance in China
3.1 The Institutional Advantages of the Regulatory Sandbox for Artificial Intelligence
3.1.1 Preventing the Stifling of Innovation in the AI Race
One justification for AI regulatory sandboxes lies in taking an inclusive and prudent approach to new types of AI to support their innovation and commercial application. For years, there have been concerns that stringent regulation of AI may hinder its future development[15]. AI governance practice suggests that one route to addressing this concern is the creation of AI regulatory sandboxes, which help prevent regulators from stifling innovation under a stringent liability regime. Because the sandbox does not prohibit experimentation with high-risk AI (e.g., black-box AI), it permits AI providers to test the technology in a supervisory context in order to understand the impact of high-risk AI on the market and society[16]. In fact, for innovative technologies with high potential risks and uncertainties, AI regulatory sandboxes can encourage innovation and tolerate error, so that emerging technologies need not be shelved out of concern about risks, while also avoiding the unfavorable situation of wholesale liberalization followed by lagging intervention[17].
In addition, sandboxes can reduce the constraints that strict regulatory regimes impose on innovation, often by temporarily exempting, relaxing, or not enforcing certain regulatory rules. This is particularly important for start-ups and MSMEs, which often find it more difficult to comply with stringent regulatory requirements. Compared with firms holding a leading or even dominant market position, start-ups and MSMEs lack the capital and experience to fulfill their obligations under the regulatory regime. Sandboxes therefore lower the threshold and barriers for MSMEs to enter the AI market. It has also been shown that participation in sandbox trials has a positive spillover effect on the birth and financing of high-growth start-ups[18].
3.1.2 Controlling the Risks of the Application of AI Technology
The second justification for AI regulatory sandboxes is to ensure that the pace of regulatory innovation keeps up with the pace of technological iteration, and to anticipate and control the risks of applying AI. In the digital age, the speed of AI innovation makes it difficult for regulators to promptly identify regulatory needs and find appropriate regulatory measures. Some scholars have labeled this the pacing problem: the phenomenon that regulatory and legal systems are not updated fast enough to keep pace with technological development[19]. The pacing problem poses a major challenge to traditional regulatory regimes, which take months or even years to materialize into formal legal requirements. AI, however, is advancing at an astonishing rate, and regulations governing it may be outdated before they are formalized. AI regulatory sandboxes can complement traditional regulatory regimes by ensuring that regulators are fully aware of new AI products and quickly adapt their regulatory tools. Through sandboxes, regulators can proactively improve their insight into emerging technological developments and identify application risks earlier. Enterprises participating in sandbox trials can also receive early risk alerts from regulators, avoiding sunk costs and legal liabilities for unacceptable risks arising from the AI they develop.
3.1.3 Strengthening Interaction and Cooperation Between Regulators and Innovators
The third justification for AI regulatory sandboxes is to strengthen the interaction and cooperation between regulators and innovators, and to promote the joint achievement of AI governance objectives by multiple subjects. Regulatory sandboxes place more emphasis on the initiative of the regulated than traditional regulatory methods do, and aim to establish an effective communication method, cooperation mechanism, and interactive relationship between regulators and the regulated[20]. Traditional regulation is characterized by static unidirectionality: regulators mainly take unilateral mandatory measures to intervene in the market through top-down "command and control", and there is a lack of dynamic, effective, and equal communication between regulators and the regulated[21]. In contrast, the governance concept implemented by the sandbox has shifted from one-way regulation to pluralistic governance, striving to mobilize innovative enterprises and other non-regulators to participate jointly in AI governance. Regulatory sandboxes themselves can play an active role in pluralistic AI governance by serving as an exchange platform between regulators and sandbox participants. With the help of the regulatory sandbox, regulators provide AI developers with compliance guidance, risk alerts, and suggestions; AI developers, in turn, provide regulators with information on the background of the technology's application, its development, and test results. In the sandbox, regulators can engage in dialogue with stakeholders in industries that use AI, accessing information and influencing the development of AI in a socially beneficial way[22].
3.2 The Feasibility of Adopting the Regulatory Sandbox for AI Governance in China
The significance of the regulatory sandbox lies not only in its institutional advantages but also in the feasibility of embedding it in China's AI governance practice. This feasibility is demonstrated in the following aspects:
Firstly, the technology regulation principles reflected in the regulatory sandbox are consistent with the AI governance principles that China adheres to. China has always adhered to the unity of risk prevention and innovation support in AI governance, and has continuously demanded that governance principles keep pace with technological development. These principles have been updated in normative documents such as the Principles for the Governance of the New Generation of Artificial Intelligence and the Interim Measures for the Administration of Generative Artificial Intelligence Services. The regulatory sandbox, as a new type of regulatory tool, likewise updates the principles of technology regulation, reflecting agile, full-cycle, and inclusive and prudent regulation. The regulatory sandbox therefore conforms to China's AI governance principles and fits the development trend of China's governance methods.
Secondly, both AI developers and the Chinese public have a strong practical demand for regulatory sandboxes. On the one hand, Chinese developers need regulatory sandboxes to develop and test new AI, because sandboxes can evaluate the operation of AI while helping developers find its shortcomings. Sandboxes also reduce the regulatory burden on developers and provide more flexible regulatory policies. On the other hand, the Chinese public, as users of new AI, also need regulatory sandboxes to protect their legitimate rights. Sandboxes establish rights-protection mechanisms for consumers, requiring AI with high or uncertain risk to undergo sandbox testing before entering the market.
Thirdly, China has initiated pilot programs and legislative exploration of regulatory sandboxes, incorporating them into its AI governance framework. In regulating AI in the automotive sector, local governments have effectively created local sandboxes for the regulation of autonomous driving systems. Shanghai and Shenzhen have introduced regulations on the management of intelligent connected vehicles and established regulatory sandboxes, allowing intelligent connected vehicles to undergo road testing on road sections designated by the traffic department. Meanwhile, Beijing has taken the lead in exploring innovative regulatory sandbox mechanisms and has established a regulatory sandbox mechanism for the Beijing Artificial Intelligence Data Training Base[23]. China has thus already tried to use the regulatory sandbox as a means of AI governance, which directly demonstrates its feasibility.
3.3 The Issues of the Regulatory Sandbox for AI Governance
While the AI regulatory sandbox can balance the relationship between regulation and innovation, it also has limitations and practical obstacles. In applying the sandbox to AI governance, regulators should see not only its significance and advantages but also these limitations and obstacles.
3.3.1 The Inherent Limitations of the Regulatory Sandbox for AI
The AI regulatory sandbox has inherent limitations arising from its own nature. First, unlike formal legal norms with general applicability, regulatory sandboxes are legal experiments with a case-by-case character. While this characteristic can prevent the risks of AI applications from spreading, it also limits the broad applicability of regulatory sandboxes and increases the burden on regulators. From entry to operation, the regulatory sandbox depends heavily on tailor-made, continuous, two-way interaction between the testing enterprise and the regulator based on the specific situation, so the "replicable and generalizable" experience it produces is relatively limited[15]. The case-by-case nature of the regulatory sandbox also imposes higher requirements on the communication mechanism between the regulator and the testing company: mutual trust and active communication are necessary for regulators to design and implement tests for a particular AI technology well. On top of that, regulatory sandboxes personalized for a particular AI face high creation costs. AI regulatory sandboxes are also an experiment for regulators themselves, requiring them to constantly update their governance concepts and develop their governance capabilities.
Second, there is always a gap between the test environment simulated by the regulatory sandbox and the real environment, which means that some technical defects and application risks of AI cannot be detected. Regulators can only do their best to simulate a controlled test environment with reference to the real environment; the regulatory sandbox cannot completely reproduce the real environment in which AI is applied and operated. Even if an AI is recognized as posing no significant risks after sandbox testing, its safety must still withstand the test of macroeconomic policies and social and cultural environments. Changes in the conditions and variables relative to the testing environment can easily give rise to new risks in the innovative product or service, or trigger defects that went undiscovered during the test[24].
3.3.2 The Possible Practical Obstacles of Adopting the Regulatory Sandbox for AI Governance in China
Besides its inherent institutional limitations, the AI regulatory sandbox will also encounter practical obstacles when operating in China, affecting its governance efficacy. First, the AI regulatory sandbox has a high institutional cost. Compared with traditional ex-post regulation, the regulatory sandbox is a full-cycle regulatory approach that runs through the whole process of AI development and requires more public resources and financial support. The application of the regulatory sandbox therefore requires regulators to devote more expertise and energy to AI governance. Regulators should accordingly be cautious in creating regulatory sandboxes and, where necessary, allow only a small number of AI developers to participate in sandbox testing of new AI. In practice, regulators will inevitably face the question of how to select sandbox participants, which is in essence the problem of determining the sandbox's access conditions. The most common access condition is "genuine innovation"[25], but this is somewhat ambiguous and makes the selection of participants challenging in practice.
Second, the AI regulatory sandbox system will also intersect with other legal systems in practice. As a new tool for AI governance, the regulatory sandbox must deal with its intersection with laws in the areas of personal information, data, platforms, and cybersecurity[26]. In particular, the intersection between regulatory sandboxes and personal information protection rules needs further clarification. The development of AI usually involves large-scale use of personal data. Whether exemptions are needed in sandbox trials for certain data uses that do not comply with the Personal Information Protection Law (PIPL), in order to incentivize innovation, will therefore be a difficult issue for regulators to judge. Likewise, the AI regulatory sandbox will need to operate in compliance with laws such as the Data Security Law while avoiding unreasonable regulatory burdens on innovators. This, too, warrants careful consideration.
Most critically, China currently lacks unified and operable principles, standards, and procedures for the mechanism design of the regulatory sandbox, which impairs the governance utility of the AI regulatory sandbox. The good operation of the AI regulatory sandbox relies on reasonable mechanism design; otherwise a series of problems will emerge, potentially even amplifying technological risks and undermining the order of competition. It can be said that the biggest practical challenge facing the regulatory sandbox in AI governance is mechanism design.
In addition to the access conditions, there are four main design problems. Firstly, the duration of the sandbox test may be set too short or too long: too short a duration means that testing and assessment of AI risks cannot be completed on time, while too long a duration delays the market entry and commercial application of the AI. Secondly, the exemption clauses may be ambiguous, either failing to alleviate the regulatory burdens and legal liabilities of the testing enterprises and thus restricting their development of new AI, or alleviating them excessively, leading to the expansion of risks and harming the legitimate rights of consumers. The third problem is the transparency of the sandbox. Unreasonable provisions on sandbox information disclosure may distort the order of fair competition, disadvantaging AI providers not participating in the sandbox; but overly stringent disclosure requirements may distort the test results and create potential uncertainties. Fourthly, the exit and transition mechanisms of regulatory sandboxes have not yet been developed, and the subsequent arrangements require further clarification[27].
Despite these limitations and practical obstacles, the regulatory sandbox's positive effect on AI governance is not negated, because both the limitations and the obstacles can be mitigated through careful sandbox design and assessment. If sandboxes are reasonably designed and harmonize well with the PIPL, they can still contribute to AI governance.
4 The Updated Technology Regulatory Principles of the Regulatory Sandbox for Artificial Intelligence
The AI regulatory sandbox reflects renewed principles of technology regulation. As mentioned above, the pacing problem poses significant challenges to the regulatory system and is driving a shift in regulatory principles. Traditional hard, ex-post, and strict regulation can hardly keep pace with AI technology iteration anymore. The AI regulatory sandbox updates these principles to regulate AI in an agile, full-cycle, and inclusive and prudent manner, while supporting AI innovation.
4.1 The Agile Regulatory Principle Reflected in the Regulatory Sandbox for Artificial Intelligence
Regulatory sandboxes reflect the principle of agile regulation by responding to the development trend of AI innovation, quickly identifying risks, and ensuring that regulators dynamically adjust governance rules in a timely manner. Agile regulation is a new concept of technology regulation and governance that aims to resolve the contradiction between the lag of traditional policies and the rapid iterative development of technology. The principle was proposed in a 2018 World Economic Forum white paper and introduced into the field of AI governance; China's Principles for the Governance of the New Generation of Artificial Intelligence has absorbed it. Agile regulation is a progressive regulatory model with high sensitivity, high adaptability, and broad participation by multiple subjects. Its essence is to respond rapidly to citizens' demands through iterative innovation in the face of dynamic changes and uncertainties in the internal and external environments[28].
One of the tools of agile regulation is the regulatory sandbox. It encourages companies to test new AI products. AI regulatory sandboxes that implement the principle of agile regulation can address the objective challenge of "common ignorance" between regulators and developers in the context of the rapid development of AI technology, thereby enhancing the adaptability of the regulatory framework[29]. The AI regulatory sandbox provides learning opportunities for both regulators and firms to quickly adapt and respond in a timely manner to changes in the context where they have no proven experience to learn from[30]. Based on the trial results and the data and information provided by the sandbox feedback, regulators can adjust the regulatory approach and regulatory requirements in an agile manner and dynamically update the governance framework. In addition, regulators have ample discretion to regulate companies flexibly within the regulatory sandbox, without having to seek frequent changes in the law. This is the outward manifestation of the principle of agile regulation.
4.2 The Full-Cycle Regulatory Principle Reflected in the Regulatory Sandbox for Artificial Intelligence
Regulatory sandboxes provide a controlled environment for the development, deployment, validation, and testing of AI systems, and subject the entire process of AI experimentation to risk prevention, control, and governance, reflecting the concept of full-cycle dynamic oversight. Since an AI system is dynamic, a one-time inspection or audit may quickly become obsolete; continuous, full life-cycle dynamic monitoring and auditing of AI systems is therefore necessary[31]. A purely ex-post regulatory approach is no longer sufficient to prevent and control the risks arising from dynamically developing AI. Moreover, the automated nature of AI creates difficulties in risk anticipation and management, and in the case of catastrophic risks posed by AI systems, ex-post regulatory measures are highly likely to be ineffective[32]. The regulation of AI should therefore not be limited to ex-post measures but run through the whole life cycle of AI, transforming from single-point ex-post regulation to full life-cycle regulation spanning the ex ante, interim, and ex post stages.
The AI regulatory sandbox implements the principle of full-cycle regulation by moving regulatory measures up front. The sandbox allows regulators to intervene in and oversee AI trials before the AI is applied in the marketplace, emphasizing risk prevention. This governance route distinguishes itself from traditional single-point, ex-post regulation by embedding regulatory and governance requirements early in the life of AI technologies and products. Guidance at this stage is crucial to AI governance, as it helps avoid the dilemma of first generating harmful consequences and then governing them. The regulatory sandbox also plays a key role in articulating a comprehensive, whole-process AI governance system. AI governance is a systematic project oriented to the entire technology life cycle, requiring a deeply coupled and dynamically articulated governance framework across the ex ante, interim, and ex post stages[33]. The regulatory sandbox is a tool used primarily in the AI development and testing process, and is essentially ex ante regulation, but it is closely linked to the interim and ex post regulatory phases. The test results of the regulatory sandbox are an important reference and source of evidence for AI risk assessment, making the sandbox one of the prerequisites for entering the subsequent regulatory stages. In addition, the data and information provided by the sandbox are key evidence to be reviewed and consulted if hazardous consequences occur after AI testing. The regulatory sandbox thus bridges the ex ante, interim, and ex post phases of AI governance, enabling regulators to carry out regulation throughout the life cycle of AI development, testing, evaluation, and application.
4.3 The Inclusive and Prudent Regulatory Principle Reflected in the Regulatory Sandbox for AI
The regulatory sandbox also calls for timely and appropriate governance of new AI, reflecting the principle of inclusive and prudent regulation. Inclusiveness and prudence are an inevitable requirement for the government in responding to the uncertainty of AI development, in order to balance efficiency and safety. In the face of generative AI large models at a stage of rapid development, legislators and regulators must show greater modesty and respect for the market, innovation, and industrial autonomy, leaving broader space for the development of new technologies and applications[34]. For developing AI, if regulators lack sufficient patience and tolerance and fail to embrace prudence and rationality, they may hastily formulate stringent regulatory rules driven by technological pessimism, risking the inhibition of critical AI innovations. The principle of inclusive and prudent regulation is therefore more responsive to the evolving requirements of the AI era.
The regulatory sandbox implements a fault-tolerance and error-correction mechanism for new AI within a controllable range, and creates a relatively relaxed development environment for innovative AI enterprises. It provides a window for regulators to observe AI development trends so as to formulate regulatory rules prudently. While AI is still developing, regulators should refrain from hastily formulating regulatory rules, as they cannot accurately predict the trends of a nascent industry; it is more prudent to suspend rulemaking and allow the market sufficient space to evolve and innovate[35]. On the one hand, the AI regulatory sandbox upholds the concept of inclusive regulation and creates a regulatory "safe space" for participating companies, allowing them to conduct AI innovation experiments at low compliance cost. Even if the results of AI trials are poor or damaging, regulators will not immediately impose stringent regulation. For example, when algorithmic services violate the PIPL, supervision can be carried out in as flexible a manner as possible through administrative guidance, reminders, and interviews, avoiding rigid measures such as shutting down or reorganizing the business[36]. On the other hand, the AI regulatory sandbox also upholds the principle of prudent regulation. When regulators are temporarily unable to judge the potential risks of a particular AI product, they may explore the scale and scope of regulation within the sandbox, accumulate governance experience, and ultimately form AI regulatory rules based on factual evidence.
Of course, inclusive and prudent regulation does not mean letting AI development go unchecked, but rather formulating commensurate regulatory rules based on the characteristics and risk levels of AI. Once an AI product or service touches the bottom line of social safety or national security, the regulatory agency should impose timely regulatory measures, based on the factual experience gathered from the sandbox, to prevent the disorderly development of AI.
5 The Approach to Constructing and Improving the Regulatory Sandbox Mechanism for Artificial Intelligence in China
Despite the many institutional advantages of regulatory sandboxes as a new AI governance route, their actual efficacy depends on their institutional design. If the design is inadequate or unreasonable, the regulatory sandbox will produce a series of problems. How to construct the AI regulatory sandbox system and realize its positive efficacy is therefore the key issue in applying sandboxes to govern AI. The institutional construction of the AI regulatory sandbox should focus on three aspects: the access and exit mechanism, the coordination mechanism between the regulatory sandbox and personal information protection, and other supporting mechanisms.
5.1 Reasonable Access and Exit Mechanism of the Regulatory Sandbox for Artificial Intelligence
In order to prevent the spread of risk and control the costs of running the sandbox, only a fraction of AI innovators will be authorized to participate in regulatory sandbox trials. China's AI regulatory sandbox system should therefore have a reasonable access mechanism specifying the qualifications that testers must meet, including the industry in which the tester operates and its attributes, as well as the nature, development stage, and potential risk level of the AI products or services the tester offers.
The access eligibility conditions should depend on, and contribute to, the intended goals of the sandbox; otherwise they are unreasonable. In addition, considering the public interest and the order of free competition, access to the regulatory sandbox can first be tilted toward two types of enterprises. Development and applications serving major public interest areas, such as public safety and public health, should be prioritized. Small and medium-sized enterprises and start-ups should likewise be given priority access to the AI sandbox, providing regulatory support for their smooth entry into the market at lower compliance cost[17].
In addition to the access mechanism, the AI regulatory sandbox system should provide an appropriate exit mechanism establishing the standards under which testers leave sandbox trials. This is not only to prevent specific players from indefinitely occupying testing space in the regulatory sandbox and to manage the institutional costs of running it, but also to encourage the commercialization of AI that has been tested successfully. There are two main types of exit mechanism: one for successful trials or the expiration of the trial period, and one for failed trials. A tester should exit the trial when the trial period ends, the desired test results are not achieved as planned, a defect in the AI system cannot be overcome, the tester's commitments are violated, or the tester voluntarily withdraws[37].
A reasonable exit mechanism therefore needs to clearly stipulate the trial period of the regulatory sandbox, as well as the criteria for judging trial success and failure. The Notice on Carrying out Pilot Work on Access and Road Traffic of Intelligent Connected Vehicles explicitly sets out a test-failure type of exit mechanism: the tester must withdraw from the pilot program if the vehicle's autonomous driving system has serious safety hazards that cannot be resolved, or if significant changes in the circumstances of the pilot vehicle manufacturer or the pilot user make the successful implementation of the pilot impossible. Once testers trigger the exit conditions, they should exit the AI regulatory sandbox according to the prescribed process and stop the test.
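Purely for illustration, the two exit types above can be expressed as a simple classification rule. The sketch below is hypothetical: the trigger names are paraphrases of the criteria described above, and mapping a voluntary withdrawal to a failure-type exit is an assumption of this sketch, not a reading of the Notice.

```python
from enum import Enum

class ExitType(Enum):
    SUCCESS = "trial succeeded; the AI may proceed toward market entry"
    FAILURE = "trial failed; the AI must be reworked before any market entry"

def classify_exit(goals_met, period_expired, unresolvable_defect,
                  commitments_breached, voluntary_withdrawal):
    """Map the exit triggers to an exit type; None means testing continues."""
    if unresolvable_defect or commitments_breached or voluntary_withdrawal:
        return ExitType.FAILURE  # failure-type exit triggers
    if period_expired:
        # At expiry, the outcome depends on whether the planned goals were met.
        return ExitType.SUCCESS if goals_met else ExitType.FAILURE
    return None

# Example: trial period over, goals achieved, no other triggers fired.
print(classify_exit(True, True, False, False, False))  # ExitType.SUCCESS
```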
5.2 Effective Coordination Mechanism Between the AI Regulatory Sandbox and Personal Information Protection
Since AI innovation requires the processing of personal data, the AI regulatory sandbox must address its relationship with personal information protection and form an effective coordination mechanism. In order to incentivize AI innovation, regulatory sandboxes usually give innovators more lenient regulatory policies so that they can make full use of personal data to develop AI on the premise of safeguarding that data. Moreover, compared with the real environment, the regulatory sandbox is a controlled safe space in which the danger to personal data is smaller, so strict application of every personal information processing rule may not be necessary. In designing its regulatory sandbox mechanism, China can learn from the exceptions to personal data processing in the EU AI Act: within an AI regulatory sandbox, personal data may be processed beyond its original purpose when developing specific AI systems that safeguard public interests such as public safety, public health, and the environment. Although Article 13 of China's PIPL already allows personal information to be processed in the public interest without individual consent, its scope is relatively narrow and cannot meet AI innovators' demand for full utilization of personal data. Therefore, in designing China's AI regulatory sandbox system, we can expand the exceptions to personal data processing and allow more innovators developing AI for the public interest to use personal data without individual consent, on the premise of safeguarding the security of that data.
5.3 Other Important Mechanisms of the Regulatory Sandbox for Artificial Intelligence
Moreover, the AI regulatory sandbox system should stipulate an exemption mechanism to create a "safe space" with relatively relaxed regulation for testers. The exemption mechanism is a core component of the AI regulatory sandbox system and a necessary incentive for the sandbox to support AI innovation. The regulatory sandbox can encourage innovation and tolerate trial and error precisely because it provides an exemption mechanism, which allows innovators not to abandon the development of new AI products or services out of fear of stringent regulatory requirements. A proper AI regulatory sandbox system must leave room for trial and error and refine the compliance exemption mechanism for AI service providers before the technology matures[38]. To stimulate the development and application of AI technologies, exemption mechanisms have in some cases been explicitly provided for regulatory sandboxes. For example, Japan's Ministry of Land, Infrastructure, Transport and Tourism has established a flexible regulatory sandbox for self-driving systems, using an exemption system that allows self-driving cars that do not meet ordinary regulatory requirements to be tested.
However, the exemption mechanism of China's AI regulatory sandbox should be limited, easing regulatory requirements and liabilities only to a reasonable degree so as not to expand risks. One manifestation of this limited nature is the limited period of application of the exemption rules: testers are entitled to claim them only during the trial period of the regulatory sandbox. The second manifestation is partial rather than complete exemption. The AI regulatory sandbox system can generally exempt testers only from administrative legal liability, not from civil or criminal liability, because the latter usually involve citizens' fundamental rights and major public interests, which the sandbox system has no power to waive. The exemption mechanism of the regulatory sandbox is therefore usually a limited relaxation of regulatory requirements.
In addition, the AI regulatory sandbox system should build a timely and adequate disclosure mechanism requiring full disclosure of information before the sandbox is launched, during the trial, and after it concludes. Adequate information disclosure increases the transparency of the regulatory sandbox and helps it deliver its AI governance effectiveness. The subjects bearing disclosure obligations include both regulators and the testers participating in the sandbox. Regulators should disclose the regulatory sandbox rules to enhance the certainty of those rules and their predictability for testers. Regulators also need to fully disclose the operation of the regulatory sandbox, summarize experience, improve its operation, and ensure fair competition, i.e., establish a fair competitive environment between enterprises inside and outside the sandbox[39]. Testers likewise bear disclosure obligations and must submit to the regulator the performance data of the AI under test and a safety assessment report after application. Because these data and reports involve the testers' trade secrets, the regulator should generally keep them confidential and not disclose them to the public. However, without jeopardizing trade secrets, testers should openly share the lessons learned from sandbox testing with innovators who did not participate, so as to increase the spillover effect of innovation.
Finally, the AI regulatory sandbox system should normalize and institutionalize the communication mechanism and form rules for positive interaction. Regulatory sandboxes should be designed to increase both regulators' and testers' access to information. On the one hand, regulators should actively build platforms for communication, such as formulating systems and guiding rules for communication and convening debates and hearings[40]. Our regulators may also appoint sandbox advisors with specialized expertise to communicate with and guide testers in a particular sandbox. A sandbox advisor can offer insights and advice on trial results, or guidance on improving the AI products or services based on feedback from regulators and users. On the other hand, an active regulatory sandbox communication mechanism can serve as an up-front process for AI regulatory rulemaking. Regulators should communicate closely and frequently with testers before making or modifying regulatory rules. In this way, they can improve their understanding of new AI technologies and develop AI regulatory rules grounded in active communication.
6 Conclusion
As the pacing problem of AI governance worsens, our regulators should promptly upgrade their regulatory toolkit and search for new governance routes to supplement current regulatory tools. The AI regulatory sandbox is a new governance route that can mitigate the pacing problem and ease the conflict between regulation and innovation. By using regulatory sandboxes to govern AI, regulators can support AI innovation without sacrificing the legitimate rights and interests of consumers, while the testers participating in the sandboxes can develop new AI products or services in a safe space under relatively lenient regulation. AI regulators in various countries have recognized the potential governance efficacy of regulatory sandboxes and are exploring their adoption in AI governance. As a major and responsible country in AI, China has always attached great importance to AI governance and has already launched a regulatory sandbox for automated driving systems. However, this AI regulatory sandbox system has not yet been fully systematized or matured. It therefore needs to be reasonably constructed and improved in order to effectively govern the development and deployment of AI technologies.
References:
[1] SMUHA N A. From a 'race to AI' to a 'race to AI regulation': regulatory competition for artificial intelligence[J]. Law, Innovation and Technology, 2021, 13(1): 57-84.
[2] COLLINGRIDGE D. The social control of technology[M]. London: Frances Pinter, 1980: 1-20.
[3] DENG W J, GAO S P. On the sandbox model of intelligent networked vehicle regulation: an evaluation of relevant local legislation in China[J]. Jianghan Forum, 2023(4): 125-128.
[4] CHAI R J. Extraterritorial experience of regulatory sandbox and its inspiration[J]. Law Science, 2017(8): 27-40.
[5] YANG Z C. Institutional elements and conceptual innovation of financial regulatory sandbox[J]. Finance and Economics Monthly, 2020(5): 132-138.
[6] Council of the European Union. Council Conclusions of 16 November 2020 on regulatory sandboxes and experimentation clauses as tools for an innovation-friendly, future-proof and resilient regulatory framework that masters disruptive challenges in the digital age[EB/OL]. (2020-11-16)[2024-04-30]. https://data.consilium.europa.eu/doc/document/ST-13026-2020-INIT/en/pdf.
[7] European Parliament. European Parliament legislative resolution of 13 March 2024 on the proposal for a regulation of the European Parliament and of the Council on laying down harmonised rules on Artificial Intelligence (Artificial Intelligence Act) and amending certain Union Legislative Acts[EB/OL]. (2024-03-13)[2024-04-30]. https://www.europarl.europa.eu/doceo/document/TA-9-2024-0138_EN.html.
[8] OECD. Digital Economy Papers: Regulatory sandboxes in artificial intelligence[EB/OL]. (2023-07-13)[2024-04-30]. https://www.oecd.org/sti/regulatory-sandboxes-in-artificial-intelligence-8f80a0e6-en.htm.
[9] ALLEN H J. Regulatory sandboxes[J]. Geo. Wash. L. Rev., 2019, 87: 579.
[10] WARWICK A. ICO selects first innovation Sandbox participants[EB/OL]. (2019-07-29)[2024-04-30]. https://www.computerweekly.com/news/252467504/ICO-selects-first-innovation-Sandbox-participants.
[11] Datatilsynet. Framework for the Regulatory Sandbox[EB/OL]. (2021-01-13)[2024-04-30]. https://www.datatilsynet.no/en/regulations-and-tools/sandbox-for-artificial-intelligence/framework-for-the-regulatory-sandbox/.
[12] AYALA C. Royal Decree 817/2023 on Artificial Intelligence[EB/OL]. (2023-11-16)[2024-04-30]. https://www.all-law.es/en/royal-decree-817-2023-on-artificial-intelligence/.
[13] DAVID M. First European regulatory sandbox on Artificial Intelligence[EB/OL]. (2023-11-17)[2024-04-30]. https://www.connectontech.com/first-european-regulatory-sandbox-on-artificial-intelligence/.
[14] CAO J F. Toward responsible AI: trends and prospects of AI governance in China[J]. Journal of Shanghai Normal University (Philosophy and Social Science Edition), 2023, 52(4): 5-15.
[15] GURKAYNAK G, YILMAZ I, HAKSEVER G. Stifling artificial intelligence: human perils[J]. Computer Law & Security Review, 2016, 32(5): 749-758.
[16] TRUBY J, BROWN R D, IBRAHIM I A, et al. A sandbox approach to regulating high-risk artificial intelligence applications[J]. European Journal of Risk Regulation, 2022, 13(2): 270-294.
[17] ZHANG X. Data risks and governance paths of generative artificial intelligence[J]. Science of Law (Journal of Northwest University of Politics and Law), 2023, 41(5): 42-54.
[18] HELLMANN T F, MONTAG A, VULKAN N. The impact of the regulatory sandbox on the FinTech industry[J]. Available at SSRN 4187295, 2022.
[19] FENWICK M, KAAL W A, VERMEULEN E P M. Regulation tomorrow: what happens when technology is faster than the law[J]. Am. U. Bus. L. Rev., 2016(6): 561.
[20] LIAO F. A review of the theory and practice of regulatory sandbox in the context of fintech[J]. Journal of Xiamen University (Philosophy and Social Science Edition), 2019(2): 12-20.
[21] ZHANG Y L. The legalization path of financial regulatory technology[J]. Studies in Law and Business, 2019, 36(3): 127-139.
[22] GUIHOT M, MATTHEW A F, SUZOR N P. Nudging robots: innovative solutions to regulate artificial intelligence[J]. Vand. J. Ent. & Tech. L., 2017, 20: 385.
[23] Beijing Youth Net. Beijing Artificial Intelligence Data Training Base regulatory sandbox results released to explore innovative means of controlled development of artificial intelligence[EB/OL]. (2024-04-26)[2024-04-30]. https://m.163.com/dy/article/J0NC95F80514R9KQ.html?clickfrom=subscribe&spss=adap_pc.
[24] LIU S. Jurisprudential logic and institutional expansion of regulatory sandbox[J]. Modern Law Science, 2021, 43(1): 115-127.
[25] YORDANOVA K, BERTELS N. Regulating AI: challenges and the way forward through regulatory sandboxes[J]. Law, Governance and Technology Series, 2024, 58: 441-456.
[26] ZHU Y. The regulatory sandbox in the EU's Artificial Intelligence Act[N]. Legal Weekly, 2023-08-03(12).
[27] SHEN W, ZHANG Y. The paradox of fintech regulation under the threshold of financial inclusion and the way to overcome it[J]. Journal of Comparative Law, 2020(5): 188-200.
[28] YU W X, LIU L H. Agile governance of computational law regime[J]. New Horizons, 2022(3): 66-72.
[29] ZHANG X. Industry chain-oriented governance: the technical mechanism and governance logic of AI-generated content[J]. Administrative Law Review, 2023(6): 43-60.
[30] ZHANG L H, YU L. From traditional governance to agile governance: governance paradigm innovation of generative artificial intelligence[J]. E-Government, 2023(9): 2-13.
[31] TANG Y J, TANG C H. Risk-based regulatory governance of artificial intelligence[J]. Social Science Series, 2022(1): 114-124.
[32] MA Z G, XU J K. Potential risks of artificial intelligence development and legal defense and control regulation[J]. Journal of Beijing Institute of Technology (Social Science Edition), 2018, 18(6): 65-71.
[33] ZHANG X. Governance efficiency, path reflection and countermeasures of Chinese AI technology standards[J]. China Law Review, 2021(5): 79-93.
[34] ZHI Z F. Generative artificial intelligence grand modeling for information content governance[J]. Tribune of Political Science and Law, 2023, 41(4): 34-48.
[35] LIU Q. The logic of rule of law for inclusive and prudent regulation in the perspective of digital economy[J]. Chinese Journal of Law, 2022, 44(4): 37-51.
[36] XU J M. Generative artificial intelligence governance principles and legal strategies[J]. Theory and Reform, 2023(5): 72-83.
[37] ZHAO J B. Regulatory sandbox as a feasible solution for blockchain legal regulation[J]. Journal of Hubei Police College, 2021, 34(6): 114-121.
[38] ZHANG L H. Legal positioning and layered governance of generative artificial intelligence[J]. Modern Law Science, 2023, 45(4): 126-141.
[39] ZHANG Y L. Analysis of legal issues of regulatory sandbox[J]. China Journal of Applied Jurisprudence, 2020(3): 44-54.
[40] LIU Y C. Legal regulation of scientific and technological innovation[J]. Journal of East China University of Political Science and Law, 2023, 26(3): 37-46.
On the Regulatory Sandbox Route and Mechanisms of Artificial Intelligence Governance
Ye Xuanhan
Author Profile: Ye Xuanhan, from Lishui, Zhejiang Province; Ph.D. candidate; research fields: intellectual property law, science and technology law, data law.
1 The literature discussing the AI regulatory sandbox consists mainly of two papers by European scholars which, based on the EU Artificial Intelligence Act, propose forming a unified and stable framework for AI regulatory sandboxes within the EU. See TRUBY J, BROWN R D, IBRAHIM I A, et al. A sandbox approach to regulating high-risk artificial intelligence applications. European Journal of Risk Regulation, 2022, 13(2): 270-294; RANCHORDAS S. Experimental regulations for AI: sandboxes for morals and mores. University of Groningen Faculty of Law Research Paper, 2021(7).
2 The UK's Financial Conduct Authority (FCA) describes it as "a safe space where firms can test innovative products, services, business models and delivery mechanisms without immediately incurring all of the normal regulatory consequences of engaging in related activities". Financial Conduct Authority (FCA). Regulatory sandbox. (2015-11-30)[2024-04-30]. https://www.fca.org.uk/publication/research/regulatory-sandbox.pdf.