
大變局?揭秘全球AI治理立法趨勢

    日期:2025-12-12     作者:韓海嬌(國際法專業(yè)委員會、北京煒衡(上海)律師事務(wù)所)、黃一埔(北京煒衡(上海)律師事務(wù)所)

The launch of ChatGPT has underscored the ever greater participation of Generative Artificial Intelligence (“Generative AI” or “AI”) in our daily lives and the consequent necessity of properly regulating the development and application of Generative AI. Trained on large volumes of data and tasked with generating answers in response to prompts through the production of text, images, and audio, Generative AI is capable of resolving complex tasks and promoting productivity and innovation across sectors. However, AI has also given rise to challenges and risks.

ChatGPT的推出凸顯了生成式人工智能(“生成式人工智能”或“人工智能”)在我們?nèi)粘I钪械娜找鎻V泛參與,因而規(guī)范生成式人工智能的發(fā)展和應(yīng)用顯得日益迫在眉睫。生成式人工智能通過對大量數(shù)據(jù)進行訓(xùn)練,被賦予以文本、圖片和音頻等方式生成答案的任務(wù),能夠解決復(fù)雜問題并推動跨行業(yè)的生產(chǎn)力和創(chuàng)新,但挑戰(zhàn)和風(fēng)險也隨之而來。

I AI-Associated Risks: Why Must AI Be Regulated?

人工智能相關(guān)風(fēng)險:為什么需規(guī)范人工智能?

Firstly, AI may undermine privacy by inadvertently and excessively collecting personal information; the collected data often exceed what is necessary for the intended uses, which could lead to unintended exposure or misuse. Secondly, AI may lead to the spread of misinformation and disinformation. For instance, because AI cannot verify the contents it generates on the basis of its training data, it can produce highly persuasive fabricated references to non-existent sources or other false content. Thirdly, AI poses a risk of bias and discrimination. Having been trained on massive volumes of biased data containing discriminatory stereotypes that reflect the systemic inequalities of dominant cultures, AI could produce discriminatory treatment of certain marginalised societal groups based on their social backgrounds. Lastly, AI may threaten public safety and security. AI systems may be misused to generate inappropriate content such as pornography, violence, or even incitement to suicide or self-injury, thereby creating risks to public safety. Concurrently, AI could be maliciously exploited for illegal or terrorist activities.

首先,人工智能可能通過無意或過度收集個人信息,且收集的數(shù)據(jù)往往超出用于預(yù)期目的所需范圍,可能導(dǎo)致意外的信息曝光或濫用。其次,人工智能可能導(dǎo)致錯誤信息和虛假信息的傳播。比如,由于生成內(nèi)容的驗證能力不足,人工智能可能生成說服力雖高但卻指向不存在資源或虛假內(nèi)容的虛構(gòu)引用。第三,人工智能存在偏見和歧視的風(fēng)險。由于對包含反映主導(dǎo)文化系統(tǒng)性不平等的歧視性刻板印象的海量偏倚數(shù)據(jù)進行訓(xùn)練,人工智能可能根據(jù)社會背景對某些邊緣化社會群體進行歧視性處理。最后,人工智能可能對公共安全和安全性構(gòu)成威脅。人工智能可能被濫用以生成不當(dāng)內(nèi)容,如色情、暴力,甚至煽動自殺或自傷行為,從而對公共安全構(gòu)成風(fēng)險。與此同時,人工智能也可能被惡意利用進行非法或恐怖活動。

II The Global Trend of AI Governance

人工智能治理之全球趨勢

At the international level, several international organizations have agreed on numerous initiatives for AI regulation in an attempt to regulate AI through developers’ voluntary alignment with certain principles, such as the Bletchley Declaration, the Hiroshima AI Principles, and the OECD AI Principles. Notably, the Council of Europe is drafting the world's first AI convention, which, upon entry into force, would oblige contracting states to enact legislation mandating risk management measures for AI development. These initiatives have substantially reached a consensus on the principles that should govern AI development and application, including, among others, transparency, fairness, safety, and privacy protection. Under these principles, AI providers must ensure sufficient transparency during the operation of AI systems by providing all relevant information in an accessible, clear, accurate, and timely manner, enabling users to understand the general functionality, level of accuracy, associated risks, and corresponding risk mitigation measures of the AI. The development of AI systems must maintain an adequate level of fairness and respect for the rule of law, democratic principles, and human rights. Necessary measures should be adopted to prevent algorithmic discrimination, such as equity assessments, the use of diverse and representative data, safeguards against proxies for demographic features, accessibility for disabled people, disparity evaluation, and appropriate human supervision. Developers must conduct pre-market risk management assessments to identify and mitigate AI-associated risks, adopt cybersecurity measures to ensure system robustness and stability, and continuously monitor post-market compliance with the applicable standards. More importantly, regulation on personal data protection must be enacted in parallel to guarantee that data collection and processing by AI are permitted only with user consent and only to the extent necessary for the intended purposes. However, the unenforceability of these initiatives has left AI substantially unregulated de facto, thereby necessitating the enactment of binding regulation for AI governance.

在國際層面上,多個國際組織已就人工智能監(jiān)管達成多項倡議,試圖通過開發(fā)者自愿遵循特定原則來規(guī)范,比如 Bletchley宣言、廣島人工智能原則和OECD 人工智能原則。值得注意的是,歐洲理事會正在起草全球首個人工智能公約,一旦生效,將要求締約國制定立法,強制規(guī)定人工智能發(fā)展的風(fēng)險管理措施。這些倡議在管理人工智能發(fā)展和應(yīng)用的原則上達成了實質(zhì)性共識,其中包括透明度、公平性、安全性和隱私保護等。根據(jù)這些原則,人工智能提供者必須確保在人工智能系統(tǒng)運行期間提供所有相關(guān)信息,以便用戶以易于獲取、清晰、準(zhǔn)確和及時的方式了解人工智能的一般功能、準(zhǔn)確度水平、相關(guān)風(fēng)險和相應(yīng)的風(fēng)險緩解措施。人工智能系統(tǒng)的開發(fā)必須保持適當(dāng)?shù)墓叫院妥袷胤ㄖ巍⒚裰髟瓌t和人權(quán),必須采取必要措施防止算法歧視,如公平評估、利用多樣化和具有代表性的數(shù)據(jù)、防止使用人口統(tǒng)計特征代理、確保殘障人士的可訪問性、評估差距并保持適當(dāng)?shù)娜祟惐O(jiān)督。開發(fā)者必須進行市場前風(fēng)險管理評估,以識別和減輕人工智能相關(guān)風(fēng)險,采取網(wǎng)絡(luò)安全措施確保系統(tǒng)的穩(wěn)健性和穩(wěn)定性,并持續(xù)監(jiān)測市場后符合標(biāo)準(zhǔn)的情況。更重要的是,必須同時制定關(guān)于私人數(shù)據(jù)保護的法規(guī),以確保人工智能對數(shù)據(jù)的收集和處理僅在用戶同意的情況下,且僅限于預(yù)期目的所需范圍。然而,這些倡議的難以實施使得人工智能在實質(zhì)上幾乎未受到規(guī)范,因此需要制定對人工智能治理具有約束力的法規(guī)。

At the domestic level, many jurisdictions have proposed different approaches to AI regulation. Among these, the UK and the US have opted to regulate AI within the scope of existing legislation. Pursuant to the UK AI white paper, regulators will be directed to exercise the powers delegated under existing legislation to issue guidance obliging AI developers to comply with specified principles. Similarly, President Biden has signed an Executive Order mandating developers to conduct safety tests and report the results to the Federal Government, and ordering relevant authorities to issue standards and guidance to monitor AI development’s compliance with the principles specified under the US AI Bill of Rights. However, this approach is apparently flawed because existing regulations are unable to address the specific risks of AI. For instance, while the recent Online Safety Act in the UK could partially ensure the safety of certain internet services by restricting illegal activities and the production of harmful content, these protections are not directly applicable to AI unless the AI is deployed within specified internet services. Conversely, Canada and the EU have opted to enact specialized AI legislation. While a Private Members’ Bill for AI regulation has also been introduced to the UK House of Lords by the Conservative peer Lord Holmes of Richmond, this Bill is overly simplified and substantially resembles the current approach of the UK Government, as it relies on relevant authorities to enact delegated legislation to regulate AI in accordance with specified principles. More importantly, this Bill is highly unlikely to proceed due to a lack of support from the incumbent Conservative government.

在國內(nèi)層面上,許多司法管轄區(qū)已提出了不同的人工智能監(jiān)管方法。其中,英國和美國選擇在現(xiàn)有立法范圍內(nèi)對人工智能進行監(jiān)管。根據(jù)英國的人工智能白皮書,監(jiān)管機構(gòu)將被指示行使在現(xiàn)有立法下委派的權(quán)力,發(fā)布指南,要求人工智能開發(fā)者遵守特定原則。同樣,拜登總統(tǒng)簽署了一項行政命令,要求開發(fā)者進行安全測試并向聯(lián)邦政府報告結(jié)果,并命令相關(guān)機構(gòu)發(fā)布監(jiān)控人工智能開發(fā)符合美國人工智能權(quán)利法案指定原則的標(biāo)準(zhǔn)和指南。然而,這種方法顯然存在缺陷,因為現(xiàn)有法規(guī)無法解決人工智能的特定風(fēng)險。比如,盡管最近英國的網(wǎng)絡(luò)安全法案可部分確保某些互聯(lián)網(wǎng)服務(wù)的安全,限制非法活動和有害內(nèi)容的制作,但除非人工智能部署在特定互聯(lián)網(wǎng)服務(wù)中,否則這些保護措施并不直接適用于人工智能。相反,加拿大和歐盟選擇制定專門的人工智能立法。盡管保守黨Richmond的Holmes勛爵向英國上議院提出了一項人工智能監(jiān)管的私人議員法案,但由于該法案過于簡化,且實質(zhì)上類似于英國政府目前的監(jiān)管辦法,因此此法案很可能不會獲得支持。更重要的是,由于缺乏現(xiàn)任保守黨政府的支持,這項法案通過的可能性極低。

III The EU AI Act: Benchmark for Global AI Legislation?

歐盟人工智能法案:全球人工智能立法標(biāo)桿?

The EU is known for its strong stance on digital and data regulation and has enacted several key regulations relevant to AI, such as the GDPR and the DSA. The GDPR, as the strictest regulation for the protection of personal information in the world, governs the collection, processing, and transfer of personal data across the EU. Under Article 5 of the GDPR, personal data can only be collected to the extent necessary for a legitimate purpose, with an appropriate level of accuracy and security, and in a lawful, fair, and transparent manner. The lawfulness of data processing is contingent upon the satisfaction of at least one of the legal bases specified under Article 6 of the GDPR, including the data subject's informed consent. Articles 12 to 22 of the GDPR protect the rights of data subjects in relation to data processing. The DSA, in turn, similar to the UK Online Safety Act, ensures the safety of digital services by restricting illegal activities, disinformation, and the production of harmful content. While the GDPR and the DSA could partially address the risks to privacy and safety posed by AI, these regulations face an issue similar to that of the UK Online Safety Act, namely their inapplicability to certain uses of Generative AI and the consequent inability to address the specific risks associated with AI. Consequently, the enactment of specialized legislation to regulate AI has become increasingly pressing.

歐盟以其對數(shù)字和數(shù)據(jù)監(jiān)管的堅定立場而聞名,并頒布了諸如通用數(shù)據(jù)保護條例GDPR和數(shù)字服務(wù)法DSA等與人工智能監(jiān)管相關(guān)的關(guān)鍵法規(guī)。GDPR作為全球最嚴(yán)格的個人信息保護法規(guī),管理著歐盟范圍內(nèi)私人數(shù)據(jù)的收集、處理和轉(zhuǎn)移。根據(jù)GDPR第5條,私人數(shù)據(jù)只能在合法、公正、透明的方式下,僅收集到達到合理目的所需范圍,具有適當(dāng)?shù)臏?zhǔn)確性和安全性。數(shù)據(jù)處理的合法性取決于至少滿足GDPR第6條規(guī)定的目的之一,包括獲得知情同意。GDPR第12條至第22條保護與數(shù)據(jù)處理相關(guān)的數(shù)據(jù)主體的權(quán)利。相反,DSA類似于英國的網(wǎng)絡(luò)安全法案,通過限制非法活動、虛假信息和有害內(nèi)容的制作,確保數(shù)字服務(wù)的安全性。雖然GDPR和DSA在一定程度上可應(yīng)對人工智能帶來的隱私和安全風(fēng)險,但這些法規(guī)面臨著與英國的網(wǎng)絡(luò)安全法案類似的問題,即在某些生成式人工智能的使用方面不適用,從而無法解決與人工智能相關(guān)的特定風(fēng)險。因此,制定專門的法規(guī)以規(guī)范人工智能變得日益迫切。

In response to this, the European Commission, on 21st April 2021, published a legislative proposal for a Regulation intended to establish harmonised rules for AI regulation with direct applicability across the EU, which, upon its entry into force, would be the world’s first specialised legislation for AI regulation. The European Commission proposed to regulate Generative AI through a tiered, risk-based approach to ensure that regulated AI systems are subject to rules proportionate to their associated risks. AI systems are categorised into four classes of risk based on their intended uses, with each class subject to different regulatory obligations. On 8th December 2023, the EU AI Act received its final approval and will enter into force in early 2024. The final version of the EU AI Act retains the tiered, risk-based approach proposed by the European Commission.

作為回應(yīng),歐洲委員會于2021年4月21日發(fā)布了一項立法建議,旨在建立統(tǒng)一的人工智能監(jiān)管規(guī)則,該規(guī)則在歐盟范圍內(nèi)直接適用,一旦生效將成為全球首個專門針對人工智能監(jiān)管的立法。歐洲委員會建議通過分層和基于風(fēng)險的方式來規(guī)范生成式人工智能,以確保受監(jiān)管的人工智能系統(tǒng)受到與其相關(guān)風(fēng)險相稱的規(guī)則約束。根據(jù)其預(yù)期用途,人工智能系統(tǒng)分為四類風(fēng)險,并對每一類都施加不同的監(jiān)管義務(wù)。2023年12月8日,歐盟人工智能法案獲得最終批準(zhǔn),并將于2024年初生效。歐盟人工智能法案最終版本保留了歐洲委員會提出的分層和基于風(fēng)險的方法。

1) Unacceptable-risk AI: Prohibition

不可接受風(fēng)險的人工智能:禁止

The European Commission proposed to prohibit certain AI applications that pose unacceptable risks, such as behavioral distortion or manipulation, biometric categorization, social scoring by public authorities, and biometric identification by law enforcement unless necessary for crime prevention. The European Parliament subsequently proposed to expand the scope of prohibition to include any deceptive techniques that may undermine users’ ability to make informed decisions, as well as AI applications for social scoring, emotion inference, and all biometric identification practices. It is confirmed that the final approved version has extended the list of prohibited “unacceptable-risk AI” to encompass the amendments adopted by the European Parliament, with an exception allowing law enforcement to use remote biometric identification under appropriate safeguards.

歐洲委員會提議禁止某些人工智能應(yīng)用,這些應(yīng)用存在不可接受的風(fēng)險,如行為扭曲或操縱、生物特征分類、公共機構(gòu)進行社會評分,以及執(zhí)法機構(gòu)進行生物特征識別,除非為了犯罪預(yù)防。隨后,歐洲議會提議擴大禁止范圍,包括任何可能削弱用戶做出知情決策能力的欺騙性技術(shù),以及用于社會評分、情緒推斷和所有生物特征識別實踐的人工智能應(yīng)用。確認(rèn)最終通過的版本已經(jīng)擴展了被禁止的“不可接受的風(fēng)險”列表,涵蓋了歐洲議會所采納的修改,但執(zhí)法機構(gòu)在適當(dāng)保障下使用遠程生物特征識別除外。

2) High-risk AI: Pre-market conformity assessment and post-market monitoring

高風(fēng)險的人工智能:市場前符合評估和市場后監(jiān)測

The second class of AI applications, termed “high-risk AI”, is subject to detailed conformity assessment and post-market monitoring requirements instead of prohibition. Under the initial proposal, the AI applications specified under Annex III are classified as “high-risk”, such as critical infrastructure management, education and training, recruitment and employee management, critical private or public services, migration control, the administration of justice, and certain law enforcement systems. The providers of high-risk AI are subject to numerous obligations, including conducting a conformity assessment to ensure compliance with the requirements specified under Title III Chapter 2 of the EU AI Act.

第二類人工智能應(yīng)用被稱為“高風(fēng)險”而非被禁止,需受到詳細的市場前符合評估和市場后監(jiān)測要求的約束。根據(jù)最初的提案,列入附件III的人工智能應(yīng)用被分類為“高風(fēng)險”,如關(guān)鍵基礎(chǔ)設(shè)施管理、教育培訓(xùn)、招聘和員工管理、關(guān)鍵的私人或公共服務(wù)、移民控制、司法管理和某些執(zhí)法系統(tǒng)。高風(fēng)險人工智能的提供者需履行諸多義務(wù),包括進行符合評估,以確保符合歐盟人工智能法案第三章第二節(jié)規(guī)定的要求。

3) Transparency obligations for limited risk AI

低風(fēng)險的人工智能之透明度義務(wù)

Certain limited-risk AI systems capable of generating or modifying image, audio, or video content must ensure sufficient transparency by notifying users that the content is AI-generated. Limited-risk AI may include deepfakes or chatbots. The European Parliament further proposed to oblige the providers of limited-risk AI to disclose to users the functionality of the AI system, the identity of the provider, and the availability of human oversight.

對于低風(fēng)險的人工智能,能夠生成或修改圖像、音頻或視頻內(nèi)容,必須確保充分的透明度,即告知用戶這些內(nèi)容是由人工智能生成。低風(fēng)險的人工智能可能包括 deepfake 或聊天機器人。歐洲議會進一步提議,要求低風(fēng)險的人工智能提供者披露人工智能系統(tǒng)的功能、提供者的身份以及用戶是否可獲得人類監(jiān)督。

4) Voluntary code of conduct for minimal-risk AI

最低風(fēng)險的人工智能之自愿行為準(zhǔn)則

The providers of minimal-risk AI are encouraged to develop codes of conduct and voluntarily align with the conformity assessment requirements specified under Title III Chapter 2 of the EU AI Act.

鼓勵最低風(fēng)險的人工智能提供者制定行為準(zhǔn)則,并自愿遵守歐盟人工智能法案第三章第二節(jié)規(guī)定的符合評估要求。

5) Governance

治理

Similar to the European Data Protection Board established under the GDPR, the European Commission proposed a European Artificial Intelligence Board (“AI Board”) to issue recommendations on technical specifications, standards, and the implementation of the EU AI Act. The body is proposed to comprise the relevant authorities of the member states and the European Data Protection Supervisor. While the Council of the EU supports this composition, the European Parliament has proposed an alternative: a fully independent AI governance body named the “AI Office”. It has been confirmed that both proposed entities are retained in the final version of the EU AI Act, with the AI Office serving as an enforcement body and the AI Board functioning as an advisory body.

類似于GDPR項下建立的歐洲數(shù)據(jù)保護委員會,歐洲委員會提議設(shè)立一個歐洲人工智能委員會(“人工智能委員會”),以就技術(shù)規(guī)范、標(biāo)準(zhǔn)或歐盟人工智能法案的實施發(fā)布建議。這一機構(gòu)擬由成員國的相關(guān)權(quán)威機構(gòu)和歐洲數(shù)據(jù)保護監(jiān)督員組成。雖然歐盟理事會支持這種構(gòu)成,但歐洲議會提出了一種替代方案:一個完全獨立的人工智能治理機構(gòu),名為“人工智能辦公室”。已確認(rèn)歐盟人工智能法案的最終版本將保留這兩個提議的實體,其中人工智能辦公室作為執(zhí)法機構(gòu),而人工智能委員會則作為咨詢機構(gòu)。

6) Regulating foundation models and general-purpose AI: a limitation on innovation and competitiveness?

規(guī)范基礎(chǔ)模型和通用型人工智能:對創(chuàng)新和競爭力的限制?

One significant concern with the initial proposal is its failure to account for AI systems designed to produce a broad range of outputs and serve various applications, either through direct use or through incorporation into other AI systems. Such AI systems, commonly referred to as “foundation models” or “general-purpose AI”, cannot be classified into any of the risk tiers due to the absence of a specific intended use, thereby rendering them substantially unregulated under the initial proposal. To address this issue, the Council of the EU proposed a new Title IA specifically requiring general-purpose AI that may be used for high-risk purposes to comply with the conformity assessment requirements. Conversely, the European Parliament proposed a new Article 28b imposing horizontal obligations on all foundation models, including adopting a risk management system, training on appropriately governed datasets to avoid bias and discrimination, and adhering to the transparency obligations under Article 52 of the EU AI Act.

最初提案的一個重要問題是它未能考慮到為多種應(yīng)用提供服務(wù)的、通過直接使用或并入其他人工智能系統(tǒng)的通用輸出的人工智能系統(tǒng)。這種人工智能系統(tǒng)通常被稱為“基礎(chǔ)模型”或“通用型人工智能”,由于缺乏特定的預(yù)期用途,無法被分類為任何風(fēng)險等級,因此在最初的提案下幾乎未能受到實質(zhì)性的監(jiān)管。為解決該問題,歐盟理事會提出了一個新的人工智能章節(jié),專門用于規(guī)范可能被用于高風(fēng)險目的的通用型人工智能以符合符合評估要求。相反,歐洲議會提出了一項新的28b條款,對所有基礎(chǔ)模型施加了橫向義務(wù),包括采用風(fēng)險管理系統(tǒng)、在受到適當(dāng)監(jiān)督的數(shù)據(jù)集上進行訓(xùn)練以避免偏見和歧視,并遵守歐盟人工智能法案第52條透明度義務(wù)。

Following the negotiations, EU legislators initially rejected horizontal rules and agreed, on 24th October 2023, on a similar tiered approach to regulate foundation models based on their level of risk. However, on 18th November 2023, three major EU economies (Germany, France, and Italy) opted against binding regulation of foundation models, citing concerns over a potential deterrent effect on innovation and competitiveness, and jointly supported self-regulation through codes of conduct. Although this controversy was resolved by the final version of the EU AI Act, which introduces horizontal transparency obligations for foundation models and stricter rules for “high-impact” foundation models, it has highlighted a potential disadvantage of overly stringent AI regulation: the additional compliance costs may undermine the competitiveness of the AI sector.

在談判過程中,歐盟立法者最初拒絕橫向規(guī)則,并于2023年10月24日同意采取類似的分層方法,根據(jù)基礎(chǔ)模型風(fēng)險級別進行規(guī)范。然而,2023年11月18日,歐盟的三個主要經(jīng)濟體德國、法國和意大利決定反對對基礎(chǔ)模型進行約束性的規(guī)范,理由是擔(dān)心可能對創(chuàng)新和競爭力產(chǎn)生限制效應(yīng),并聯(lián)合支持通過行為準(zhǔn)則進行自我規(guī)范。盡管歐盟人工智能法案的最終版本通過引入針對基礎(chǔ)模型的橫向透明度義務(wù)和對“高影響力”基礎(chǔ)模型制定了更嚴(yán)格的規(guī)則解決了這一爭議,但也突顯出了過度嚴(yán)格的人工智能監(jiān)管的一個潛在劣勢:額外的合規(guī)成本可能削弱人工智能行業(yè)的競爭力。

7) Final approved version of the EU AI Act and its material modifications

歐盟人工智能法案最終批準(zhǔn)版本及實質(zhì)性修改

On 8th December 2023, the EU legislators approved the final compromise text of the EU AI Act, which, despite substantial consistency with the initial proposal, adopted some material modifications, including the extension of the list of prohibited AI practices (biometric identification, emotion inference, social scoring, behavioural manipulation), horizontal transparency obligations for foundation models together with stricter rules for “high-impact” foundation models/general-purpose AI, and the retention of the “AI Office” proposed by the European Parliament as a supplement to the “AI Board” proposed by the European Commission. In addition, the final version of the EU AI Act amended the initial proposal to oblige certain public entities to register their applications of high-risk AI systems with regulators. Following this provisional agreement, the EU AI Act will be finalised promptly and enter into force in early 2024.

2023年12月8日,歐盟立法者批準(zhǔn)了歐盟人工智能法案最終妥協(xié)文本,盡管與最初提案在很大程度上保持一致,但采納了一些實質(zhì)性修改,包括擴展了禁止的人工智能實踐列表(生物特征識別、情緒推斷、社會評分、行為操縱)、針對基礎(chǔ)模型的橫向透明度義務(wù)以及更嚴(yán)格的規(guī)則適用于“高影響力”基礎(chǔ)模型/通用型人工智能,并保留了歐洲議會提出的“人工智能辦公室”作為歐洲委員會提出的“人工智能委員會”之補充。此外,歐盟人工智能法案最終版本修正了最初提案,要求某些公共機構(gòu)向監(jiān)管機構(gòu)注冊高風(fēng)險的人工智能系統(tǒng)應(yīng)用。在達成這項臨時協(xié)議后,歐盟人工智能法案將盡快完成起草工作,并于2024年初生效。

8) The ‘Brussels Effect’ and the EU AI Act’s potential influence on global AI governance

“布魯塞爾效應(yīng)”及歐盟人工智能法案對全球人工智能治理的潛在影響

The ‘Brussels Effect’ generally refers to the global applicability of the EU’s regulations and standards. By leveraging its large market size, the EU often adopts high standards in various areas, including digital technologies and data privacy. When multinational corporations operate in the EU market, applying the highest standard globally is generally more practical than maintaining different standards in different regions due to the high cost of differentiation. A notable example is the GDPR, which applies to overseas entities that collect EU citizens’ data, thereby forcing major multinational corporations, especially tech giants, to adhere to the GDPR globally. This worldwide adoption of the GDPR has also encouraged other regions to enact similar legislation, such as the Personal Information Protection Law of China and the California Consumer Privacy Act. Similarly, the EU’s Common Charger Directive, which obliges all electronic devices sold in the EU to adopt USB-C charging, has forced Apple to abandon its proprietary Lightning connector for the iPhone. As the EU AI Act, upon its entry into force, is set to become the strictest AI regulation globally, a similar Brussels Effect is likely to occur, forcing AI systems that operate globally, such as ChatGPT or Bard, to apply the EU AI Act universally. This global applicability could render the EU AI Act a de facto international standard for AI governance and substantially influence future Australian AI regulation.

“布魯塞爾效應(yīng)“通常是指歐盟法規(guī)和標(biāo)準(zhǔn)的全球適用性。歐盟借助其龐大的市場規(guī)模,在包括數(shù)字技術(shù)和數(shù)據(jù)隱私在內(nèi)的各個領(lǐng)域通常采用高標(biāo)準(zhǔn)。當(dāng)跨國公司在歐盟市場開展業(yè)務(wù)時,在全球范圍內(nèi)應(yīng)用最高標(biāo)準(zhǔn)通常比在不同地區(qū)維持多種標(biāo)準(zhǔn)更為實際,因為區(qū)分化的成本較高。一個顯著的例子是GDPR,適用于收集歐盟公民數(shù)據(jù)的海外實體,因此迫使主要跨國公司尤其是科技巨頭全球遵守GDPR。GDPR的全球采納也鼓勵其他地區(qū)出臺類似立法,如中國《個人信息保護法》和美國加州消費者隱私法案。同樣,歐盟通用充電器指令要求在歐盟銷售的所有電子設(shè)備都采用USB-C充電器,迫使蘋果公司完全放棄了其自身的Lightning充電器用于iPhone。隨著歐盟人工智能法案的生效,預(yù)計將成為全球最嚴(yán)格的人工智能監(jiān)管,類似的“布魯塞爾效應(yīng)”可能發(fā)生,迫使全球運營的人工智能系統(tǒng),如ChatGPT或Bard,在全球范圍內(nèi)普遍遵守歐盟人工智能法案。這種全球適用性可能將歐盟人工智能法案視為事實上的國際人工智能治理標(biāo)準(zhǔn),并對未來的澳大利亞人工智能法規(guī)產(chǎn)生重大影響。

IV Canadian Artificial Intelligence and Data Act (“AIDA”): A More Suitable Approach for Australian AI Legislation?

加拿大人工智能和數(shù)據(jù)法案(AIDA):對澳大利亞人工智能立法更合適的方法?

In June 2022, the Canadian Government introduced the AIDA to the Canadian House of Commons. Under Sections 6 to 12 of this Bill, high-impact AI providers are subject to self-assessment obligations, which include adopting mandatory risk mitigation measures, keeping records, and notifying users of the intended uses, the types of content generated, and the risk mitigation measures adopted. The providers must report any potential “material harm” caused by the high-impact AI. The responsible Minister may inspect records, order a mandatory audit, or even prohibit the deployment of a specific AI system if there is a reasonable belief that the AI may produce harmful or “biased output”, infringe Sections 6 to 12, or cause imminent harm.

2022年6月,加拿大政府向加拿大下議院提出了人工智能與數(shù)據(jù)法案AIDA。根據(jù)該法案第6至12節(jié),高影響力的人工智能提供者需承擔(dān)自我評估義務(wù),采用強制性風(fēng)險緩解措施,保存記錄,并向用戶提供關(guān)于預(yù)期用途、生成內(nèi)容類型和風(fēng)險緩解措施的通知。提供者必須報告高影響力人工智能的任何潛在“重大損害”。如有合理理由相信人工智能可能產(chǎn)生有害或“偏見輸出”、侵犯第6至12節(jié)或造成即將發(fā)生的損害,負責(zé)的部長可檢查記錄,下令進行強制審計,甚至禁止特定人工智能系統(tǒng)的部署。

One significant issue with AIDA is that its obligations for high-impact AI are less comprehensive than those for high-risk AI under the EU AI Act. Additionally, compliance under AIDA is ensured by self-assessment rather than by conformity assessment conducted by an authorized body. Nevertheless, the AIDA model could be a more suitable approach for future Australian AI legislation for several reasons. Unlike the rigid tiered approach under the EU AI Act, AIDA grants the Canadian Government broad discretion over enforcement by leaving it to define key terms such as “biased output”, “high-impact AI”, and “material harm”, and to establish risk mitigation measures and penalties. The Minister may order a mandatory audit and prohibit the deployment of a specific AI system based on its potential risks. Without burdensome parliamentary scrutiny, this regulatory flexibility could enable continuous evaluation of AI-associated risks and accelerate the decision-making process for developing suitable standards for diverse AI applications in a timely manner. Furthermore, unlike the EU AI Act, which limits penalties to administrative fines, AIDA imposes criminal liability for severe infringements, potentially ensuring a higher level of compliance through stronger deterrence. This approach could offer a level of standards comparable to the conformity assessment under the EU AI Act, but with potentially reduced compliance costs due to its reliance on self-assessment.

AIDA的一個重要問題在于,其對高影響力人工智能的義務(wù)比歐盟人工智能法案對高風(fēng)險人工智能的要求不夠全面。此外,AIDA項下合規(guī)性是通過自我評估而非由授權(quán)機構(gòu)進行符合評估來確保的。然而,出于幾個原因,AIDA模式可能是未來澳大利亞人工智能立法的更合適方法。與歐盟人工智能法案項下嚴(yán)格分層方法不同,AIDA通過定義關(guān)鍵術(shù)語如“偏見輸出”、“高影響力人工智能”和“重大損害”,并設(shè)立風(fēng)險緩解措施和處罰,賦予了加拿大政府廣泛的執(zhí)行裁量權(quán)。部長可基于人工智能的潛在風(fēng)險下令進行強制審計并禁止特定人工智能系統(tǒng)的部署。在無繁瑣的議會審查情況下,這種監(jiān)管靈活性可持續(xù)評估人工智能相關(guān)風(fēng)險,并加快制定適用于多種人工智能應(yīng)用的合適標(biāo)準(zhǔn)的決策過程。此外,與限制處罰為行政罰款的歐盟人工智能法案不同,AIDA將刑事責(zé)任作為嚴(yán)重違規(guī)的處罰,可能通過更強有力的威懾確保更高水平的合規(guī)性。這種方法可能提供與歐盟人工智能法案項下符合評估相媲美的標(biāo)準(zhǔn)水平,但由于依賴自我評估,合規(guī)成本可能會降低。

V Bibliography

參考文獻

1) Legislation/立法

CA Civ Code § 1798.100 (2018).

Directive (EU) 2022/2380 of the European Parliament and of the Council of 23 November 2022 amending Directive 2014/53/EU on the harmonization of the laws of the Member States relating to the making available on the market of radio equipment.

Online Safety Act 2023 (UK).

Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC.

Regulation (EU) 2022/2065 of the European Parliament and of the Council of 19 October 2022 on a Single Market for Digital Services and amending Directive 2000/31/EC.

《中華人民共和國個人信息保護法》[Personal Information Protection Law of the People’s Republic of China] (People’s Republic of China), National People’s Congress, Order No.91/2021, 20th August 2021.

2) Other Legislative Materials/其他立法文件

Artificial Intelligence (Regulation) HL Bill (2023-24) 11 (UK).

Bill C-27, Digital Charter Implementation Act, 1st Sess, 44th Parl, 2022, pt 3 (Canada).

European Union, European Commission, ‘Proposal for a regulation of the European Parliament and of the Council laying down harmonized rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts’ COM (2021) 206 final, 21 April 2021.

European Union, Council of the European Union, ‘General approach adopted by the Council of the European Union on 25 November 2022 on the proposal for a regulation of the European Parliament and of the Council laying down harmonized rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts’, 25 November 2022.

European Union, European Parliament, ‘Amendments adopted by the European Parliament on 14 June 2023 on the proposal for a regulation of the European Parliament and of the Council laying down harmonized rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts’, 14 June 2023.

3) Draft treaty/草案條約

Council of Europe, Committee on Artificial Intelligence, ‘Consolidated working draft of the framework Convention on Artificial Intelligence, Human Rights, Democracy, and the Rule of Law’ CAI (2023)18, 7 July 2023.

4) Secondary Resources/其他資源

Anu Bradford, ‘The Brussels Effect’ (2012) 107(1) Northwestern University Law Review.

AI Safety Summit, ‘The Bletchley Declaration by Countries Attending the AI Safety Summit 1-2 November 2023’ (1 November 2023).

BBC, ‘ChatGPT banned in Italy over privacy concerns’ (Web Page, 1 April 2023).

Council of the European Union, ‘Artificial intelligence act: Council and Parliament strike a deal on the first rules for AI in the world’ (Web Page, 9 December 2023).

Charlotte Siegmann and Markus Anderljung, ‘The Brussels Effect and Artificial Intelligence: How EU regulation will impact the global AI market’ (Cambridge University Press, 2022).

Department for Science, Innovation and Technology (UK), ‘A pro-innovation approach to AI regulation’ (2023).

Department for Science, Innovation and Technology (UK), ‘Capabilities and risks from frontier AI: A discussion paper on the need for further research into AI risk’ (2023).

Lilian Edwards, ‘Expert explainer: The EU AI Act proposal’, Ada Lovelace Institute (Web Page, 8 April 2022).

Organisation for Economic Co-operation and Development, ‘What are the OECD Principles on AI?’ (2020).

Science, Innovation and Technology Committee, Parliament of the United Kingdom, ‘The governance of artificial intelligence: interim report’ (Ninth Report of Session 2022-23, 31 August 2023).

The White House, ‘Blueprint for an AI Bill of Rights: making automated systems work for the American people’ (2022).

The White House, ‘Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence’ (Web Page, 30 October 2023).

The Group of Seven, ‘Hiroshima Process International Guiding Principles for Organizations Developing Advanced AI system’ (30 October 2023).

The Group of Seven, ‘Hiroshima Process International Code of Conduct for Organizations Developing Advanced AI system’ (30 October 2023).

Tech Policy Press, ‘Will Disagreement Over Foundation Models Put the EU AI Act at Risk?’ (Web Page, 30 November 2023).

Reuters, ‘Exclusive: Germany, France and Italy reach agreement on future AI regulation’ (Web Page, 21 November 2023).