

China promotes coordination of AI governance

By Han Na | China Daily | Updated: 2024-07-02 06:34

Artificial intelligence technology has become a key driver of social development. But with the rapid development of advanced technologies, the global governance system faces unprecedented challenges, particularly in the field of AI security governance.

Establishing a secure, reliable and efficient AI security governance system has become an important issue that requires all stakeholders to collectively address. Given this fact, adopting a trust-based approach to global AI security governance will not only optimize the existing governance system but also appropriately respond to future security challenges.

AI is closely intertwined with security and development policies. As the core driver of the fourth industrial revolution, AI plays an irreplaceable role in both development and security.

Little wonder then that stakeholders are striving to edge ahead of each other in AI technology, and economies such as the United States, the European Union and China have already issued AI regulations. In fact, there has been a surge in global AI regulations this year.

On March 21, 2024, the United Nations General Assembly adopted the first global resolution on AI, calling for the development of "safe, reliable, and trustworthy" AI systems to promote sustainable development. This is a significant step the international community has taken to establish a global AI governance framework and lay the foundation for global cooperation in formulating common rules and standards for AI application and development.

The EU, for its part, has endorsed the AI Act. A year earlier, in March 2023, it had already passed the act, which emphasizes the importance of a risk-oriented, end-to-end control approach, classifies requirements according to risk levels while focusing on specific industries and scenarios, and reaffirms the EU's leadership in technology regulation.

In October 2023, the US issued the executive order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, and in February 2024, it announced the establishment of the AI Safety Institute Consortium. Supported by more than 200 leading AI stakeholders, the consortium is aimed at securing a leadership position for the US in the international AI governance framework.

In order to promote the healthy development of generative AI technology, China issued the Interim Measures for the Management of Generative Artificial Intelligence Services in July 2023. China is also actively involved in global AI governance and proposed the "Global AI Governance Initiative" in October last year, advocating for the international community to uphold a shared, comprehensive, cooperative, and sustainable security perspective to promote the beneficial use of AI technology for humanity and advance the building of a community with a shared future for mankind.

Global AI security governance is complex, especially because of the tension between the rapid development of technologies and the relatively slow construction of the governance system, as well as the diversity and complexity of the interests of different stakeholders. In addition, the disparities between developed and developing countries in AI research and application have resulted in uneven development of AI across economies.

Moreover, economies, organizations and enterprises differ in their AI governance policies, making the formation of a unified global AI governance system very challenging.

Geopolitical factors, too, have had a significant impact on global AI governance. Western economies, led by the US and the EU, are competing to set AI policies and regulations and to build strategic alliances in AI governance, creating obstacles to other countries' participation in developing AI governance and engaging internationally.

As such, there is an urgent need to promote global AI security governance based on trust between states; AI security governance and inter-state trust reinforce each other in a continuous, positive interaction.

Trust in the world arena can be categorized into rational trust, value trust and environmental trust. Rational trust involves interactions in which each party weighs its own interests and strategies when deciding whether to trust another; it is the fundamental consideration and prerequisite for establishing trust in AI security governance between different parties.

Value trust refers to interactions between different parties based on emotional, cultural and cognitive concepts, serving as the guiding philosophy and key condition for establishing governance trust between them. And environmental trust involves interactions between different parties based on communication mechanisms and governance systems, serving mainly as an important channel and safeguard for establishing trust in AI security governance.

Enhancing trust can help manage major-country competition and mitigate its adverse effects. It can also create room to resolve issues arising from such competition, which in turn further strengthens inter-state trust.

Besides, national trust is the cornerstone of international cooperation and governance, especially when it comes to addressing common global issues such as AI. By strengthening communication and cooperation, stakeholders can set common security standards and governance mechanisms to effectively address the risks and challenges posed by AI. A governance system based on trust can help boost the confidence of all parties in the field of AI.

The trust-based approach to global AI security governance is a complex and systematic endeavor that requires joint efforts of the international community. By setting unified technical standards and ethical norms, strengthening data security and privacy protection, and promoting public participation and social supervision, a foundation of trust can be built for global AI security governance.

On this foundation, the international community can build an open, fair and effective governance mechanism by balancing technological innovation with ethical considerations, establishing effective risk assessment and response mechanisms, seeking consensus on mutual trust, enhancing value trust and environmental trust, and fostering international coordination and cooperation to consolidate consensus. Such a mechanism would promote the beneficial use of AI technology for humanity and gradually enhance rational trust.

In the future, as AI technology continues to advance, global governance will face more challenges. It is crucial to actively support the establishment of a just, transparent and efficient global AI governance system within the framework of the UN to provide solid guarantees for the harmonious development of human society.

The author is an associate professor and doctoral supervisor at, and executive director of, the International Governance Research Center for Cyberspace, People's Public Security University of China. The views don't necessarily reflect those of China Daily.

