It has recently come to light that U.S. companies specializing in artificial intelligence (AI) have engaged in what is being termed “secret diplomacy” with China’s AI experts. The Financial Times reported on January 11 that meetings between representatives of U.S. companies, including OpenAI, Anthropic, and Cohere, and Chinese state-backed institutions, notably Tsinghua University, took place in Switzerland in July and October.
The shared concern among these entities revolves around the potential misuse of powerful AI technology, including the spread of misinformation and threats to social cohesion. While the talks allegedly aimed to discuss the risks associated with emerging technology and promote investments in AI safety research, the underlying geopolitical implications and potential security risks cannot be ignored.
Tsinghua University, under the control of the Chinese Communist Party (CCP), is intricately tied to a regime accused of conducting an ongoing genocide against the Uyghurs. The U.S. State Department has raised alarms about the CCP’s totalitarian nature and global ambitions. It becomes imperative, therefore, to scrutinize the implications of U.S. companies engaging in discussions with an authoritarian regime that has a track record of human rights abuses and aggressive geopolitical strategies.
The attendees of these meetings, reportedly including scientists, policy experts, and representatives of both U.S. AI groups and Chinese state-backed institutions, described the talks as a means of finding a scientific path forward for the safe development of more sophisticated AI technology. However, the lack of acknowledgment of the CCP’s concerning actions, such as the ongoing genocide and its totalitarian governance, raises questions about the naivete of the U.S. participants.
While the U.S. and UK governments were reportedly informed of these discussions, the issue of insufficient coordination becomes evident. It is essential for the United States and its allies, particularly the Group of Seven countries, to establish a unified negotiating position on AI safety before individual companies pursue private talks with the CCP. Such a collective approach would provide the democratic nations with greater leverage over China, preventing the inadvertent disclosure of critical information and negotiating positions to an adversarial regime.
Furthermore, the potential for espionage during these meetings raises serious concerns. The CCP, known for its strategic exploitation of technological collaborations, could use such engagements to compromise U.S. companies and obtain confidential information through subterfuge or espionage. The lack of diplomatic and intelligence acumen displayed by the AI company representatives in engaging with a totalitarian and genocidal regime is irresponsible and poses significant risks to national security.
The reported discussions covered areas of technical cooperation and policy proposals that fed into international forums such as the United Nations Security Council meeting on AI in July 2023 and the UK’s AI summit in November the same year. However, the naivete of the AI companies is apparent in their failure to address fundamental issues, such as verifying Beijing’s compliance with any agreement and gauging the trustworthiness of a regime that has violated multiple international commitments.
Notably, some major AI companies wisely chose to avoid these meetings altogether, recognizing the inherent risks and ethical dilemmas associated with collaborating with the CCP.
The Shaikh Group, based in Cyprus, played a pivotal role in convening these talks, with its chief executive expressing a lofty goal of establishing global standards around the safety of AI models. In the pursuit of such standards, however, it is crucial for AI companies to navigate the geopolitical landscape with greater awareness, ensuring that their collaborations do not compromise democratic values or inadvertently aid a regime with a track record of human rights abuses and geopolitical aggression. The companies’ concerns about the existential risk that AI poses to humanity are legitimate. However, they opened a channel through which critical scientific knowledge could flow to China, which lags behind the United States in AI, and gave the regime a forum from which it could further attempt to impose international regulations and limits on U.S. technology.
In conclusion, the clandestine nature of AI collaborations between U.S. companies and the CCP demands a re-evaluation of the ethical considerations, security risks, and diplomatic acumen involved. The naivete displayed by these companies in engaging with a regime with a questionable human rights record raises alarms about the responsible development and deployment of AI technology on the global stage. The imperative now lies in fostering a unified and informed approach among democratic nations to safeguard against potential exploitation and to uphold the values that underpin responsible AI development.