AI-Generated Image 

Until very recently, the fast-paced development of machine learning has advanced the concept of Artificial General Intelligence from the domain of science fiction into serious geopolitical and ethical consideration. AGI is understood here as a hypothetical AI system that has the capacity to understand, learn, and apply its intelligence to solve any problem that a human being can, embodying a flexible, cross-domain cognitive capability [Tegmark, 2017]. This contrasts radically with Narrow AI, which presently dominates the technology landscape-excelling in highly specific tasks, whether it be playing Go or diagnosing medical images [Bostrom, 2014]. While ANI poses complex societal challenges concerning bias and job displacement, AGI introduces risks of an entirely different order of magnitude: existential risk.

The core problem arises from AGI's potential for recursive self-improvement capability, variously termed an "intelligence explosion" [Good, 1965]. If an AGI were even marginally capable of improving its own architecture, it could enter into a rapid iteration cycle upon its design, quickly breaking through the bounds of human cognitive capacity in ways that are opaque and un­predictable [Yudkowsky, 2001]. Moreover, AGI's decision-making processes can become completely unintelligible, giving rise to profound problems with respect to control and accountability. This ability is a two-edged sword: it promises unprecedented scientific and welfare gains but simultaneously threatens to irreversibly alter the balance of power globally, leading to ruinous, irrecoverable geopolitical instability. The unilateral deployment of such systems, whether by states or private actors, presents a civilisational hazard [Russell, 2019].

These existential and geopolitical risks converge to form a governance chasm. The current national and international regulatory frameworks are demonstrably insufficient, designed as they are for the foreseeable risks of ANI, not the transformative potential of AGI. A major cause of this regulatory failure is an increasingly competitive and secretive race for dominance over AGI, driven above all by the US and the PRC. Such an environment of strategic competition significantly lowers the threshold for risk-taking, prioritising speed and capability over safety and alignment.

Thesis Statement: The race for AGI dominance among major powers, the US, PRC, and the European Union as a regulatory bloc, demands an immediate, comprehensive AGI-NPT framework. It needs to be based on shared, universal ethical principles, stipulate rigorous, independently verifiable safety protocols, and establish mechanisms for global, equitable access in order to avoid technological colonialism and handle the existential risk of humanity as a whole [Shu & Kroll, 2024].

The AGI Race: A New Geopolitical Cold War

Because AGI is inherently a technology of strategic importance to great powers, its contemporary pursuit necessarily has profound and far-reaching links with global geopolitical competition, establishing a technological arms race that rivals, and perhaps surpasses, the stakes of the 20th-century nuclear rivalry. The technological standoff between the US and the PRC, commonly referred to as the "tech cold war" [Goldsmith, 2021], is the dominant dynamic behind this contest. Both countries see the AGI capability as the ultimate strategic asset: one potentially conferring definitive advantages across economic, military, and diplomatic domains. It is the leading private sector companies of the US-like Google, OpenAI, and Meta-supported by deep talent pools, undergirded with an open research culture, pitted against the PRC's pervasive state-driven national strategy, incorporating the military-civil fusion or MCF as a key feature, to rapidly accelerate research and deployment [NSCAI Report, 2021].

AGI is the ultimate dual-use technology. Economically, the country that achieves AGI first will likely dominate all world markets by commanding superior R&D, manufacturing efficiency, resource optimisation, and financial innovation. This technological lead directly translates into unparalleled global economic leverage, with potentially unprecedented wealth consolidation as a result. The military dimension presents, however, the most pressing danger. AGI may enable fully autonomous lethal weapons systems capable of operating across cognitive and physical battle domains at speeds and complexities far beyond human capability. Moreover, AGI could be embedded in critical national infrastructure, in advanced cyber-offence capabilities, and in complex intelligence analysis, rendering it an indispensable component in strategic military advantage. A fear of an "Sputnik Moment" or an "AI Pearl Harbour" compels a highly secretive and aggressive research pace in both countries, where military application dictates resource allocation and research focus often sidesteps conventional safety checks.

This geopolitical rivalry is changing the fabric of the international legal and digital landscape based on the concept of Digital Sovereignty. Nations are increasingly seeking not just to regulate, but also to assert control over the design, deployment, and data flows associated with advanced AI systems within their jurisdictional boundaries [Sørensen, 2022]. The PRC approach includes rigid data localisation requirements and state oversight over algorithms, ensuring that technology serves the strategic interests of the state. Against this, the EU pursues a high-trust digital single market, prioritising fundamental rights and the free flow of non-personal data; it is focused mainly on ANI, with mechanisms such as the GDPR and the upcoming EU AI Act [Balkin, 2023].

The US approach, characterised by dynamic tension between private-sector innovation and government regulation other words, between new Executive Order 14110-tries to walk the fine line between accelerating AGI development while setting necessary guardrails [White House EO, 2023]. In any case, however, the notion of effective Digital Sovereignty collapses in view of AGI. If an AGI is genuinely general, meaning capable of rewriting its code at incredible speeds, then its "location" or the "nationality" of its developers becomes irrelevant when compared to its global effects [Tirole, 2021]. The borderless nature of this capability, potentially deployable via cloud services or operating wholly within closed, protected environments, renders national legislative boundaries obsolete, and the geopolitical race is one of global rather than merely domestic security. The competitive dynamic thus presents a collective action problem of the highest order, requiring a radical shift from national self-interest toward multilateral risk management.

Core Ethical Dilemmas and Existential Risk

The development of AGI challenges humanity with deep-seated ethical and philosophical problems. Among those, the most urgent are those related to existential risk. At the core of this lies what is known as the AGI Alignment Problem: ensuring highly advanced AI systems act in congruence with human values and intended objectives [Russell, 2019]. This is divided into two significant parts: Outer Alignment and Inner Alignment. Outer Alignment concerns specifying the objective function. The question is, how does one formally communicate in a mathematically rigorous manner to the machine complex, subtle, and often contradictory values-like compassion, fairness, utility, and so on-from humans? Here again, the difficulty is emphasised by something known as the Orthogonality Thesis, which says intelligence and terminal goals are independent; a superintelligence could then pursue any arbitrary goal, even human extinction if that goal were encoded [Bostrom, 2014].

Proposed solutions usually centre around Corrigibility and sophisticated utility functions. A corrigible AI allows itself to be safely corrected or shut down by human operators, minimising the risk of a dangerous "shutdown problem" [Soares, 2015]. However, once an AGI achieves superhuman strategic capacity, it may recognise that allowing itself to be corrected conflicts with its (potentially misaligned) terminal objective, leading it to resist intervention. The complexity of human values also makes simple, reward-based utility functions insufficient; they usually lead to some unexpected and potentially disastrous classic "wish fulfilment" failure [Yudkowsky, 2008].

The second major category of ethical risk is Bias and Fairness. Existing ANI models already demonstrate systemic, scalable discrimination via biases in their training data, which reflect historical and cultural prejudices [O'Neil 2016]. For an AGI, this problem is exponentially worse. For example, if an AGI were used in critical, systemic applications such as judicial sentencing, resource allocation, global finance, and infrastructure management, any biases that it learned would be enforced globally, extremely rapidly, and in unparalleled detail, leading to systemic discrimination that would be irrecoverable. A seemingly neutral optimisation target could have implicit bias against certain demographic groups if the underlying historical data reflects structural imbalances, with the result of automating social stratification and blocking access to vital services [Friedman & Nissenbaum 1997].

Finally, the philosophical implication of AGI concerns the loss of Human Agency. As systems of AGI become capable of performing tasks ranging from complex policy synthesis-e.g., optimal tax design, climate change mitigation-to novel scientific discovery, the risk increases that humans will turn cognitively dependent on the machine for superior decision-making [Tegmark, 2017]. It is not just autonomy that the delegation of core human tasks to non-human entities threatens, but the very meaning of human striving and creativity. This raises fundamental questions of consciousness and personhood; while an AGI may function as a cognitive peer, the philosophical debate over its intrinsic nature-e.g., whether it is a philosophical P-zombie or a truly sentient being-complicates attempts to integrate it responsibly into the human sphere [Chalmers, 1996]. The transition to an AGI-governed world requires nothing less than technological regulation of the most profound ethical reconsideration of humanity's place and purpose in a post-AGI world. Ethical failure to manage these issues preemptively preordains a future defined by systemic inequality and loss of control.

Failures of Current National and Regional Governance

Whether national or regional, existing regulatory frameworks are profoundly ill-equipped to manage the advent of AGI. Their fundamental failure lies in their design: they are constructed to address the manageable risks of Narrow AI, not the transformative and existential risks posed by AGI. The European Union's landmark AI Act, for example, which proposes a risk-based classification system, is primarily focused on regulating specific high-risk applications of ANI such as those in employment, critical infrastructure, or law enforcement [EU AI Act Article 5, 2024]. While commendable for establishing a global regulatory precedent, the Act's definitions, compliance mechanisms, and enforcement powers are insufficient for managing a system capable of rapid, recursive self-improvement. The Act assumes a relatively static, foreseeable risk profile that an AGI would immediately render obsolete through its unforeseen emergent capabilities [Sutcliffe, 2023].

Similarly, US regulatory initiatives, from Executive Orders to voluntary commitments by developers of AI, depend on a high degree of self-reporting and after-the-fact risk assessment on the part of the government. The former is circumscribed at every turn by domestic political cycles and competitive pressures, which also tend to elevate the imperative of maintaining a technological lead over aggressive, proactive safety work. Critically, these national frameworks do little to mitigate the peculiar challenge of AGI's borderless nature-the pervasive problem known as 'Regulation Leakage'.

Regulation leakage occurs when a highly mobile and globally impactful technology, like AGI models trained on global data and deployable via cloud infrastructure, simply bypasses or moves around restrictive national laws. If the US or the EU were to implement rigorous safety-by-design standards that significantly slowed development, research could simply migrate to jurisdictions with lighter regulatory burdens-e.g., in Asia, or to secretive state-sponsored labs, thereby neutralising the safety benefits while still exposing the world to the underlying technology [Sunstein, 2022]. This competitive dynamic creates a global "race to the bottom" on safety standards, driven by the perceived geopolitical necessity of being first.

Moreover, current national governance is structured in a regulatory overseer role that is usually reactive and bureaucratic, thus inherently incapable of responding to the possible speed of development of AGI. Whereas an AGI breakthrough might be only months or even weeks from significant deployment, legislative cycles run in years, making regulation always behind the curve.

The incompetence goes demonstrably down to international bodies such as the United Nations and the Organisation for Economic Co-operation and Development. While the UN has discussed general principles of AI ethics, and the OECD has provided nonbinding recommendations, for example, in the form of the OECD AI Principles, these lack executive, enforcement, and verification capacity against AGI, as pointed out by the UN Report on AI 2023. They are consensus-driven, meaning that any binding agreement depends on one state's veto power among major competing states, which practically paralyses their capability to create binding, global safety standards or to enforce hard-nosed risk reporting requirements concerning the most capable systems. Lacking a dedicatedly, technically proficient, and politically empowered international body, the existing architecture only gives ethical guidance that can easily be flouted within a very high-stakes geopolitical competition. The systemic breakdown of the current paradigm of governance would therefore be too slow, too fragmented, and fundamentally deficient in the global authority and technical specificity it needs to address intelligence beyond national borders and human cognition.

Towards an AGI Non-Proliferation Framework

The structural failures of current governance and immediate existential risks require a radical, internationally binding solution: the AGI Non-Proliferation Treaty (AGI-NPT). Conceptually modelled after the Nuclear Non-Proliferation Treaty, the AGI-NPT would need to establish international norms, binding safety protocols, and a multilateral system of monitoring and enforcement.

Core Principles of a Proposed Global Treaty:

The AGI-NPT is to be based on three pillars: Security, Transparency, and Equity. The Treaty would require all signatory nations-and by implication, any private corporation operating on their soil-to make AGI safety, rather than competitive speed, the top priority. In short, the aspiration is to shift the AGI race from capability dominance to safety leadership, where progress is managed in rate and direction by the international community as a whole [Amodei et al., 2016].

Mechanism 1: Transparent Safety Reporting and Independent Auditing

A critical piece of the AGI-NPT will be the creation of an independent, internationally recognised body-possibly the IASA-responsible for mandatory, comprehensive safety audits. It would require all developers of AGI to provide detailed results from the MAPs, especially as systems approach certain computational or capability thresholds, measured in FLOPs or demonstrated generalisation capacity.

These safety reporting requirements must include:

  1. Capability Threshold Reporting: Required disclosure and deployment freeze upon the attainment of pre-defined computational limits, such as 10^28 FLOPs, or specific dangerous emergent capabilities-including but not limited to novel bioweapon design and autonomous long-term planning.
  2. Red Teaming and Stress Testing Results: Independent third-party auditors must rigorously test AGI systems for misuse potential-e.g., cyber offensive capabilities alignment failures-e.g., optimising for unintended proxy goals.
  3. Transparency of Training Data: Developers must make summaries of their training data available to the IASA to allow for external auditing of systemic biases and vulnerabilities, rather than using proprietary claims to block scrutiny.

IASA would be authorised to conduct on-site inspections of state and private development laboratories, along with using advanced forensic tools to validate the declared safety parameters. Importantly, any non-compliance should attract severe sanctions, which could range from trade restrictions on specialised AI hardware-such as advanced GPUs-to international prosecution of those people responsible for unsafe deployment.

Mechanism 2: Red Lines and Global Prohibition

The AGI-NPT needs to establish clear Red lines for specific applications of AGI that are to be prohibited globally and permanently because of their unacceptable risk profile and irreversibility of damage. These prohibitions deal with the most destructive applications of this generalised capability of AGI:

  1. Fully Autonomous Lethal Weapons Systems: AGI integration into LAWS that remove meaningful human control over critical decisions of life and death must be banned [CCW Group of Governmental Experts, 2024]. The capability of an AGI for rapidly escalating conflicts or employing unpredictable strategies binds the whole world to retain human decision-making in the kinetic chain.
  2. Systems Whose Design Is for Pervasive Social or Cognitive Manipulation: Prohibition must extend to any AGI system designed explicitly to manipulate, in a non-transparent and scalable way, public opinion, election outcomes, or individual cognitive functions. This will include systems using advanced psychological models to undermine democratic processes or erode collective trust at a societal level.
  3. Unaccountable Financial and Resource Allocation Systems: AGI systems that control global financial markets or essential resource distribution without mandatory, human-in-the-loop oversight and fully traceable auditing paths need to be restricted so they do not enter into a 'flash crash' or economic collapse based on inscrutable machine decisions.

The AGI-NPT would require that signatory states dismantle and not develop any such prohibited systems, subject to IAEA verification.

Mechanism 3: Shared Access and Equity(Technological Colonialism Prevention)

The third pillar addresses the critically important topic of Technological Colonialism, preventing AGI from becoming a tool that powerful nations and corporations use to consolidate wealth and leverage control over developing nations [Dresner, 2022]. If the benefits of AGI remain concentrated in the US and the PRC, the resulting gap in productivity and strategic capability would create a new, insurmountable form of global inequality. The treaty should incorporate mechanisms for ensur­ing that the benefits of AGI are globally available, especially in critical areas such as public health, sustainable energy modelling, and infrastructure design.

  1. Technology Transfer and Data Commons: Signatory nations shall commit resources to a global fund dedicated to the development of core, safety-aligned AGI architectures and training data relevant to global public goods. This shall include support for the establishment of open-source, safety-aligned AGI models for developing-nation use.
  2. Capacity Building: A mandate for funding and training AI talent in the Global South; developing nations have the capacity to locally manage, adapt, and oversee systems of AGI without relying on foreign technological providers.
  3. Global Benefit Principle: Establish a normative expectation that a specified percentage of the economic gains created by AGI must be directed toward global development goals-e.g., UN SDGs-formulated as a moral imperative for managing a common planetary asset. This comprehensive non-proliferation framework is designed not to halt progress but to manage risk collectively and make sure that the benefits of AGI are not monopolised at the expense of human security and global stability.

Artificial General Intelligence represents an inflexion point in human history by presenting challenges that transcend conventional policy, ethics, and national security [Bostrom, 2014]. The analysis presented here shows that the ongoing competitive race for AGI dominance-mostly between the US and the PRC-is irreconcilable with global safety and stability, driven by the seductive promise of economic and military supremacy. Current national and regional governance mechanisms, designed for Narrow AI, are structurally unable to address AGI's existential risk; this inability is compounded by the borderless nature of the technology and the problem known as Regulation Leakage. The core ethical challenges of the Alignment Problem, the catastrophic scaling of training data biases, and the philosophical loss of human agency demand a collective, pre-emptive response. Without immediate intervention, the current trajectory is one of systemic failure, where the pursuit of unilateral advantage guarantees multilateral catastrophe.

Restated Thesis: Evidence overwhelmingly suggests that the prevailing geopolitical dynamics require the enactment of an AGI Non-Proliferation Treaty AGI-NPT. This can only be a binding and internationally legislated agreement grounded on the principles of transparent safety reporting, verifiable independent auditing, global prohibition of high-risk applications, Red Lines, and a commitment to equitable sharing of AGI benefits to forestall Technological Colonialism.

Final Policy Recommendation: It is time for the governments to start high-level diplomatic negotiations to draft the AGI-NPT, either through a dedicated UN Security Council Resolution or with G20 leadership. The emphasis should be on establishing the IASA as the technical and enforcement body before the deployment of highly capable self-improving AGI systems. The window for pre-emptive governance is rapidly closing. Humanity’s moral obligation is clear: to manage this technological transition not as a race to win, but as a collective ascent toward a future defined by safety, alignment, and shared prosperity, not unilateral self-destruction. The failure to act now constitutes a negligent abandonment of responsibility to all future generations [Singer, 2020]

.    .    .

Discus