Image by Pete Linforth from Pixabay

The rapid rise of GenAI, including complex models such as LLMs and diffusion systems, presents a fundamental epistemological challenge to IP law worldwide. By allowing machines themselves to generate complex, original works, GenAI has structurally severed the historical connection between human creativity and legal authorship. The crisis is exacerbated by a large Governance Gap and a high degree of Regulatory Lag, in which incremental, common law-based legal systems cannot match the exponential, non-linear development of GenAI capabilities. This leaves the door open to vast commercial exploitation without rules and places the economic burden of establishing legal precedent on creators. Indeed, the scale of the new conflict is enormous: the forecast growth of the global GenAI market to almost USD 1 trillion by 2032 (Fortune Business Insights, 2024) is explicitly built on the output of models whose very legality has been fundamentally called into question.

At its core, the constitutional aim of U.S. copyright, anchored in the Copyright Clause (U.S. Constitution, Article I, Section 8, Clause 8), is to advance the arts by securing rights to Authors. Yet GenAI actively undermines this Constitutional mandate by relentlessly extracting works’ expressive aspects to create unauthorised derivatives. This article examines the crisis as a twofold problem: the uncopyrightability of outputs and the infringement inherent in inputs.

The Doctrine of Human Authorship: The Output Uncopyrightability Deadlock

The output crisis is defined by the strict legal requirement of human originality and human authorship, a requirement structurally incompatible with the collaborative nature of GenAI. The Originality requirement is a constitutional mandate demanding independent creation and a minimal degree of creativity. The Creativity Hurdle, however, is high for GenAI output: a simple prompt often fails the Feist test because the complex expressive choices are performed by the algorithm. The exclusion is philosophically rooted in the idea that originality must be tied to human personality, possessing “something irreducible, which is one man’s alone” (Justice Oliver Wendell Holmes). While GenAI output easily meets the Fixation requirement (embodiment in a tangible medium), the debate over Originality remains a major barrier. The Human Authorship Mandate is, moreover, the primary legal roadblock. The USCO Compendium § 306 explicitly requires that a work be created by a human being, a policy reinforced by the fact that copyright duration is measured against a human author’s lifespan. This administrative policy was affirmed judicially in the landmark case Thaler v. Perlmutter (2025), which solidified the judicial position that human authorship is a bedrock requirement. The ruling definitively rules out the AI itself as author (consistent with Naruto v. Slater) and, by extension, the developer (due to “double dipping” concerns), forcing the debate onto the end user.

The most plausible candidate for authorship remains the end user. Their contribution is best defended through the Burrow-Giles Analogy: the Burrow-Giles Lithographic Co. v. Sarony (1884) precedent found authorship in the photographer’s deliberate choices (composition, lighting) despite the use of a machine. The AI user’s acts of crafting a prompt, iterating on results, and selecting a final output are the modern equivalent of that intellectual labour, making the user the “originator.” These acts also satisfy the core philosophical theories of authorship: Lockean labour, through the effort invested in the prompt; Hegelian personality, through the infusion of creative intent into the work; and Kantian communication, in that the final image constitutes a creative act of speech. Thus, even where the AI acts as executor, the end user is the intellectual creator.

The Input Crisis: Mass Reproduction and Liability Exposure

The input crisis is marked by AI’s parasitic relationship with copyrighted works, resulting in the mass unauthorised use of data and a threat to the knowledge ecosystem. Copyrighted materials have been targeted at scale, in blatant disregard of ethical conventions. For example, the Books3 dataset, comprising nearly 200,000 copyrighted e-books, has been used by companies such as Meta to train their systems. That use has been roundly condemned as unauthorised ingestion by a raft of authors who say it represents the “biggest act of copyright theft in history.” Similarly, the Large-Scale Artificial Intelligence Open Network (LAION), whose datasets are used by commercial firms, provides access to billions of training images without permission, perpetuating the problem of visual plagiarism.

While providers have generally relied on the Fair Use defence to cover this process, they remain highly exposed legally with respect to the outputs generated by users. Providers face claims of secondary liability: vicarious infringement, based on their right to supervise user activity and their financial interest in it, and contributory infringement, based on their knowledge of and material contribution to the infringing output. This exposure demands a regulatory solution, given that courts will seek to hold the financially capable party accountable. Moreover, the ongoing suits by The New York Times and Getty Images highlight the catastrophic substitution effect, wherein AI outputs directly compete with and replace the market for the original copyrighted content. Beyond compensation, the critical long-term danger is the Quality Crisis: low-cost synthetic data, unmoored from direct human experience, is eventually recycled into future training sets, leading to model collapse, degradation of the knowledge ecosystem, and subversion of the core Constitutional goal of advancing the arts.

Policy and Regulatory Divergence: Towards a New Synthesis

The failure of the classical, binary legal system demands an immediate conceptual and regulatory overhaul. The inadequacy of simple binary categories requires a theoretical shift toward the Quantum Authorship Framework. This framework recognises that creative works exist in a state of superposition until legal judgment forces their collapse into simplified categories, and that human-AI collaboration exhibits entanglement: the user’s and the machine’s contributions are inseparable. The legal attempt to detect human authorship thus acts as an observer, creating the distinction it claims to discover and misrepresenting the work’s collaborative reality. The framework compels the system to abandon the futile search for a singular authorial origin and instead focus evaluation on the quality and nature of the collaborative interaction, examining human expertise, judgment, and creativity.

The most viable policy path is an evolutionary expansion of the term “author,” incorporating a proximate-cause element based on human intellectual contribution. The authorship formula must encompass labour, personality, and communication. This expansion confers critical advantages: it legitimates the human contribution and, importantly, can make the ecosystem more ethical by encouraging developers to be more conscientious about training data, thereby helping to restore market share and creative integrity for original artists. It would also move U.S. law into alignment with International Precedent, as countries such as the United Kingdom, India, and New Zealand already recognise the copyrightability of AI-assisted works, often legislating that the author is the person who made the arrangements necessary for the work’s creation. Any solution must grapple, however, with the practical impediments of the input crisis. The Technical Attribution Challenge means that any proposed Licensed Compensation scheme confronts the formidable obstacle of tracing an output to individual inputs, an obstacle further compounded by the high prevalence of “Orphan Works.” Given this challenge, transparency mandates, such as the EU AI Act’s requirement of a “sufficiently detailed summary of the content used for training,” are the absolute minimum regulatory floor necessary even to attempt a viable compensation scheme.

Reaffirming the Value of Human Creativity

The Algorithmic Authorship Paradox requires a legislative solution that abandons the inflexible, classical conception of authorship in favour of the conceptual flexibility offered by the Quantum Authorship Framework. Such a solution would legally recognise the end user as an intellectual author by expanding the concept of authorship to encompass the necessary labour, personality, and communicative intent. This expansion also offers a crucial ethical advantage, forcing developers to be significantly more thoughtful about training data. Failure to adapt the notion of authorship to the twenty-first century runs the real risk of turning this technological revolution into a tragedy of the creative commons, one that subverts the Constitutional mandate to promote the progress of the useful arts. The stability of the future creative economy depends on both the speed and the wisdom of regulatory intervention.
