IP in the Age of GenAI: Why Boards Must Reimagine Intellectual Property Governance

By Faisal Khan

Artificial Intelligence—particularly generative AI—has ushered in a new era of creativity, automation, and possibility. But it has also destabilised one of the pillars of modern commerce: intellectual property. The systems now generating text, images, music, software code, and strategic insight at industrial scale were themselves trained on vast oceans of human-made content—much of it copyrighted, most of it scraped without permission, and nearly all of it governed by legal frameworks never designed with AI in mind. The implications for organisations, regulators, and boardrooms are profound.

Today, intellectual property is simultaneously more valuable, more vulnerable, and more ambiguous than at any point in recent history. The questions facing boards are not philosophical niceties; they go to the heart of competitive advantage, risk governance, brand integrity, and long-term value creation.

A New Foundation Built on Other People’s Work 

Most generative AI models were built on vast amounts of scraped data—books, films, journalism, art, source code, academic research, photography, and millions of pieces of creative work uploaded to the internet over two decades—almost all of it gathered without meaningful consent or compensation. Technology companies argue that this is lawful under “fair use” or text-and-data mining provisions; creators argue it is large-scale appropriation. Both are partly right, which underscores the problem: the law was never prepared for models that ingest and recombine the world’s intellectual output.

For creative industries (publishing, art, design, music, film, broadcasting, gaming) the shock has been severe. Many artists and writers now see AI reproducing their styles, voices, and techniques with uncanny accuracy. Some identify fragments of their own work resurfacing in AI-generated output. Others face market displacement as AI-generated books, images, and songs flood digital marketplaces. The balance of economic power has shifted sharply towards companies controlling the models, and away from the people whose work made those models possible. 

The risk for organisations is not limited to external creators. A company’s own content, code, design patterns, logos, documentation, or proprietary data may already have been ingested into commercial AI systems—especially if it was ever publicly accessible. IP leakage now happens at the speed of scraping.

The Ownership Paradox 

A second challenge is the unsettled question of who owns AI-generated output. In most jurisdictions, copyright requires human authorship. If a model autonomously generates a new design or text, the output may not qualify for protection at all. A company using such content could find it difficult to prevent competitors from copying it. In extreme cases, entire AI-generated product lines could lack any enforceable IP rights. 

Ironically, while the company may hold no exclusive rights to the output, it might still be exposed to claims if the model inadvertently reproduces copyrighted material—phrases from a novel, segments of source code, signature elements of a song, or stylistic trade dress. This produces a difficult dual risk: AI-generated output may be unprotectable when a company wants to claim ownership, and infringing when it wants to avoid liability.

Boards must understand that this is not hypothetical. Lawsuits have already begun—Getty Images v. Stability AI, The New York Times v. OpenAI, and class actions brought by authors including George R.R. Martin and John Grisham.

A Regulatory Vacuum 

Regulators are only beginning to address the implications of GenAI. Proposals range from transparency obligations for training data, to opt-out rights for creators, to licensing models for data ingestion. Some advocate strict controls on scraping; others focus on liability for harmful or infringing outputs. Courts in the US, UK, and EU are becoming the battleground for defining boundaries of acceptable use. 

In September 2025, the Saudi Authority for Intellectual Property (SAIP) issued its first publicly reported enforcement action for AI-related copyright infringement. A fine was imposed for using generative AI to alter a personal photograph and publishing it without the original rights-holder’s consent: a breach of existing copyright law rather than of a newly created “AI-copyright” statute. The decision effectively treats AI-enabled derivative works under the same legal framework as traditional infringements, rejecting the notion that AI transforms or nullifies copyright obligations. The case sets an important precedent for organisations operating in the Kingdom: using AI to modify or generate content does not exempt entities from IP liability, and boards must regard AI-enabled content handling with the same seriousness as other critical IP governance domains.

The Board’s Governance Challenge: Navigating Ambiguity 

The complexity of GenAI does not relieve boards of responsibility. Instead, it heightens it. Directors must understand the risks and opportunities of AI-generated content, implications for their existing IP portfolio, vulnerabilities in their development pipelines, and exposure created by staff use of external tools. 

Effective governance in this environment requires deep cross-functional understanding. Boards must work with management to establish strong internal policies defining how AI can be used, what data can be shared with external systems, and what forms of output require human review before publication. Contracts with AI vendors must specify data-handling rules, guarantee that customer data is not used for training, and set out clear ownership provisions for generated content. 

Future strategy will also require boards to reassess their IP portfolios: which assets remain protectable, which may have been compromised, and which may need new forms of defence in an era where competitors can use AI to analyse, replicate, or approximate innovations with unprecedented speed.

Towards a New Intellectual Property Framework 

The Age of GenAI calls for a fundamental rethinking of intellectual property laws, but until those reforms arrive, boards must operate in the grey zone between innovation and uncertainty. The companies that thrive will be those whose governance frameworks recognise that IP is no longer static, human-created, or easily delineated. It is fluid, collaborative, machine-augmented, and increasingly difficult to trace. 

This shift does not diminish the value of IP—it magnifies it. But it also raises the stakes for protecting and governing it wisely. Boards that embrace this challenge will lead their organisations confidently through the GenAI transition. Those that ignore it risk seeing their competitive advantage scraped, replicated, recombined—and ultimately lost.


About the Author

Faisal Ali Khan is a veteran technology entrepreneur and Chartered Director, specialising in the application of emerging technologies to corporate governance. He chairs the UK Institute of Directors’ Expert Advisory Group on Science, Innovation and Technology and serves as Chairman of Senebria Digital Ltd, a governance advisory firm based in the Dubai International Financial Centre.
LinkedIn: linkedin.com/in/khanfaisalali 


Disclaimer: The views expressed in this article are those of the author and do not necessarily reflect the opinion or position of the Center for Governance.