Generative AI isn’t just changing how content is made; it’s rewriting the rules of who owns it, who gets paid, and what even counts as creation. In music, film, writing, and visual arts, artists are watching their styles cloned, their voices replicated, and their livelihoods undermined, all without permission, credit, or compensation. The systems powering these AI tools were trained on decades of human creativity, scraped from websites, databases, and archives without consent. Now those same tools are producing content that competes directly with the original works, flooding markets with cheap, AI-generated alternatives that undercut real creators.
Who Owns What When AI Makes It?
In the United States, the law is clear: if a machine does the work alone, there’s no copyright. The U.S. Copyright Office has repeatedly stated that works generated solely by AI, with no meaningful human input, cannot be protected. That means a fully AI-written novel, an AI-composed symphony, or a photorealistic image made from a text prompt has no legal owner.

But here’s the twist: while the output can’t be copyrighted, the inputs certainly can. AI models are trained on millions of copyrighted images, songs, books, and scripts, often taken without asking. So you’ve got a system where the training data is legally protected but the output isn’t. And the people who made the training data? They get nothing.

This isn’t theoretical. In December 2023, the New York Times sued OpenAI and Microsoft for using thousands of its articles to train AI models that now generate summaries, answers, and even rewritten versions of Times stories. A federal judge recently denied, in large part, the defendants’ motion to dismiss the case. That’s huge: it means courts are finally willing to consider whether training AI on copyrighted material counts as infringement, even if the output isn’t a direct copy. If the Times wins, it could force AI companies to pay licensing fees or stop using protected content altogether.

The Economic Toll on Creators
The numbers don’t lie. According to CISAC, generative AI could cost music creators €10 billion in lost revenue by 2028. In film and TV, creators stand to lose €12 billion over the same period. That’s not a drop in the bucket; it’s a mass transfer of value from artists to tech companies. AI-generated content is projected to reach €64 billion in market value by 2028, up from just €3 billion in 2023. That growth isn’t happening because AI is inventing something new. It’s happening because AI is copying what humans already made.

Consider illustrators. Many freelance artists rely on platforms like DeviantArt, ArtStation, or Instagram to showcase their work and land commissions. But those platforms are now being mined by AI companies to train models that can mimic their style in seconds. A single artist’s unique brushstroke or character design can be replicated thousands of times by AI, then sold as stock art or used in ad campaigns. The artist doesn’t get paid. They don’t even get credit. And suddenly clients don’t want to hire them; they just ask for "something like that" and generate it themselves.

Ben Zhao, a computer science professor at the University of Chicago, put it bluntly: "Artists are literally being replaced by models that have been trained on their own work." That’s not innovation. That’s exploitation.
The CLEAR Act and the Push for Transparency
In January 2026, Senators Schiff and Curtis introduced the CLEAR Act, a bill designed to force AI developers to be honest about what they use to train their models. Under the bill, any company releasing a generative AI platform must file a detailed notice with the U.S. Copyright Office listing every copyrighted work used in training. If the training data is publicly available online, the notice must also include the URL. It must be submitted 30 days before the AI tool is released to the public, or even used internally within a company. The bill also requires the Copyright Office to build a public database of all these notices.

Think of it as a public ledger of scraped work. If you’re a musician and you find out your song was used to train an AI that now generates pop ballads in your voice, you’ll be able to look it up. This isn’t about blocking AI. It’s about accountability: if companies can’t hide what they’re using, they can’t pretend they didn’t know.

This is a direct response to the industry’s lack of transparency. Until now, AI companies have refused to disclose their training datasets, claiming proprietary secrets. But if you’re building a product that competes with human creators, you don’t get to keep your recipe secret.

Who’s Doing It Right?
Not everyone is fighting this battle in court. Some companies are trying to fix the system instead of ignoring it.

Bria, a generative AI startup, trains its models only on datasets where creators have agreed to participate, and those creators get paid every time someone uses their style. If you generate an image that looks like a specific artist’s work, Bria calculates a royalty based on how much of that style was used and splits the revenue. The artist sets the terms. It’s not charity; it’s a business model built on consent.

Shutterstock took a similar path. It pays photographers and illustrators a small percentage every time their work is used to train AI models or when AI-generated images based on their style are sold.

Disney, one of the world’s biggest content owners, made headlines by becoming a major customer of OpenAI. The deal: Disney gets access to ChatGPT and OpenAI’s tools to build new content for Disney+, and in return it gets equity and influence over how its IP is used. It’s a win-win: Disney protects its characters, and OpenAI gets a powerful partner who can help shape ethical AI use in entertainment.

These aren’t just PR moves. They’re blueprints. If AI companies want to survive long term, they’ll need to build trust with creators, not just legally but economically.
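The consent-based royalty model described above can be sketched in a few lines. This is an illustrative toy, not Bria’s or Shutterstock’s actual formula (neither is public here); the per-generation fee and style-attribution weights are invented numbers.

```python
# Toy sketch of a consent-based royalty split: each consenting artist is
# paid in proportion to how much of their style a generation drew on.
# The fee and weights below are hypothetical examples.
def split_royalties(generation_fee: float,
                    style_weights: dict[str, float]) -> dict[str, float]:
    """Split a generation fee among artists, proportional to style weight."""
    total = sum(style_weights.values())
    if total == 0:
        return {}  # no attributed styles, nothing to pay out
    return {artist: round(generation_fee * weight / total, 4)
            for artist, weight in style_weights.items()}

# One $0.50 generation attributed 60/30/10 across three consenting artists.
payouts = split_royalties(0.50, {"artist_a": 0.6,
                                 "artist_b": 0.3,
                                 "artist_c": 0.1})
print(payouts)  # {'artist_a': 0.3, 'artist_b': 0.15, 'artist_c': 0.05}
```

The design point is the normalization step: however the attribution weights are estimated, the payout always sums to the full fee, so the platform cannot quietly pocket unattributed residue.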
The Bigger Question: Is Copyright Even Enough?
Here’s the uncomfortable truth: copyright law was never designed for this. It was built for books, songs, and paintings, things humans made with their hands and minds. Generative AI doesn’t copy. It remixes. It predicts. It extrapolates. It doesn’t steal a song; it learns the pattern of a hundred songs and stitches together something new. That’s why legal scholars like Mark MacCarthy at Brookings argue that copyright alone can’t solve this. You can’t sue your way out of a technological shift this big.

Some experts suggest compulsory licensing, where AI companies pay a set fee to use copyrighted material regardless of permission, much as radio stations pay to play songs. Others push for opt-out systems, like the one being considered in the UK, where creators are assumed to have given consent unless they explicitly say no. Then there’s the idea of letting creators form collectives, like unions, to negotiate licensing deals with AI firms, similar to how musicians negotiate with streaming services.

But here’s the most promising shift: redefining authorship. Instead of asking "Who made this AI image?" we should be asking "Who directed it?" The human who writes the prompt, selects the style, tweaks the output, and edits the final result: that’s the creator. The AI is a brush, a camera, a typewriter. It doesn’t have intent. It doesn’t have vision. It doesn’t have a right to own anything. Copyright should protect the human who uses AI as a tool, not the tool itself.

What Comes Next?
The creative industries are at a turning point. Either we build a system where AI helps artists make more, earn more, and reach more, or we let tech companies hollow out creativity and turn art into a commodity generated by machines, for machines.

For creators, the message is simple: document everything. If your work is online, assume it’s being scraped. Use watermarks, metadata, and opt-out tools where available. Join creator collectives. Support platforms that pay artists fairly. And demand transparency from every AI tool you use.

For policymakers, the path is clearer: enforce accountability, protect human authorship, and ensure creators aren’t forced to subsidize the next tech boom. The CLEAR Act is a start, but it’s just the first draft. The real work begins now.

One thing is certain: if we don’t fix this, the next generation of artists won’t just be competing with AI. They’ll be competing with a system that trained itself on their work, and that never paid them a cent.
Can I copyright a work made with generative AI?
In the United States, you cannot copyright a work generated entirely by AI. However, if a human provides significant creative direction, such as selecting prompts, editing outputs, arranging elements, or adding original content, you may be able to claim copyright on the final version. The U.S. Copyright Office requires that human creativity be the driving force behind the work, not just the AI. For example, a person who writes a detailed script, chooses specific visual styles, and edits multiple AI outputs into a final comic book may hold copyright on the final product.
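The human-direction factors listed above can be sketched as a toy heuristic. This is purely illustrative and not legal advice: the Copyright Office evaluates works case by case, and no fixed score determines eligibility. The factor names and thresholds below are assumptions made for the example.

```python
# Illustrative heuristic only, NOT legal advice. The four factors mirror
# the ones named in the text; the thresholds are invented for the example.
def human_authorship_signals(selected_prompts: bool,
                             edited_outputs: bool,
                             arranged_elements: bool,
                             added_original_content: bool) -> str:
    """Summarize how many human-direction factors are present."""
    signals = sum([selected_prompts, edited_outputs,
                   arranged_elements, added_original_content])
    if signals == 0:
        return "likely ineligible: generated solely by a machine"
    if signals >= 3:
        return "stronger claim: human creativity may be the driving force"
    return "uncertain: some human input, but eligibility is case by case"

# A comic book built from a detailed script, chosen styles, and heavy editing:
print(human_authorship_signals(True, True, True, True))
```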
Are AI companies breaking the law by training on copyrighted material?
It’s legally unclear, but courts are starting to say "yes." The U.S. Copyright Office has taken the position that using copyrighted works to train AI models that generate outputs competing with the originals may go beyond fair use. Lawsuits like the one filed by the New York Times against OpenAI suggest courts may agree. However, AI companies argue their training is fair use because they’re creating new tools, not copying content. Until courts issue clear rulings, this remains a gray area, but the legal tide is turning toward requiring permission or compensation.
What is the CLEAR Act, and how does it affect creators?
The CLEAR Act, introduced in January 2026, requires AI developers to disclose every copyrighted work used to train their models. They must file this information with the U.S. Copyright Office and make it public in an online database. This gives creators the ability to know if their work was used, and potentially to seek compensation or legal action. It doesn’t ban training on copyrighted material, but it ends secrecy. For the first time, creators can see what’s being used and take action.
Do companies like Shutterstock and Bria actually pay creators?
Yes. Shutterstock pays royalties to artists whose work is used in AI training and when AI-generated images based on their style are sold. Bria goes further: it pays creators a share of revenue every time someone generates content in their style. These companies are building ethical AI models that recognize the value of human creativity. They’re not perfect, but they’re proof that AI doesn’t have to exploit creators to succeed.
Why should I care if AI generates art if I’m not an artist?
Because creativity isn’t just about art; it’s about culture. When AI floods the market with cheap, uncredited content, it devalues original work and makes it harder for real creators to survive. That means fewer diverse voices, fewer unique stories, and eventually less innovation. If you enjoy music, films, books, or games, you’re benefiting from human creativity. If we let that system collapse, the content you love will become homogenized, algorithmically generated, and stripped of soul.