The Ethics of Writing with AI
As I sit here to write this, I'm joined by my wife, who is unfortunately recovering from a cold. To soothe her malady, she bought a box of crackers on Amazon that look suspiciously like the ubiquitous golden round variety we love to put in our soup. Because her sense of taste eludes her at the moment, she asked me to try one.
"It's fine," I reported. I didn't realize the name brand had a flavor until I tried these and noticed the lack of one. But the texture was fine, and they would do what she needed them to do. I told her they were just all right.
As I reflect on AI, these crackers seem relevant.
In September, Anthropic agreed to pay $1.5 billion to settle a class-action lawsuit brought by authors, confirming what we know to be true: AI is a machine that takes in prompts and spits out copyright infringement. What it produces is rarely on par with what the median human can produce, and it has a tendency to make less sense as it goes. It is undoubtedly the off-brand cracker of the writing world, if off-brand crackers were trained on real crackers and then refused to compensate them for it.
Nobody is happy with how the off-brand cracker entered our lives. It showed up before we completely understood it, and a lot of us started to enjoy it before we knew what it was made of (a sort of crumbly cardboard, I'm here to report).
And yet, the cracker seems quite popular.
This past summer, the US announced an investment in AI that may well be north of $500 billion. OpenAI alone reports $12 billion in annual revenue and, whether you believe there is an AI bubble or not, that number will almost certainly grow as OpenAI develops more, and more varied, products.
The cardboard cracker is here to stay, no matter how displeased we are with the way it got here. And even if we as writers collectively agreed to forgo it, we would be fooling ourselves to think that our boycott could stem the tidal wave before us.
So do we eat the off-brand cracker?
I'm asking this question because I have to. I built Novelium, a tool that uses AI to analyze manuscripts. Every day I wrestle with whether what I'm building honors the millions of creators whose work trained these models or just perpetuates the harm. Most of what Novelium does works without modern AI. If I wanted to rip out the AI and sell a slightly worse version of the product, I could. I'm not sure the impact on sales would be all that high.
So that's the answer, right? Just Say No. Every AI-generated novel flooding Amazon makes the theft worse, doesn't it?
And yet, when my wife was sick and just really wanted soup and crackers that she didn't have to go to the store for, she had an answer that was at her door in under six hours. And yes, that answer likely infringes on IP (and if it doesn't, it should). And yes, that answer makes the entire cracker world a little darker for its lack of salt, but it was what she needed when she needed it.
The theft happened, and we, as a writing community, need to take a minute to mourn that loss. Millions of creators, from fan fiction writers on AO3, to novelists who poured years into their craft, to bloggers sharing their knowledge freely, had their work scraped without consent to build commercial products they'll never profit from. That cannot be undone, and it should not be forgotten. But maybe this tool, armed with humanity's collective knowledge, very poor articulation, and a penchant for em-dashes, has its place. If I use AI to brainstorm, to organize my thoughts, to gather knowledge of a topic where I need just enough information to be dangerous, I can write something better than I'd have written otherwise. And that's different from AI slop.
And that's the line, isn't it?
If AI helps me create something genuinely better, something that makes a reader's day a little nicer, helps them understand themselves, makes them feel less alone, then maybe the theft becomes meaningful instead of purely extractive.
Conversely, if I use AI to pump out novels I didn't write, flooding Amazon with content nobody wants to read, then I'm compounding the harm. I'm making the theft worse.
The standard can't be 'never use AI.' We can't afford for that to be the standard. Whether or not the more principled among us take a stand, there will always be writers (indeed, more each day) using AI in some fashion to assist their writing.
Equally, the standard cannot be 'use AI however you want.' Morality is not, after all, beholden to capitalism. The standard has to be this: If you use tools built on millions of creators' stolen work, what you owe them is an homage to their work. You owe them quality.
If we can agree to use AI for research and inspiration, maybe what we create will be all the better for it. If we can agree that we, as writers, are responsible for the words we publish regardless of which intelligence generates them, maybe we can foster a spirit of ownership in our work.
It's the next day. My wife is feeling better, and I'm happy to report we threw away that box of crackers. Now that she's on the mend, she's cooking something Italian with pasta and beef that makes the whole house smell like garlic. It's going to take her an hour, and it's going to be worth it.
The crackers were fine when she needed something quick, but what we're eating tonight is the real thing. The better thing.
I think we should use AI when we need it. Use it to research, to brainstorm, to get unstuck. But what we put in front of readers must be the full Italian meal, not the cardboard crackers. If we're going to build on stolen knowledge, the least we owe is something worth reading.
The theft already happened. Now we decide: Do we make it meaningful, or just make it worse?
Try Novelium
See how Novelium helps you organize your manuscript, track characters, and maintain continuity, so you can focus on writing quality stories.
Start Writing Better