What We're Actually Arguing About When We Argue About AI Writing

April 17, 2026

The week I paid attention to Twilight

I was sick for a week, stuck on the couch, and I ended up watching Twilight, which, I admit, is not my normal fare, but I figured it would be harmless entertainment (at one point I did wonder whether it was the cold medicine talking).

And I noticed something. Almost nothing happens. Two people meet. They stare at each other a lot. There's a diner, a field, a hospital, a couple of houses... The plot could fit on an index card. But, oddly, the central characters drew me in; I was genuinely interested.

When it finished, I wanted to know why it had worked. Out of curiosity, and after some googling, I tried The Vampire Diaries, which was supposed to be more of the same. I turned it off in under thirty minutes; clearly the deciding factor was not the vampire aspect.

Something specific had worked in the first case and, in my view, was absent in the second, and I wanted to know what it was. That's the kind of question I tend to be drawn to, and the rest of the project grew out of that week.

The question this essay is about

I'm not going to argue that AI is good or bad for writing. That's the wrong question and it produces the wrong conversation. The right question is narrower and less comfortable:

What are we actually afraid of when we name the boogeyman "AI writing"? And is our fear even about the right thing?

Personally, I think the answer is no. I think we're fighting the wrong battle and losing the one that matters.

The case that changed the conversation

I won't recap the entire Shy Girl saga here. The merits of the underlying disputes will be resolved in court, in settlement, or in the tail of online attention.

What won't be resolved is the signal the cancellation sent to the rest of the industry. A major traditional publisher pulled a contracted title based on evidence that, by the publisher's own account, required a lengthy investigation to evaluate, indicating the "evidence" was not self-evident. The message to every agent, editor, and debut author in the industry was unambiguous. The appearance of AI involvement is now a career-ending liability, and the safest response is zero tolerance.

The Authors Guild ran a parallel response: a Human Authored certification program, launched in beta for members in January 2025 and opened to all US authors in March 2026, with a partnership with the UK Society of Authors. Authors can license a trademarked mark declaring their book is human-written. The certification is granted on the honor system — the Guild's CEO has publicly acknowledged that no AI detection tool currently exists that is reliable enough to vet manuscripts, and that the program therefore relies on authors' honesty. Let me repeat that for dramatic effect… A major publishing house cancelled a contract after the New York Times brought "evidence" about the use of AI. Then the CEO of the Authors Guild publicly stated that the Guild can't verify applications for its own "certificate," because no tool exists that can reliably detect AI writing in manuscripts (pause again so it sinks in…).

Setting aside the cowardly behavior of Hachette, consider the real worry: this gives the industry a protocol for acting on suspicion without clear evidentiary standards. The certification gives readers a seal that means only what an author's signature on a form means. Neither tells us anything about the craft of the books involved. Neither distinguishes between a novel generated by a large language model and a novel where a human writer used AI as a drafting and revision instrument. The conversation has collapsed the distinction without even acknowledging it exists.

What ghost writers tell us about authorship

Consider, for the sake of argument, a set of books nobody will argue about (probably).

James Patterson publishes dozens of titles a year, most of them drafted by co-authors whose names appear on the cover in smaller font. Nobody calls this fraud. Most celebrity memoirs are ghostwritten; the named author often hasn't written a word of the manuscript. Political autobiographies are almost entirely the work of hired writers. Stephen King published under Richard Bachman for years before the pseudonym was exposed, and nobody thought the Bachman books became less valid as literature when it came out that King had written them. The industry has a large and stable set of precedents for how we interpret authorship when someone other than the named author did a significant portion of the writing.

The criteria for the interpretation are (roughly):

The named author made the creative decisions. They chose what the book would be about, what the voice would sound like, what the argument was, who the characters were. They read and approved every line. They are the person who must live with the finished work professionally, legally, and reputationally.

The person who did the drafting worked within the named author's creative authority. They weren't the author. They were, in this sense, a creative instrument.

By these criteria, an AI-assisted book where the writer has designed the world, built the characters, locked the series architecture, approved every line, and rewritten extensively IS authored by that writer. The AI is the instrument. The writer is the authority. This is the structure under which ghostwritten books already operate, unquestioned, across the industry.

If the objection to AI-assisted writing were really about authorship, the ghost writer comparison would be fatal. The two situations are structurally identical in all the ways that matter if the question is who wrote the book.

Which means the objection is about something else.

Where the analogy breaks

The ghost writer parallel isn't perfect, and honest arguments name their own weak points.

A ghost writer is a human being. They're compensated for their labor. They made a choice to take the work. AI models were trained on corpora that included copyrighted work whose authors were neither consulted nor paid. That's a real and legitimate grievance. It's at the center of multiple active lawsuits. Any honest account of AI writing should acknowledge that the training data question isn't settled and that the economic model under which these tools exist is still being contested.

There's also a labor concern. If AI tools make it possible for fewer writers to produce more books, the working writers who would otherwise have done that labor lose income. That's true. It's also true of almost every productivity-increasing technology in every creative field for the last two centuries, which doesn't make it less painful for those affected, but it does place the concern in a recognizable historical category rather than in a singular moral crisis.

Ok, so let's address the elephant in the room, then, because he's getting impatient: cultural anxiety. A lot of people saying "AI writing is bad" are really saying "I'm afraid of what this is doing to the idea of humans as creative beings." That's a real feeling. It deserves to be taken seriously. But it isn't the same concern as "this book is fraudulent because it was drafted with AI," and collapsing the two produces worse outcomes for everyone — including the people expressing the concerns in the first place.

Here's the thing. The legitimate objections to AI writing are labor and training data. The objection that gets used in public — this book is fake because a machine helped write it — is neither. It's a proxy for the real concerns, dressed up as a craft argument. And because it's dressed up, we can't address it directly. We can only argue about books.

The Shy Girl cancellation wasn't about training data, let's be honest, and it wasn't about labor either. It was about the appearance of AI involvement, arrived at through amateur online investigation and a publisher review, while the author contested the findings. Whether AI was involved or not, the decision to cancel rested on the assumption that the publisher and the public can tell the difference between AI-generated prose and prose that wasn't. And that's a distinction nobody has a reliable method for making.

That's not a sustainable editorial standard. It's reputational risk management performing as integrity.

What AI did to my project and why I'm interested in the topic

I started writing the Nascent series because I wanted to understand what was possible. I did what any writer does — I built characters, I designed a world, I roughed out a four-book arc. I used AI for idea generation and early discussions, which is what most people mean when they say they use AI for writing.

That would have been the whole story, except I work in a quantitative field where AI workflows are standard. It was obvious to me this was the floor of what was possible, not the ceiling. So I started building architectural files: character documents, a world-building document, a series arc, tone guardrails, a chapter-by-chapter beat sheet for book one, an epigraph progression, a chapter title grammar. Thirty-odd documents, totaling more than fifty thousand words of architecture before a single chapter of the manuscript was drafted.

With that infrastructure in place, something amazing happened. I could interact with the entire four-book story in real time.

I could ask questions about character interactions I hadn't yet written. I could run hypothetical changes and see which beats they broke. I could seed an idea at chapter three and trace its implications through chapter twelve without having written either chapter. I could hold the whole shape of the series in working memory and examine it from any angle.

This is the part I want people to understand, because I think it's the part the current conversation is genuinely missing.

The master storyteller's research layer

Established novelists who write sprawling, internally consistent series don't manage that consistency by accident. They build research infrastructure. Character bibles. World-building notebooks. Maps. Timelines. Location files. Relationship charts. Terry Brooks has built up Shannara across four decades and dozens of books. George R. R. Martin maintains rooms full of notes for a series he still hasn't finished. Brandon Sanderson runs what amounts to a small business of research assistants, beta readers, and continuity checkers. The scaffolding that makes ambitious fiction possible takes them years to build and is invisible to readers.

AI didn't give me prose. AI gave me real-time access to that scaffolding on day one.

That's the reframe I'd like to leave you with, because it's the one I think matters most. The debate about AI writing keeps focusing on whether the machine can generate a good sentence. That focus is a mistake. The machine cannot generate a good sentence without a human who already has a good sentence in mind. What the machine can do, with the right architectural input, is make the research, continuity, and consistency layer of serious fiction navigable for a debut novelist in a way that used to require years of back-office work (and maybe a small staff).

The prose, in my case, gets substantially rewritten. That's the necessary and non-negotiable part. Every line passes through my hands multiple times before it stays. The voice is mine. The architecture is mine. The decisions are mine. What AI contributed was the ability to hold the whole project in working memory at once, which is what novelists with twenty-year careers have been doing the long way around since the novel was invented.

If that's fraud, the category is larger than anyone's willing to admit.

What we should be talking about

The standards by which we evaluate a book should be the standards by which we evaluate any book. Is it good? Is the voice consistent? Are the characters earned? Does the story work on its own terms? Is the author accountable for the finished product?

Those are the questions that matter. The question of which tool was used to generate first-draft prose belongs in the same category as the question of which word processor the writer used, or whether they dictated, or whether they wrote longhand and transcribed, or whether they worked with a developmental editor who reorganized three chapters for them. It's a process question. It isn't an authorship question.

Treating it as an authorship question has given us a publishing industry that cancels books under contested circumstances, a certification scheme running on the honor system, and a generation of writers who are using these tools privately and not saying so publicly because the professional cost of honesty is too high.

A 2025 BookBub survey of over 1,200 authors found that about 45% of respondents were using generative AI in some part of their work. Of those who use it, 74% do not disclose that use to readers. Call that what you want. I'll call it a professional culture where everyone has agreed to pretend together.

I'd rather be open about it. The alternative is the complicit acceptance of a culture in which everyone is doing the same thing and agreeing not to notice. That's worse for writers. It's worse for readers. It's worse, eventually, for the books themselves.

Final thoughts

I'm not asking anyone to approve of how I'm writing this series — it continues to be one of my most enjoyable dives into the melding of technology and creativity, and I'm happy to share that. I'm suggesting you notice which question you're answering when you disapprove.

If the question is "is this craft?"

Then it must be evaluated book by book, on its own terms.

If the question is "is this fair to working writers?"

That's a legitimate question, and it deserves conversation, not a certification sticker backed by a mob of virtual pitchforks.

If the question is "does AI-assisted mean AI-generated?"

No, and the refusal to distinguish between the two is how a debut novel and a cancellation scandal end up in the same sentence.

If the question is "am I afraid of what this means about being human?"

I understand that feeling and I don't dismiss it. AI will continue to pose existential questions that will not go away and from which we, as a species, cannot hide. But take a moment and recognize this as a separate question from the others — and the answer to that one probably can't be found in a publishing company's policy department.

The question isn't whether AI belongs in fiction. The question is whether we can tell the difference between the work of the tool and the work of the writer. If we can, the category holds. If we can't, we have a much bigger problem than AI.

Quinn Frost is a pen name. The books will be self-published. The How I Work page on the website will tell anyone interested, in more detail than any legal disclosure requires, how the sausage gets made. From there, you, the reader, decide — not the invented controversy that isn't about the thing it thinks it's about.