01 · How do you make LLM output that doesn't read as LLM output?
The Challenge
Surfer's content tools generate AI-written drafts at scale. The drafts are useful, but downstream, customers run them through AI-detection tools (Originality.ai, GPTZero, Copyleaks, Turnitin), and the same statistical regularities that make LLM output coherent also make it easy to flag.
The product question was direct: build a rewriter that takes a draft, produces output that passes the major detectors, and keeps the meaning and structure intact. It had to be treated as a research problem, because the obvious approaches don't work.
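To make the "easy to flag" point concrete, here is a toy sketch of one signal detectors are commonly described as using: "burstiness", the variation in sentence length. This is an illustration only, not any detector's actual scoring; the `burstiness` function and the corpus strings are hypothetical.

```python
import re
import statistics

def burstiness(text: str) -> float:
    """Toy proxy for one detector signal: variation in sentence length.

    Human prose tends to mix short and long sentences; LLM drafts are
    often more uniform, which yields a lower score here. Returned value
    is the coefficient of variation of per-sentence word counts.
    """
    sentences = [s for s in re.split(r"[.!?]+\s*", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    return statistics.stdev(lengths) / statistics.mean(lengths)

# Hypothetical samples: evenly paced prose vs. prose with varied rhythm.
uniform = "The cat sat on the mat. The dog lay on the rug. The bird sat in a tree."
varied = "Stop. The cat sat quietly on the old woven mat near the door. Why?"

print(burstiness(uniform))  # 0.0 — every sentence is six words long
print(burstiness(uniform) < burstiness(varied))  # True
```

Real detectors combine many such signals (perplexity under a reference model, token-distribution statistics, classifier scores), which is part of why naive rewrites fail: smoothing one signal often leaves the others intact.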





