I don’t care whether you use ChatGPT to write
I couldn’t care less whether you use ChatGPT or any other generative AI to write. In the end, it doesn’t make a difference. The preciousness with which many treat the subject (including myself, until recently) — as if there were some intrinsic quality worthy of preservation in purely human text — is unfounded.
I know this is a controversial opinion. I ask for your patience, and that you consider it in a narrow sense, setting aside related issues such as ethics and environmental concerns. With that said, let me make my case.
If you think ChatGPT writes better than you, you’re probably right. Even so, from my perspective as a reader, it’s (still?) easy to tell when a text was generated by AI and when it’s the product of a human being’s imagination, or it is most of the time, at least.
Still as a reader, I obviously take this perception into account when deciding whether or not to read something. And, almost needless to say, I much prefer human-written texts, or those in which the writer’s humanity comes through.
The preciousness of those averse to AI-generated texts stems from a flawed premise: that 100% human text is always better.
Not only is there a lot of very low-quality text out there, but there is also plenty that, even when written by capable humans and turning out exactly as intended, is still bad. It reads as if a machine had generated it. This predates ChatGPT. Websites and blogs that cater to Google’s algorithm, for example, have spent years publishing texts that hardly differ from AI-generated slop. Corporate emails, reports, earnings news, sports results… the examples are many.
Sometimes we have no choice but to read bad texts. Often, though, it is a choice; I would ignore that birthday note Alex received, for instance.
Life is not a grand prose or poetry contest, unfortunately.
In these situations, does it matter whether the author used AI? Whether the text was written partially or entirely by a human or by a machine? What matters is the quality of the text presented to the reader, and that doesn’t change. It remains a bad text.
A specific example: the other day I came across a story on the site Phone Arena titled “The unannounced Samsung Galaxy S25 Edge is already available for pre-order in the UK at a sky-high price.” It was written by a person, and nothing on the page suggests the aid of generative AI.
The article runs 664 words. The price only appears after 457 of them, meaning the “introduction” (fluff) occupies 68.8% of the text (457 ÷ 664 ≈ 0.688). I would bet it’s because Google penalizes texts with too few words.
Ironically, AI did a much better job of answering the question the Phone Arena article set out to address (even while drawing on sources like Phone Arena itself). Perplexity delivers the phone’s price after just 58 words.
AI has entered a game that we humans cannot win. What game? The game of mediocre, bad text.
***
I understand the press’s feeling of betrayal toward generative AI. Newspapers around the world have had their “corpus” absorbed and regurgitated by chatbots that, as the silly example above demonstrates, often do a better job than the sources they draw from.
Even so, it’s hard to find a newsroom that isn’t at least experimenting with AI-based solutions.
This new world exposes some inconvenient truths that journalism and, more broadly, all creative sectors have been slow to face: that words are tools; that the raw material of a newspaper, a blog, or anyone who writes is not the written word, but what it expresses.
The purism that disparages AI-generated text can be taken to the extreme: why, then, do we use word processors with spell checkers, auto-saving, versioning, and other artificial aids? Is digital text even worthy of our reading, as printed text has been for centuries?
Perhaps, at some point, AI-generated text will become sophisticated enough to rival that of a human who has mastered the language. Translation is an indicator of this uncertain but likely future. AIs translate colloquial writing between Portuguese and English very well, and I’m not ashamed to say I used one to translate the text you are reading.
I also don’t feel obligated to disclose every use of AI. I see some newsrooms making a huge effort to attach disclaimers to every single use of it. In this context, AI is a tool, or at least it is used as one. I don’t care whether you used Word, Photoshop, or anything else, including AIs.
And do you know why it doesn’t matter? Those who read this blog can sense, deep down, that an AI could not generate what is written here. And if a reader is unable to grasp that kind of subtlety, I’m sorry to say, they will ignore the disclaimer that AI was used, at most finding it a bit funny.