
By Santiago Gisler, 27 November 2025

Scientific writer and AI 

How should writers use generative AI? Not at all, if possible.

The cabaret-like entry of ChatGPT onto the public stage left us in awe. We were suddenly confronted with a technology accompanied by waves of contradictory promises. On the one hand, we were told that AI would transform society, improve medicines or help us create better content. On the other hand, many warned it would replace professionals, introduce harder-to-control misinformation and plagiarize content.

Whether we should use generative AI for writing is a nuanced question. I remain skeptical, but the more I use and learn about it, the more I’m inclined to recommend that content creators avoid AI whenever possible.

My skepticism derives from the numerous problems associated with the technology. I won’t even delve into its broader ethical issues, such as environmental impact, human rights abuses or its use in developing armaments. Because even if we focus solely on our work as writers, the problems of excessive AI use still outweigh its benefits.

As writers, we have valid concerns: Will AI content creation and usage replace human work? How can we use it responsibly and accurately? These are critical questions. Still, since I haven’t fully embraced a purist approach, I won’t advocate abandoning AI altogether. Instead, I’ll share my current perspective on generative AI and how to approach it cautiously in writing.

AI is really just a large language model

Given the almost anthropomorphic qualities people tend to attribute to generative AI – even in academic circles – it’s worth clarifying these tools’ alleged intelligence. Their human-like characteristics have created immense marketing potential, portraying them as intelligent, emotional or objective.

At a recent philosophy workshop on AI ethics, the event organizers repeatedly attributed god-like properties to ChatGPT, including future motives and feelings. Some attendees had personalized their ChatGPT to behave like famous characters, such as Harry Potter. This kind of embellishment highlights the over-the-top marketing behind generative AI tools and reminds us to treat them with caution.

In reality, generative AI tools like ChatGPT are large language models (LLMs): sophisticated mathematical models, built on the Transformer architecture, that recognize patterns in language. They don’t think; instead, they predict the next likely token (a piece of a word or a whole word) based on patterns they have learned from their training data, the current context and probability. This process continues until the model meets a predefined stop condition, resulting in a coherent, human-like response.
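To make that prediction loop concrete, here is a deliberately toy sketch in Python. It uses a simple bigram counter rather than a Transformer, so it is nowhere near a real LLM, but the generation step is the same in spirit: pick a statistically likely next token given the context, append it and repeat until a stop condition is met. The corpus, function name and stop token are all invented for illustration.

```python
from collections import Counter, defaultdict

# Toy training corpus; real LLMs are trained on vastly larger text collections.
corpus = (
    "writers use ai tools with care . "
    "writers use their own words . "
    "ai tools predict the next word . "
).split()

# "Training": count which token tends to follow which. A bigram model is a
# drastic simplification of a Transformer, but the generation loop is the
# same idea.
follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def generate(prompt: str, max_tokens: int = 10, stop_token: str = ".") -> str:
    tokens = prompt.split()
    for _ in range(max_tokens):
        candidates = follows.get(tokens[-1])
        if not candidates:            # no pattern learned for this context
            break
        # Predict the statistically most likely next token...
        next_token = candidates.most_common(1)[0][0]
        tokens.append(next_token)
        if next_token == stop_token:  # ...until a predefined stop condition
            break
    return " ".join(tokens)

print(generate("writers use"))  # e.g. "writers use ai tools with care ."
```

The output looks fluent only because it recombines patterns already present in the training text; nothing in the loop reflects on whether the result is true or sensible.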

Although impressive, these outputs lack intelligence in the sense of logic, reflection or problem-solving. LLMs merely repeat data they’ve been trained on – data from other writers – and provide a statistically probable outcome. With this in mind, we begin to see where an over-reliance on LLMs becomes problematic in writing.

AI weakens our writing and thinking

   Losing the chance to develop

I have a love-hate relationship with my old articles. All their grammatical mistakes, clumsy formulations and misused expressions make me cringe. Still, embarrassing as they are, those hair-raising mistakes also highlight my writing progress and stylistic improvements over time.

As we increasingly rely on generative AI tools for writing, we lose these self-reflective feedback processes, and with them, the opportunity to develop from our own mistakes. This AI trap particularly affects new writers who haven’t had the chance to develop a personal writing style.

AI-related hallucinations are also a big problem for new writers or anyone new to a topic. LLMs are prone to making things up – a lot. They deviate from facts, contradict themselves or the prompts, or include nonsensical information. These hallucinations result from issues with the quality of the training data, the generation method or the input, and they are challenging to identify unless the writer is at least somewhat familiar with the topic.

   A feedback loop of mediocre and erroneous content

Other critical drawbacks of generative AI models relate to our future information landscape and our ability to think critically and solve problems. Writing, and everything around it, requires the ability to think critically while organizing and structuring our perspectives to make an impact.

Excessive AI use strips away these critical aspects of impactful writing and traps the text, language and opinions within a generic, all-pleasing framework.

The more we rely on AI-generated content, the greater the likelihood that future AI model training will depend on dull and sometimes flawed data. It becomes a positive feedback loop of generic, impersonal and blunt content that is just… there.

Generative AI bots accounted for more than half of all web traffic in 2024, a figure that is expected to increase each year. LLMs may initially help us create coherent and seemingly credible content. However, as the information landscape becomes increasingly saturated with AI-generated content, that content draws audiences away from our human-made, personal and engaging work, ultimately reducing our online visibility and readership.

What we’re left with are accumulations of LLMs trained on LLM content, with fewer personal experiences and more hallucinations.

How to use generative AI

So, how do we turn all this skepticism and negativity into a constructive approach? My answer would be that, if we must use generative AI models for our writing, we’re better off using them sparingly and intentionally.

I’d recommend approaching it in the following way:

  1. Understand the topic by researching literature and videos. Although selective and sometimes unreliable, applications like Copilot can help you with your first references if you’re unfamiliar with the topic.
  2. Use your own words when drafting, and highlight any statements, expressions, phrases or sentences you’re unsure of.
  3. Ask AI models targeted and specific questions about your highlighted sections. Instead of asking ‘Does AI hallucinate?’ ask, ‘What are the most common factual errors or hallucinations that occur when writing about quantum computing?’
  4. Use multi-shot prompting, in which you submit several prompts with comprehensive context and examples before submitting your specific request (see the sketch after this list).
  5. Specify your prompts: ‘Proofread this text for grammatical errors and factual inaccuracies only. Do not change the style or phrasing unless it is incorrect. Flag any sections that seem to lack supporting evidence.’ Asking AI tools to ‘improve the text’ will always prompt them to suggest excessive changes, regardless of how well you write.
  6. Explicitly highlight all possible answers when asking a closed-ended question. Instead of asking, ‘Does this summary miss any key points?’ ask, ‘Does this summary miss any key points, or is it complete and accurate?’ This reduces the risk of the tool conforming to what it thinks you want.
  7. Use neutral language and avoid suggestive phrasing such as ‘Isn’t this a great sentence?’
  8. Approach all the information you receive from AI models with sound skepticism.
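To make steps 4 and 5 concrete, here is a minimal sketch of what a multi-shot, tightly specified prompt can look like in code. It assumes the OpenAI Python SDK and the gpt-4o-mini model purely for illustration – the article does not prescribe any particular tool – and the same structure (a narrow instruction, a few worked examples, then the real request) carries over to any chat-style LLM interface.

```python
# Minimal sketch of steps 4 and 5, assuming the OpenAI Python SDK
# ("pip install openai" with an OPENAI_API_KEY in the environment).
from openai import OpenAI

client = OpenAI()

messages = [
    # Step 5: a narrow, specific instruction instead of "improve the text".
    {
        "role": "system",
        "content": (
            "Proofread the user's text for grammatical errors and factual "
            "inaccuracies only. Do not change the style or phrasing unless it "
            "is incorrect. Flag any sections that seem to lack supporting "
            "evidence."
        ),
    },
    # Step 4: multi-shot context – worked examples of the behavior we want,
    # submitted before the real request.
    {"role": "user", "content": "The mitochondria is the powerhouses of the cell."},
    {
        "role": "assistant",
        "content": "Grammar: 'mitochondria is' -> 'mitochondria are'. No factual issues.",
    },
    {"role": "user", "content": "Water boils at 100 degrees Celsius at sea level."},
    {"role": "assistant", "content": "No grammatical or factual issues found."},
    # The actual request, phrased neutrally (step 7), with no leading language.
    {"role": "user", "content": "LLMs predicts the next token based on patterns in it's training data."},
]

response = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
print(response.choices[0].message.content)
```

However you phrase it, the output still needs the same sound skepticism as any other AI-generated suggestion (step 8).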

Eventually, we may realize that all these processes take more time than simply researching and writing on our own. I’m not here to discourage anyone from using AI tools. But perhaps a relevant question is whether we really need AI at all from a linguistic, professional and ethical perspective.

     Blog post by: Santiago Gisler
     Website: www.ivoryembassy.com
     Blog: blog
     LinkedIn: santiago-gisler-phd