
By Jan Klerkx, 22 January 2026


On 28 November 2025, SENSE member David Barick, an experienced teacher, editor and translator of academic research texts, presented a lively, interactive Zoom talk under the catchy title ‘Do they still need me?’, addressing the fraught question of whether AI will replace humans as teachers, editors and translators of academic research writing.

David started by referring to some workshops on the topic that he had recently attended. A panel discussion organized by the European Association of Science Editors (EASE) concluded that ‘ChatGPT has a weak ability to differentiate between good/excellent and weak/OK research.’ At the same panel discussion, James Zhou presented the results of a survey among researchers from 110 institutions, asking whether they found LLM tools helpful for their academic writing. Nearly 60% of respondents found them ‘helpful’ or ‘highly helpful’, and about 20% found them ‘much more helpful than most human feedback’.

David then went on to discuss some of his own experiences of what LLMs, specifically ChatGPT, can do when it comes to editing scientific papers. He had asked ChatGPT to complete some of the editing exercises he uses in his own teaching. In discussing the results, he focused on paragraph structure and paid specific attention to coherence techniques, including given–new patterns, repetition of key words, grammatical parallelism and the use of transitional phrases and cohesive markers. He asked the audience to comment on the examples too, using the Zoom chat function.

In an example on thermonuclear energy production, ChatGPT did indeed resolve many of the coherence problems, but it also produced longer sentences than those in the original, even though its own advice generally recommended shorter sentences.

When asked to comment on the use of coherence techniques in a sample text on endometriosis, ChatGPT correctly identified the use of repetition of key terms and the use of linking devices and parallel structures. The program’s editorial judgement was that ‘minor changes would likely be enough to make the text publishable for a specialist readership’, but that ‘more intervention would be beneficial’ for a more general audience. However, the suggestions it made for such interventions were minimal and not very helpful. Under the heading ‘Break up long, dense sentences’ it produced a suggestion that actually resulted in a less concise paragraph! It also suggested breaking up a nine-sentence paragraph using subsections with separate headings, which journal editors would probably not appreciate.

The final example concerned a longer text (an entire introduction section) on alcohol consumption patterns, which the audience judged to be poorly written; they suggested it had not been written by a native speaker of English. ChatGPT also recognized this and even correctly surmised that the text had been written by an author from Spain or France (the author was in fact Spanish). It also correctly identified many of the problems of coherence, grammar and collocation. Other suggestions, however, were less helpful and often involved introducing words or phrases that added no useful content and actually hampered the flow. Some of the additions sounded very generic and did not relate specifically to the topic of the text, e.g. ‘This review aims to synthesize recent findings, identify consistent patterns of impairment, and highlight methodological limitations to guide future research.’

Some of ChatGPT’s comments showed that it failed to distinguish between closely related words, e.g. ‘binge drinking’ and ‘hangover’, both of which it referred to as patterns of alcohol intake.

Finally, David showed us what ChatGPT had to say when it was asked: ‘ChatGPT can give extensive information on how to write a scientific research article. Do you think that it is a satisfactory substitute for human teachers of this subject, or that it will become so in the future?’ ChatGPT’s answer was rather diplomatic: it claimed that ChatGPT (or AI in general) was already good at explaining structure and conventions clearly, providing quick feedback and editing assistance, generating tailored practice tasks and summarising or explaining complex research writing guides. In contrast, it suggested that human instructors would still be better at mentorship and judgment, understanding nuance and emotion, evaluating scientific reasoning and teaching through dialogue and modelling.

David’s final conclusion was therefore that AI will not replace writing instructors any time soon, as humans remain better at critical thinking and social learning. AI may function as a complement to human teaching, but a good teacher will always add something useful beyond what AI can do.

Blog post by: Jan Klerkx
LinkedIn: JanKlerkx