By Jennifer Ledoux (pen name Zephyr Trillian), 10 March 2026
When I was asked to write about fiction editing, perhaps reflecting on how it differs from non-fiction editing, I had to laugh – I’ve never edited a non-fiction manuscript in my life. But the prompt itself contains the answer. The fact that fiction editing is its own distinct discipline, with its own rules, instincts and particular brand of chaos, is precisely the point.
So, forgive me if I tell you what you already know, or if I review ideas you find tiresome or self-evident. I live less in the real world and more between the clouds, under the ocean, or on a spaceship to the Andromeda Galaxy. But perhaps your understanding of fiction editing mirrors my foggy understanding of non-fiction editing. Perhaps I can bring yours into focus.
Fiction editing is typically divided into distinct service levels, each with a different scope. Beta reading evaluates the reader experience: what’s working emotionally, what’s confusing and where momentum lags. Developmental editing zooms out further to assess structure: plot, character arcs, pacing and whether the story holds together as a whole. Copyediting and proofreading then work at the most granular level, cleaning grammar, consistency and accuracy only after the larger structural questions are resolved. These tiers exist because the focus required at each level is fundamentally different. Trying to complete all three levels of editing simultaneously usually means doing none of them well, so instead each manuscript must journey through the whole three-tiered process, one level at a time.
These service levels seem to structure fiction editing into a macro-to-micro format. However, when working within any level, it’s still useful to consider the work from both the macro and the micro at once. Does each word and sentence clearly communicate a point? Does each scene and character drive the story forward – or at least deepen immersion? Does each chapter matter? Every element should earn its place. The nature of this complexity means a good fiction editor must function as both a technical editor and a reader simultaneously – always asking not just whether prose is correct, but whether it’s working to serve the story, and in what capacity.
Editing fiction also requires the ability to judge the difference between stylistic flair and choices that obscure communication. There are fundamental English rules such as tense consistency, basic grammar and subject–verb agreement, but all rules can be bent or broken in service to a story. Consider ‘Flowers for Algernon’ by Daniel Keyes, in which basic spelling and punctuation rules are deliberately shattered to convey the protagonist’s cognitive state. The story wouldn’t be the same without that stylistic choice – one the editor no doubt had to work around, preserving enough spelling and grammatical errors to showcase the protagonist’s state of mind while maintaining enough clarity to let the story come through.
The fiction editor is therefore always charged with considering which rules have been broken on purpose, which have been accidentally or needlessly cracked in half, and how to explain all of this respectfully to their clientele. Sometimes they’ll even be tasked with acting as an unofficial mentor to their clients, who often express a deep wish to improve and will ask questions. I’ve spent many hours explaining fundamental concepts to authors who have learnt their craft independently of any school or study programme. It’s common for authors to be very skilled in some areas, such as character development, but weaker in others, such as action scenes. It’s important to remind authors that writing is a learnable skill, just like playing piano or baking.
Here’s a small sample of issues I regularly flag:
- Overuse of individual words (e.g. ‘gently’, ‘smirked’, ‘eyes’, ‘fingers’, ‘whispered’)
- Tense confusion (present tense slipping into past tense and vice versa)
- Portraying the same concept repetitively (e.g. ‘Her lips were rubies, almost crimson red, like ripe cherries’)
- Overpacking sentences with modifiers (e.g. ‘His gray, crumbling, ivy-coated castle in the west of the country, along the coast, was set against the golden, shimmering, and life-giving air of the wheat-covered Great Plains’)
- Weak verb choices (e.g. ‘walked’ vs. ‘strode’ or ‘cried’ vs. ‘sobbed’)
- Weak adjective choices (e.g. ‘Her legs were perfect’)
- Spatial continuity errors (e.g. a character leaves the car to go inside, but is later still sitting in the car without explanation)
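For the first item on that list, a quick word-frequency pass can surface candidates before the human read begins. The sketch below is purely illustrative: the `WATCHLIST` set and the `per_1000` threshold are my own assumptions for demonstration, not an editorial standard, and no tool replaces reading the prose in context.

```python
from collections import Counter
import re

# Words fiction editors often see overused (an illustrative list,
# not a definitive style rule).
WATCHLIST = {"gently", "smirked", "eyes", "fingers", "whispered"}

def flag_overused(text, watchlist=WATCHLIST, per_1000=2.0):
    """Return watchlist words occurring more often than `per_1000`
    times per 1,000 words of text, with their raw counts."""
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(words)
    total = len(words) or 1  # avoid division by zero on empty input
    return {
        w: counts[w]
        for w in watchlist
        if counts[w] / total * 1000 > per_1000
    }

# A deliberately repetitive toy manuscript:
sample = "She gently smiled. He gently nodded. They gently wept. " * 20
print(flag_overused(sample))  # → {'gently': 60}
```

The threshold is the interesting design choice: an absolute count punishes long manuscripts, so the sketch normalizes per 1,000 words instead. The output is only a list of suspects; whether each use earns its place is still the editor’s call.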
Interfacing with authors is perhaps the most difficult part of the job. Authors are often compelled to write stories by strong emotions or beliefs that flow from their innermost identities. This means that many authors feel like they’re exposing their hearts and are uniquely concerned about their work’s reception. Some authors understand that even the most brilliant writing can always be tweaked, polished and adjusted based on artistic preferences. Others meet any mention of edits with argumentative behaviour, defending themselves against what they perceive as an onslaught of criticism rather than the recommendations of a professional they hired precisely to recommend improvements. I’ve learnt to get ahead of this by providing a disclaimer at the top of my deliverables. It sets the tone, reminding the author both that I’m there to help them and that they’re welcome to discard any recommendation that doesn’t suit them. This has proven to help anxious authors keep their feet more firmly on the earth, where we can work together successfully.
Fiction editing is a strange discipline – part technical analysis, part emotional intelligence, part diplomacy – but that strangeness is exactly what makes it worthwhile. Every fiction manuscript is a leap of faith, a heart unlocked for the world to see. It’s my privilege to polish those hearts and then release them, shining, back to their keepers.
By Claire Bacon, 24 February 2026
Many of us in SENSE work with academics whose native language is not English. This sometimes involves helping our clients get research papers ready for publication in scientific journals. Often, these texts have structural problems that need to be fixed before we can deal with more minor language errors like spelling, grammar and word choice. This is because a good overall structure gives the paper cohesion – it ensures that the paper tells a compelling story with a clear message. In this blog post, I explain the main structural problems that lead to poor coherence in scientific papers and how to manage them.
Wrong information in the wrong section
Research papers typically contain Abstract, Introduction, Methods, Results and Discussion sections. In theory this should make research papers easy to write – but scientists often put the wrong information in these sections. Here are the common mistakes:
- Abstract: The Abstract needs to summarize the entire research story. That means it needs to define the knowledge gap, state the research question, describe the main methods and results, answer the research question, and outline the main implications. Often, the author pays too much attention to one point at the expense of another. So they may give a lot of background information but neglect the results – or they may focus completely on their findings without giving any context. Encourage them to spend one or two sentences at the most on each point so that the Abstract is complete.
- Introduction: The Introduction needs to introduce the topic, explain the study rationale, describe the current state of the knowledge, specify the problem being addressed, and ask the research question. A common mistake is giving too much general background information that is not relevant to the study question. Ideally, the author will start off fairly broad, gradually narrowing the background information down to focus on the specific research question. Help the author by flagging any information that does not seem directly relevant to the research question. Another common problem is not specifying the research question. This is central to the focus of the paper so it must be included at the end of the Introduction.
- Methods: The Methods section needs to give the reader all the information they need to understand what was done and to repeat any experiments. A common mistake is leaving out experimental information or not giving enough details. So let the author know if they describe data without saying how they collected it. Another problem is describing results – here we can remind the author that the Methods section is for describing what we did, not what we found.
- Results: The Results section needs to present the findings in a logical order using narrative text, tables, and figures as appropriate. A common mistake is not referring to all tables and figures in the narrative text and in the right order. Something else to watch out for is interpretation of the data – the author should objectively describe their findings in this section and leave the interpretation for the Discussion section. Look out for verbs like ‘suggesting’ and ‘showing’, which indicate the author has moved beyond a simple description of the findings.
- Discussion: This is where the author should answer the research question – preferably in the opening paragraph. A common mistake is starting the Discussion by going back to the beginning and repeating the background information. This is not necessary – advise the author that simply repeating the research question before answering it gives the reader enough of a reminder of what the study is about. Also watch out for excessive repetition of results in the Discussion. Here the author should be focusing on explaining and interpreting the findings, so a very brief reminder of the data is sufficient. Flag any sentences that contain specific data with P values and remind the author that these details are for the Results section.
Not asking or answering the research question
The research question is central to the cohesion of a research paper because each section centres around it: the Introduction asks the question, the Methods explains how the question was answered, the Results gives the information needed to answer the question, and the Discussion answers the question and justifies the answer. Not asking the research question at the end of the Introduction leaves the reader wondering what the purpose of the study is, so add a note for your author if they have left it out. Another problem is not providing a clear answer to the research question at the beginning of the Discussion. The author needs to answer the question in the opening paragraph of the Discussion to provide the basis for the justification that will follow. Something else to watch out for is whether the research question is consistent throughout the paper – whether the author actually answers the question they asked at the start rather than answering a completely new one (it happens more often than you may think!).
Not structuring and linking paragraphs properly
Paragraphs are an essential tool for structuring ideas and arguments clearly and logically. Many scientists struggle to structure and link their paragraphs properly in research papers, often moving on to a new topic before fully developing and concluding the last one, which contributes to poor coherence. Checking that each paragraph deals with one topic is the best place to start, and it often helps to explain that each paragraph needs to introduce, develop and conclude one topic. Scientists sometimes need a lot of expert intervention here, so be prepared to restructure paragraphs and add helpful topic sentences where needed.
Help is at hand
Expert editors with a sound knowledge of how a research paper should be structured can help academic authors get their work published. Offering clear guidance on the issues outlined in this post will help academics communicate their research in a clear and compelling way.
By Pierke Bosschieter, 5 February 2026

The index is one of the most underestimated parts of a book. When it is missing, readers notice immediately; when it is poorly made, they notice even more. A bad index sends readers on wild goose chases, points them to pages that say nothing useful, or simply fails to acknowledge what the book is actually about. A good index, paradoxically, disappears entirely. It does its job so quietly that few readers stop to consider that it was designed, structured and written by a human who made hundreds of small, deliberate decisions.
For editors and translators, this should sound familiar. We all work with texts in ways that are meant to be seamless and invisible. When the work is good, no one comments on it; when it is bad, it becomes impossible to ignore. Indexing belongs to this same family of text-based crafts, even though it often sits at the very back of the book and well outside the spotlight.
The key difference is that indexers do not work on the narrative itself, but on the infrastructure around it. If the text is the building, the index is the floor plan and the signage. Readers may admire the architecture, but when they are lost – or in a hurry – they reach for the map. A well-made index does not ask for attention; it simply gets the reader where they need to go.
Words versus concepts
One of the most persistent misconceptions is that an index is merely an alphabetical list of words extracted from a text. This idea is reinforced by software that can generate something index-like at remarkable speed. What such tools cannot do is understand what a text is about. Indexing is not about words; it is about concepts. It involves deciding what is significant, what is secondary, how ideas relate to one another, and which terms readers are most likely to use when searching for information.
That conceptual focus makes indexing closely related to editing and translating. Like editors and translators, indexers read analytically and critically. They interpret meaning, register nuance, and consider the expectations of a specific audience. The additional layer is usability. Indexers constantly shift perspective from writer to reader, asking not ‘what did the author mean?’ but ‘how will someone try to find this?’.
This shift in perspective also explains why authors themselves are usually not the obvious choice to create the index for their own work. Authors are deeply involved in their text. They know its structure, terminology, and internal logic too well. That familiarity makes it difficult to step back and see the book as a user would. Indexing requires distance and a willingness to question the text’s assumptions. Much like self-editing or self-translating, author indexing often prioritizes intention over accessibility.
Indexing requires training
What is less widely known is that indexing is not an improvised activity guided by personal preference. Professional indexers work according to established rules and conventions, including international ISO standards. These standards address matters such as structure, consistency, cross-referencing and clarity. They exist for the same reason editorial style guides exist: to ensure that readers can rely on predictable, intelligible navigation. A recent example of this ongoing standardization work can be found in the NISO recommended practice ‘ANSI/NISO Z39.4-2021 Criteria for Indexes’. Indexing may look creative, but it is creativity exercised within a clearly defined framework.
The craft can also be learnt. There are structured training routes, including well-established online courses in the UK and US, and indexing is supported by professional societies across the world. These organizations provide education, guidance, mentoring, and a shared understanding of best practice. In the Netherlands, indexers are represented by the Netherlands Indexers Network (NIN) while many international indexers are affiliated with bodies such as the Society of Indexers (SI) and the American Society for Indexing (ASI). Their existence underlines a simple fact: indexing is a discipline with its own standards, not an afterthought to be tacked on at the end. If you are considering adding indexing to your professional portfolio, a short course with Sylvia Coates can offer a practical introduction and help you assess whether indexing is a good fit for you.
The skills required reflect this. Indexers need excellent reading comprehension, strong analytical abilities, and the capacity to think in systems rather than sentences. They must be consistent without being rigid, precise without becoming pedantic, and flexible without losing structure. They also need patience, concentration, and a certain tolerance for working in obscurity. When an index functions perfectly, it rarely attracts praise.
Sister crafts
Indexers also share challenges that editors and translators will recognize immediately. One of these is competition from AI. Automated tools can produce indexes quickly and cheaply, and in some contexts, they may appear adequate at first glance. What they lack is conceptual understanding. They recognize surface language rather than meaning, and they have no sense of how readers search, hesitate, or misunderstand. As with machine translation and automated editing, AI can be a useful aid, but it cannot replace informed human judgment.
Another familiar challenge is competition from untrained providers – the beunhazen, as the Dutch call such bunglers – who believe that ‘anyone can make an index’. In a narrow sense, this is true. Anyone can produce something that looks like an index. The problem is that poor indexing damages books, frustrates readers, and undermines the profession itself. Editors and translators have long experience with this dynamic and its consequences.
Despite these pressures, the index remains essential. In an age of information overload, access matters as much as content. A well-constructed index increases a book’s usability, extends its lifespan, and supports serious engagement with complex material. It turns information into something navigable rather than overwhelming.
Indexing is therefore not a marginal activity, but a sister craft to editing and translating. It is governed by standards, supported by training, and sustained by the same conviction: quality is the result of expertise, not automation or convenience. And, like so much human work done well, it is mostly invisible – until it isn’t there.
Pierke Bosschieter has been a professional indexer since 2005. She specializes in Middle Eastern studies. She’s one of the coordinators of ICRIS (the international coordinating body for indexers) and is on the editorial board of The Indexer (an international academic journal). She’s a driving force behind NIN, mentors beginning indexers and works to raise awareness of indexing in the Dutch publishing industry.
By Jan Klerkx, 22 January 2026
On 28 November 2025, SENSE member David Barick – an experienced teacher, editor and translator of academic research texts – presented a lively, interactive Zoom talk under the catchy title ‘Do they still need me?’, on the fraught question of whether AI will replace humans as teachers, editors and translators of academic research writing.
David started by referring to some workshops on the topic that he had recently attended. A European Association of Science Editors (EASE) panel discussion concluded that ‘ChatGPT has a weak ability to differentiate between good/excellent and weak/OK research.’ At the same panel discussion, James Zou presented results of a survey among researchers from 110 institutions, asking whether they found LLM tools helpful for their academic writing. Nearly 60% of respondents found them ‘helpful’ or ‘highly helpful’, and about 20% found them ‘much more helpful than most human feedback’.
David then went on to discuss some of his own experiences of what LLMs, specifically ChatGPT, can do when it comes to editing scientific papers. He had asked ChatGPT to do some of the editing exercises he uses in his own teaching. He commented on paragraph structure and paid specific attention to coherence techniques, including given-new patterns, repetition of key words, grammatical parallelism and the use of transitional phrases and cohesive markers. He asked the audience to comment on the examples too, using the Zoom chat function.
In an example on thermonuclear energy production, ChatGPT did indeed improve many of the coherence problems, but it also produced longer sentences than in the original, even though it generally recommended shorter sentences.
When asked to comment on the use of coherence techniques in a sample text on endometriosis, ChatGPT correctly identified the use of repetition of key terms and the use of linking devices and parallel structures. The program’s editorial judgement was that ‘minor changes would likely be enough to make the text publishable for a specialist readership’, but that ‘more intervention would be beneficial’ for a more general audience. However, the suggestions it made for such interventions were minimal and not very helpful. Under the heading ‘Break up long, dense sentences’ it produced a suggestion that actually resulted in a less concise paragraph! It also suggested breaking up a nine-sentence paragraph using subsections with separate headings, which journal editors would probably not appreciate.
The final example concerned a longer text (an entire introduction section) on alcohol consumption patterns, which was judged by the audience to be poorly written. They suggested it had not been written by a native speaker of English. ChatGPT also recognized this and even correctly surmised that the text was written by an author from Spain or France (the author was in fact Spanish). It also correctly identified many of the problems of coherence, grammar and collocations. Other suggestions, however, were less helpful and often involved introducing words or phrases that did not add useful content and actually hampered the flow. Some of the additions sounded very generic, not specifically relating to the topic of the text, e.g. ‘This review aims to synthesize recent findings, identify consistent patterns of impairment, and highlight methodological limitations to guide future research.’
Some of ChatGPT’s comments showed that it failed to distinguish between closely related terms, e.g. ‘binge drinking’ and ‘hangover’, both of which it referred to as patterns of alcohol intake.
Finally, David showed us what ChatGPT had to say when it was asked: ‘ChatGPT can give extensive information on how to write a scientific research article. Do you think that it is a satisfactory substitute for human teachers of this subject, or that it will become so in the future?’ ChatGPT’s answer was rather diplomatic: it claimed that ChatGPT (or AI in general) was already good at explaining structure and conventions clearly, providing quick feedback and editing assistance, generating tailored practice tasks and summarizing or explaining complex research writing guides. In contrast, it suggested that human instructors would still be better at mentorship and judgment, understanding nuance and emotion, evaluating scientific reasoning and teaching through dialogue and modelling.
David’s final conclusion was therefore that AI would not replace writing instructors any time soon, as humans will still be better at critical thinking and social learning. AI may function as a complement to human teaching, but a good teacher will always add useful extras to what AI can do.
By Tracy Brown, 5 January 2026

‘I write to know what I think.’ (Flannery O’Connor)
O’Connor’s insight captures a truth at the heart of writing: the act itself is a journey of discovery. Each sentence we wrestle with, each paragraph we revise, brings clarity, not just to our readers, but to ourselves. Writing is thinking made tangible. But what happens when artificial intelligence enters the picture? Can AI become a partner in this process, or does it risk short-circuiting the very mechanism through which writers discover their own thoughts?
AI has undeniable appeal. For experienced writers, it can generate ideas, suggest phrasing and help navigate the occasional bout of writer’s block. But for new writers, particularly those who have never written without it, AI can be more of a crutch than a catalyst. The difference lies in the relationship between thinking and writing, which is a relationship AI, however sophisticated, cannot replicate.
The value of writing without AI
At its core, writing is a process of self-clarification. When we write unaided, we confront our ideas in their raw, unfinished form. Struggling to articulate a thought forces reflection: we wrestle with ambiguity, untangle contradictions and confront gaps in understanding. This struggle is where voice is born. Style emerges not from polish, but from persistence, from returning to the page again and again until the words sound like us.
Consider a beginner drafting an essay or story. They pause, scratch out sentences, reconsider word choice and sometimes abandon an idea entirely. This friction, the mental resistance encountered when shaping thought into language, is essential. It is not just about grammar or flow; it is about discovering what we think and how we feel. Writing teaches us our own minds.
When AI steps in too early, it smooths over this friction. It offers fluency without struggle. And while that can feel productive, it can also bypass the very work that makes writing meaningful.
How AI can help writers
This is not an argument against AI. Used consciously, AI can be a genuinely useful tool, especially for experienced writers who already have a sense of voice, perspective and purpose. In those cases, AI functions less as a replacement for thinking and more as a support for execution.
Some of the ways AI can help include:
- Outlining and structuring ideas: AI can help organize complex material, suggest logical flows or surface gaps in an argument. This is particularly helpful when a writer already knows what they want to say but needs help shaping it.
- Editing and revision: AI can identify awkward phrasing, repetition or unclear sentences. Crucially, this only works if the writer approaches its suggestions critically. Without that critical stance, AI becomes a mirror that simply validates whatever is already on the page.
- Organizing scattered thoughts: For drafts that exist as notes, fragments or rough paragraphs, AI can help cluster related ideas and propose a clearer structure.
- Ensuring consistency of voice and tone: For longer projects, AI can flag inconsistencies in tone or terminology, helping writers maintain coherence across chapters or sections.
In all these cases, AI works best as a secondary tool. The thinking still originates with the writer. The judgment still belongs to the writer. The writer remains in control.
What AI cannot do
What AI cannot do is give you insight into yourself.
It cannot tell you what you actually believe, or why a particular idea matters to you. It cannot help you arrive at a position you did not already hold. It cannot replicate the internal shift that happens when, halfway through a paragraph, you realize you were wrong or that the real point is something else entirely.
Finding your own voice is not just about sounding distinctive. It is about discovering your own ideas, your own opinions, your own way of seeing the world. That discovery happens through effort. Through uncertainty. Through writing sentences that don’t quite work and staying with them anyway.
AI also cannot give you the thrill of a breakthrough, the moment when something clicks, when a vague feeling crystallizes into a clear thought. Those moments are not incidental to writing; they are the reward. And they are intrinsic.
Intrinsic vs. extrinsic rewards
This is where the deeper risk lies, especially for new writers.
Writing offers intrinsic rewards: discovery, clarity, the quiet satisfaction of understanding something more deeply than you did before. These rewards emerge slowly, through effort.
AI, by contrast, offers extrinsic rewards: speed, completion, polish. A finished paragraph. A clean draft. The sense of being ‘done’.
When writers rely too heavily on AI, the balance shifts. Completion replaces discovery. Output replaces insight. The work may look finished, but the writer has learnt less from it.
For beginners, this matters enormously. If you skip the struggle, you skip the growth. You may produce text, but you do not develop the muscle of thinking through writing. Over time, that loss compounds.
Tool or crutch?
For experienced writers, AI can be a powerful tool – one that accelerates, supports and occasionally challenges their thinking. For new writers, especially those who have never written without it, AI risks becoming a crutch that dulls curiosity and replaces the hard but necessary work of self-discovery.
The distinction is not about technology. It is about intention.
If writing is merely a means to an end, AI may be enough. But if writing is how you come to know what you think – how you find your voice, refine your ideas and understand your own position in the world – then no machine can do that work for you.
In the end, the writer still has to write.
By Claire Bacon, 17 December 2025

It’s that time of year again – time to renew my SENSE membership. I have now been a member for 10 years, so what better time to look back on how SENSE has supported me in my work over the years?
The people
For me, one of the biggest advantages of SENSE is the camaraderie within the Society. I have always felt very welcome in SENSE, even though I live in Germany and do not speak Dutch. When I joined SENSE back in 2015, I was in the early stages of leaving academia to set up my language editing business. Everything was new and a little overwhelming, but my fellow SENSE members, many of whom had been in the business for years, helped me enormously with their advice and encouragement. As time passed, networking within the Society helped me to build up my client base and a thriving editing business. It also introduced me to exciting new work opportunities, including teaching scientific writing, which I enjoy tremendously. When harder times hit us a few years ago, it was extremely helpful to be able to discuss these challenges openly with fellow members at in-person and online events.

Getting involved
A great way to network is to get involved! At my first SENSE event (the 25-year Jubilee in 2015) I agreed to write an article about the event for the SENSE magazine. Shortly after, I joined the SENSE Content Team, which is responsible for producing and editing the content that SENSE publishes. Writing and editing articles for the SENSE magazine and later the SENSE Blog not only helped the Society but also increased my visibility among fellow language professionals. This has led to many referrals of work over the years. You can find out more about volunteering for the Society here.

Professional development
SENSE also offers a variety of opportunities for professional development, and there really is something for everyone! Over the years I have attended a number of professional development days, conferences, and workshops organized by the Society – both online and in person – and have always been impressed by how much these events cater to my needs as a language editor for academics in the health and life sciences. As well as learning what’s new in the industry and sharpening my skills, these events were a valuable opportunity to meet old friends, make new contacts and exchange useful ideas. There is always something going on in SENSE and you can find out more about upcoming events here.

Special interest groups
SENSE caters to the specific needs of its members through special interest groups (SIGs). Like many editors, I stumbled into the profession by accident! An advantage of this is that I became a very niche editor, using my background in scientific research to shape the kind of editing work I do – mainly scientific research articles and grant proposals. Fortunately for me, SENSE has two great SIGs that offer meetings for editors working with academics: SenseMed and UniSIG. These meetings have been a great way to support my specialized editing work over the years. You can find out more about the many SIGs that SENSE has to offer here.

Join the community!
I doubt I would have gotten as far as I have as a language professional if I hadn’t joined SENSE. The Society welcomes all language professionals and offers a vibrant and supportive community. To learn more about the many benefits of joining the Society and what it can offer you, visit the SENSE website.
Blog post by: Claire Bacon
By Santiago Gisler, 27 November 2025
How should writers use generative AI? Not at all, if possible.
ChatGPT’s cabaret-like entrance onto the public stage left us in awe. We were suddenly confronted with a technology accompanied by waves of contradictory promises. On the one hand, we were told that AI would transform society, improve medicine and help us create better content. On the other hand, many warned it would replace professionals, spread harder-to-control misinformation and plagiarize content.
Whether we should use generative AI for writing is a nuanced question. I remain skeptical, but the more I use and learn about it, the more I’m inclined to recommend that content creators avoid AI whenever possible.
My skepticism derives from the numerous problems associated with the technology. I won’t even delve into its broader ethical issues, such as environmental impact, human rights abuses or its use in developing armaments. Because even if we focus solely on our work as writers, the problems of excessive AI use still outweigh its benefits.
As writers, we have valid concerns: Will AI content creation and usage replace human work? How can we use these tools responsibly and accurately? Still, since I haven’t fully embraced a purist approach, I won’t advocate abandoning AI altogether. Instead, I’ll share my current perspective on generative AI and how to approach it cautiously in writing.
AI is really just a large language model
Given the almost anthropomorphic qualities people tend to attribute to generative AI – even in academic circles – it’s worth clarifying these tools’ alleged intelligence. Their human-like characteristics have created immense marketing potential, portraying them as intelligent, emotional or objective.
At a recent philosophy workshop on AI ethics, the event organizers repeatedly attributed god-like properties to ChatGPT, including future motives and feelings. Some attendees had personalized their ChatGPT to behave like famous characters, such as Harry Potter. This excessive personification reflects the over-the-top marketing behind generative AI tools and reminds us to treat them with caution.
In reality, generative AI tools like ChatGPT are large language models (LLMs): sophisticated mathematical models, built on the Transformer architecture, that recognize patterns in language. They don’t think; instead, they predict the next likely token (a piece of a word or a whole word) based on the patterns they have learnt from their training data and the context so far. This process repeats until the model meets a predefined stop condition, resulting in a coherent, human-like response.
Although impressive, these outputs lack intelligence in the sense of logic, reflection or problem-solving. LLMs merely repeat data they’ve been trained on – data from other writers – and provide a statistically probable outcome. With this in mind, we begin to see where an over-reliance on LLMs becomes problematic in writing.
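To make the prediction loop concrete, here is a deliberately tiny sketch of next-token sampling using a toy bigram model. The vocabulary and probabilities are invented for illustration; real LLMs do the same thing over subword tokens at a vastly larger scale, with probabilities learnt from training data rather than hard-coded:

```python
import random

# Toy "language model": maps a context word to possible next tokens
# with invented probabilities. A real LLM learns such distributions
# from training data instead of having them written down by hand.
BIGRAM_PROBS = {
    "the": {"cat": 0.5, "dog": 0.3, "editor": 0.2},
    "cat": {"sat": 0.6, "slept": 0.4},
    "dog": {"barked": 0.7, "sat": 0.3},
    "editor": {"revised": 1.0},
}

def next_token(context: str, rng: random.Random) -> str:
    """Sample the next token from the model's probability distribution."""
    dist = BIGRAM_PROBS[context]
    tokens = list(dist)
    weights = [dist[t] for t in tokens]
    return rng.choices(tokens, weights=weights, k=1)[0]

def generate(start: str, max_tokens: int = 5, seed: int = 0) -> str:
    """Append sampled tokens until a stop condition is met:
    no known continuation, or the length cap is reached."""
    rng = random.Random(seed)
    out = [start]
    while out[-1] in BIGRAM_PROBS and len(out) < max_tokens:
        out.append(next_token(out[-1], rng))
    return " ".join(out)

print(generate("the"))
```

The output is fluent-looking but purely statistical: the model has no idea what a cat or an editor is, which is the point the paragraph above makes about LLMs in general.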
AI weakens our writing and thinking
Losing the chance to develop
I have a love-hate relationship with my old articles. All their grammatical mistakes, clumsy formulation and misused expressions make me cringe. Still, embarrassing as they are, those hair-raising mistakes also highlight my writing progress and stylistic improvements over time.
As we increasingly rely on generative AI tools for writing, we lose these auto-reflective feedback processes, and with that, the opportunity to develop from them. This AI trap specifically affects new writers who haven’t had the chance to develop a personal writing style.
AI-related hallucinations are also a big problem for new writers or anyone new to a topic. LLMs are prone to making things up – a lot. They deviate from facts, contradict themselves or the prompt, or include nonsensical information. These hallucinations result from issues with training-data quality, generation methods or prompt quality, and they are challenging to identify unless the writer is somewhat familiar with the topic.
A feedback loop of mediocre and erroneous content
Other critical drawbacks of generative AI models concern our future information landscape and our ability to think critically and solve problems. Writing, and everything around it, requires us to think critically while organizing and structuring our perspectives to make an impact.
Excessive AI use strips away these critical aspects of impactful writing and traps the text, language and opinions within a generic, all-pleasing framework.
The more we rely on AI-generated content, the greater the likelihood that future AI models will be trained on dull and sometimes flawed data. It becomes a self-reinforcing loop of generic, impersonal and bland content that is just… there.
Generative AI bots accounted for more than half of all web traffic in 2024, a figure that is expected to increase each year. LLMs may initially help us create coherent and seemingly credible content. However, as the information landscape becomes increasingly reliant on AI-generated content, it draws audiences away from our human-made, personal and engaging content, ultimately reducing our online visibility and readership.
What we’re left with is an accumulation of LLMs trained on LLM content, with fewer personal experiences and more hallucinations.
How to use generative AI
So, how do we turn all this skepticism and negativity into a constructive approach? My answer would be that, if we really must use generative AI models in our writing, we’re better off using them sparingly and intentionally.
I’d recommend approaching it in the following way:
- Understand the topic by researching the literature and other sources. Although selective and sometimes unreliable, applications like Copilot can help you find your first references if you’re unfamiliar with the topic.
- Use your own words when drafting, and highlight any statements, expressions, phrases or sentences you’re unsure of.
- Ask AI models targeted and specific questions about your highlighted sections. Instead of asking ‘Does AI hallucinate?’ ask, ‘What are the most common factual errors or hallucinations that occur when writing about quantum computing?’
- Use multi-shot prompting, in which you submit several prompts with comprehensive context and examples before submitting your specific request.
- Specify your prompts: ‘Proofread this text for grammatical errors and factual inaccuracies only. Do not change the style or phrasing unless it is incorrect. Flag any sections that seem to lack supporting evidence.’ Asking AI tools to ‘improve the text’ will always prompt them to suggest excessive changes, regardless of how well you write.
- Explicitly highlight all possible answers when asking a closed-ended question. Instead of asking, ‘Does this summary miss any key points?’ ask, ‘Does this summary miss any key points, or is it complete and accurate?’ This reduces the risk of the tool conforming to what it thinks you want.
- Use neutral language and avoid suggestive phrasing such as ‘Isn’t this a great sentence?’
- Approach all the information you receive from AI models with sound skepticism.
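Several of the points above – multi-shot prompting, narrowly specified instructions and closed questions with both possible answers spelled out – can be combined in a single prompt. The sketch below uses the chat-style message format common to many AI tools; the roles and example texts are a generic, invented illustration, not any particular vendor’s API:

```python
# A multi-shot proofreading prompt: context and worked examples come
# first, the real request comes last. The "system" message pins down
# scope; the example user/assistant pairs show the desired behaviour.
messages = [
    {"role": "system",
     "content": "You are a careful proofreader. Proofread for grammatical "
                "errors and factual inaccuracies only. Do not change style "
                "or phrasing unless it is incorrect. Flag any claims that "
                "seem to lack supporting evidence."},
    # Shot 1: demonstrate the expected answer format.
    {"role": "user",
     "content": "Text: 'Their going to the conferense.'"},
    {"role": "assistant",
     "content": "Correction: 'They're going to the conference.' "
                "No factual claims to check."},
    # Shot 2: demonstrate flagging rather than rewriting.
    {"role": "user",
     "content": "Text: 'The survey ran in 2025 and had 79 responses.'"},
    {"role": "assistant",
     "content": "No grammatical errors. Flag: verify the response count "
                "against the source."},
    # The real request: a closed question with both answers spelled out,
    # to reduce the tool's tendency to agree with what it thinks you want.
    {"role": "user",
     "content": "Text: <your draft here>. Does this text contain errors, "
                "or is it correct as written?"},
]

# The final message carries the actual request; everything before it
# steers the model toward restrained, specific answers.
assert messages[-1]["role"] == "user"
```

Even with this much scaffolding, the last bullet still applies: treat whatever comes back with sound skepticism.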
Eventually, we may realize that all these processes cost more time than simply researching and writing on our own. I’m not here to discourage anyone from using AI tools. But perhaps a relevant question is whether we really need AI at all from a linguistic, professional and ethical perspective.
Blog post by: Santiago Gisler
By Jackie Senior, 13 November 2025
Given the recent developments in AI, I carried out an online survey in February–March 2025 to discover how SENSE members view generative AI tools, the changes in their language work, and their future.
1. Survey results
Demographics
- There were 79 anonymous responses (33% of SENSE members), with expertise spread across the SENSE spectrum (editing, translation, teaching, copywriting, transcreation, etc.).
- 86% of respondents have been language professionals for more than 10 years.
- The data confirmed that SENSE is an ageing society, with 53% of respondents older than 60 years and 42% between 40 and 60 years (as compared to 30% and 59% in a SENSE survey in 2014). In the free text responses (49/79), 15% of respondents said they were already receiving a pension or would be soon.
Generative AI: use and perceptions
- Most respondents (86%) already use older tools like DeepL, Google Translate, SDL Trados, MemoQ or PerfectIt, either frequently or some of the time.
- The majority of respondents (63%) reported not using generative AI tools in their work, although 9% reported using them ‘a lot’.
- Respondents more often reported negative feelings about what AI would bring to their profession (41% apprehensive; 17% pessimistic), with only 16% being optimistic/fairly positive.
- Only 17% reported having been asked to use generative AI tools by a client, publisher or agency. I did not enquire whether they had actually used generative AI for that particular job.
- How respondents feel about generative AI was quite evenly split: 11% were excited, 27% interested, 21% neutral, 28% apprehensive and 13% not interested.
- 74% reported using these tools for personal tasks at least sometimes.

The changing ‘work-scape'
- 66% of respondents had increased their range of language-based work in the last 5 years, and 77% had added a completely different kind of service.
- 30% have other interests/skills they could develop (into a service or money-earner), or a plan B, while another 42% reported they may have other options.
- 40% were the sole earner in their household, 38% a shared earner, and 23% a minor earner. 54% had other sources of income outside their language services (e.g. pension).
- Slightly more than half (52%) have seen a drop in income from their language work in the past two years.

Statistical analysis
- There seems to be a relatively small group who have a positive attitude to AI and use it both at work and for personal tasks, and a larger group who are neutral or negative and do not use AI at all.
- No significant correlation was found between the use of or attitude toward generative AI and the respondents’ age group, but there were very few younger (under 40 years old) respondents.
- Use of AI and feelings about it at work were largely unrelated to the kind of language work being done.
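For readers curious how "no significant correlation" between categorical variables (such as age group and AI use) is typically checked: a chi-square test of independence compares the observed counts with the counts expected if the two variables were unrelated. The sketch below uses invented counts, not the actual survey data:

```python
# Hypothetical contingency table (NOT the real survey responses):
# rows are age groups, columns are 'uses generative AI at work' yes/no.
observed = [
    [5, 4],    # under 40
    [14, 19],  # 40-60
    [15, 22],  # over 60
]

def chi_square(table):
    """Chi-square statistic: sum of (observed - expected)^2 / expected,
    where expected counts assume the row and column variables are independent."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand_total = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            expected = row_totals[i] * col_totals[j] / grand_total
            stat += (obs - expected) ** 2 / expected
    return stat

stat = chi_square(observed)
# Degrees of freedom = (3 rows - 1) * (2 columns - 1) = 2;
# the critical value at the 0.05 significance level is 5.991.
print(f"chi2 = {stat:.2f}; significant: {stat > 5.991}")
```

With counts like these, the statistic falls well below the critical value, so the test would not reject independence – consistent with the survey's finding, though small subgroups (such as the few under-40 respondents) make any such test weak.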

2. Implications for SENSE
- SENSE was an ageing society in 2015, and is now 10 years older, with the number of members dropping fast.
- So what do the working members want to see in their professional society?
- How relevant can SENSE be for its members in these changing times?
- How does the Society need to change – or should it just retire quietly?
- SENSE must determine its members’ age groups. (At the moment this information is no longer collected because of the new privacy law. Each member must give permission for their age to be processed in Society information.)
3. What can we do?
- Keep in mind that this is not the first time our field has faced a major change in how we work.
- Build personal relationships with clients, keeping the human face in your work. This can be challenging for freelancers, but it is clearly something that differentiates us from machines.
- Take courses and follow resources that improve our awareness of how to use generative AI tools, and develop hands-on experience that helps us understand what they can and can’t offer.
- Be able to show clients you can work with these tools, but also that you offer skills AI tools do not have.
- Adopt better pricing strategies that reflect the changes in the field, e.g. fee per hour, valuing your time, adding administration charges, and raising rates each year.
- Build reciprocal relationships with other professional colleagues that improve both the quality and continuity of your services: someone to watch your back, share skills and take up the work that you can’t.
- Look to expand the services you offer, whether in language services (e.g. offering workshops) or in new directions.
Credits go to Kate Mc Intyre, who compiled this blog post, and to Clare Wilkinson, who did the statistical analysis of the survey results. The PowerPoint PDF from Jackie Senior’s presentation to the SENSE 35-year Jubilee Conference on 20 June 2025 is available to SENSE members in our Library. Please send any comments to Jackie (email address is in the membership directory on the SENSE website).
List of resources
- The SENSE Blog: https://www.sense-online.nl/sense-publications/blog
- ‘AI in Medical Writing and Editing’ training course, Emma Nichols: https://www.aimwecourse.com/
- AI Tools Boot Camp, Avi Staiman: https://www.aclang.com/ai-bootcamp.php
- Generative AI in learning, teaching and assessment, Open University (UK): https://about.open.ac.uk/policies-and-reports/policies-and-statements/generative-ai-learning-teaching-and-assessment-ou
- BBC news and reports on AI: https://www.bbc.com/innovation/artificial-intelligence
- Business coaching, workshops, a blog and a newsletter, Lion Translation Academy (Joachim Lépine & Ann Marie Boulanger): https://www.liontranslationacademy.com/
- Leaving academia: becoming a freelance editor, Paulina S. Cossette: https://AcadiaEditing.com/BecomeAnEditor
- How to build a global academic editing business (podcast interview by Paulina Cossette with Marieke Krijnen): https://www.youtube.com/watch?v=jNfzf1wkvyk
- Editing synthetic text from GenAI: two exploratory case studies, Michael Farrell (2024): http://dx.doi.org/10.13140/RG.2.2.16045.81128
- Survey on the use of GenAI by professional translators, Michael Farrell (in ‘Translating and the Computer 46’, 2024, pp 23‒34; ©AsLing, the International Society for Advancement in Language Technology): https://www.tradulex.com/varia/TC46-luxembourg2024.pdf#page=23
- Henley Business School poll of 4500 people (2025): https://www.bbc.com/news/articles/c3rpx1rl2nlo
- Society of Authors survey on work lost to AI: https://societyofauthors.org/2024/04/11/soa-survey-reveals-a-third-of-translators-and-quarter-of-illustrators-losing-work-to-ai/
- ‘Confessions of a chatbot helper’ (The Guardian): https://www.theguardian.com/technology/article/2024/sep/07/if-journalism-is-going-up-in-smoke-i-might-as-well-get-high-off-the-fumes-confessions-of-a-chatbot-helper
- Dutch publisher to use AI to translate books into English (The Guardian): https://www.theguardian.com/books/2024/nov/04/dutch-publisher-to-use-ai-to-translate-books-into-english-veen-bosch-keuning-artificial-intelligence
- Survey finds generative AI proving a major threat to the work of translators (The Guardian): https://www.theguardian.com/books/2024/apr/16/survey-finds-generative-ai-proving-major-threat-to-the-work-of-translators
- The Chartered Institute of Editing and Proofreading (CIEP, UK) knowledge hub has a few items on AI: https://www.ciep.uk/knowledge-hub/search-the-knowledge-hub.html?searchQuery=AI
Blog post by: Jackie Senior
By Maria Sherwood Smith, 30 October 2025
On 26 September 2025, UniSIG came together online for a presentation by Joy Burrough-Boenisch on ‘Dealing with maps in scientific and scholarly texts’. The talk was based on the presentation Joy gave last year at METM24 in Carcassonne, and she had updated it to include some recent cartographic debates.
Joy started by going back to basics, introducing us, via an article by geographer Caitlin Dempsey, to eight elements that make up a map. The most important ones for the ensuing presentation were the map legend, scale bar, north arrow, and inset (locator) map. Joy reminded us that the convention of north-oriented maps is not self-evident or universal, referring to medieval Christian maps (east-oriented – a worldview crystallized in the very concept of ‘orientation’) and south-oriented early Islamic and Chinese maps. A more recent south-oriented example is McArthur’s Universal Corrective Map of the World, published in Australia in 1979.
Having armed us with the basic knowledge we needed, Joy invited us to consider an array of maps she had been presented with in her editing practice. All of these maps were in need of improvement to make them clear for the reader. Often they lacked one or more of the basic elements discussed above. We considered maps with no legend, for instance, or where the legend assumed knowledge that the reader might not have (e.g. an unexplained ‘NAP’ in a map of the elevation of the Netherlands: a participant enlightened us with the correct English translation ‘Amsterdam Ordnance Datum’). Many maps relied on unexplained assumptions, like a colour-coded system of gradations from green (good) to red (bad), or a system of darker colours to indicate intensity, without providing a clear legend. In some cases, simply changing the orientation of a map or adding a scale bar could immediately make the map more informative.
In other cases, Joy had uncovered more complex issues, such as a map referring in the legend to 17 sites, but only actually showing 13, because ‘some symbols overlap due to the proximity of the sites’. Here Joy had advised the author to use a ‘callout’: a line from the symbol in the map to further explanation in a text box. Other delicate matters Joy has had to advise on included a map of the Wadden Sea and adjoining countries, in which the German state of Schleswig-Holstein had been shown as belonging to Denmark. In all, the message was not to take maps at face value when editing.
In the final section of her talk, Joy discussed the broader issue of the political implications of maps, neatly summarized in a quotation from El País (English edition): ‘Maps are not innocent drawings’. Here Joy touched on recent moves to replace the Mercator projection traditionally used in cartography with the more realistic ‘Equal Earth’ projection. The latter shows countries and continents in their true proportions: Africa, for instance, is much larger than the Mercator projection would suggest. But new maps can also reflect more sinister political aspirations. Joy pointed to the inset map that Chinese researchers are obliged to include in all their maps of China: when enlarged, this apparently ‘innocent drawing’ can be seen to designate Taiwan and other islands as Chinese territory, in contravention of the UN-agreed boundaries.
All in all, Joy’s presentation gave us plenty of material for discussion. At one point we considered the differences between a ‘contour map’ (terrain indicated using contour lines) and a ‘relief map’ (visual representation of terrain). I feel that Joy’s talk as a whole filled in the gaps in my very blurred and sketchy concept of a map, and made me more aware of maps’ potentially serious implications.
Blog post by: Maria Sherwood Smith
By Percy Balemans, 13 October 2025

Some clients may ask you to ‘transcreate’ or ‘adapt’ a text instead of translating it. But what is transcreation?
Transcreation basically means recreating a text for the target audience – ‘translating’ plus ‘recreating’ it, hence the term. The aim is for the target text to match the source text in every aspect: the message it conveys, the style, the images, the emotions it evokes and its cultural background. You could say that transcreation is to translation what copywriting is to writing.
One could argue that every translation job is a transcreation job, since a good translation should always try to reflect all these aspects of the source text. This is of course true. But some types of texts require a higher level of transcreation than others. A technical text, for example, will usually not contain many emotions and cultural references, and its linguistic style will usually not be very challenging. However, marketing and advertising copy, which is the type of copy to which the term transcreation is usually applied, does contain all these different aspects, making it difficult to create a direct translation. Translating these texts therefore requires a lot of creativity.
In her book on transcreation [1], Nina Sattler-Hovdar explains the difference between translation and transcreation as follows: a translation is mainly intended to inform the reader, whereas a transcreated text must motivate the reader (for example, to buy a product or service).
Required skills
In addition to creativity, a transcreator should also have an excellent knowledge of both the source language and the target language, a thorough knowledge of cultural backgrounds, and be familiar with the product being advertised, while at the same time being able to write about it enthusiastically. In addition, it certainly helps if the transcreator can handle stress and is flexible, since advertising is a fast-paced world and deadlines and source texts tend to change frequently.
Types of texts
The types of texts offered for transcreation vary from websites, brochures, and TV and radio commercials aimed at consumers to posters and flyers for resellers. They could be about any consumer product or service: digital cameras, airlines, food and drink, clothing and shoes, and financial products. Transcreators are often asked to deliver two or three alternative translations, especially for taglines, and a back translation (a literal translation back into the source language), to help their client, who typically does not understand the target language, get an idea of how the message was translated. Transcreators are also expected to provide cultural advice: they should tell their client when a specific translation or image does not work for the target audience.
What makes transcreation difficult?
In addition to the difficulties posed by creating a target text containing all the aspects of the source text (message, style, images and emotions and cultural background), marketing and advertising copy often poses other difficulties for the transcreator as well. Taglines, for example, often contain puns or references to imagery used by the company. They tend to be incorporated in a logo or image, with limited space and a fixed layout for the text. In addition, they are often used for multiple target groups: not just consumers, but also resellers and stakeholders, which means the text should appeal to all of them.
Can transcreation be done using AI?
If the Big Tech people are to be believed, AI can do ‘anything’. The AI tools used for translation and related tasks are so-called large language models (LLMs): algorithms that basically ‘link together word patterns they’ve calculated from their training data’ [2]. LLMs do not understand language, so they do not write texts; they simply combine words based on statistical patterns.
An LLM could potentially be used for brainstorming, but using them to try and transcreate a text is not recommended, as they do not understand cultural references, idiom or word play. They may get it right in the case of commonly used references, but it is not safe to rely on this. Creating a customized transcreation for a specific target audience still requires the skills of a professional human translator.
Also, doing your own research by browsing dictionaries, thesauri, and other trusted sources, instead of getting answers from a machine, stimulates your creativity and helps you find plenty of creative options.
Sources
1. Get Fit for the Future of Transcreation: A handbook on how to succeed in an undervalued market by Nina Sattler-Hovdar.
2. The AI Con: How to Fight Big Tech’s Hype and Create the Future We Want by Emily M. Bender and Alex Hanna.
Blog post by: Percy Balemans









