On 1 June 2023, Rachel Pierce, a US-based freelance translator and copywriter, spoke to a packed Zoom meeting of the SENSE Tech SIG about ChatGPT. While many of us have dabbled with ChatGPT, it was Rachel’s recent article for the American Translators Association (ATA), ‘5 Tedious Non-Translation Tasks ChatGPT Can Do Amazingly Well’, that prompted our invitation, so we could learn some practical ways to take advantage of this rapidly evolving technology. In the meeting, Rachel outlined how ChatGPT’s capabilities can help language professionals maximize their productivity, while also acclimatizing to the inevitable changes that generative artificial intelligence (AI) technology is bringing to the language professions.
Language professionals are the most qualified users
Rachel acknowledged from the start that it’s quite overwhelming to keep up with what is happening: this technology is spurring the introduction of new tools and integrations every week. But she emphasized that ‘As linguists, we’re uniquely qualified to assess how good the tools are, and to find unique ways to use them.’ We’re encouraged to jump in and start testing their limits. Many of the tools we use every day, like Microsoft Word, have already incorporated AI functionality – think of Microsoft Editor or the auto-complete feature. Rachel also mentioned Copilot, Microsoft 365’s new digital assistant: a ChatGPT-powered interface that Microsoft is integrating into its Office applications this year. Translators have been using machine translation tools that incorporate AI technology for several years, and following the launch of ChatGPT, many professionals are now looking at how to incorporate new AI tools into their language-related workflows. As Rachel says, ‘It’s coming whether we want it to or not.’
Tools that automate ChatGPT
Rachel discussed a few tools she’s been sampling that interface with ChatGPT. She mentioned Wordscope, a web-based multi-functional translator’s toolkit that uses buttons to interact with ChatGPT and send it pre-written prompts for various tasks. Gaby-T is another app built to automate ChatGPT interaction for language professionals. These are but two of many tools now available, either as a plug-in within ChatGPT or as a browser or web-based app. In short, these tools automatically generate complex prompts so we don’t have to be ‘prompt engineers’ to leverage the chatbot’s ability to process sophisticated instructions. But Rachel also suggested that it’s good to learn some prompt-writing skills, reminding us that we all learnt how to use search engines 20 years ago – ‘We’re already good Googlers, so we can learn how to talk to a chatbot.’
GPT 3.5 versus 4.0
The second half of the presentation focused on best practices, caveats and demonstrations of different tasks that applications using GPT can do. Rachel briefly outlined the differences between the two language models currently available in ChatGPT: GPT 3.5 is free and faster, but trained on less data, so it is best used for simpler tasks; GPT 4.0 (subscription only) is trained on far more data, so it handles complex instructions better and its responses are considerably more accurate.
ChatGPT’s shortcomings are notable. An important one is the lack of training on data after September 2021, which limits its ability to generate responses containing more recent information. Another is its tendency to ‘hallucinate’ – to generate responses that sound true but are riddled with fabrications. For example, it is known to make up citations. Hence, a key best practice is to verify any information it reports. And because it has no privacy safeguards, it’s important to check our non-disclosure agreements before pasting anything into ChatGPT that shouldn’t be shared.
ChatGPT performs best when given context for the instructions and a format for the output. We should describe the purpose of the desired text or response, how long or short it should be, and the style or tone. It also pays to make use of the different ways to refine responses – either by providing further instructions or by using the regenerate button.
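For readers who script their workflows, the best practices above (state the context, purpose, length, tone and output format) can be sketched as a small prompt-building helper. This is a hypothetical illustration of the principle, not a tool Rachel demonstrated; all names and example values here are invented:

```python
def build_prompt(task, context, purpose, length, tone, output_format):
    """Assemble a ChatGPT prompt that supplies context and an output
    format, following the best practices described above."""
    return (
        f"Context: {context}\n"
        f"Task: {task}\n"
        f"Purpose: {purpose}\n"
        f"Desired length: {length}\n"
        f"Tone: {tone}\n"
        f"Output format: {output_format}"
    )

# Example: asking for alternative words, with full context supplied.
prompt = build_prompt(
    task="Suggest five alternatives for the word 'leverage'",
    context="A marketing blog post for a B2B software company",
    purpose="Avoid repeating 'leverage' throughout the post",
    length="A short bulleted list",
    tone="Professional but conversational",
    output_format="One suggestion per line, with a brief usage note",
)
print(prompt)
```

The point is simply that every element ChatGPT needs – context, purpose, length, tone, format – appears explicitly in the prompt rather than being left for the model to guess.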
Shortcomings aside, Rachel took us through a demonstration of how she uses ChatGPT with a dozen simple prompts. We watched ChatGPT generate its responses as she went through various scenarios that any of us might encounter in the course of our work, such as finding alternative or substitute words; making grammar and style decisions; comparing and contrasting at a higher level; summarizing an article, book or other text; drafting and then revising; organizing your thoughts; and clearing writer’s block.
One of the most interesting of these was summarization. Need to research a topic quickly? Rachel demonstrated how to paste in the text of an article and ask ChatGPT to summarize it into one or several key points. (Note the limits: up to 3,000 words per prompt with GPT 3.5 and 25,000 words with GPT 4.) She also used the browser with Bing plug-in to retrieve the text of two academic articles from the web for ChatGPT to summarize. The plug-in automated the retrieval, submission and summarization of all the requested articles in one command. Other big time-savers were having it organize a set of talking points for a networking event and a presentation. It can also help clear writer’s block. In one example, Rachel asked it to generate captions for a dozen images of a fitness centre. It provided a list of options so she didn’t have to come up with a dozen on her own. These are a few ways ChatGPT can help writers get ‘creatively unstuck’.
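The per-prompt word limits mentioned above suggest a quick sanity check before pasting a long article in for summarization. A minimal sketch, assuming the approximate figures from the talk (the function and dictionary names are our own):

```python
# Approximate per-prompt word limits as stated in the talk.
WORD_LIMITS = {"gpt-3.5": 3_000, "gpt-4": 25_000}

def fits_in_prompt(text, model="gpt-3.5"):
    """Return True if the text's word count is within the stated
    limit for the chosen model."""
    return len(text.split()) <= WORD_LIMITS[model]

article = "word " * 5_000       # stand-in for a 5,000-word article
fits_in_prompt(article, "gpt-3.5")  # False: too long for GPT 3.5
fits_in_prompt(article, "gpt-4")    # True: within GPT 4's limit
```

A text that fails the check would need to be split into chunks (or summarized with GPT 4 instead) before ChatGPT will accept it in a single prompt.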
The results were impressive and gave many of us new ideas to experiment with. We appreciated her key takeaways for language professionals in navigating this moment:
- Clients are also overwhelmed – it’s a good time to start a conversation with them about their AI plans.
- Stay on top of how your preferred tools are incorporating AI – follow their companies, and take advantage of the free webinars that are happening.
Questions and comments were diverse, from ‘What is it doing when it regenerates?’ and ‘Why does it hallucinate?’ to whether the output generated can be detected by plagiarism checkers. One academic copy-editor asked whether it really saves time if we have to check the accuracy of everything it generates. Perhaps the best answer is the well-known adage ‘user experience may vary’, but as language professionals, it’s important to understand both the capabilities and shortcomings of the tools we use, especially when they are changing the landscape in which we work.
Blog post by: Susan Jenkins