The Risky Words That Might Make School Admissions Suspect AI Wrote Your Essay

When the ChatGPT mania kicked off last year, the first uproar came from academia. Teachers worried that students now had a potent tool for cheating on their assignments, and, like clockwork, multiple AI plagiarism detectors popped up with varying degrees of accuracy. Students, in turn, worried that these detectors could get them in trouble even if the error rate was low. Experts, meanwhile, opined that spotting AI still requires intuition and natural language skills: looking for signatures such as repetitive phrases, out-of-character word choices, a uniformly monotonous flow, and more verbosity than a regular human conversation calls for.


No method is infallible, and the risk avenues keep multiplying while the underlying large language models grow ever more nuanced in their word-regurgitation skills. Among those avenues is the all-important essay required for college applications. According to a Forbes report, students are using AI tools to write their school and college essays, but academics and admissions committee members have developed a knack for spotting AI word signatures. For example, one word that seems to pop up frequently in essays is "tapestry," which, honestly, is rarely heard in conversation or seen in everyday writing, save for poetry or works of English literature.

"I no longer believe there's a way to innocently use the word 'tapestry' in an essay; if the word 'tapestry' appears, it was generated by ChatGPT," one of the experts who edit college essays told Forbes. Unfortunately, he also warns that in the rare scenarios where an applicant inadvertently, and with good intentions, ends up using the word, they might face rejection by the admission committee over perceived plagiarism.


What to avoid?

The Forbes report compiles responses from over 20 educational institutions, including top-tier names like Harvard and Princeton, about how exactly they are factoring in AI while handling applications. While the institutions didn't offer concrete answers in terms of a formal policy, the people handling applications hinted that spotting AI usage in essays is pretty easy, both from the specific word choices, which they described as "thin, hollow, and flat," and from the tone. One independent editor has even created an entire glossary of words and phrases that she often sees in essays and tweaks to give them "human vibes."


Some of the code-red AI signatures, which don't even require AI detection tools to spot them, include:

  • "leadership prowess"
  • "stems from a deep-seated passion"
  • "aligns seamlessly with my aspirations"
  • "commitment to continuous improvement and innovation"
  • "entrepreneurial/educational journey"

These are just a few giveaways of AI involvement. Moreover, the telltale signs can change, and they may no longer be relevant once more sophisticated models with better natural language capabilities arrive on the scene. People outside academia also appear to have developed their own frameworks for detecting AI-generated work. "If you have enough text, a really easy cue is the word 'the' occurs too many times," Google Brain scientist Daphne Ippolito told MIT Technology Review.

Ippolito also pointed out that generative AI models rarely make typos, which offers a reverse way to assess whether a piece of writing came from an AI tool. "A typo in the text is actually a really good indicator that it was human written," she notes. But it takes practice to get good at spotting the pattern, especially for subtler cues like unerring fluency and a lack of spontaneity.
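Ippolito's "the"-frequency cue can be sketched in a few lines. The threshold below is an invented placeholder for illustration; she has not published a cutoff, and a real analysis would need much more text and a proper baseline.

```python
# Rough sketch of the frequency cue described above: measure how often
# "the" appears relative to the total word count. The 0.07 threshold is
# an arbitrary assumption for demonstration, not an empirical figure.

def the_ratio(text: str) -> float:
    """Fraction of whitespace-separated words that are exactly 'the'."""
    words = text.lower().split()
    return words.count("the") / len(words) if words else 0.0

def looks_suspicious(text: str, threshold: float = 0.07) -> bool:
    """Crude flag: 'the' makes up more than the threshold share of words."""
    return the_ratio(text) > threshold
```

On a short sentence like "the cat sat on the mat," `the_ratio` returns 2/6 — which is exactly why this cue only works, as Ippolito says, "if you have enough text."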


It's all still a big mess

An AI text generator is essentially a glorified parrot: exceptional at echoing, not so much at delivering surprises. Sure, drafting an invitation email or shooting a message to your pals might seem like you're following a script, yet there's a whimsical flair to the human way of chatting that's quite the trick for an AI to nail down. Despite all the advancements Google has made with PaLM 2, and whatever Meta and OpenAI continue to achieve with Llama 2 and GPT-4, it is simply not worth the risk to use AI for college, work, or any other high-stakes task.


One of the biggest reasons to avoid relying solely on AI chatbots is their tendency to hallucinate, which is essentially an AI model cooking up an imaginary scenario and serving it as fact. Then there's the ever-present risk that the work gets flagged down the road, either by a keen human mind or by the makers of these AI tools using some proprietary AI fingerprinting technique. Tools such as GPTZero already claim to spot AI plagiarism. However, those tools are also far from infallible, so there's a tangible risk that even an original work can be flagged as AI-generated garbage.

To avoid such a scenario, the best approach is to enable a progress history feature, one that tracks how a piece of work moved ahead, one small step at a time. For example, if you are into writing, products like Google Docs and Microsoft Word offer a version history system that saves a new version of an ongoing work every time a change is made, creating time-stamped proof of each stage.
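The idea behind version history can be sketched as a simple snapshot log. The class below is purely illustrative and is not how Google Docs or Word actually implement their feature; it just shows the "time-stamped proof of each stage" concept.

```python
# Illustrative sketch only: record a timestamped snapshot of a draft each
# time it changes, mimicking the concept behind built-in version history.
from datetime import datetime, timezone

class DraftHistory:
    def __init__(self) -> None:
        self.snapshots = []  # list of (ISO timestamp, text), oldest first

    def save(self, text: str) -> None:
        """Record a snapshot only when the draft actually changed."""
        if self.snapshots and self.snapshots[-1][1] == text:
            return  # no change, nothing to prove
        self.snapshots.append(
            (datetime.now(timezone.utc).isoformat(), text)
        )

history = DraftHistory()
history.save("First rough outline.")
history.save("First rough outline.")   # unchanged, not recorded
history.save("Expanded second draft.")
print(len(history.snapshots))  # 2
```

A trail like this — many small, dated revisions rather than one finished essay appearing out of nowhere — is the evidence that makes the "I wrote it myself" case.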
