In a study recently published in the journal Patterns, researchers demonstrate that computer algorithms commonly used to identify AI-generated text frequently mislabel articles written by non-native speakers as being created by artificial intelligence. The researchers warn that the unreliable performance of these AI text-detection programs could adversely affect many individuals, including students and job applicants.
According to the article, grammatical errors are not the reason. The reason is that AI uses simpler vocabulary to mimic how average people talk in everyday conversation.
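To see why that backfires, here's a toy sketch of the idea (not the actual detectors from the study, which use language-model perplexity): score a text by its vocabulary variety and flag anything too "plain". The threshold and the type-token-ratio metric here are purely illustrative assumptions.

```python
# Toy "AI detector": flags text whose vocabulary looks too simple,
# measured by type-token ratio (distinct words / total words).
# Real detectors use model perplexity, but the failure mode is similar:
# plain, predictable wording gets scored as "AI-like", which is exactly
# how a cautious non-native writer sticking to common words may write.

def type_token_ratio(text: str) -> float:
    words = text.lower().split()
    if not words:
        return 0.0
    return len(set(words)) / len(words)

def looks_ai_generated(text: str, threshold: float = 0.6) -> bool:
    # Below the threshold = limited vocabulary = flagged.
    return type_token_ratio(text) < threshold

simple = "the cat sat on the mat and the dog sat on the mat"
varied = "algorithms frequently mislabel prose written by multilingual authors"
print(looks_ai_generated(simple))  # True  (13 words, only 7 distinct)
print(looks_ai_generated(varied))  # False (all 9 words distinct)
```

The point of the sketch is just that "simple wording" is a lousy proxy for "machine-written": it punishes anyone who writes plainly, whatever the reason.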
A lot of non-native speakers can show a stronger command of the language because they took the time to study its rules. Just look at how people type on social media.
Completely disagree - a lot of non-native speakers have an excellent grasp of grammar, precisely because they have learnt the rules. Native speakers rely on stuff sounding right rather than necessarily knowing the rules. But following grammatical rules rigidly is exactly what I would expect from both a genAI and a non-native speaker (as well as avoiding figurative speech and idioms).
Sorry, I might have over-generalised based on my personal experience. I have been a non-native English speaker for over 30 years, and I keep making grammatical mistakes.
Everyone is different and it depends heavily on how the person learned/acquired the language.
hey just to give some validation, I'm an ESL teacher and this doesn't stick out as non-native at all. They're all just taking the piss, correcting anything they can find for the joke of it.
maybe saying "am" instead of "I am" but that's kinda just meme speech right