If LLMs continue to be the dominant branch of AI development, what effects will they have on spoken language?
The ubiquity of audio communication technologies, particularly the telephone, radio, and TV, has had a significant effect on language. They further spread English around the world, making it more accessible and more necessary for lower social and economic classes. They led to the blending of dialects and the death of some smaller regional dialects. They enabled the rapid adoption of new words and concepts.
How will LLMs affect language? Will they further cement English as the world's dominant language, or lead to the adoption of a new lingua franca? Will they be able to adapt to differences in dialects, or will they force us to further consolidate how we speak? What about programming languages? Will the model best able to generate usable code determine which language or languages are used in the future? Thoughts and beliefs generally follow language, at least at the social scale, so how will LLMs' effects on language affect how we think and act? What we believe?
It'll be interesting to see how it affects the average person's written communication. When we know technology can handle something for us, our brains seem to let it carry the load. Think of all the people who aren't great communicators or might not be confident in their English who would love to rely on this already.
I guess it's a matter of perspective whether you view it as a crutch or a boon, which I'm sure has been a conversation about many pieces of technology over the years:
People were better at remembering phone numbers before cell phones stored them. People were better at spelling before spell check and autocorrect. People were better at writing by hand before typewriters and keyboards. Etc.
People who rely too heavily on autocorrect already cause misunderstandings by writing things they did not intend.
I had a friend at uni who was dyslexic, and while the individual words in his messages were spelled properly, you still had to guess the meaning from the randomly thrown-together words he presented you with.
Now that we can not only correct a single word or roughly restructure a sentence, but fabricate whole paragraphs and articles from a single prompt, I imagine we will see a stark increase in low-quality content, accidental false information, and easily preventable misunderstandings, even more than we already have.
Damn. I hadn't made that connection yet, that's actually quite worrying.
If reliance on LLMs does begin to affect language skills negatively, it could become a significant problem. Political implications aside, I believe people are more capable of navigating personal relationships when they have a strong command of language.
Each generation thinks it did things the right way and that the younger ones have it easy. You can go back centuries and find people pushing each other down like this.
What should be encouraged is the exchange of ideas and healthy debate. Words are just a tool for that, and spelling, grammar, and "not knowing Latin" are components of it.
A couple of generations down the road, we may be able to accurately transmit our thoughts to other people, calibrated for their culture and the biases of their upbringing, and the generation just before will whine that LLMs were the right way to communicate.
Eh, LLMs do have a significant problem in that they can generate false information by themselves. Every prior tool required a person to create that false information, but LLMs can simply generate it when asked a question.
I'd much rather have just one unhinged uncle at St. Martin's Day than have everybody come off as the unhinged uncle for lack of supervision of the LLMs talking in their place, making it seem like being unhinged is normal and thereby creating artificial peer pressure in a truly wicked exercise of laziness.