You might think using a chatbot to think for you just makes you dumber — or that chatbots are especially favored by people who never liked thinking in the first place. It turns out the bot users re…
While browsing the references of the paper, I found such a perfect evisceration of GenAI.
We have confused what we can write down with what we usefully know and compounded the error by supposing that because computers can help us write down more they can obviously help us know more.
"The Marks Are on the Knowledge Worker", Alison Kidd (1994)
That's from 1994, folks; they were talking about the wonder of relational databases.
Big Data was never exactly my fave, but I still liked them better than GenAI after he went solo. Some people just never learn that it's never about the size, but about how you use it.
Whoops, I dropped my monster Hadoop that I use for my magnum datalake.
Qualitatively, GenAI shifts the nature of critical thinking toward information verification, response integration, and task stewardship.
Information verification is super important, probably just as important as raw critical thinking. However, when a person is stuck only validating shit output from GenAI, I could see that as a negative.
I get forced to do more critical thinking than I want, faster than I normally would, since I'm getting the fast responses. It's tiring. If I do something myself I go at my own pace; with AI there's always more stuff to check and do and think about, so I burn out.
It was already known that “users with access to GenAI tools produce a less diverse set of outcomes for the same task.”
Why is this portrayed as a bad thing? Correct answers are correct answers. The only things LLMs are typically bad at are things that are seldom discussed or have some ambiguity behind them. So long as users understand the limitations of AI and understand when and where to trust it, why is less diverse output a bad thing?
We regularly seek uniformity in output so that it is easier to handle in downstream tasks. I don't see this as a bad thing at all.
Correct answers are correct answers. The only things LLMs are typically bad at are things that are seldom discussed or have some ambiguity behind them.
Lol what? How many questions do you ask in your life that are entirely unambiguous and devoid of nuance? That sounds like a you issue.
I have read the paper; how about not immediately jumping to a condescending, patronizing tone?
Also, you didn't answer the question. The quote simply says "users with access to GenAI tools". You've added your own qualifications separate from the question at hand.