Pretty shit “study”. If workers use AI for a task, obviously the results will be less diverse. That doesn’t mean their critical thinking skills deteriorated. It means they used a tool that produces a certain outcome. This doesn’t test their critical thinking at all.
“Another noteworthy finding of the study: users who had access to generative AI tools tended to produce “a less diverse set of outcomes for the same task” compared to those without. That passes the sniff test. If you’re using an AI tool to complete a task, you’re going to be limited to what that tool can generate based on its training data. These tools aren’t infinite idea machines, they can only work with what they have, so it checks out that their outputs would be more homogenous. Researchers wrote that this lack of diverse outcomes could be interpreted as a “deterioration of critical thinking” for workers.”
Imagine the AI were 100% perfect and gave the correct answer every time. People using it would still have significantly reduced diversity of results, because they would all be using the same tool to arrive at the same correct answer.
People using an AI getting a smaller diversity of results is neither good nor bad; it's just the way things are, the same way people using the same pack of pens use a smaller variety of colours than people each using whatever pens they happen to have.
First off, the AI isn't correct 100% of the time, and it never will be.

Secondly, you yourself are saying, in so many words, that people stop thinking critically about its output. They just accept it.

That is a lack of critical thinking on the part of the AI users, and of you and the original poster as well.
Like, I don't understand the argument you all are making here. Am I going fucking crazy? “Bro, it's not that they don't think critically, it's just that they accept whatever they're given” is the fucking definition of a lack of critical thinking.
Let me try another example that might get round your blind AI hatred.
If people used a calculator to compute the value of an integral, they would have significantly less diversity of results, because they would all be using the same tool. Less diversity of results has nothing to do with how good the tool is; it might be 100% right or 100% wrong, but if everyone uses it, everyone gets the same answer (or similar answers, if it has a random element, as LLMs do).
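To make that concrete, here's a throwaway Python sketch (the answer labels and weights are invented purely for illustration, not taken from any study): one group all uses the same shared tool, the other group each works it out their own way, and you just count the distinct answers.

```python
import random

random.seed(0)  # reproducible toy run

N_PEOPLE = 100

def shared_tool():
    # Everyone uses the same tool. It mostly gives one answer, with a
    # small random element, like LLM sampling at some temperature.
    # (Labels and weights are made up for illustration.)
    return random.choices(["A", "A-variant", "B"], weights=[85, 10, 5])[0]

def on_your_own():
    # Each person picks their own method, right or wrong.
    return random.choice(["A", "B", "C", "D", "E", "F"])

tool_answers = {shared_tool() for _ in range(N_PEOPLE)}
solo_answers = {on_your_own() for _ in range(N_PEOPLE)}

print("distinct answers with shared tool:", len(tool_answers))  # few
print("distinct answers working solo:", len(solo_answers))      # many
```

Note that nothing in the sketch depends on the shared tool being right: make it return a wrong answer every time and the diversity still collapses. Which is exactly the point: fewer distinct outcomes measures tool homogeneity, not critical thinking.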