One of the main reasons was how detailed ChatGPT's answers are: in many cases, participants did not mind the length as long as they were getting useful information from the lengthy, detailed answers. The positive sentiment and politeness of the answers were the other two reasons.
Man, this answer is long, detailed, polite... it's great!
Sure, but it's wrong. It's just complete bullshit.
• A Purdue University study found that OpenAI's ChatGPT gave incorrect answers to programming questions about half the time.
• Even so, participants preferred the ChatGPT responses 39.34% of the time due to their completeness and well-formulated language style.
• The study also showed that users could only identify errors in ChatGPT responses when those errors were obvious.
• Participants preferred ChatGPT's responses because of their polite language, well-articulated textbook-style answers, and comprehensiveness.
• The user study is intended to complement the paper's in-depth manual and linguistic analysis of ChatGPT responses.
• The authors note that ChatGPT responses contain more "driving attributes" but do not describe risks as often as Stack Overflow posts.
Probably because the model has seen thousands of possible solutions to those exact LeetCode problems. The questions people actually ask on Stack Overflow tend to be far more specialized.