Something’s been bugging me about how new devs code, and I need to talk about it. We’re at this weird inflection point in software development. Every junior dev I talk to has Copilot or Claude or GPT running 24/7. They’re shipping code faster than ever. But when I dig deeper into their understanding of wh...
As someone who has interviewed candidates for developer jobs for over a decade: this sounds like “in my day everything was better”.
Yes, there are plenty of candidates who can’t explain the piece of code they copied from Copilot. But guess what? A few years ago there were plenty of candidates who couldn’t explain the code they copied from StackOverflow. And before that, there were those who failed at the basic programming test we gave them.
We don’t hire those people. We hire the ones who use the tools at their disposal and also show they understand what they’re doing. The tools change, the requirements do not.
I think LLMs just made it easier for people who want the answer without the learning. Reading all those posts scattered across the internet required you to understand what you pasted together if you wanted it to work (not always, but the bar was higher). With ChatGPT, you can just throw errors at it until you have the code you want.
While the requirements never changed, the tools sure did, and they made it a lot easier to not understand.
Have you actually found that to be the case in anything complex though? I find it just forgets parts when generating something and gets stuck in an infuriating loop of fucking up.
It took us around two hours to run our coding questions through ChatGPT and see what it gives. It gives complete shit for most of them; we only had to replace one or two questions.
If a company cannot invest even a day to go through their hiring process and AI proof it, then they have a shitty hiring process. And with a shitty hiring process, you get shitty devs.
And then you get people like OP, blaming the generation when, if anything, it's them and their company to blame... for falling behind. Got to keep up, folks. Our field moves fast.
I find ChatGPT is sometimes excellent at giving me a direction, if not outright solving the problem, when I paste errors I'm too lazy to search. I say sometimes because other times it is just dead wrong.
All code I ask ChatGPT to write is usually along the lines of "I have these values that I need to verify, write code that verifies that nothing is empty and saves an error message for each that is", and then I work with the code it gives me from there. I never take it at face value.
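For concreteness, a minimal sketch of the kind of code I mean, assuming Python (the field names are made up for illustration, not taken from any actual prompt):

    # Sketch of the "verify nothing is empty" task described above.
    def validate_required(values: dict[str, str | None]) -> list[str]:
        """Return an error message for each value that is missing or blank."""
        errors = []
        for name, value in values.items():
            if value is None or not value.strip():
                errors.append(f"{name} must not be empty")
        return errors

    form = {"username": "alice", "email": "", "password": None}
    print(validate_required(form))
    # ['email must not be empty', 'password must not be empty']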
Have you actually found that to be the case in anything complex though?
I think using LLMs to write complex code is the wrong use of the tool. In my opinion they are better at providing structure to work from than at writing the code itself (unless it is something simple, as above).
If a company cannot invest even a day to go through their hiring process and AI proof it, then they have a shitty hiring process. And with a shitty hiring process, you get shitty devs.
But how do you find those people based solely on a short interview, where they can use AI tools to perform better if the interview is not held in person?
And mind you, SO was better because you needed to read a lot of answers there and try to understand which would work in your particular case. Learn how to ask smartly. Do your homework and explain the question properly so as not to get gaslit, etc. All of that is now gone.
Pretty easy to come up with problems that ChatGPT is useless at, and you can test this easily: throw enough constraints at it and the transformer starts to lose attention and forget vital parts.
With a bit of effort you can make problems where ChatGPT will actually give a misleading answer, so candidates have to think critically.
Just like in the past, it was pretty easy to come up with problems which weren't easily found on SO.
Same landscape. If you put in the time and the effort to have a solid recruitment process, you get solid devs. If you have a lazy and shitty process, you get shitty devs.
Evil me: ask questions to which there is no solution, but to which ChatGPT will happily give incorrect solutions, running itself in circles trying to answer correctly as you feed it the error messages.
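One hypothetical question in that spirit (my own illustration, assuming Python): ask for a single regular expression that matches exactly the balanced-parentheses strings. No such regex exists, since that language is not regular, but an LLM will keep offering candidates; a tiny harness shows a typical one failing once the nesting gets deep enough:

    import re

    # A typical candidate an LLM might offer; it only handles limited nesting.
    candidate = re.compile(r"(\((?:[^()]|\([^()]*\))*\))*")

    def balanced(s: str) -> bool:
        """Ground truth: True iff the parentheses in s are balanced."""
        depth = 0
        for c in s:
            depth += 1 if c == "(" else -1
            if depth < 0:
                return False
        return depth == 0

    for n in range(1, 8):
        s = "(" * n + ")" * n  # balanced, nested n levels deep
        if bool(candidate.fullmatch(s)) != balanced(s):
            print(f"candidate regex fails at depth {n}: {s}")
            break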