It's great for things like "How do I write this kind of loop in this language?", but when I've asked it for something more complex, like a class or a big-ish function, it hallucinates. Still, it makes for a very fast way to get up to speed in a new language.
It's a lot less effort, in my opinion, because you can just ask it a question rather than having to read and interpret things. Every programming tutorial in every language is going to waste my time explaining how loops and conditionals work, when all I want to know is how this language does them.
Right, but you can't give it the variable names you're using and have it fill them in, and if you want to do something inside that loop with…
I can ask ChatGPT, "Write me a loop in C# that will add the variable value_increase to the variable current_value and exit when current_value is equal to or greater than the variable limit_value, with all the variables being floats."
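For reference, the kind of answer that prompt produces is roughly this (a sketch, not ChatGPT's literal output; the variable names come from the prompt, and the starting values are made up for illustration):

    // Loop until current_value reaches or passes limit_value.
    float current_value = 0f;      // placeholder starting value
    float value_increase = 0.5f;   // placeholder step size
    float limit_value = 10f;       // placeholder threshold

    while (current_value < limit_value)
    {
        current_value += value_increase;
    }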
You won't find that answer immediately on the Internet, and you're more likely to make errors synthesizing the new syntax.
But you do you, I'll keep using ChatGPT and looking like a miracle worker.
Right, but you can’t give it the variable names you’re using and have it fill them in, and if you want to do something inside that loop with…
Why are you actively trying to avoid learning how to write the loop? Are you planning to have ChatGPT fill in your loop templates for the rest of your life?
But you do you, I’ll keep using ChatGPT and looking like a miracle worker.
It's going to be slower overall than just using the reference and learning how to do it. I'm really, really skeptical that a developer at the level where they need that feature is going to seem like a miracle worker to anyone other than people who are just impressed when you can do anything with a computer.
Why are you actively trying to avoid learning how to write the loop? Are you planning to have ChatGPT fill in your loop templates for the rest of your life?
First, how is this different from having your IDE fill in your loop templates?
Second, no, of course I learn how to do it and then copy/paste from my existing code like a normal person.
Third, this is much more customizable. The example I gave is pretty simple, but you can explain algorithms to ChatGPT and have it figure them out.
Finally, I'm usually doing this for a customer in a language I'll never use again. Last week it was LabVIEW. My role has me writing proofs of concept for customers frequently, so I'm not going to learn something I'll never use again.
It’s going to be slower overall than just using the reference and learning how to do it.
Not when you're not familiar with the syntax and don't have an IDE set up for it.
other than people who are just impressed when you can do anything with a computer.
This happens in my job a lot more than I'm comfortable with.
First, how is this different from having your IDE fill in your loop templates?
I don't actually do that, but I think there are some differences.
One is that if there's a loop template in your IDE, you know it's going to work. With LLMs you have to double-check the output (or just accept that it's wrong some of the time).
Another is that you don't have to type a bunch of instructions to use a loop template, and you don't have to wait for the filled-in template to be generated.
And people don't usually use templates because they don't know how to write the loop themselves; it's a convenience feature.
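For reference, the kind of IDE template being compared here (for example, Visual Studio's built-in "for" snippet) expands to something like this, from memory; i and length are placeholder fields you tab through and rename:

    for (int i = 0; i < length; i++)
    {

    }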
That said:
I’m usually doing this for a customer in a language I’ll never use again.
Maybe you're the one-in-a-million exception where this approach is a benefit. Most of the time, when you talk to people on the internet, they're going to assume you're a reasonably typical case and not the extremely rare exception.
It's not the loop-writing that makes me look like a miracle worker; it's that I can ask ChatGPT to hand me pre-assembled parts that I can snap together instead of typing them out with my squishy human fingers. And I can do it for pretty much any language without too many syntax errors.
I'm a senior software developer (currently .NET backend with DevOps). Writing code is probably less than 10% of my workday, and in that 10%, Visual Studio autocomplete does most of the typing. It's frequently wrong, but it's good enough plenty of the time.
Actually working on software consists of writing specifications, handling security concerns, designing architecture, talking management out of dumb decisions, having meetings with stakeholders or other companies, working on automatic deployments, writing unit and integration tests, refactoring, performance optimization, database migrations, bugfixing, ...
Green-field writing of new code is rare, and that's mainly what AI can do (maybe 80% correct). Most real programming work happens on existing code.
I'm not saying AI will write entire applications, but it is really useful for writing small bits of code for a human being to assemble, which can greatly improve productivity.
Though if we could get it to handle stakeholder meetings, I'd never use it for programming again.
The study said 86.66% of the generated software systems were "executed flawlessly."
But...
Nevertheless, the study isn't perfect: Researchers identified limitations, such as errors and biases in the language models, that could cause issues in the creation of software. Still, the researchers said the findings "may potentially help junior programmers or engineers in the real world" down the line.
🎵🎵
99 little bugs in the code,
99 bugs in the code,
Fix one bug, compile it again,
101 little bugs in the code.

101 little bugs in the code,
101 bugs in the code,
Fix one bug, compile it again,
103 little bugs in the code.
🎵🎵
And how long did it take to compose the "assignments"? Humans can usually work with less precise instructions than machines, improvising or solving problems along the way, or at least sensing when a problem should be flagged for escalation and review.