
AI and Coding.

How reliable is AI like ChatGPT in giving you code that you request?



53 comments
  • The biggest issue here is that people aren't differentiating between models. gpt-4 is probably 20 to 30 IQ points above gpt-3.5-turbo. Also, your question could be interpreted to include LLMs in general. Most LLMs are absolutely horrible at programming. OpenAI's can actually do it, given a limited, specific task. Again, gpt-4 is much better at programming.

    Also, OpenAI just released new models. They now have one with a 16k-token context window, four times larger than before, so it can take in more instructions or read more code at once.

    For something specific like writing basic SQL queries, or even embedded Chart.js charts to fulfill a user request for a simple report on a table, gpt-4 can be very effective, and gpt-3.5 can often do the job. The trick is that sometimes you have to be very insistent about certain gaps or outdated information in its knowledge, or about what you actually want. And you always need to make sure you feed it the necessary context, like the table schema.
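    To make the "simple report" case concrete, here's a rough sketch: the table (`orders`), its columns, and the query are all made up for illustration, but this is the kind of GROUP BY report SQL these models usually get right once you hand them the exact schema. Without the schema they tend to guess column names.

    ```python
    import sqlite3

    # Hypothetical table standing in for whatever the user wants a report on.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE orders (id INTEGER, region TEXT, total REAL)")
    conn.executemany(
        "INSERT INTO orders VALUES (?, ?, ?)",
        [(1, "east", 120.0), (2, "west", 80.0), (3, "east", 30.0)],
    )

    # The sort of query a prompt like "revenue and order count per region,
    # biggest first" typically yields when the schema is in the prompt.
    report = conn.execute(
        "SELECT region, COUNT(*) AS orders, SUM(total) AS revenue "
        "FROM orders GROUP BY region ORDER BY revenue DESC"
    ).fetchall()
    print(report)  # [('east', 2, 150.0), ('west', 1, 80.0)]
    ```

    The point is less the SQL itself than the workflow: schema in, query out, then you eyeball or test the result before trusting it.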

    For something a bit complex but still relatively limited in scope, gpt-4 can often handle it when gpt-3.5 screws it up.

    What those models are good at now, especially with the versions just released, is translating natural-language requests into something like API calls. If there isn't a lot of other stuff to figure out, they can be extremely useful for that. You can build more involved programs by combining multiple focused requests, but it's quite hard to do that in a fully automated way today. The new function calling should help a lot, though.
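    For anyone who hasn't seen the new function calling: you describe your functions as JSON Schema and pass them alongside the messages, and the model can respond with a structured call instead of prose. A rough sketch of the request shape (the weather function itself is just the usual toy example, not a real API):

    ```python
    import json

    # Function description in the format the June 2023 API update expects:
    # a name, a description, and "parameters" given as JSON Schema.
    weather_fn = {
        "name": "get_current_weather",
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name"},
                "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
            },
            "required": ["city"],
        },
    }

    # The request body you'd POST to the chat completions endpoint.
    request_body = {
        "model": "gpt-3.5-turbo-0613",
        "messages": [{"role": "user", "content": "What's the weather in Oslo?"}],
        "functions": [weather_fn],
    }
    print(json.dumps(request_body, indent=2))
    ```

    The model then either answers normally or returns a `function_call` with arguments matching that schema, which your code executes and feeds back in a follow-up message.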

    The thing is, wait 3-6 months and all of this could be totally out of date, if someone releases a more powerful model or some of these "AGI" systems built on top of GPT get more effective.
