Yeah 3.5 was pretty ass w bugs but could write basic code. 4o helped me sometimes with bugs and was definitely better, but would get caught in loops sometimes. This new o1 preview model seems pretty cracked all around though lol
ChatGPT keeps mixing up software versions, which is understandable considering the similarities between versions and the way gen AI works
I asked for help on GTK 4 once and the responses were a mix of GTK 4 and GTK 3 code. Some of them even contained function names that didn't exist in any version of GTK
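One cheap sanity check for exactly this failure mode: before trusting AI-suggested function names, programmatically verify they actually exist in the module. A minimal Python sketch (demoed against the stdlib `math` module rather than GTK, and `hypotenuse3d` is a made-up "hallucinated" name):

```python
import importlib

def names_exist(module_name, names):
    """Map each suggested name to whether the module actually exports it."""
    mod = importlib.import_module(module_name)
    return {n: hasattr(mod, n) for n in names}

# 'hypot' is a real math function; 'hypotenuse3d' is invented.
print(names_exist("math", ["hypot", "hypotenuse3d"]))
# {'hypot': True, 'hypotenuse3d': False}
```

The same idea works with PyGObject's `Gi.repository.Gtk` to catch GTK 3 leftovers, assuming you have the bindings installed.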
And when you point that out to the AI, those code snippets get replaced with even more spaghetti that is maybe 1% closer to actually working, at best. Been there!
I've asked for help finding API endpoints that do what I want because I'm feeling too lazy to pore over docs, and it'll just invent endpoints that don't exist
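If the service publishes an OpenAPI/Swagger spec, you can at least catch invented endpoints mechanically before wasting a request. A rough sketch, with a made-up example spec dict (a real one would come from the service's spec file):

```python
# Hypothetical, minimal slice of an OpenAPI spec's "paths" section.
spec = {
    "paths": {
        "/users": {"get": {}, "post": {}},
        "/users/{id}": {"get": {}, "delete": {}},
    }
}

def endpoint_exists(spec, method, path):
    """True if the spec lists this path with this HTTP method."""
    return method.lower() in spec["paths"].get(path, {})

print(endpoint_exists(spec, "GET", "/users"))         # True
print(endpoint_exists(spec, "PATCH", "/users/{id}"))  # False: hallucinated
```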
Maybe we could do better with smaller AIs that are fine-tuned (or RAG, idk, I'm not a programmer) on a specific code base + documentation + topical forum
Yeah, but this is a "needle in a haystack" problem that ChatGPT and AI in general are actually very useful for, i.e. solutions that are hard to find but easy to verify. Issues like this are hard to find because they require combing through your code, config files, and documentation to find the problem, but once you find a candidate solution, it either works or it doesn't.
Usually it doesn't solve my problems, but it gives me a few places to start looking. I know some models are capable of this, but getting a perfectly accurate and useful response would probably require it to recall a specific piece of input it was given, and not just an "average" of the inputs.
And it replies "You're right! That argument has never been a part of package x. I've updated the argument to fix it:" and then gives you the exact same bleedin' command....
I've done similar things for mismatched Python dependencies in a broken Airflow setup on GCP, and got amazingly good results pointing me in the right direction to resolve the conflicting package versions. Just dumped a mile-long stack trace and the full requirements.txt on it. Often worth a shot, tbh
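For the simplest flavour of this conflict, you don't even need an LLM: a few lines can flag a package pinned twice with different versions in a requirements.txt. A sketch that only handles `==` pins (the sample data below is made up, and real resolvers like `pip check` do far more):

```python
from collections import defaultdict

def find_conflicts(requirements_text):
    """Return {package: sorted versions} for packages pinned to >1 version."""
    pins = defaultdict(set)
    for line in requirements_text.splitlines():
        line = line.split("#")[0].strip()  # drop comments
        if "==" in line:
            name, version = line.split("==", 1)
            pins[name.strip().lower()].add(version.strip())
    return {name: sorted(vers) for name, vers in pins.items() if len(vers) > 1}

reqs = """
apache-airflow==2.7.3
google-cloud-storage==2.10.0
google-cloud-storage==1.44.0  # pulled in by an old constraint
"""
print(find_conflicts(reqs))  # {'google-cloud-storage': ['1.44.0', '2.10.0']}
```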
This is one of the first things I did a year or so ago to test chatgpt. I've never trusted it since. Chatgpt is fucking less than useless. The lies it tells... It's insane.
Well, how was I supposed to figure out that my Docker node running on LibreELEC wouldn't connect to the swarm because the kernel was compiled without the Berkeley Packet Filter protocol?
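For what it's worth, this kind of thing can be checked by scanning the kernel build config (e.g. the text from `/proc/config.gz` or `/boot/config-$(uname -r)`) for the BPF options. A sketch using an illustrative sample config, not a real LibreELEC one; `CONFIG_BPF` and `CONFIG_BPF_SYSCALL` are real kernel option names, though whether swarm needs exactly these is an assumption based on the story above:

```python
REQUIRED = {"CONFIG_BPF", "CONFIG_BPF_SYSCALL"}

def missing_bpf_options(config_text):
    """Return required kernel options not built in (=y) or as a module (=m)."""
    enabled = set()
    for line in config_text.splitlines():
        line = line.strip()
        if "=" in line and not line.startswith("#"):
            key, value = line.split("=", 1)
            if value in ("y", "m"):
                enabled.add(key)
    return sorted(REQUIRED - enabled)

sample = """
CONFIG_NET=y
# CONFIG_BPF is not set
CONFIG_BPF_SYSCALL=n
"""
print(missing_bpf_options(sample))  # ['CONFIG_BPF', 'CONFIG_BPF_SYSCALL']
```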
I learned C++, Python, how stuff in the Linux kernel works, how Ansible works and can be tuned, and a lot more with the help of AI (mostly Copilot, but when it fails to help, I use my free prompts of OpenAI's GPT-4o, which is way better than Copilot right now)
Haven't tested o1 yet, but I heard it's mind-blowingly good, since it got way better at logic stuff like programming and mathematics
The best code it's given me, I'd been able to search for and find where it was taken from. Hey, it helped me discover some real human blogs with vastly more helpful information.
(If you're curious, it was circa that weird infighting at openclosedAI with Altman. I prompted it for code to find the rotational inertia per axis, and to my surprise and suspicion the answer made too much sense. Back-searching, I found where I believe it got the answer from)
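For context, the calculation mentioned is a short one: the moment of inertia about each axis for a set of point masses, I_x = Σ m(y² + z²) and likewise for the other axes. A minimal sketch with made-up demo data (not the code the commenter received):

```python
def inertia_per_axis(particles):
    """particles: list of (mass, (x, y, z)). Returns (Ix, Iy, Iz)."""
    Ix = sum(m * (y**2 + z**2) for m, (x, y, z) in particles)
    Iy = sum(m * (x**2 + z**2) for m, (x, y, z) in particles)
    Iz = sum(m * (x**2 + y**2) for m, (x, y, z) in particles)
    return Ix, Iy, Iz

# Two 1 kg masses 1 m from the origin along the x-axis:
print(inertia_per_axis([(1.0, (1.0, 0.0, 0.0)), (1.0, (-1.0, 0.0, 0.0))]))
# (0.0, 2.0, 2.0)
```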