"So, rather than just tell students that they’re in for a rough ride if they cram my prompt through ChatGPT, I show them."

chelseatroy.com How does AI impact my job as a programmer?


this is about AI, but it felt a lot more like "guy with broken gear"

16 comments
  • While I mostly agree with the thrust of the thesis - 80% of the job is reading bad code and unfucking it, and ChatGPT sucks in all the ways - I disagree with the conclusions.

    First, gen AI shifting us towards analysing more bad code to unfuck is not a good thing. It's quite specifically bad. We really don't need more bad code generators. What we need are good docs; slapping genAI on as a band-aid for badly documented libraries will do more harm than good. The absolute last thing I want is genAI feeding me more bullshit to deal with.

    Second, this all comes across as an industrialist view on education. I'm sure Big Tech would very much like people to just be good at fixing and maintaining their legacy software, or shipping new bland products as quickly as possible, but that's not why we should be giving people a CS education. You already need investigation skills to debug your own code. That 90% of industry work is not creative building of new amazing software doesn't at all mean education should lean that way. 90% of industry jobs don't require novel applications of algebra or analytical geometry either, and people have been complaining that "school teaches you useless things like algebra or trigonometry" for ages.

    This infiltration of industry into academia is always a deleterious influence, and genAI is a great illustration of that. We now have Big Tech weirdos giving keynotes at CS conferences about how everyone should work in AI because it's The Future™. Because education is perpetually underfunded, it depends heavily on industry money. But the tech industry is an infinite growth machine; it doesn't care about any philosophical considerations with regard to education; it doesn't care about science in any way other than as a product to be packaged and shipped ASAP to grow revenue, no matter whether it's actually good, useful, sustainable, or anything like that. They invested billions into growing a specialised sector of CS, with novel hardware and all (see TPUs), to be able to multiply matrices really fast, and the chief uses of that are Facebook's ad recommendation system and now ChatGPT.

    This central conclusion just sucks from my perspective:

    It’s how human programmers, increasingly, add value.

    “Figure out why the code we already have isn’t doing the thing, or is doing the weird thing, and how to bring the code more into line with the things we want it to do.”

    While yes, this is why even a "run-of-the-mill" job as a programmer is not likely to be outsourced to an ML model, that's definitely not what we should aspire for the added value to be. People add value because they are creative builders! You don't need a higher education to be able to patch up garbage codebases all week, the same way you don't need any algebra or trigonometry to work at a random paper-pushing job. What you do need it for is to become the person who writes that existing code in the first place. There's a reason these are Computer Science programmes and not "Programming @ Big Tech" programmes.

    • It didn't read to me like she was a fan of this shit at all; she seemed to be despairing of it and looking for ways to teach actual competence despite it.

      • I'm probably projecting the baggage of dozens of conversations with people who unironically argue that a CS university should prepare you for working in industry as a programmer, but that's because I can't really discern the author's perspective on this from the text.

        In either case,

        to teach actual competence despite it

        I think my point is that a "competent programmer" as viewed by the industry is a vastly different thing than a "competent computer scientist" in a philosophical sense. Computer science really struggles with this because many things require being both a good engineer and a good scientist. For an analogy, an electrical engineer and a physicist specialising in electrical circuits are two vastly different professions, and you don't need to know what an electron is to do the first. Whereas in computer science, you can't build a compiler without knowing your shit around both software engineering and theoretical concepts.

        Let me also add that I don't think I've ever written a post where I'd more like people to come and disagree with me. I might very well be talking some bullshit based on my vibes here, since all of this is basically vibes from mingling with both industry and academia people...

    • I didn't get the vibe she agreed with it, I got the sense she was exasperated but practical about it. Her students are career driven, in a world that told them until two years ago that this expensive credentialing was the key to becoming silicon valley rich.

      Separately, it's a well-established point of concern that a computer science degree is inapplicable to the work of the vast majority of people who become working, non-academic software engineers, and that while there are valuable things an academic program could teach pre-professional developers that too few engineers understand, that's not the focus of CS. The reality (in the US at least) is that a CS degree is sold as a vocational program by the universities, and many jobs list a CS degree as a requirement or a desired skill. The author's students paid almost $7000 for her course alone. Whether those facts should be true is up for debate, but that's the reality in which the author is teaching.

      The author is open that she became a programmer for financial stability, which is the world most of us live in. I enjoy writing code and being creative, but I work in software development to eat.

      • The reality (in the US at least) is that a CS degree is sold as vocational program by the universities, and many jobs list a CS degree as a requirement or a desired skill. The author’s students paid almost $7000 for her course alone.

        Well, it's very hard for me to have a discussion about philosophical merits of education when the context is the USA where education is so fundamentally fucked. It might as well be that the best course of action for the well-being of students is to make sure they at least get bang for their buck, but that's a systemic problem one level below what I'm talking about even. I don't want to discount this as a reality for actual people on the ground - I think then the correct position is not my waxing philosophical about contents of courses, but rather nailing everyone against free public education in the US government to a fucking wall.

        and many jobs list a CS degree as a requirement or a desired skill

        This is, I think, a symptom of this push-and-pull between industry and academia. The industry would want to have a CS degree mean that they're getting engineers ready to patch up their legacy code, because they would much rather have the state (or the students themselves in the USA case) pay for that training than having to train their employees themselves. But I suggest that the correct default response to industry's wants is "NO." unless they have some really good points. Google can pay for their employees to learn C++, but they won't pay a dime to teach you something they don't need for their profit margins. Which is precisely the point of public education, teaching you stuff because it's philosophically justified to have a population that knows things, not because they lead to $$$.

    • From the pov of a slightly exhausted prof who just wants a short-ish answer for her students, the conclusion sorta makes sense, I guess. The students want to convince themselves they aren't wasting their time with genAI and she's not in a position to convince them otherwise, so the next best thing is showing them what industrial life with genAI will be like.

      "The future you're dreaming of sucks, so get used to it." isn't a satisfying answer, but its a forced perspective.

    • The article certainly feels blasé ^^. I think the most objectionable part is:

      Large language models shift even more of that time into investigation, because the moment the team gets a chance to build, they turn around and ask ChatGPT (or Copilot, or Devin, or Gemini) to do it. When we learn that we need to integrate with google cloud storage, or spaCy, or SQS Queue, or Firebase? Same thing: turn around and ask the LLM to draft the integration.

      Now clearly (to me) the author isn't happy about this, but I think they are giving up hope on the direction of the profession too soon. There are still plenty of people happy enough to implement things themselves.

  • This is a good piece, both on the gap between how gen AI is sold and what it does, and on the reality of what professional programming is.

    • … by the time you’ve spent four hours tearing your hair out, … the code … to fix your problem is one, single line.

      This, I feel, sums up the reality of professional programming in a nutshell. 🤣

      • OK sorry this is rambly but I gotta get these programmer feelings off my chest.... If anything 4 hours is an understatement.


        Back in university I once spent an entire week tracking down a latent bug in my program after the professor changed the project requirements a week before the due date. It was an accidental use of `=` instead of a copy in Java. We're talking every waking moment, both in and out of class (I was not the best at debugging back then...).
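        For anyone who hasn't hit that bug class: in Java, `=` on an object type copies the reference, not the object, so two names end up mutating the same underlying list. A minimal sketch (the names are illustrative, not from the original assignment):

        ```java
        import java.util.ArrayList;
        import java.util.List;

        public class AliasBug {
            public static void main(String[] args) {
                List<Integer> grades = new ArrayList<>(List.of(1, 2, 3));

                // Bug: '=' copies the reference, so both names point
                // at the same underlying list object.
                List<Integer> alias = grades;
                alias.add(4); // silently mutates 'grades' too

                // Fix: construct an actual copy before mutating.
                List<Integer> copy = new ArrayList<>(grades);
                copy.add(5); // 'grades' is unaffected

                System.out.println(grades); // [1, 2, 3, 4]
                System.out.println(copy);   // [1, 2, 3, 4, 5]
            }
        }
        ```

        Both versions compile fine, which is exactly why this kind of bug can eat a week.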

        Now in the working world there are bugs, but they're not just my bugs anymore. Rather, there are decades of bugs piled on top of bugs. Code has dozens of authors, most of whom quit long ago. The ones who remain often have no memory of the code.

        Just last week I did a code review of a co-worker's bugfix for a bug introduced in 2008. The fix was non-trivial due to:

        1. The code being a tangled mass of overlapping state and (more importantly)
        2. No one actually remembering anything about the code, or where it is called, or why it is there in the first place, or what the implications of changing it are. Except that it's causing problems (an O(n^2) slowdown harming production) now in 2024.
        3. The original design doc was in the personal folder of the original author (no longer at the company), which was garbage collected years ago.

        So reviewing the code involved comparing every iteration of the code: from the initial commit, up to where the bug was introduced, up to the state it was in before my coworker's fix, and finally the fix itself. It turns out he got it wrong, and I can't exactly blame him, because there is no "right" in this sort of environment. Fortunately the wrongness was caught by me and whatever meager unit tests were written for it.

        This all took maybe half a day for me, and a day for my coworker, for 1.5 days of work between the two of us. All to fix a condition which was accidentally negated from what it should have been.


        And this is indeed what LLM for code enthusiasts miss.

        Even if the LLM saves some time writing boilerplate code, it'll inevitably mess up in subtle ways, or programmers will think the LLM can do more than it actually can. Either way they'll end up introducing subtle bugs, so you get a situation where someone saving 20 seconds here or there leads to hours of debugging effort, or worse, at an unpredictable point in the future.

        At least with human-written code you can go back and ask the author what they were thinking, or read the design doc, or read comments and discussion. Even the most amateurish human-authored code has the spark of life to it. It was, in essence, a manifestation of someone's wish.

        On the other hand with code that's just statistical noise there's no way to tell what it was trying to do in the first place. There is no will / soul / ego in the code, so there is no understanding, so there is no way to debug it short of reverting the whole change and starting over.

      • I wish it were always that easy; few things in legacy code maintenance bring me more joy than deleting a single line of code, but the solution is sadly often more involved.

        The reality is sometimes more like fighting a hydra spaghetti ball, where felling one bug uncovers (or spawns) two more.
