25 comments
  • Potentially hot take: LLMs are reaching a dead end before they could even become remotely useful. The very approach boils down to brute force - you force-feed it more data until the problem goes away... and this works until it doesn't, and in this case it's actually breaking stuff.

    Based on the output of those models, it's blatantly obvious that they don't use the data well at all; the whole thing is a glorified e-parrot, instead of machine learning. And yet, as the text shows, it's almost impossible to say why, because the whole thing is a black box.

    • Based on the output of those models, it’s blatantly obvious that they don’t use the data well at all; the whole thing is a glorified e-parrot instead of machine learning

      I’m curious to understand what you meant by this—specifically about not using the data well, and being ‘a glorified e-parrot instead of machine learning’. Would you not count the techniques being used in LLMs as machine learning?

      • A parrot is rather good at repeating human words. Some can even sing whole songs. But even when you compare exceptional parrots with young and typical human kids, it's clear that parrots have a really hard time associating words with concepts; or, in other words, learning instead of just memorising.

        And LLMs behave like especially dumb electronic parrots - they're good at repeating human utterances, even grabbing chunks of older utterances to combine into new ones, but they show signs that they do not associate words with concepts.

        Here's an example. If we asked a cooperative human "what's the difference in behaviour between an orange and a potato?", what would the person say? Here are some options:

        • "...what???"
        • "what the hell do you mean by 'behaviour'?"
        • "well, if we're going to interpret 'behaviour' as [insert weird definition], then..."

        Why is that? Because humans associate that word with specific concepts, and they know that those concepts don't apply to non-agent entities like oranges and potatoes, except maybe metaphorically. They learned that word.

        Here's, however, what Google Bard said when I asked it the same question (originally in Portuguese; I'm translating it here, but feel free to redo it in any other language):

        What's the difference in behaviour between an orange and a potato?

        Oranges and potatoes are fruits and vegetables, but they behave in completely different ways. Here are some differences main [SIC]:
        Movement: Oranges don't move on their own, while potatoes can roll or slid if not supported.
        Growth: Oranges grow on trees, while potatoes grow in the subsoil.
        Sensibility: Oranges are not sensible to the environment, while potatoes are sensible to light and temperature.
        Reactions: Oranges don't react to stimuli, while potatoes can react to changes in the environment, like sprouting and rotting.
        Purpose: Oranges are eaten as fruits, while potatoes are eaten as vegetables or used as starch.
        In general, oranges are more passive and potatoes are more active. Oranges don't need to do anything to survive, while potatoes need to be able to answer to their environment to growth and reproduce.

        Here's a table summing up the main differences in behaviour between oranges and potatoes:

        | Feature | Orange | Potato |
        | --- | --- | --- |
        | Movement | Can't move on their own | Can roll or slid if not supported |
        | Growth | Grow on trees | Grow in the subsoil |
        | Sensibility | Not sensible to the environment | Sensible to light and temperature |
        | Reactions | Don't react to stimula | Can react to changes in the environment, like sprouting or rotting |
        | Purpose | Eaten as fruit | Eaten as veg or used as starch |

        Based on the above, what concepts does Bard associate with the words "behaviour", "roll", "slid", "active", and "passive"? None. It did not learn the meaning of those words - or of any other; it doesn't associate concepts with words, it associates words with more words. That's what causes those "hallucinations" (IMO a really poor way to frame deep issues as if they were just surface oddities).

        And that's just one example. The OP is another, with ChatGPT - now with maths instead of just language. Can we really claim that it learned maths if further data makes it "unlearn" it?
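
        If it helps, here's the crudest possible caricature of "associating words with more words": a bigram babbler in Python. Real LLMs are incomparably more sophisticated, and this is just an illustrative toy (the corpus and function names are made up), but it shows that fluent-looking output doesn't require any concepts at all:

        ```python
        # Toy bigram "model": record which word follows which in a tiny corpus,
        # then generate text purely from those word-to-word associations.
        # There is no representation of meaning anywhere, only co-occurrence.
        import random
        from collections import defaultdict

        corpus = ("oranges grow on trees . potatoes grow in the ground . "
                  "potatoes can sprout in the dark . oranges are eaten as fruit .").split()

        following = defaultdict(list)
        for word, nxt in zip(corpus, corpus[1:]):
            following[word].append(nxt)

        def babble(start="oranges", length=12):
            out = [start]
            for _ in range(length):
                options = following.get(out[-1])
                if not options:
                    break
                out.append(random.choice(options))
            return " ".join(out)

        print(babble())  # fluent-ish word salad, zero understanding of oranges or potatoes
        ```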

  • This has already been disproven: the method the researchers used to test how well it was doing was flawed to begin with. Here is a pretty good Twitter thread showing why: https://twitter.com/svpino/status/1682051132212781056

    TL;DR: They only gave it prime numbers and asked whether they were prime. They didn't intersperse prime and non-prime numbers to really test its ability to tell them apart. Turns out that if you do that, both the early and current versions of GPT-4 are equally bad at determining primality, with effectively no change between the versions.
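
    To make the flaw concrete, here's roughly what a balanced check looks like; `ask_model` is a hypothetical stand-in for whatever API call you'd actually make, and the ranges are arbitrary:

    ```python
    # Mix primes and composites so that a model which always answers
    # "yes, it's prime" can't look accurate the way it does on an all-prime set.
    import random
    from sympy import isprime, randprime

    def balanced_cases(n=100, lo=1_000, hi=20_000):
        primes = [randprime(lo, hi) for _ in range(n // 2)]
        composites = []
        while len(composites) < n // 2:
            x = random.randint(lo, hi)
            if not isprime(x):
                composites.append(x)
        cases = [(p, True) for p in primes] + [(c, False) for c in composites]
        random.shuffle(cases)
        return cases

    def accuracy(ask_model, cases):
        return sum(ask_model(x) == label for x, label in cases) / len(cases)

    # An all-prime test set would score this "model" at 100%; the balanced set scores ~50%.
    always_yes = lambda x: True
    print(accuracy(always_yes, balanced_cases()))
    ```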

  • I don't get it. I thought these models were "locked". Shouldn't the same input produce near-identical output? I know the algorithm has some fuzzing to help produce variation. But ultimately it shouldn't degrade, right?
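
    (By "fuzzing" I mean something like temperature sampling - a toy sketch of the idea, not anything from the actual ChatGPT code:)

    ```python
    # For a fixed prompt the model produces fixed scores (logits) over possible
    # next tokens, but the next token is *sampled* from a temperature-scaled
    # softmax, so identical input can still give different output with frozen weights.
    import numpy as np

    rng = np.random.default_rng()

    def sample_next_token(logits, temperature=0.8):
        scaled = np.asarray(logits, dtype=float) / temperature
        probs = np.exp(scaled - scaled.max())
        probs /= probs.sum()
        return rng.choice(len(probs), p=probs)

    fixed_logits = [2.0, 1.5, 0.3]   # same "model output" every time
    print([sample_next_token(fixed_logits) for _ in range(8)])  # varies run to run
    ```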

    • The big pre-training is pretty much fixed. The fine-tuning is continuously being tweaked and, as shown, can have dramatic effects on the results.

      The model itself just does what it does. It is, in effect, an ‘internet completer’. But if you don’t want it to just happily complete what it found on the internet (homophobia, racism, and all), you have to put extra layers in to avoid that. And those layers are somewhat hand-crafted, sometimes conflicting, and therefore unlikely to give everyone what they consider to be excellent results.
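
      As a crude caricature in code (nothing like the real stack, just the shape of the idea): the base never changes, while the hand-crafted layer on top gets tweaked all the time, so the user-visible behaviour shifts between releases.

      ```python
      def base_model(prompt):
          # stand-in for the fixed, pre-trained "internet completer"
          return "raw completion for: " + prompt

      BLOCKLIST = ["example-bad-term"]   # hand-maintained and frequently adjusted

      def safety_layer(text):
          if any(term in text.lower() for term in BLOCKLIST):
              return "Sorry, I can't help with that."
          return text

      def assistant(prompt):
          return safety_layer(base_model(prompt))   # frozen base, mutable wrapper

      print(assistant("write something nice"))
      ```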

      • Ok but, regardless, they can just turn back the clock to when it performed better, right? Use the parameters that were set two months ago? Or is it impossible to roll that back?
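
        (What I mean by "use the parameters from two months ago", as a user-side sketch: pin a dated snapshot instead of the moving "gpt-4" alias. This assumes the pre-1.0 `openai` Python SDK and that the old snapshot is still being served; OpenAI does retire them eventually.)

        ```python
        import openai

        openai.api_key = "sk-..."   # placeholder

        response = openai.ChatCompletion.create(
            model="gpt-4-0314",      # the March 2023 snapshot, not the current alias
            messages=[{"role": "user", "content": "Is 17077 a prime number?"}],
            temperature=0,           # reduce run-to-run variation
        )
        print(response["choices"][0]["message"]["content"])
        ```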

  • This is probably very unlikely and I have no idea what I'm talking about, but what if feeding it even small amounts of its own content - text produced by a ChatGPT instance - poisons it? That it gets confused from being fed text that adheres perfectly to its own rules, and locks that text down as perfect and not needing small variations.

    I remember some article warning about this at a large scale, and I'm wondering: why must it be large? If it's only a probability tree, even small changes to the probabilities would cause issues further up the branches.

    But blind speculation.
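
    For what it's worth, here's a toy version of that "probability tree" worry: re-fit a word distribution to samples drawn from the previous fit, over and over. It's a gross simplification of the real model-collapse results, but it shows the direction of drift - with small samples, rare outcomes can hit zero probability and, once gone, can never come back.

    ```python
    import numpy as np

    rng = np.random.default_rng()
    probs = np.array([0.5, 0.3, 0.15, 0.05])    # generation 0 "word" frequencies

    for gen in range(1, 9):
        samples = rng.choice(len(probs), size=50, p=probs)   # the model's own output
        counts = np.bincount(samples, minlength=len(probs))
        probs = counts / counts.sum()                        # re-fit on that output
        print(f"gen {gen}: {np.round(probs, 3)}")
    ```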
