stingpie @lemmy.world
Posts 2
Comments 111
  • This always cracks me up, because it's a perfect example of a snake eating its own tail. "Based" was originally just a shortened way of saying "based in reality" or "based in fact", but new people didn't get the original context, so it became its own word. Then the uninitiated started making the "Based? Based on what?" joke, completely oblivious to the original meaning.

  • OAI employees channel the spirit of Marvin Minsky
  • Why do the leaders in AI know so little about it? Transformers are completely incapable of maintaining any internal state, yet techbros somehow think they will magically develop one. Sometimes machine learning can be more of an art than a science, but they seem to think it's alchemy. They think they're drawing pentagrams out of acyclic graphs, but are really just summoning a mirror of their own stupidity.

    It's really unfortunate, since they drown out all the news about novel and interesting methods of machine learning. KANs, DNCs, Mamba: they all have a lot of promise, but can't get any recognition because transformers are the laziest and most dominant method.

    Honestly, I think we need another AI winter. All this hype is drowning out decent research, so all we get are bogus tests and experiments that are irreproducible because they're so expensive. It's crazy how unscientific these 'research' organizations are. And OpenAI is being paid by Microsoft to basically jerk off Sam Altman. It's plain shameful.

  • AI bell curve
  • The issue with Sonnet 3.5, in my limited testing, is that even with explicit, specific, and direct prompting, it can't perform anywhere near human ability, and it often makes very stupid mistakes. I developed a program that essentially lets an AI program, rewrite, and test a game, but Sonnet will consistently take lazy routes, use incorrect syntax, and repeatedly call the same function over and over for no reason. If you can program the game yourself, it's a quick way to prototype, but unless you know how to properly format JSON and fix strange artefacts, it's just not there yet.

  • [long] Some tests of how much AI "understands" what it says (spoiler: very little)
  • Recently, research has suggested that LLMs can solve moderately more difficult problems if prompted to use "chain of thought" (CoT) reasoning. In CoT, the LLM essentially pretends to think about the problem, writing out a few intermediate steps before giving its answer. Of course, this doesn't really stop it from giving bad solutions to established problems, but it does make it better at novel ones.

    This whole thing reminds me of the proverb of the frog & scorpion crossing the river. It is simply the nature of the scorpion to act like a scorpion, regardless of what intelligence we ascribe to it.
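    For what it's worth, the CoT trick described above is just prompt construction. A minimal sketch (the prompt wording is my own illustration, not any particular vendor's API):

```python
def direct_prompt(question: str) -> str:
    # Plain prompt: the model has to jump straight to an answer.
    return f"Q: {question}\nA:"

def cot_prompt(question: str) -> str:
    # CoT prompt: nudge the model to write intermediate steps first.
    return f"Q: {question}\nA: Let's think step by step."
```

    The "pretend thinking" is nothing more than the extra tokens the second prompt invites the model to generate before its final answer.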

  • shutdowns/meltdowns in dreams
  • One theory about nightmares is that they serve as exposure therapy for stressors. If your nightmares are too extreme, maybe you could set aside some time during the day to enter a calmer environment and try to review them without getting overwhelmed. It might desensitize you a little and make them less severe.

    I haven't had severe nightmares since I was a preteen, but when I did, they tended to be pretty stressful (being kidnapped, being abandoned, my friends committing suicide in front of me, etc.). I'm not sure exactly why they stopped, but I eventually 'burnt out' and just stopped really caring about them.

    Can't say I turned out great, but I can say I don't have nightmares anymore.

  • Not everything can be done in constant time, that's O(k)
  • Everything can be done in constant time, at least during runtime, with a sufficiently large look-up table. It's easy! If you want to simulate the universe exactly, you just need a table with n × m entries, where n is the number of Planck volumes in the universe and m is the number of quantum fields. Then you just compute all of them at compile time, and you have O(1) time complexity during runtime.
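    The joke scales badly, but the underlying trick is real: precompute every answer into a table once, and each query becomes a single index. A toy sketch with byte popcounts instead of the universe:

```python
# Precomputed "at compile time" (module import): bit count of every byte.
POPCOUNT = [bin(i).count("1") for i in range(256)]

def popcount_byte(b: int) -> int:
    # O(1) at runtime: one table lookup, no computation.
    return POPCOUNT[b]
```

    The catch, of course, is that the table's size (and the precomputation) grows with the input space, which is exactly why the universe version stays a joke.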

  • If AI can now speak Italian, it can certainly replace us...
  • There are bindings in Java and C++, but Python is the industry standard for AI. The libraries for machine learning are actually written in C++ but use Python language bindings. Python doesn't tend to slow things down, since machine learning is GPU-bound anyway. There are also library-specific languages which urge the user to write Pythonic code that can be compiled into C++.

  • If AI can now speak Italian, it can certainly replace us...
  • I completely agree that it's a stupid way of doing things, but it is how OpenAI reduced the vocab size of GPT-2 & GPT-3. As far as I know (I have only read the comments in the source code), the conversion is done as a preprocessing step. Here's the code for GPT-2: https://github.com/openai/gpt-2/blob/master/src/encoder.py I did apparently make a mistake, as the vocab reduction is done through a LUT instead of a simple mod.
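    For anyone curious, the LUT in question maps each of the 256 possible byte values to a printable Unicode character, so the BPE step never has to handle raw control bytes or whitespace. A rough sketch of the idea (simplified from memory, not the exact encoder.py code):

```python
def bytes_to_unicode_sketch() -> dict:
    # Printable bytes map to themselves; everything else is shifted
    # up past 255 so it still gets a visible, unique character.
    table = {}
    n = 0
    for b in range(256):
        if 33 <= b <= 126 or 161 <= b <= 172 or 174 <= b <= 255:
            table[b] = chr(b)
        else:
            table[b] = chr(256 + n)
            n += 1
    return table
```

    The important property is that the mapping is a bijection over all 256 bytes, so it is fully reversible after decoding.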

  • If AI can now speak Italian, it can certainly replace us...
  • This might be happening because of the 'elegant' (incredibly hacky) way OpenAI encodes multiple languages into their models. Instead of using all character sets, they use a modulo operator on each character to make all Unicode characters representable by a small range of values. On the back end, it somehow detects which language is being spoken and uses that character set for the response. Seeing as the last line seems to be the same mathematical expression as what you asked, my guess is that your equation just happened to perfectly match some sentence that would make sense in the weird language.

  • Listen to those funny accents
  • I don't know about that guy, but I used to have a speech impediment that meant I couldn't pronounce the letter R. I went to several speech therapists, so I started to enunciate every other letter, but that made people think I had a British accent. Anyway, I eventually learned how to say R, so now I have a speech impediment that makes me sound like a British person doing a fake American accent.

  • reallygood @lemmy.world stingpie @lemmy.world

    4 SUBSCRIBER SPECIALL!

    we did it!! I think its really good how we got 4 subcriber. ! we should make our own country i think that would be cool ! <--- this would be funny if we put it on a flag i think

    I wanted to put a picture of a cat running around but it did not work. :( ! <--- me trying to figure out why its not working

    anyway thanks for joining... just be careful not to get bitten by the pac man!!!

    good thing he isn't reall...

    OR IS HE???

    !

    that was a joke i hope it was funny

    0
    reallygood @lemmy.world stingpie @lemmy.world

    Does anyone know what this Blender thingy is? It says it's for games?

    4