LLMs spew hallucinations by their nature. But what if you could get the LLM to correct itself? Or if you just claimed your LLM could correct itself? Last week, Matt Shumer of AI startup HyperWrite/…
another valiant attempt to get "promptfonder" into more common currency
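For context, the "correct itself" claim refers to reflection-style prompting: the model drafts an answer, critiques its own draft, then rewrites it. Here's a minimal sketch of that general idea, assuming a hypothetical `complete` callable as a stand-in for whatever LLM API you like; nothing here is Shumer's actual pipeline:

```python
from typing import Callable

def reflect(complete: Callable[[str], str], question: str) -> str:
    # Step 1: draft an answer.
    draft = complete(f"Answer this question:\n{question}")
    # Step 2: ask the same model to critique its own draft.
    critique = complete(
        f"Question: {question}\nDraft answer: {draft}\n"
        "Point out any mistakes in the draft answer."
    )
    # Step 3: ask for a corrected final answer, given the critique.
    return complete(
        f"Question: {question}\nDraft answer: {draft}\n"
        f"Critique: {critique}\nWrite a corrected final answer."
    )

# Toy demo with canned responses, so the sketch runs without any API:
if __name__ == "__main__":
    canned = iter(["Sydney", "Sydney is wrong; the capital is Canberra", "Canberra"])
    print(reflect(lambda _prompt: next(canned), "What is the capital of Australia?"))
```

Whether a model that confabulated the draft can reliably spot its own confabulations in the critique step is, of course, the entire question.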
Shumer credits Glaive AI (a company he invested in) for handling LLM special cases, like telling which number is bigger or how many times the letter “r” appears in the word “strawberry.” Those are the two examples Shumer gave VentureBeat.
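For scale, both of those "special cases" are one-liners in ordinary code, no fine-tuning required. A minimal Python illustration (not anything from Shumer's or Glaive's pipeline, just what the tasks look like outside an LLM):

```python
# Counting letters is deterministic and trivial for actual code.
print("strawberry".count("r"))  # 3

# So is telling which number is bigger, the other "special case" named.
print(max(9.9, 9.11))  # 9.9, a comparison LLMs famously fumble
```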
I've quite often pondered how cool it would have been to be a computer scientist 50-70 years ago, you know, one of the pioneers, the people on whose shoulders the entire tech sector now rests.
Think about all the advantages! Maybe I'd get to talk to Turing! Maybe I'd invent a foundational algorithm that would be in textbooks forever! Maybe I would've fucking died before this absolute blight on our domain happened!
Not a computer scientist? How the fuck do these guys get the money to have startups in the first place? It's eternally frustrating that dudes like this will keep failing upwards for their entire lives.
See, what he needs to do is get inside the grift loop and pivot back to crypto ahead of everyone else. That way he can be at the front of the pack instead of being yet another also-ran in the metaverse and AI grifts.