LLMs spew hallucinations by their nature. But what if you could get the LLM to correct itself? Or if you just claimed your LLM could correct itself? Last week, Matt Shumer of AI startup HyperWrite/…
[Image caption: another valiant attempt to get "promptfonder" into more common currency]
Shumer credits Glaive AI, a company he invested in, for handling the LLM special cases: telling which of two numbers is bigger, or how many times the letter “r” appears in the word “strawberry.” Those are the two examples Shumer named to VentureBeat.
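For a sense of scale, here's a minimal Python sketch of the two tasks Shumer named (the function names are mine, purely illustrative); neither needs a model at all:

```python
# The two "special cases" Shumer cited, as ordinary code.
# Hypothetical helper names; any language does these in one line.

def bigger(a: float, b: float) -> float:
    """Return the larger of two numbers."""
    return max(a, b)

def count_letter(word: str, letter: str) -> int:
    """Count how many times `letter` appears in `word`."""
    return word.count(letter)

print(bigger(9.11, 9.9))                # 9.9  (the comparison LLMs famously flub)
print(count_letter("strawberry", "r"))  # 3    (the count LLMs famously flub)
```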
I've quite often pondered how cool it would have been to be a computer scientist 50–70 years ago, you know, one of the pioneers, the people on whose shoulders the entire tech sector now rests.
Think about all the advantages! Maybe I'd get to talk to Turing! Maybe I'd invent a foundational algorithm that would be in textbooks forever! Maybe I would've fucking died before this absolute blight on our domain happened!