GPT-4o (“o” for “omni”) is a step towards much more natural human-computer interaction—it accepts as input any combination of text, audio, and image and generates any combination of text, audio, and image outputs. It can respond to audio inputs in as little as 232 milliseconds, with an average of 320 milliseconds, which is similar to human response time in a conversation. It matches GPT-4 Turbo performance on text in English and code, with significant improvement on text in non-English languages, while also being much faster and 50% cheaper in the API. GPT-4o is especially strong at vision and audio understanding compared to existing models.
Prior to GPT-4o, you could use Voice Mode to talk to ChatGPT with latencies of 2.8 seconds (GPT-3.5) and 5.4 seconds (GPT-4) on average. To achieve this, Voice Mode is a pipeline of three separate models: one simple model transcribes audio to text, GPT-3.5 or GPT-4 takes in text and outputs text, and a third simple model converts that text back to audio. This process means that the main source of intelligence, GPT-4, loses a lot of information—it can’t directly observe tone, multiple speakers, or background noises, and it can’t output laughter, singing, or express emotion.
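A toy sketch of that cascade, just to make the information loss concrete (every name here is a placeholder, not OpenAI's internals):

    # Hypothetical three-model Voice Mode cascade; each function is a stand-in.

    def transcribe(audio: bytes) -> str:
        # STT: audio in, plain text out. Tone, multiple speakers, and
        # background noise are all discarded right here.
        return "hello there"  # stand-in for a real speech recognizer

    def chat(text: str) -> str:
        # The LLM only ever sees the transcript, so it can only reason about text.
        return f"You said: {text}"  # stand-in for GPT-3.5/GPT-4

    def synthesize(text: str) -> bytes:
        # TTS: reads the reply in a fixed voice; it can't laugh or sing on cue.
        return text.encode("utf-8")  # stand-in for a real synthesizer

    def voice_mode(audio_in: bytes) -> bytes:
        # Three models, two lossy hand-offs, and the latency of all three summed.
        return synthesize(chat(transcribe(audio_in)))

    print(voice_mode(b"\x00\x01"))  # -> b'You said: hello there'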
GPT-4o’s text and image capabilities are starting to roll out today in ChatGPT. We are making GPT-4o available in the free tier, and to Plus users with up to 5x higher message limits. We'll roll out a new version of Voice Mode with GPT-4o in alpha within ChatGPT Plus in the coming weeks.
I've been playing with the Mistral 7B models. About the most my hardware can reasonably run... so far. Would love to add vision and voice, but I'm just happy it can run.
I have this running at home on a used R630 (CPU only): oobabooga/automatic1111 for the LLM/SD backends, vosk + mimic3 for STT/TTS, and a little custom Python to tie it all together. I certainly don't have latency as low as theirs, but it's definitely conversational when my sentences are short enough.
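For anyone curious, the glue is roughly the following (the model name, paths, and the oobabooga endpoint are illustrative assumptions about a typical setup, not my exact config):

    # Minimal listen -> think -> speak loop: vosk for STT, oobabooga's
    # OpenAI-compatible API (enabled with --api) for the LLM, mimic3 for TTS.
    import json, subprocess, wave

    import requests
    from vosk import Model, KaldiRecognizer

    def listen(wav_path: str) -> str:
        # vosk STT: feed PCM frames from a WAV file, then read the final transcript
        wf = wave.open(wav_path, "rb")
        rec = KaldiRecognizer(Model("vosk-model-small-en-us-0.15"), wf.getframerate())
        while data := wf.readframes(4000):
            rec.AcceptWaveform(data)
        return json.loads(rec.FinalResult())["text"]

    def think(prompt: str) -> str:
        r = requests.post(
            "http://127.0.0.1:5000/v1/chat/completions",
            json={"messages": [{"role": "user", "content": prompt}], "max_tokens": 80},
        )
        return r.json()["choices"][0]["message"]["content"]

    def speak(text: str, out_path: str = "reply.wav") -> None:
        # mimic3's CLI writes a WAV to stdout
        with open(out_path, "wb") as f:
            subprocess.run(["mimic3", "--voice", "en_US/vctk_low", text],
                           stdout=f, check=True)

    speak(think(listen("question.wav")))

Capping max_tokens is a big part of what keeps the turnaround conversational on CPU.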
I can't tell if you are for real or joking with those concatenations of letters. Have you tried the new Oongaboonga123? I hear it's got great support for bpm°C
Maybe this is wishful thinking, but this, at first glance, seems like a sign that we're already entering the LLM plateau. Like when phones got to the point where each new version is just more cameras, a smoother UI, and harder glass.
It's an end-user plateau for the moment, but there is still a ton going on under the hood. From the outside it may not look like things are moving, but we've gone from the Model T to the Chevy Bel Air fairly quickly, and while the difference is huge, the engineers are still trying to get us to Bugatti Veyron level. Until then, we are going to have a long "'80s and '90s" period of sameness.
At that same point 18 months ago, the iPhone 14 was available. Now we have the iPhone 15.
People are used to LLMs/AI developing much faster, but you really have to keep in perspective how different this tech was 18 months ago. Comparing LLM and smartphone plateaus is just silly at the moment.
Yes, they've been refining the GPT-4 model for about a year now, but we've also got major competitors in the space that didn't exist 12 months ago. We got multimodality that didn't exist 12 months ago. Sora is mind-bogglingly realistic; it didn't exist 12 months ago.
GPT-5 is just a few months away. If 4->5 is anything like 3->4, my career as a programmer will be over in the next 5 years. GPT-4 already consistently outperforms the college students I help, and it can often match junior developers in terms of reliability (though with far more confidence, which is obviously problematic). I don't think people realize how big of a deal that is.
There's a basic problem with replacing human experts with AI. Where will they get their info from with no one to scrape? Other AI-generated content?
They can't learn anything on their own and are just "standing on the shoulders of giants". These companies will fire their software developers, only to hire them back as AI trainers.
It's not about wanting it to stop; it's about getting it to maturity so we can get past this phase of buzzwords and misleading marketing, and find out what the tech can actually be useful for.
This is not just some technology, but something that may lead to a true artificial intelligence, with all the far-reaching consequences. It's like nukes and the Manhattan Project.
If we had a feasible way to prevent the birth of true AI, I am sure most would want to stop just before it becomes sentient and spreads to every network-connected device in the world.
If anything, them making this version available for free to everyone indicates that there is a big jump coming sooner rather than later.
Also, whatever is going on behind the performance boost with Claude 3 and now GPT-4o on the leaderboards, in parallel with personas, should not be underestimated.
Edit: after having a chance to look more into the details, holy shit, we are unprepared for what's around the corner. What this approach even means for things like recent trends in synthetic data is mind-blowing.
They are making this free because they desperately need the new data formats. This is so cool.
Worth listening to a podcast by Ed Zitron, who covers this exact thing.
I work in analytics consulting, and aside from some relatively straightforward text classification and parsing, there really are very few instances where AI is useful. He actually made the case that AI is useful to people selling AI. But these chatbots are mostly useless bullshit.
I disagree. The real news is that the free model will now search the internet for up-to-date answers, and for calculations it will write and execute a Python script, then show you the result.
Paid users of ChatGPT have had those features for months, and they were a massive step forward in terms of how often the AI provides accurate answers.
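For the curious, the "write and execute a Python script" feature boils down to a tool-use pattern roughly like the sketch below (a hypothetical illustration, not OpenAI's actual sandbox):

    import subprocess, sys, tempfile

    def run_generated_code(code: str, timeout: float = 5.0) -> str:
        # Run model-generated Python in a separate process and capture its
        # output. A real sandbox also restricts filesystem and network access;
        # this sketch only isolates the process and enforces a timeout.
        with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
            f.write(code)
            path = f.name
        result = subprocess.run([sys.executable, path],
                                capture_output=True, text=True, timeout=timeout)
        return result.stdout if result.returncode == 0 else result.stderr

    # Instead of guessing at arithmetic, the model emits code and the harness
    # shows the user the exact result:
    print(run_generated_code("print(2**100)"))

The accuracy win comes from delegating the part LLMs are bad at (exact calculation) to an interpreter.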
But can it respond intelligently? Does it actually think to look up information to answer questions? Or does it still hallucinate answers? Because if it does, it's still useless for all of the things people seem to think it's good for. We need to idiot-proof this damn thing, because all of the idiots are using it.
Just yesterday I was faced with someone complaining because something that was "supposed" to work didn't. They proceeded to describe a function they wanted to use that didn't exist. Finally it came out that it was what GPT told them to do... Sigh...
1. Privacy Violations:
Unrestricted use of AI for surveillance will lead to widespread invasion of privacy, as individuals' activities will be monitored without their consent.
2. Discriminatory Practices:
Biases in AI algorithms will result in discriminatory practices, unfairly targeting specific groups based on factors like race, gender, and religion.
3. Abuse of Power:
Governments or other entities will abuse AI surveillance capabilities to suppress dissent, control populations, and violate human rights.
4. Security Risks:
AI surveillance systems will be subverted and manipulated, leading to further privacy breaches, and the resulting misinformation will lead to absolute despotism.
5. Lack of Accountability:
No one will ever be held accountable for any of this.
And those are still understatements.
2nd draft:
Question: Describe the worst despotic society facilitated by artificial intelligence.
Answer: (( ... bad things ... ))
It's not a yawn, but not because it's great. It's because it'll be around just long enough to create reliance on it and ruin many things, and then the people who have become reliant will find themselves having to unruin the many ruined things without the crutch to help them.
Or maybe I'm being the next iteration of the schoolteacher or parent who said you won't always have a calculator in your pocket.
But then, a calculator doesn't need a terabyte of RAM. We're a ways off that being consumer-affordable. If past consumer RAM size trends are anything (and the only thing) to go by, a portable LLM would be a 2040s or 2050s expectation.
Assuming that you'd be allowed to have the terabyte of data for nothing, anyway. Exorbitant subscription models are likely to be the norm by then.
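Back-of-the-envelope on that timeline, under assumed numbers (roughly 16 GB typical in a consumer machine in 2024, and a doubling every 3-5 years):

    import math

    current_gb, target_gb = 16, 1024               # ~1 TB for a local frontier model
    doublings = math.log2(target_gb / current_gb)  # 6 doublings

    for years_per_doubling in (3, 4, 5):
        print(f"~{2024 + doublings * years_per_doubling:.0f}")
    # prints ~2042, ~2048, ~2054: squarely in the 2040s-2050s window above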