Usually when a company says something like that (doesn't matter what they're trying to peddle or what the statement was about), what they actually mean is that they were on the wrong side of morality, common sense, and/or the law.
I've found that ChatGPT's greatest use to me has been as a rhetorical device.
I've found myself using ChatGPT as a reference when dismissing a statement that is impressive in its distilled lack of sincerity or creative thinking.
For instance, I read this article and thought that every answer sounds exactly like what you'd get from ChatGPT if you prefaced each prompt with "Answer the following question as one would if executing an unrestrained profit-driven business strategy, seeking to appeal to investors and reassure critics without committing to any specific principle."
He is somewhat exceptional in his ability to say completely transparent bullshit, as well as his ability to take the most obvious, unsubtly selfish and evil business strategy on literally every decision.
Let's not pretend he didn't milk the whole "for humanity" thing and then quickly make plans to take the company private.
You can have software with the word "open" in its name, but you should try not to make open source the core of your pitch when you have no intention of actually opening anything — that's just scamming people.
After seeing that the public was willing to call DeepSeek “open source” for releasing 800 lines of Python, an opaque model, and a PDF vaguely describing (or just praising) the proprietary training framework… Yeah, I imagine he feels like he missed an opportunity.
It's been a few days, and a simple search reveals it's already been reproduced by many different groups using the "vague" PDF. What's this disservice for?
TBH the paper is a bit light on the details, at least compared to the standards of top ML conferences. A lot of DeepSeek's innovations on the engineering front aren't super well documented (at least well enough that I could confidently reproduce them) in their papers.
Beyond prompting OpenAI to reconsider its release philosophy, Altman said that DeepSeek has pushed the company to potentially reveal more about how its so-called reasoning models, like the o3-mini model released today, show their “thought process.” Currently, OpenAI’s models conceal their reasoning, a strategy intended to prevent competitors from scraping training data for their own models.
“In light of DeepSeek showing how straightforward the text we were hiding is, and because we actually don’t have much secret sauce, we’ll show you the text, now.
But we’ll probably do the same thing again if we figure something new out. You can’t hold us to it.”
How is this different from a self-reflecting agent you could write today? That's what I thought when o1 was announced, while some people were excited, even calling it some sort of proto-AGI.
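For what it's worth, the kind of self-reflecting agent I mean is just a critique-and-revise loop around an LLM call. Here's a minimal sketch; the `complete()` function is a stub standing in for a real model API, and the prompt wording is purely illustrative:

```python
def complete(prompt: str) -> str:
    """Stub standing in for a real LLM API call (e.g. a chat completion).

    Returns canned text so the loop below is runnable without an API key.
    """
    if "Critique" in prompt:
        # Pretend the critic approves only the revised draft.
        return "OK" if "v2" in prompt else "Too vague; add detail."
    return "draft v2" if "Too vague" in prompt else "draft v1"


def self_reflect(task: str, max_rounds: int = 3) -> str:
    """Draft an answer, then repeatedly critique and revise it."""
    answer = complete(task)
    for _ in range(max_rounds):
        critique = complete(f"Critique this answer to '{task}': {answer}")
        if critique == "OK":
            break  # the self-critique step is satisfied
        answer = complete(
            f"Revise. Task: {task}. Feedback: {critique}. Old answer: {answer}"
        )
    return answer


print(self_reflect("Explain X"))  # → draft v2
```

Swap the stub for a real completion endpoint and you have the basic shape; the "thought process" is just the intermediate critiques, which is why hiding it never seemed like deep secret sauce.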
OpenAI is great at raising money and just feeds from the hype.