Mmm, unironically sounds like me. According to my IQ test I had PhD-level intelligence at 18, and what am I doing at 24? Unemployed, playing video games, and crying.
Extremely narrow field of expertise ✔️
Misplaced confidence in its abilities outside its area of expertise ✔️
A mind filled with millions of things that have been read, and near zero from interactions with real people ✔️
An obsession with how many words can get published over the quality and correctness of those words ✔️
A lack of social skills ✔️
A complete lack of familiarity with how things work in the real world ✔️
I mean, GPT 3.5 consistently quotes my dissertation and conference papers back to me when I ask it anything related to my (extremely niche, but still) research interests. It’s definitely had access to plenty of publications for a while without managing to make any sense of them.
Alternatively, and probably more likely, my papers are incoherent and it’s not GPT’s fault. If 8.0 gets a tenure-track position, maybe it will learn to ignore the desperate ramblings of PhD students. Once 9.0 gets tenure, though, I assume it will only reference itself.
I know three people who dropped out of primary school and did quite well. They all ended up taking remedial studies later in life. Two were in trades and the other was a postie. All three were smart as fuck. Just because life gets in the way of going to school doesn't mean a person is dumb, just uneducated.
It would have to actually have intelligence, period, for it to have PhD-level intelligence. These things are not intelligent. They just throw everything at the wall and see what sticks.
You are correct, but there's a larger problem with intelligence: we don't have a practical definition, and we keep shifting the goalposts. Then there's always the question of the philosophical zombie: if someone acts like a human and has a human body, you won't be able to tell whether they really have intelligence, so we'd only need to put an LLM into a humanlike body (it's not that simple, but you get the point).
reminds me of this, although the comic is on a different matter
All aboard the hype train! We need to stop using the term "AI" for advanced autocomplete. There is not even a shred of intelligence in this. I know many of the people here already know this, but how do we get this message to journalists?! The amount of hype being repeated by respectable journalists is sickening.
People have been calling literal pathfinding algorithms in video games AI for decades. This is what AI is now, and I think it's going to be significantly easier to just accept this and clarify when we're talking about actual intelligence than to fight the already-established language.
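For what it's worth, this is the level of "AI" I mean: a toy breadth-first-search pathfinder, the kind of thing a game enemy might use to chase the player. Everything here (the grid encoding, the function name) is made up purely for illustration.

```python
from collections import deque

def find_path(grid, start, goal):
    """Toy 'game AI': breadth-first search over a 2D grid.
    grid[r][c] == 0 means walkable, 1 means a wall."""
    rows, cols = len(grid), len(grid[0])
    queue = deque([start])
    came_from = {start: None}
    while queue:
        current = queue.popleft()
        if current == goal:
            # Walk back through came_from to rebuild the path.
            path = []
            while current is not None:
                path.append(current)
                current = came_from[current]
            return path[::-1]
        r, c = current
        for step in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = step
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0 and step not in came_from:
                came_from[step] = current
                queue.append(step)
    return None  # no path exists

print(find_path([[0, 0, 1], [1, 0, 0], [1, 1, 0]], (0, 0), (2, 2)))
# [(0, 0), (0, 1), (1, 1), (1, 2), (2, 2)]
```

Nobody ever claimed that was intelligent in the human sense; we just needed a word for the computer player, and "AI" stuck.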
While you're not wrong, I don't ever recall people en masse believing a game AI was truly intelligent. Everyone was always aware of the truth. There just isn't a great name for the computer players. I think it's an important distinction here because people do believe ChatGPT is intelligent.
Well, the “journalists” have not been replaced. But much of the content-creation industry was never really journalism in the first place, and it has, as you say, started to be replaced.
I know many of the people here already know this, but how do we get this message to journalists?!
Journalists have this information, but articles about it probably generate 10% of the clicks, shares, and eyeballs (read: ad revenue) that either the hype or the scaremongering does.
No, it won't. At some point some AI will, but that point is still far away.
I'm sure it'll know how to string words and sentences together real nice, even to the point where it makes sense. It still won't have a clue what it's talking about, and it still won't understand basic concepts, because "understanding" requires a whole lot more than an advanced ability to push words together.
You know the Chineese? They talk about this ChatPT 7. But we Americans. My uncle, very smart man. Smartest in every room except on Thanks Giving. I always had Thanks Giving and my Turkey, everyone loved my Turkey. He said we will soon have Chat 8 and the Chineese they know nothing like it.
What a bunch of bullshit. I recently asked ChatGPT to do a morphological analysis of some very simple sentences in a Native American language, and it gave absolute nonsense as an answer.
And let's be clear: it was an elementary linguistics task, something I learned to do on my own just by taking a free course online.
Yesterday, I asked it to help me create a DAX measure for an Excel pivot table. The answers it gave were completely wrong. Each time, I would tell it the error that Excel was displaying and it would respond with "Sorry about that. You can't use that function there for [x] reasons."
So it knows why a combination of DAX functions won't work, but it recommends them anyway.
That's real fucking useful.
So copying everyone else’s work and rehashing it as your own is what makes a PhD level intelligence? (Sarcastic comments about post-grad work forthcoming, I’m sure)
Unless AI is able to come up with original, testable, verifiable, repeatable, previously unknown associations, facts, theories, etc. of sufficient complexity, it’s not PhD-level… and using big words doesn’t count either.
I think they had some specific metric in mind when they said this. But on the other hand, this is kind of a "you are here" situation: AI can't do that now, but there's no telling they can't make it do that later. It would probably be a much more useful AI at that point, too.
Eh. Maybe. But don’t discount those PhDs who were pushed through the process because their advisors were just exhausted by them. I have known too many 10th-year students. They weren’t determined or hardworking. They simply couldn’t face up to their shit decisions, bad luck, or intellectual limits.
A scientist says Britney is really pretty, the press reports that the scientist thinks Britney is hot, and Lemmy gets mad because her core temperature is the same as most humans'.
What they're really claiming is that it'll have PhD-level proficiency at certain tasks; that is, if you asked an average PhD student to code a pathfinding algorithm, GPT would produce output of a similar level. Likewise, if you wanted it to write an explanation of centrifugal force, it could output an essay of the same quality as the average PhD student's.
They're not saying that it'll have agency, emotion, or self-awareness. They're not saying it'll have the colloquial sense of intelligence or wisdom; they're using intelligence in its normal use in animal biology and computer science, where it refers to an organism changing its behavior in response to stimulus in a way that benefits the organism. A worm moving away from light because this will increase its survivability is intelligence; a program selecting a word order that earns it a higher score is intelligence.
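To make that last clause concrete, here's a toy sketch (purely illustrative; the scoring table is hand-written and this is not how any real model is trained): a program that "selects a word order" because a made-up score rewards it.

```python
from itertools import permutations

def score(words, pair_scores):
    # Sum a made-up score for each adjacent pair of words.
    return sum(pair_scores.get((a, b), 0) for a, b in zip(words, words[1:]))

def best_order(words, pair_scores):
    # "Intelligence" in the thin behavioral sense: pick whichever
    # ordering of the words earns the highest score.
    return max(permutations(words), key=lambda p: score(p, pair_scores))

# Hand-written pair scores standing in for learned statistics.
pair_scores = {("the", "cat"): 2, ("cat", "sat"): 2, ("sat", "down"): 1}
print(best_order(["sat", "the", "down", "cat"], pair_scores))
# ('the', 'cat', 'sat', 'down')
```

By that behavioral definition, the program above is "intelligent" in the same thin sense the worm is; nobody is claiming it understands anything.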
Ah right, everyone was wrong the whole time. See, everyone! This right here makes all of it make sense! We can all stop making fun of the statement for being ridiculous, because clearly we are just bad readers. Thank you, man likely wearing a cape!
I like how they have no roadmap for achieving artificial general intelligence (apart from "let's train LLMs with a gazillion parameters and the equivalent of the yearly energy consumed by ten large countries"), yet they pretend ChatGPT 4 is only two steps away from it.
Hard to make a roadmap when people can't even agree on what the destination is, let alone how to get there.
But if you have enough data on how humans react to stimuli, and you have a good enough model, then you will be able to train it to behave exactly like a human. The approach is sound, even though in practice there prooobably isn't enough usable training data in the world to reach true AGI; the models are already good enough to be used for certain tasks, though.
Thing is, we're not feeding it how humans react to stimuli. For that you'd need it hooked up to a brain directly. It's too filtered and biased by getting text only; this approach naively ignores things like memory and assumes text messages exist in a vacuum. Throwing a black box into an analytical prediction machine only works as long as you're certain it'll generally throw out the same output for the same input, not if your black box can suddenly experience five years of development and emerge a different entity. It skips too many steps to become intelligent; I mean, it literally skips the entire process between reading and writing.
The approach is not sound when all the other factors are considered. If AI continues along this path, it's likely that big AI companies will need to usurp the next possible tech breakthroughs, like quantum computing and fusion energy, just to keep growing and producing more profit, instead of those technologies being used for better purposes (cheaper and cleaner household energy, scientific advances, etc.). All things considered, excelling at image analysis, creative writing, and digital art won't be worth all the damage it's going to cause.
Is it weird that I still want to go for my PhD despite all the feedback about the process? I don’t think I’ve ever met a PhD or a candidate who enthusiastically said “do it!”
It’s a lot of fucking work. If you enjoy hard work and learning about the latest advancements in your field, and you can handle disappointment and criticism well, then it’s something to look into.
No, not weird at all. PhDs are pain, but certain people like the pain. If you're good at handling stress, and also OK with working in a fast-paced, high-impact environment (for real, not business-talk BS), then it may be the right decision for you. The biggest thing I would say is that you should really, really think about whether this is what you want, since once you start a PhD, you've locked the next 6 years of your life into it with no chance of getting out.
Edit: Also, you need to have a highly sensitive red-flag radar. As a graduate student, you are highly susceptible to abuse from your professor. There is no recourse for abuse. The only way to avoid abuse is by not picking an abusive professor from the get-go. Which is hard, since professors obviously would never talk badly about themselves. Train that red-flag radar, since you'll need to really read between every word and line to figure out whether a professor is right for you.
I generally tell people the only reason to do it is if your career pursuits require it, and even then I warn them away unless they're really sure. Not every research advisor is abusive, but many are. Some without even realizing it. I ended up feeling like nothing more than a tool to pump up my research advisor's publication count.
It was so disillusioning that I completely abandoned my career goal of teaching at a university because I didn't want to go anywhere near that toxic culture again. Nevertheless, I did learn some useful skills that helped me pivot to another career earning pretty good money.
So I guess I'm saying it's a really mixed bag. If you're sure it's what you want, go for it. But changing your mind is always an option.
no it's not. but you should know what you're getting into.
in the beginning of my PhD i really loved what i was doing. from an intellectual point of view i still do. but later, i.e. after 3 years doing a shitty postdoc, i realized that I was not cut out for academia but nevertheless loved doing science.
however, i was lucky to find a place in industry doing what i like.
so i guess my 2c is: think about what comes after the PhD and work towards that goal. a PhD is usually not a goal in itself. hth
It's like being drafted into a war where you only receive vague orders and you slowly realize what the phrase "war is a racket" means. You suffer and learn things that you didn't plan on learning.
If you have a good understanding of what grad school actually is, you know it’s not going to be college+, and you’re still excited? Go for it! Just go in with the attitude that this is the start of a career path (not school) with many branches along the way. Most people you’ll work with will act like your options are 1) aim for TT at an R1 or 2) cut your losses and go into industry. Those are both legit paths, but pay attention to what you’re loving and hating about the experience.
Maybe you absolutely love teaching or mentorship or grant-writing or data analysis or giving conference talks or science communication or managing a lab or any of the other billion things you have to be responsible for at some point. There are career paths between the extremes that can let you do the stuff you actually like doing, and they exist both in and outside of academia. If you go in letting yourself get excited about whatever the hell you actually get excited about, you can figure out what the path you actually want could look like and prioritize the things that don’t make you miserable.
A PhD who voluntarily pursued an instructional faculty track at an R1, where I never again have to put the needs of my students and my love of pedagogy in the back seat behind desperately looking for research funding because of publish-or-perish, even though I have at bare minimum 3 months a year to devote entirely to whatever research I am excited about in the moment… or to playing video games if I prefer.
The fact that I have a PhD even though I knew soon after I began that I wouldn't use it, thus losing years of my life, is proof that I'm dumb as a rock. Fitting for ChatGPT.
Oh... that's the same person (in the image at least) who said "Yeah AI is going to take those creative jobs, but those jobs maybe shouldn't have existed in the first place".
I'm so tired of repeating this ad nauseam. No, it's not going to take your job. It's hype-train bullshit full of grifters. There is no intelligence or understanding, nor have we come anywhere close to achieving that. That is still entirely within the realm of science fiction.
ChatGPT is already taking people’s jobs. You overestimate the complexity of what some people get paid for.
Generative AI cannot do anything on its own. However, it is a productivity amplifier in the right hands. What those “more productive” people do is reduce the demand for other labour.
Chatbots are performing marketing communication, marketing automation, cloud engineering, simple coding, recruitment screening, tech support, security monitoring, editorial content and news, compliance verification, lead development, accounting, investor relations, visual design, tax preparation, curriculum development, management consulting, legal research, and more. Should they be? Many (I am guessing you) would argue no. Are they though? Absolutely.
All of the above is happening now. This train is going to accelerate before it hits equilibrium. The value of human contribution is shifting but not coming back to where it was.
Jobs will be created. Jobs are absolutely being lost.
You are correct that ChatGPT is not intelligent. You are right that it does not “understand” anything. What does that have to do with taking people’s jobs? There are many, many jobs where intelligence and understanding are under-utilized or even discouraged. Boiler-plate content creation is more common than you think.
People have the wrong idea about how advanced AI has to be to take people’s jobs.
The loom was not intelligent. It did not “understand” weaving. It still eliminated so many jobs that human society was altered forever and so significantly that we are still experiencing the effects.
As an analogy ( not saying this is how the world will choose to go ), you do not need a self-driving car that is superior to humans in all cases in order for Uber to eliminate drivers. If the AI can handle 95% of cases, you need 5 drivers for 100 cars. They can monitor, supervise, guide, and fully take over when required.
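A quick back-of-the-envelope sketch of that analogy (the numbers are just the ones above, and the concurrency assumption is mine):

```python
# Hypothetical staffing math for the Uber analogy above.
fleet_size = 100          # cars on the road
handled_by_ai = 0.95      # share of driving the AI covers on its own

# Assume interventions are spread out enough that one remote operator
# can cover all the "human needed" time for 1 / (1 - handled_by_ai) cars.
cars_per_operator = 1 / (1 - handled_by_ai)   # 20 cars per operator
operators_needed = fleet_size / cars_per_operator

print(operators_needed)   # 5.0 -> five drivers supervising a hundred cars
```

The point isn't the exact numbers; it's that the humans who remain are supervisors of the machine's output rather than one-to-one replacements.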
Many fields will be like this. I do not need an AI with human level intelligence to get rid of the Marcom dept. I need one really skilled person to drive 6 people’s worth of output using AI. How many content creators and headline writers do I need to staff an online “news” room? The lack of person number two may surprise you.
Getting rid of jobs is not just a one for one replacement of every individual with a machine. It is a systemic reduction in demand. It is a shifting of geographic dependence.
Many of the tasks we all do are less novel and high-quality than we think they are. Many of us can be “largely” replaced and that is all it takes. We may not lose our jobs but there will certainly be many fewer new jobs in certain areas than there would have been.
To add to your comment, there's also corporations' willingness to make things more precarious as long as it gets cheaper to run and people keep consuming, so the situation might be even worse. In your Uber example, they could simply not care about the 5%, stop providing them the service, and go full self-driving.
I can't imagine looking at the world and thinking we need more industry. Also, I know a lot of PhDs. Knowing a lot of things about a particular subject in no way correlates with intelligence.
If AI were that capable, then using human workers would eventually become cost-prohibitive. If we're still stuck having to work to live under a capitalist system by then, there are gonna be serious problems. A post-labor economy doesn't need to charge for even a modestly comfortable standard of living, and the overwhelming majority of people will go looking for things to do no matter how many politicians swear otherwise.