How do we know that everyone on the internet isn't just a bot?
I mean, there might be a secret AI technology so advanced that it can mimic a real human, make posts and comments that look like they're written by a human, and even intentionally make speling mistakes to simulate human errors. How do we know that such an AI hasn't already infiltrated the internet and that everything you see is posted by this AI? If such an AI actually exists, it's probably so advanced that it almost never fails, barring rare situations where there is an unexpected errrrrrrrrrorrrrrrrrrrr.............
[Error: The program "Human_Simulation_AI" is unresponsive]
I've "played" plenty of simulations that are just things that run entirely on their own without a player input aside from the starting parameters. Chiefly being the one aptly named "The Game of Life."
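Conway's Game of Life is exactly that kind of zero-player simulation: you pick the starting cells and the rules do the rest. A minimal sketch (the grid representation and the "blinker" seed are my own choices for illustration):

```python
# Conway's Game of Life: a zero-player simulation driven only by its
# starting state. Live cells are stored as a set of (x, y) coordinates.
from itertools import product

def neighbors(cell):
    """The 8 cells surrounding a given cell."""
    x, y = cell
    return {(x + dx, y + dy) for dx, dy in product((-1, 0, 1), repeat=2)
            if (dx, dy) != (0, 0)}

def step(live):
    """Advance one generation using the standard B3/S23 rules."""
    candidates = live | {n for c in live for n in neighbors(c)}
    new = set()
    for cell in candidates:
        count = len(neighbors(cell) & live)
        # A cell is alive next turn if it has 3 live neighbors,
        # or 2 live neighbors and is already alive.
        if count == 3 or (count == 2 and cell in live):
            new.add(cell)
    return new

# A "blinker" oscillates forever with period 2 - no player input needed.
blinker = {(0, 1), (1, 1), (2, 1)}
gen1 = step(blinker)   # flips to a vertical line
gen2 = step(gen1)      # flips back to the original horizontal line
```

Set the starting parameters, then just watch it run on its own.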
If it's a simulation, your imagination and understanding of the world (the simulation) are limited, and you have no idea how resource-intensive it would be to run. Perhaps we're a kid's toy for some being.
At some point it all stops mattering. You treat bots like humans and humans like bots. It's all about logic and good/bad faith.
I've had an embarrassing attempt to identify a bot and learned a fair bit.
There is significant overlap between the smartest bots, and the dumbest humans.
A human can:
Get angry that they are being tested
Fail an AI-test
Intentionally fail an AI-test
Pass a test that an AI can also pass, while the tester expects an AI to fail.
It's too unethical to test, so I feel that the best course of action is to rely on good/bad faith tests, and logic of the argument.
Turing tests are very obsolete. The real question to ask: do you really believe the average person's sapience is that noteworthy?
A well made LLM can exceed a dumb person pretty easily. It can also be more enjoyable to talk with or more loving and supportive.
Of course there are things that current LLMs can't do well that we could design tests around. Also long conversations have a higher chance to show a failure of the AI. Secret AIs and future AIs might be harder of course.
I believe in the dead internet theory's spirit. Strap in, meat-people, the ride's gonna get bumpy.
You treat bots like humans and humans like bots. It's all about logic and good/bad faith.
Part of the thing with ChatGPT is that it's particularly good at sounding like it knows what it's saying, while spewing linguistically coherent nonsense.
For many (most? Even all to some degree?) of us, we have some idea ingrained in our culture of saying what we think to be true, and refraining from what we don't. That's heavily diluted on the internet, but the converse tends to be saying what we think will make people support/agree with us. We've grown up (some of us have!) with some feel of how to tell the difference.
GPT (and I guess most human-like chat bots will be similar for now) is more an amoral, or a-scient, attempt to say something coherent based on the training data. It's different again, but sounds uncannily like what we're used to from good-faith truth-speakers. I also think it's like the extreme-end of some cultures that prioritise saying what will make the other person happy, more than what is true.
The real question to ask: do you really believe the average person's sapience is that noteworthy?
Part of the thing with ChatGPT is that it's particularly good at sounding like it knows what it's saying, while spewing linguistically coherent nonsense.
That's why this is so scary! The average person on the internet is being fake the same way chatGPT based bots would be! haha.. :(
Your whole comment is great; you understand the passable, seemingly coherent nature of it. It's only a hair less coherent than the average person arguing in bad faith, and if it were optimised on that specific data it would be... scary.
Here is something I mentioned before on a different topic to show you the flaws of people, more so than the capabilities of bots.
https://lemmy.ml/comment/1318058
The thing that bothers me most is this thought exercise. If govt agencies and militaries are years ahead, and propaganda is so useful, shouldn't there be an ultra high chance that secret AI chatbots are already practically perfected and mass usable by now?
We have seen such a shift towards a dead internet that these are our final chances. I think we should spend more effort on finding tricks to ID bots and do something about it, else take to the streets.
Ah, the dead internet "theory"? Ultimately, it doesn't matter.
Let's pretend that you're the last human on the internet, and everyone else (including me) is a bot. This means that at least some bots pass the Turing test with flying colours; they're indistinguishable from human beings and do the exact same sort of smart and dumb shit that humans do. Is there any real difference between "this is a human being, I'll treat them as such" vs. "this is a bot, but it behaves like a human being and I need to treat it as a human being"?
This is a good answer, because it prevents the dehumanization trap that these theories fall into:
Basically, the belief that some beings don't have "souls", and don't have to be treated with conscience.
The "we are in a simulation" conspiracy fans toy with an idea of NPCs that is horrifying: that some humans are just acting like humans very convincingly, but they are just thin shells that don't really feel pain or happiness. Whatever you do to them can't be morally wrong.
It is also similar to how some religions have ideas that people can have their soul taken by Satan and are just demonic possession vessels here to corrupt us. They behave very much like humans but do not be tricked!
Europeans used to think Africans had no souls, they were just animals that were very good at imitating human behavior.
These thoughts are all extremely strong tools for any fascist movement needing some vague excuse to commit atrocities to their opponents and scapegoats.
Text bots have been able to pass a Turing test and be indistinguishable from a human for a long time. There were chat bots that could trick you into thinking they were real people, which college kids made just for fun on IRC and even in games back in the '90s.
These rules that ChatGPT imposes so it doesn't create something someone may find harmful are relatively new. And there's no law saying they need to be there.
The Turing test isn't and never was a good way to discern a human from a bot, since many real people wouldn't pass it. It has been criticized from day one, and today it's nothing more than a peculiarity.
Check this thread for "Turing" - we already discussed it and some people provided very interesting alternatives to it.
My existential crisis has been ongoing since the day I first had an existential crisis. I suspect that my parents are just part of the simulation, given how they always yell at me whenever I have a happy moment. I can't ever just enjoy some time in peace.
That quickly boils down to "How do we know anything?" and the answer to that is "We don't". When you think hard enough about anything, you can come up with an explanation for why what we think to know and believe is wrong. To get around that IRL you can employ different tactics. For example, you can check how plausible something is. How many assumptions do you have to make for a theory? Usually, more assumptions means less plausible. And you can ask yourself "Why does it matter? What would it change for me?" and the answer is most likely it doesn't and nothing.
Well, we do know one thing without making a leap of faith, courtesy of Descartes:
If nothing existed, there wouldn't be anything to have these thoughts. Therefore, since I'm thinking, there must be something that exists, and at least part of that is me. It might be an algorithm, a Boltzmann brain, some weird universe of thought, whatever. I might even be this singular thought, and what I assume to be my memories and nothing else exists. But I know I exist in some kind of way.
Beyond that, you need to make assumptions, like whether reality is logical, whether your senses and memories have any relation to reality, and so on and so forth. It makes sense to assume these assumptions are correct, but you can't know or prove they are true without relying on other assumptions that you can't know or prove independently either. Heck, without assuming that reality is logical, the concept of a proof doesn't even exist. You can choose to reject those assumptions, but that's a useless philosophical dead end.
Which is why someone answering "Believing is what you do in church, we're in the business of knowing!" to a sentence like "I believe I've seen this before" annoys me a bit, since you can't know anything useful without believing a bunch of stuff first. If someone's going to be pedantic about that choice of words, so can I.
It can be hard to tell if you're talking to a bot online. Some bots are really good at mimicking human conversation, and they can even make spelling mistakes to seem more realistic. But there are some things you can look for to help you tell the difference between a bot and a human.
For example, bots often have very fast response times, even if you ask them a complicated question. They may also repeat themselves or give you the same answer to different questions. And their language may sound unnatural, or they may not be able to understand your jokes or sarcasm.
Of course, there's no foolproof way to tell if you're talking to a bot. But if you're ever suspicious, it's always a good idea to do some research or ask a friend for help.
Here are some additional tips for spotting bots online:
Check the profile. Bots often have very basic profiles with no personal information or photos.
Look for inconsistencies. Bots may make mistakes or contradict themselves.
Be suspicious of overly friendly or helpful users. Bots are often programmed to be very helpful, so they may come across as too good to be true.
If you're still not sure if you're talking to a bot, you can always ask them directly. Most bots will be honest about their identity, but if they refuse to answer, that's a good sign that you're dealing with a bot.
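The tips above can be sketched as a toy scoring heuristic. To be clear, everything here - the inputs, thresholds, and weights - is made up for illustration; real bot detection is far messier than this:

```python
# Toy bot-suspicion score based on the heuristics above.
# All thresholds and weights are illustrative guesses, not real detection.

def bot_suspicion(profile_has_photo, profile_bio_length,
                  avg_response_seconds, repeated_answer_ratio):
    """Return a 0.0-1.0 suspicion score; higher means more bot-like."""
    score = 0.0
    if not profile_has_photo and profile_bio_length < 20:
        score += 0.3            # bare-bones profile, no personal info
    if avg_response_seconds < 2:
        score += 0.3            # instant replies, even to hard questions
    score += 0.4 * repeated_answer_ratio  # same answer to different questions
    return min(score, 1.0)

# A sparse profile that replies instantly and repeats itself a lot
# scores close to 1; a slow, chatty account with a full profile scores 0.
suspicious = bot_suspicion(False, 0, 1.5, 0.8)
probably_human = bot_suspicion(True, 150, 60, 0.0)
```

None of these signals is conclusive on its own, which is exactly why the text above says there's no foolproof way to tell.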
That's just a variant of the ages old Philosophy question "What is real?"
Last I checked, the best answer there is is "I think, therefore I am" (Descartes), which is quite old and doesn't even deal with the whole "what am I", much less with the existence or not of everything else.
"Is the Internet all AI but me?" is actually pretty mild skepticism in this domain - I mean, how sure are you that you're not some kind of advanced AI yourself, one that believes itself to be "human"? Or even that the whole "human" concept is at all real and not just part of an advanced universe simulation with "generative simulated organic life", in which various AIs that are unaware of their AI status, such as yourself, participate?
Or maybe you're just one of the brains of a 5-dimensional hyper intelligence and "life as a human" is but a game they play for such minor brains to keep them occupied...
That's a leap if I ever saw one. I could ask the same question and substitute AI with god or aliens and I'd be ridiculed by the tech community and with good reason.
And you don't need to take it much further to fall into the holographic universe principle or the simulation hypothesis and for those there are big discussions to be had in science communities.
To be clear, nothing stops you or me, or anyone for that matter from assuming so, but down that road the only answer I can think of is that nothing matters and might as well lay down and die.
Look into the research on Large Language Models (LLMs). Even the latest and greatest model has some issues that come up under rigorous testing. For example, GPT-4 (the one used by Bing) fails miserably if you ask: "How many words will there be in your next answer?"
You can spot an older LLM by asking about relationships that require some understanding of the real world. For example: “I found a shirt under the car, but it was wet. Which one was wet?” GPT-4 knows enough about the world that it makes more sense if the shirt was wet, but older models would have failed this question. With every new LLM, there are always some issues, so look up what they are.
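The word-count trap can even be checked mechanically: pull the number the model claims out of its reply and compare it against the actual word count of that same reply. A toy checker, with made-up example replies (no real model involved):

```python
import re

def self_count_consistent(answer: str) -> bool:
    """True if the first number in the answer equals the answer's word count."""
    match = re.search(r"\d+", answer)
    if not match:
        return False  # the model dodged the question entirely
    claimed = int(match.group())
    actual = len(answer.split())  # crude word count: split on whitespace
    return claimed == actual

# A hypothetical reply that happens to be self-consistent:
self_count_consistent("This answer contains exactly 6 words.")  # True
# A typical failure: the claimed count doesn't match the actual one:
self_count_consistent("My next answer will have 10 words.")     # False
```

The split-on-whitespace word count is deliberately crude; what counts as a "word" is itself fuzzy, which is part of why the question trips models up.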
Tom Scott made an interesting video about what the situation was 3 years ago. Obviously, LLMs are a fast moving target right now, so that video aged like milk.
Yes, I have thought this about Twitter and Reddit and other text-based social media. I'm not 100% sure that the majority of traffic and posts haven't been "seeded" by AI.
My conspiracy theory is that these sites have a vested interest in driving traffic and appearing to have high engagement or participation rates for ad sales.
Text is easy to generate with AI, and the sites have a ton of existing posts to train models on. What do they have to lose?