If my employer is anything to go by, much of it is just unimaginative businesspeople who are afraid of missing out on what everyone else is selling.
At work we were instructed to shove ChatGPT into our systems about a month after it became a thing. It makes no sense in our system, and many of us advised management it was irresponsible, since it gives people advice on very sensitive matters without any guarantee that advice is any good. But no matter, we had to shove it in there, with small print to cover our asses. I bet no one even uses it, but sales can tell customers the product is "AI-driven".
Yes, I'm getting some serious dot-com bubble vibes from the whole AI thing. But the dot-com boom produced Amazon, and every company is basically going all-in, hoping they'll be the next Amazon. In the end most will end up like pets.com, but it's a risk they're willing to take.
A lot of it is follow-the-leader type bullshit. Companies in areas where AI is actually beneficial have already been implementing it for years, quietly, because it isn't something new or exceptional. It's just the tool you use for solving certain problems.
Yeah, AI can make some products better, but most of the products using it these days don't actually need it. It's annoying to use products that actively shovel in AI when they don't need it.
I tried to find the advert but I see this on YouTube a lot - an Adobe AI ad which depicts, without shame, AI writing out a newsletter/promo for a business owner's new product (cookies or ice cream or something), showing the owner putting no effort into their personal product and a customer happily consuming because they were attracted by the thoughtless promo.
How are producers/consumers okay with everything being so mediocre??
How are producers/consumers okay with everything being so mediocre??
I'm not. My particular beef is with plastics and toxic materials and chemicals being ubiquitous in everything I buy. It's a systemic problem that I can do almost nothing about, apart from making things myself out of raw materials.
How are producers/consumers okay with everything being so mediocre??
"You're always trying to make everything just a little bit worse so that you can feel good about having a lot more of it. I love it. It's so human!" - The Good Place
My doorbell camera manufacturer now advertises their products as using, "Local AI" meaning, they're not relying on a cloud service to look at your video in order to detect humans/faces/etc. Honestly, it seems like a good (marketing) move.
As I mentioned in another post, about the same topic:
Slapping the words “artificial intelligence” onto your product makes you look like those shady used cars salesmen: in the best hypothesis it’s misleading, in the worst it’s actually true but poorly done.
They really aren't. Go ask about something in your area of expertise. At first glance, everything will look correct and in order, but the more you read the more it turns out to be complete bullshit. It's good at getting broad strokes but the details are very often wrong.
Now imagine someone that doesn't have your expertise reading that answer. They won't recognize those details are wrong until it's too late.
Customers worry about what they can do with it, while investors and spectators and vendors worry about buzzwords. Customers determine demand.
Sadly, what some of those customers want is to somehow improve their own business without thinking, and then they too care about buzzwords. That's how the hype spreads.
There are different types of people in the market. The informed ones hate AI, and the uninformed love it. The informed ones tend to be the cornerstones of businesses, and the uninformed ones tend to be in charge.
So we have... All this. All this nonsense. All because of stupid managers.
But what if it actually is magic this time? Just this once!? And we miss the hype train?! (This is a sarcastic impression of real conversations I have had.)
I refuse to use Facebook anymore, but my wife and others do. Apparently the search box is now a Meta AI box, and it pisses them off every time. They want the original search back.
More like "instead of making something that gets the job done, expect our unfinished product to complain and not do whatever it's supposed to". Or just plain false advertising.
Either way, not a good look and I'm glad it's not just us lemmings who care.
LLM based AI was a fun toy when it first broke. Everyone was curious and wanted to play with it, which made it seem super popular. Now that the novelty has worn off, most people are bored and unimpressed with it. The problem is that the tech bros invested so much money in it and they are unwilling to take the loss. They are trying to force it so that they can say they didn't waste their money.
Honestly, they're still impressive and useful. It's just hype-train overload, plus companies trying to implement them in areas where they either don't fit or don't work well enough yet.
Even in areas where they would fit it's really annoying how some companies are trying to push it down our throats.
It's always some obnoxious UI element screaming its 3 example questions at me, and I always sigh and think: I have to assume you can only answer these 3 particular questions, and why would I ask those? When I ask UI questions I expect precise answers, so why would I want to use AI for that?
I have no doubt that LLM's have more uses than I can think of, but come on...
I'm happy for studies like this. People who are trying to smear their AI all over our faces need to calm, the f..k, down.
Many of us who are old enough saw it as an advanced version of ELIZA and used it with the same level of amusement until that amusement faded (pretty quick) because it got old.
If anything, they are less impressive because tricking people into thinking a computer is actually having a conversation with them has been around for a long time.
So you want to tell me they all spent billions and built huge data centres that suck more power than a small country so we can all play with it, generate some cringy smut, and then toss it away?
This is kinda insane if that’s how it will play out
I agree with this; my sentiments exactly. We're getting AI pushed at us from every direction, and we really never asked for it. I like to use it for certain things, but I go to it when needed. I don't want it in everything, at least personally.
They've overhyped the hell out of it and slapped those letters on everything including a lot of half baked ideas. Of course people are tired of it and beginning to associate ai with bad marketing.
This whole situation really does feel dotcommish. I suspect we will soon see an ai crash, then a decade or so later it will be ubiquitous but far less hyped.
Thing is, it already was ubiquitous before the AI "boom". That's why everything got an AI label added so quickly, because everything was already using machine learning! LLMs are new, but they're just one form of AI and tbh they don't do 90% of the stuff they're marketed as and most things would be better off without them.
What did they even expect, calling something "AI" when it's no more "AI" than a Perl script determining whether a picture contains more red color than green or vice versa.
Anything making some kind of determination via technical means, including microcontrollers and control systems, has been called AI.
When people start using the abbreviation as if it were "the" AI, naturally there's first a hype wave of clueless people, and then everybody understands that this is no different from what came before. Just lots of data and computing power to put on a show.
Fallout was so on point. Only a lot of distance and humour keeps it from being outright painful or scary, knowing the damn nukes will be popping sooner or later; one just doesn't know if tomorrow or in 80 years. The question is not if but when.
They don't care. At the moment AI is cheap for them (because some other investor is paying for it). As long as they believe AI reduces their operating costs, and as long as they're convinced every other company will follow suit, it doesn't matter if consumers like it less. Modern history is a long string of companies making things worse and selling them to us anyway because there's no alternatives. Because every competitor is doing it, too, except the ones that are prohibitively expensive.
Assuming MBAs can do math might be a mistake. I've worked on an MBA pet project that squandered millions in worker time and opportunity cost to save $30k in monthly recurring costs...
For what it’s worth, rice cookers have been touting “fuzzy logic” for like 30 years. The term “AI” is pretty much the same, it just wasn’t as buzzy back then.
I can attest this is true for me. I was shopping for a new clothes washer, and was strongly considering an LG until I saw it had “AI wash”. I can see relevance for AI in some places, but washing clothes is NOT one of them. It gave me the feeling LG clothes washer division is full of shit.
Bought a SpeedQueen instead and have been super happy with it. No AI bullshit anywhere in their product info.
I'd be fairly certain the washing machine has a few sensors and a fairly simple computer program (designed by humans) that can make some limited adjustments to the wash cycle on the fly.
I've seen quite a few instances of stuff like that suddenly being called "AI" as that's the big buzzword now.
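That kind of sensor-driven adjustment is decades-old control logic, not machine learning. A rough sketch of what's probably running inside an "AI wash" cycle might look like this (all sensor names, thresholds, and adjustments here are invented for illustration, not taken from any real appliance firmware):

```python
# Hypothetical sketch of the simple sensor-driven cycle adjustment
# that washer marketing now calls "AI": read a couple of sensors,
# then pick from a small table of human-designed adjustments.
# Every name and threshold below is made up for illustration.

def adjust_cycle(load_weight_kg: float, water_turbidity: float) -> dict:
    """Return wash parameters based on two simple sensor readings."""
    # Base cycle designed by engineers, not learned from data.
    cycle = {"wash_minutes": 30, "rinse_count": 2, "water_level": "medium"}

    # Heavier load -> more water and a longer wash.
    if load_weight_kg > 6.0:
        cycle["water_level"] = "high"
        cycle["wash_minutes"] += 10

    # Dirtier water (higher turbidity) -> one extra rinse.
    if water_turbidity > 0.5:
        cycle["rinse_count"] += 1

    return cycle

print(adjust_cycle(7.2, 0.8))
# -> {'wash_minutes': 40, 'rinse_count': 3, 'water_level': 'high'}
```

A handful of if-statements over sensor readings: useful, fine, and not remotely "AI" in the sense the marketing implies.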
Honestly, +1 for SpeedQueen. That’s the brand that every laundromat uses, because they’re basically the Crown Vic of washers; They’re uglier than sin, but they’ll run for literal decades with very little maintenance. They do exactly one thing, (clean your clothes), and they do that one thing very well. They’re the “somehow my grandma’s appliances still work 70 years later, while mine all break after three years" of washing machines.
SpeedQueen doesn’t have any of the modern bells or whistles… But that also means there’s nothing to break prematurely and turn the washer into the world’s largest paperweight. Samsung washers, for instance, have infamously shitty LCD panels, which are notorious for dying right after the warranty expires. And when it dies, the entire washer is dead until you replace basically the entire control interface. SpeedQueen doesn’t have this issue, because they don’t even have LCD panels; everything is just physical knobs and buttons. If something ever does break, it’s just a mechanical switch that you can swap out in 15 minutes with a YouTube tutorial.
FYI, all current Speed Queen models except the Classic Series dryer (DC5, not the washer) are electronically controlled. Even the ones with knobs. They are not mechanical and no longer use the oldschool sequencing drums.
The TR7/DR7 are at least still sold with a 7 year manufacturer's warranty, though. This is specifically to assuage consumer fears about the electronic control panel.
Yes! A washer doesn't need AI or wifi. It needs power, water, detergent and dirty laundry. Had a guest the other day pull out their phone and go, "Oh, my dishwasher is out of surfactant." Why the fuck do you need to know that when you're 20 minutes away by car?
I will pay more if an appliance isn't internet connected.
Speed Queen for the win. I recently replaced a couple of trusty machines that had finally given up after decades of abuse. Went for speed queen, no regrets.
I was shopping for a new clothes washer, and was strongly considering an LG until I saw it had “AI wash”. I can see relevance for AI in some places, but washing clothes is NOT one of them.
I might be thinking the same. But I actually purchased an LG washer a couple months ago and finally got around to finding and reading the manual, and realized that I should have been doing "AI wash" instead of the "normal wash" that I always did.
The manual says that this is what "AI wash" actually is for:
"This cycle automatically adjusts wash and rinse patterns based on load size".
It is legitimately useful for getting started with using a new programming library or tool. Documentation is not always easy to understand or easy to search, so having an LLM generate a baseline (even if it's got mistakes) or answer a few questions can save a lot of time.
So I used to think that, but I gave it a try as I’m a software dev. I personally didn’t find it that useful, as in I wouldn’t pay for it.
Usually when I want to get started, I just look up a basic guide and just copy their entire example to get started. You could do that with chatGPT too but what if it gave you wrong answers?
I also asked it more specific questions about how to do X in tool Y. Something I couldn’t quickly google. Well it didn’t give me a correct answer. Mostly because that question was rather niche.
So my conclusion was that it may help people who don't know how to google, or who are learning a very well-known tool/language with lots of good docs, but for those who already know how to use the industry tools, it's basically an expensive hint machine.
In all fairness, I’ll probably use it here and there, but I wouldn’t pay for it. Also, note my example was chatGPT specific. I’ve heard some companies might use it to make their docs more searchable which imo might be the first good use case (once it happens lol).
I've built a couple of useful products which leverage LLMs at one stage or another, but I don't shout about it cos I don't see LLMs as something particularly exciting or relevant to consumers, to me they're just another tool in my toolbox which I consider the efficacy of when trying to solve a particular problem.
I think they are a new tool which is genuinely valuable when dealing with natural language problems.
For example in my most recent product, which includes the capability to automatically create karaoke music videos, the problem for a long time preventing me from bringing that product to market was transcription quality / ability to consistently get correct and complete lyrics for any song. Now, by using state of the art transcription (which returns 90% accurate results) plus using an open weight LLM with a fine tuned prompt to correct the mistakes in that transcription, I've finally been able to create a product which produces high quality results pretty consistently. Before LLMs that would've been much harder!
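The two-stage shape described above (rough transcription, then an LLM cleanup pass) can be sketched minimally. In this sketch both stages are stubbed with toy stand-ins (a canned transcript and a replacement table) purely to show the pipeline shape; in a real product the first function would call a speech-to-text model and the second would call an LLM with a fine-tuned correction prompt. All function names here are invented:

```python
# Sketch of a transcribe-then-correct pipeline. Stage 1 produces a
# mostly-correct transcript; stage 2 fixes the residual mistakes.
# Both stages are toy stand-ins, included only to show the shape.

def transcribe(audio_path: str) -> list[str]:
    """Stand-in for a speech-to-text model: ~90% accurate lines."""
    return ["we will woke you", "all night long"]

def correct_lyrics(lines: list[str]) -> list[str]:
    """Stand-in for the LLM correction pass over the raw transcript."""
    fixes = {"we will woke you": "we will rock you"}
    return [fixes.get(line, line) for line in lines]

lyrics = correct_lyrics(transcribe("song.wav"))
print(lyrics)  # -> ['we will rock you', 'all night long']
```

The design point is that neither stage is reliable alone, but a cheap second pass that only has to repair a 90%-correct draft is a much easier problem than transcribing from scratch.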
I think the LLM could be decent at the task of being a fairly dumb personal assistant. An LLM interface to a robot that could go get the mail or get you a cup of coffee would be nice in an "unnecessary luxury" sort of way. Of course, that would eliminate the "unpaid intern to add experience to a resume" jobs. I'm not sure if that's good or bad. I'm also not sure why anyone would want it, since unpaid interns are cheaper and probably more satisfying to abuse.
I can imagine an LLM being useful to simulate social interaction for people who would otherwise be completely alone. For example: elderly, childless people who have already had all their friends die or assholes that no human can stand being around.
Is that really an LLM? Cause using ML to be a part of future AGI is not new and actually was very promising and the cutting edge before chatGPT.
So like using ML for vision recognition to know a video of a dog contains a dog. Or just speech to text. I don’t think that’s what people mean these days when they say LLM. Those are more for storing data and giving you data in forms of accurate guesses when prompted.
I feel like people who aren't heavily interacting with or developing these don't realize how much better they are than human assistants. Shit, for one it doesn't cost me $20 an hour, and it doesn't have to take a shit, or get sick, or talk back and not do its fucking job. I do fucking think we need to say a lot of shit though, so we'll know it ain't an LLM, because I don't know of an LLM that I can make output like this. I just wish most people were a little less stuck in their Western opulence. Would really help us not get blindsided.
I actually think the idea of interpreting intent and connecting to actual actions is where this whole LLM thing will turn a small corner, at least. Apple has something like the right idea: “What was the restaurant Paul recommended last week?” “Make an album of all the photos I shot in Belize.” Etc.
Samsung is a nightmare, don't purchase their products.
For example: I used to have a Samsung phone. If I plugged it into the USB port on my computer Windows Explorer would not be able to see it to transfer files. My phone would tell me I need to download Samsung's drivers to transfer files. I could only get them by downloading Samsung's software. Once I installed the software Windows Explorer was able to see the device and transfer files. Once I uninstalled the software Windows Explorer couldn't see the device again.
Anything Samsung can do in your region to insert themselves between you and what you are trying to do they will do.
To give you a second opinion from the other guy, I've had quite a few Samsungs in a row at this point. From Galaxy S2 to S23Ultra skipping years between every purchase.
They are effectively the premium vendor of Android, at least for western audiences. The midrange has some good ones, but other companies do well there too. At the high end, Samsung might lose out a bit to google on images of people, but the phones Samsung sell are well built, have a long support life, have lots of features that usually end up being imported to AOSP and/or Google's own version of Android. The last few generations are the Apple of Android. The AI features they've added can be run on device if you want, and idk what the other guy is talking about, but the AI features aren't that obnoxiously pushed on my device, the S23 Ultra. I have some things on, most things off. Then again, I've used HTC for a few years and iPhone for two weeks, so except for helping my dad with his Pixel 6a while that device lasted, I've not really tried other brands. The added customization on Samsung is kind of a problem for me, because I don't feel like changing brands after being able to customize so much out of the box.
And I've never had issues connecting to a simple Windows computer, given that the phone has always been able to use the normal Plug-and-play driver that is there already. If you have a macbook like I do, it's a bit cringe, but that's a macbook issue moreso.
Yep. No root required (nor recommended for Samsung devices). In short: enable developer mode in the phone settings, then use adb to uninstall and disable any system app. You can also change themes, colors, phone behaviors, properties and look, and install and uninstall apps you couldn't before... and so many other things.
I don't know about the AI stuff specifically. Check your battery usage to see which process is doing that.
But yes, debloating in general makes your phone battery last longer, and with the help of a few more tricks, also makes it faster. There are thousands of no-root-required debloating tutorials online.
I've learned to hate companies that replaced their support staff with AI. I don't mind if it supplements easy stuff, that should take like 15 seconds, but when I have to jump through a bunch of hoops to get to the one lone bastard stuck running the support desk on their own, I start to wonder why I give them any money at all.
It has been getting so bad that even boring regular phone trees will hang up on you if you insist on talking to a human. If it's ISP / cellular, nowadays I will typically just say I want to cancel my account, and then have cancellations route me to the correct department.
There really should be a right to adequate human support that's not hidden behind multiple barriers. As you said, it can be a timesaver for the simple stuff, but there's nothing worse than the dread when you know that your case is going to need some explanation and an actual human that is able to do more than just following a flowchart.
"AI" is certainly a turn-off for me, I would ask a salesman "do you have one that doesn't have that?" and I will now enumerate why:
LLMs are wrongness machines. They do have an almost miraculous ability to string words together to form coherent sentences but when they have no basis at all in truth it's nothing but an extremely elaborate and expensive party trick. I don't want actual services like web searches replaced with elaborate party tricks.
In a lot of cases it's being used as a buzzword to mean basically anything computer-controlled or networked. Last time I looked, they were using the word "smart" to mean that. A clothes dryer that can sense the humidity of the exhaust air to know when the clothes are dry isn't any more "AI" than my 90's microwave that can sense the puff of steam from a bag of popcorn. This is the kind of outright dishonest marketing I'd like to see fail so spectacularly that people in the advertising business go missing over it.
I already avoided "smart" appliances and will avoid "AI" appliances for the same reasons: The "smart" functionality doesn't actually run locally, it has to connect to a server out on the internet to work, which means that while that server is still up and offering support to my device, I have a hole in my firewall. And then they'll stop support ten minutes after the warranty expires and the device will no longer work. For many of these devices there's no reason the "smart" functionality couldn't run locally on some embedded ARM chip or talk to some application running on a PC that I own inside my firewall, other than "then we don't get your data."
AI is apparently consuming more electricity than air conditioning. In fact, I'm not convinced that power consumption isn't the selling point they're pushing at board meetings. "It'll keep our friends in the pollution industry in business."
Can you help me with problems this complex? Idk maybe we could use it to help make things better. Just most people prompt like things I can't say because they aren't nice. Oh by the way. Can you do it right now for $0 please? Thanks!
Edit. Also need it done now. If you're reading this you were too slow.
Every company that has been trying to push their shiny, new AI feature (which definitely isn't part of a rush to try and capitalize on the prevalence of AI), my instant response is: "Yeah, no, I'm finding a way to turn this shit off."
My response is even harsher..."Yeah, no, I'm finding a way to never use this company's services ever again." Easier said than done, but I don't even want to associate with places that shove this in my face.
- Early adopter of LLMs ever since a random tryout of Replika blew my mind and I set out to figure out what the hell was generating its responses
- Learn to fine-tune GPT-2 models and have a blast running 30+ subreddit parody bots on r/SubSimGPT2Interactive, including some that generate weird surreal imagery from post titles using VQGAN+CLIP
- Have nagging concerns about the industry that produced these toys, start following Timnit Gebru
- Begin to sense that something is going wrong when DALLE-2 comes out, clearly targeted at eliminating creative jobs in the bland corporate illustration market. Later, become more disturbed by Stable Diffusion making this, and many much worse things, possible, at massive scale
- Try to do something about it by developing one of the first "AI Art" detection tools, intended for use by moderators of subreddits where such content is unwelcome. Get all of my accounts banned from Reddit immediately thereafter
- Am dismayed by the viral release of ChatGPT, essentially the same thing as DALLE-2 but for text
- Grudgingly attempt to see what the fuss is about and install GitHub Copilot in VSCode. Waste hours of my time debugging code suggestions that turn out to be wrong in subtle, hard-to-spot ways. Switch to using Bing Copilot for "how-to" questions because at least it cites sources and lets me click through to the StackExchange post where a human provided the explanation I need. Admit the thing can be moderately useful and not just a fun dadaist shitposting machine. Have major FOMO about never capitalizing on my early-adopter status in any money-making way
- Get pissed off by Microsoft's plans to shove Copilot into every nook and cranny of Windows and Office; casually turn on the Olympics and get bombarded by ads for Gemini and whatever the fuck it is Meta is selling
- Start looking for an alternative to Edge despite it being the best-performing web browser by many metrics, as well as despite my history with "AI" and OK-ish experience with Copilot. Horrified to find that Mozilla and Brave are doing the exact same thing
- Install Vivaldi, then realize that the Internet it provides access to is dead and enshittified anyway
- Daydream about never touching a computer again despite my livelihood depending on it
I liked the article I read where WW2 German soldiers were being generated by AI as Asians, Black women, etc. Glad it doesn't take context into consideration. lol
In other news, AI bros convince CEOs and investors that polls saying people don't like AI are out of touch with reality and those people actually want more AI, as proven by an AI that only outputs what those same AI bros want.
Just waiting for that to pop up in the news some time soon.
I've found ChatGPT somewhat useful, but not amazingly so. The thing about ChatGPT is, I understand what the tool is, and our interactions are well defined. When I get a bullshit answer, I have the context to realize it's not working for me in this case and to go look elsewhere. When AI is built in to products in ways that you don't clearly understand what parts are AI and how your interactions are fed to it; that's absolutely and incurably horrible. You just have to reject the whole application; there is no other reasonable choice.
Yeah, these buttsniffers can't possibly conceive the truth that they made "AI" into something people don't want, let alone ever admit it. Check this out:
"When AI is mentioned, it tends to lower emotional trust, which in turn decreases purchase intentions" - some marketing stinklipper
"We found emotional trust plays a critical role in how consumers perceive AI-powered products".
Ok, first of all, how is this person serious? Fire this person, please, cuz this gibberish sounds like an LLM wrote it. Like, for real, WTF even is "emotional trust"? Dude, is that a real term? So you mean we see your lies
(wheeze)
Sorry, brain overheated there. These fucks are so far up their own asses man... the mind just boggles
I just read through the feature list of iOS 18.1's so-called Apple Intelligence.
TLDR: typing and sending messages for you mostly like one click reply to email. Or… shifting text tone 🙄
So that confirms my fears that in the future bots will communicate with each other instead of us. Which is madness. I want to talk to a real human, not a bot that translates what the human wanted to say at approximately 75% accuracy, devoid of any authenticity.
If I see someone's unfiltered written word, I can infer their emotions, their feelings, what kind of state they're in, etc. Cold bot-to-bot speech would truly fuck up society in unpredictable ways, undermining the fundamentals of communication.
Especially if you notice that most communication, even familial already happens online nowadays. So kids will learn to just ‘hey siri tell my mom I am sorry and I will improve myself’.
Mom: ‘hey siri summarize message’
My hope for the future relies on a study indicating that after 5 or so generations of training data tainted with AI generated information, the LLM models collapsed.
Hopefully, after enough LLMs have been fed LLM data, we will arrive in an LLM-free future.
Another possibility is LLMs will only be trained on historic data, meaning they will eventually start to sound very old-fashioned, making them easier to spot.
Future email writing: type the first three words then spam click the auto complete on your LLM-based keyboard. Only stop when the output starts to not make sense anymore.
So kids will learn to just ‘hey siri tell my mom I am sorry and I will improve myself’.
What makes you think that kids aren't already doing things like this? Not with Siri, but it doesn't take much effort to get ChatGPT to write something for you.
It isn't built into the phone's operating system, where you just tap "generate response" in iMessage. It is always about laziness. First privacy went away down the path of least effort, even though there were always tons of privacy-friendly alternatives; they just require 10 seconds of extra effort.
It's the same with images. Soon all our photos won't be real captured moments, but an AI's interpretation of those moments, edited by the AI to make them "perfect".
In your own words, tell me why you're calling today.
My medication is in the wrong dosage.
You need to refill your medication is that right?
No, my medication is in the wrong dosage, it's supposed to be tens and it came as 20s.
You need to change the pharmacy where you're picking up your medication?
I need to speak to a human please.
I understand that you want to speak to an agent, is that right?
Yes.
Chorus, 5x. (Please give me your group number, or dial it in at the keypad. For this letter press that number for that letter press this number. No I'm driving, just connect me with an agent so I can verify over the phone)
I'm sorry, I can't verify your identity please collect all your paperwork and try calling again. Click
I went through a McDonald’s drive-thru the other day and had the most insane experience. For the context of this anecdote, I don’t do that often, so, what I experienced was just weird.
While not quite “AI,” the first thing that happened was an automated voice yells at me, “are you ordering using your mobile app today?”
There’s like three menu-speaker boxes, and due to where the car in front of me stopped, I’m like in between the last two. The other speaker begins to yell, “Are you ordering using your mobile app today?”
The person running drive-thru mumbles something about pull around. I do. Pass by the other menu “Are you ordering using your mobile app today?”
Dude walks out with a headset and starts taking orders from each car using a tablet.
I have no idea what is happening. I can’t even see a menu when the guy gets around to me. Turns the tablet around at me.
I realized that I was indeed ordering using the mobile app today.
Hardly. It used to be natural-language dictation and a decision tree. Now they're trying to use LLM training to automatically pick up more edge cases, and it's pretty much b*******.
This is because the AI of today is a shit sandwich that we’re being told is peanut butter and jelly.
For those who like to party: All the current “AI” technologies use statistics to approximate semantics. They can’t just be semantic, because we don’t know how meaning works or what gives rise to it. So the public is put off because they have an intuitive sense of the ruse.
As long as the mechanics of meaning remain a mystery, “AI” will be parlor tricks.
And I don’t mean to denigrate data science. It is important and powerful. And real machine intelligence may one day emerge from it (or data science may one day point the way). But data science just isn’t AI.
I have rolled back, uninstalled, opted-out, or ripped apart every AI that every company is trying to shove down our throats. I wish I could do the same for search engines, but who uses the internet broadly anymore anyway.
I am impressed by the tech, I think it's amazing, but it's still utterly useless.
I have never, ever needed to interrupt my day's schedule to generate a convincing picture of Luke Skywalker fighting Batman while riding dinosaurs, I have never needed to have a text conversation with someone who seems "almost human," I mean, christ that already describes half the people I know and wish were more normal. I have never needed an article summarized badly, I enjoy reading things, I enjoy writing emails, so I can't figure out why they would make tools to take away the small pleasures we have. What exactly are they thinking?
Yesterday I gave it one more chance, asked one of the apps, I forget which, what tomorrow's weather will be like, the thing forecasted a hurricane coming right for me, a news event from last year. I'm so over AI, please someone notify me when it's really useful and can take over the menial, tedious tasks like managing my online accounts and offering financial advice or can actually help me find a job opening in my field.
All these things have been promised, and seem more out of reach than ever.
The MOST impressive thing I've seen AI do is make really, really convincing furry porn babes. The things are good at mixing features in images. Sometimes.
This is purely false. There are so many applications that bring value, and if you can't admit that then you are biased in some way, shape, or form.
As a sw dev, I use AI to speed up menial tasks or to get different perspectives on certain things; shit, it's even helpful for debugging tricky things. You don't need to be a coder to find value in AI though, things like auto-generated transcripts have been so fucking amazing, especially for podcasting in my case.
I could go on and on. To say it is UTTERLY USELESS is disingenuous at best.
The MOST impressive thing I’ve seen AI do is make really, really convincing furry porn babes. The things are good at mixing features in images. Sometimes.
You are quite literally telling on yourself here, you seem to have a limited view of AI application and are judging the entire technology/concept based on that narrow set of use-cases (which appear to be, from your comment, chat bots, porn generators, future weather predictors, not exactly the pinnacle of AI application).
I’m so over AI, please someone notify me when it’s really useful and can take over the menial, tedious tasks
Here you go again! You seem to be equating value to the ability for the tech to function without supervision or assistance. Does AI only provide value to you if it can do those things completely autonomously? What if working with the AI is faster than not using it at all? Is it still useless to you?
They keep using it for really stupid things. I agree all the image generators are bloody pointless, the quality isn't good enough and you don't have the control you need to make them useful.
I wonder if we'll start seeing these tech investor pump n' dump patterns faster collectively, given how many has happened in such a short amount of time already.
Crypto, Internet of Things, Self Driving Cars, NFTs, now AI.
It feels like the futurism sheen has started to waver. When everything's a major revolution inserted into every product, then isn't, it gets exhausting.
This is very much not a hype and is very widely used. It's not just smart bulbs and toasters. It's burglar/fire alarms, HVAC monitoring, commercial building automation, access control, traffic infrastructure (cameras, signal lights), ATMs, emergency alerting (like how a 911 center dispatches a fire station, there are systems that can be connected to a jurisdiction's network as a secondary path to traditional radio tones), and anything else connected to the Internet that isn't a computer or cell phone. Now even some cars are part of the IoT realm. You are completely surrounded by IoT without even realizing it.
Huh, didn't know that! I mainly mentioned it for the fact that it was crammed into products that didn't need it, like fridges and toasters where it's usually seen as superfluous, much like AI.
I think that the dot-com bubble is the closest comparison, honestly. There are some genuinely useful products (mostly around how we interact with a system, not actually trying to use AI to magically solve a problem; it is shit at that), but the hype is way too large.
It's more of a macroeconomic issue. There's too much investor money chasing too few good investments. Until our laws stop favoring the investor class, we're going to keep getting more and more of these bubbles, regardless of what they are for
Yeah it's just investment profit chasing from larger and larger bank accounts.
I'm waiting for one of these bubble pops to do lasting damage, but with the amount of protection specifically for them, and money that can't be allowed to be "lost," it's everyone else who has to eat dirt.
Maybe I'd be more interested in AI if there was any I with the A. At the moment, there's no more intelligence to these things than there is in a parrot with brain damage, or a human child. Language Models can mimic speech but are unable to formulate any original thoughts. Until they can, they aren't AI and I won't be the slightest bit interested beyond trying to break them into being slightly dirty (and therefore slightly funny).
Just so you know, I totally agree with you, but if you go far back enough in my comment history I had a really interesting (imo) discussion/argument with someone about this very topic, and about how to determine whether an AI "thinks" or "reasons" more broadly.
It does and AI is being tarnished by the hype/marketing.
Not long ago Firefox announced it would deliver client-side "AI" to describe web pages to differently-abled users. This is awesome.
Some people on Lemmy conflated AI and Large Language Models and complained about the addition. I don't blame them, not everyone is an IT pro equipped to understand the difference between Machine Learning Models, LLMs and such. I mentioned Firefox has "AI" for client-side translation, and that's a great thing. They wondered since when "AI" was used for translation. Machine learning/deep learning translation has been a thing for over a decade and it's amazing. It's not an LLM (even if LLMs are really good at translation).
The market has pushed "AI" too hard, making people cautious about it. They are turning it into the new "blockchain", where most people didn't see any benefit from the hype; on the contrary, they saw the vast majority of it being scams.
I can't really agree as a video producer. Luma, Krea, Runway, Ideogram, Udio, 11Labs, Perplexity, Claude, Firefly -> All worth more than they're charging, most with daily free options. They save me a ton of time. Honestly, the one I'm considering dropping at the moment is ChatGPT.
The irony is companies are being forced to implement it. Like, our board has told us we must have "AI in our product." It's literally a solution looking for a problem that doesn't exist.
My boss's boss's boss asked for a summary of our roadmap. He read it, and provided his takeaways... 3 of the 4 bullet points were AI-related, and we never once mentioned anything about AI in what we gave him 😑 so I guess we're pivoting?
Adobe Acrobat has added AI to their program and I hate it so much. Every other time I try to load a PDF it crashes. Wish I could convince my boss to use a different PDF reader.
I have no qualms about AI being used in products. But when you have to tell me that something is "powered by AI" as if that's your main selling point, then you do not have a good product. Tell me what it does, not how it does it.
If I could have the equivalent of a smart speaker that ran the AI model locally and could interface with other files on the system. I would be interested in buying that.
But I don't need AI in everything in the same way that I don't need Bluetooth in everything. Sometimes a kettle is just a kettle. It is bad enough we're putting screens on fridges.
I like the vast majority of my technology dumb, the last barely smart kettle I bought - it had a little screen that showed you temperature and allowed you to keep the water at a particular temperature for 3h - broke within a month. Now I once again have a dumb kettle, it only has the on/off button and has been working perfectly since I got it
Unsurprisingly. I have use for LLMs and find them helpful, but even I don't see why should we have the copilot button on new keyboards and mice, as well as on the LinkedIn's post input form.
She looks so done with it. It's amazing how tone-deaf and incapable of reading emotions the higher-ups must have been to OK that image. Not blaming anyone lower down who approved it; they are probably all fed up too and were happy to use it.
Plus, it's way too cold at her vast and empty warehouse hot desk, because she's wearing at least two sweaters. Please let this lady have a cubicle of her own with a little space heater.
AlphaProof isn't an LLM, but it came within a point of gold against some of the smartest people on earth. You think you're smarter than the people building this stuff? That might be the dumbest shit about this. I swear the United States has essentially become Idiocracy, from all angles. Capitalism sucks, but AI isn't the problem. A bunch of greedy apes is the fucking problem, like it always has been. Lol
You know, if you have clean water and food, you could be considered a very greedy ape too. Why are you not fighting harder for clean water etc.? What do you do to make the world better? (Shit, probably same as me. Jack shit.)
Hmmm, I have to reread my previous comment cuz people are getting the wrong idea (maybe)
I'm talking about marketing doublespeak, and the fact a press release to the public at large will never admit "AI" has become bad in the public perception because of marketing. It is because of these marketing MBA dipshits and clueless fad followers putting "AI" on stuff that is
Not AI
Or
Not useful to the consumer, and indeed has many anti-consumer facets, being used primarily as an excuse to fire workers, push software as a service, or mine consumer info.
The point I tried and failed to make was that these MBA fucks (categorically not the engineers building AI or the LLMs we also call AI) are so insulated inside their corpo boardroom-speak that they can't see or admit it's their fault, or ever hear how goddamn stupid they sound.
Hi, I'm annoying and want to be helpful. Am I helpful? If I repeat the same options again when you've told me I'm not helpful, will that be helpful? I won't remember this conversation once it's ended.
Hi, which option have you told me you already don't want would you like?
Sorry, I didn't quite catch that, please rage again.
Meanwhile, I just had Claude turn a few obscure academic papers into a slide deck on the subject, along with presentation notes and interactive graphs, using like 5 prompts and 15 minutes.
For me, if a company fails to make a clear cut case about why a product of theirs needs AI, I'm gonna assume they just want to misuse AI to cheaply deliver a mediocre product instead of putting in the necessary cost of manhours.
I like my AI compartmentalized, I got a bookmark for chatGPT for when i want to ask a question, and then close it. I don't need a different flavor of the same thing everywhere.
AI is not even truly AI right now; there's no intelligence. It's a statistical model trained on billions of pieces of stolen data to spit out the thing most similar to the prompt. It can get really creepy because it's very convincing, but on closer inspection it has jarring mistakes that trigger that uncanny-valley shit. "Hallucination" is giving it too much credit; maybe when we get AGI in a decade that will be fitting.
You're not wrong, but the implementation doesn't really matter I think. If AI could spit out sentences convincingly enough, I'd be okay with that. But, yeah, it's not there yet.
I don't know anyone who is actively looking for products that have "AI".
It's like companies drank their own Kool aid and think because they want AI, so do the consumers. I have no need for AI. My parents don't even understand what it is. I can't imagine Gen Z gives a hoot.
It's really simple: There are a number of use cases where generative AI is a legitimate boon. But there are countless more use cases where AI is unnecessary and provides nothing but bloat, maybe novelty at best.
Generative AI is neither the harbinger of doom nor the savior of humanity. It's a tool. Just a tool. We're just caught in this weird moment where people are acting like it's an all-encompassing multipurpose tool right now instead of understanding it as the limited-use, specific tool it actually is.
Yes, that was literally my point. A plumbing wrench is a perfectly useful and wonderful tool, but it isn't going to be much help in the middle of brain surgery. Tools have use cases; they can't be applied to every situation.
Absolutely, I was pretty upset when Google added Gemini to their Messages app, then excited when the button (that you can't remove) was removed! Now I've updated Messages again and they brought the button back. Why would you ever need an LLM in a texting app?
Edit: and also Snapchat, Instagram, and any other social media app they're shoveling an AI chat bot into for no reason
Edit 2: AND GOOGLE TELLING ME "Try out Gemini!" EVERY TIME I USE GOOGLE ASSISTANT ON MY PHONE!!!!!
When a company introduces something consumers want, we will research and find a way to get it and use it ASAP. Nobody needs to interrupt our workflow to tell us about it. I don't remember getting any in-app notifications for the Gmail select all "feature," but I figured it out pretty damn quickly.
They will try to sell it to you as a way to detect possible health issues early, but it will just be used to analyze your food patterns and shove McDonald's ads at you.
I've been applying similar thinking to my job search. When I see AI listed in a job description, I immediately put the company into one of 3 categories:
It is an AI company that may go out of business suddenly within the next few years leaving me unemployed and possibly without any severance.
Management has drunk the Kool-Aid and is hoping AI will drive their profit growth, which makes me question management competence. This also has a high likelihood of future job loss, but at least they might pay severance.
The buzzword was tossed in to make the company look good to investors, but it is not highly relevant to their business. These companies get a partial pass for me.
A company in the first two categories would need to pay a lot to entice me and I would not value their equity offering. The third category is understandable, especially if the success of AI would threaten their business.
It's because consumers aren't the dumbasses these companies think they are and we all know that the AI being shoved into everything fucking sucks worse than the systems we had before "AI."
Honestly AI is the 3D glasses of consumer products and computing. There are a couple of places and applications where it absolutely improves things, everywhere else it's just an overhyped extra that they tack on in hopes that it will drive up interest.
Yet companies are manipulating survey results to justify the FOMO jump to AI bandwagon. I don't know where companies get the info that people want AI (looking at you Proton).
I've used LLMs a lot over the past couple of years. Pro tip: use them a lot and learn the models; they look much more intelligent as you, the user, get better. Obviously if you prompt "Write me a shell script to calculate the meaning of life, make my coffee, and scratch my nuts before 9AM" it will be a grave disappointment.
If you first design a ball fondling/scratching robot, use multiple instances of LLMs to help you plan it out, etc. then you may be impressed.
I think one of the biggest problems is that most people interacting with LLMs forget they are running on computers; they are digital and not like us, so you can't make the assumptions you can with humans. Even with humans, doing that usually just gets you stuff you didn't want because you weren't clear enough. We are horrible at giving instructions, and that's something I hope AI will help us get better at, because ultimately bad instructions or incomplete information can't determine anything real. Computers are logic machines. If you tell a computer to go ride a bike, at best it'll do all the work to embody itself in a robot, buy a bike, and ride it. But wait, you don't even know it did, because you never specified that it should record the ride...
A very few of us are pretty good at giving computers clear instructions some of the time. Also though, I have found just forcing models to reason in context is powerful. You have to know to tell it to "use a drill down tree style approach to problem solving. Use reflection and discussion to explore and find the optimal solution to reasoning through the problem."
Might still give you bad results. That is why you have to experiment; it is a lot of fun if you really just let your thoughts run wild. It takes a lot of creative thinking right now to get the most out of these models. They should all be 110% open source and free for all. BTW Gemini 1.5, Claude, and Llama 3.1 are all great, and Llama you can run locally or on a rented GPU VM. OpenAI I'm on the fence about, but given who all is involved over there I wouldn't say I trust them, especially since they want regulatory capture.
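That in-context scaffolding instruction is the kind of thing worth templating so you don't retype it per query. A minimal sketch of the idea, purely illustrative (the function name and wording are mine, not any library's API; the actual model call is left out so you can plug in whatever client you use):

```python
def scaffold(question: str) -> str:
    """Wrap a raw question in a reasoning scaffold before sending it to a model.

    The model call itself is deliberately omitted -- send the returned string
    through whatever client you use (a local Llama, an API, etc.).
    """
    return (
        "Use a drill-down, tree-style approach to problem solving.\n"
        "Use reflection and discussion to explore the options and\n"
        "converge on the optimal solution before answering.\n\n"
        f"Problem: {question}\n\n"
        "First list the sub-problems, then work through each one,\n"
        "then give the final answer."
    )

prompt = scaffold("Why does my shell script mangle filenames with spaces?")
```

Same idea as keeping a snippets file of prompts that worked: the scaffold stays constant and you swap in the question.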
For the first time in years I thought about buying a new phone. The S23 Ultra: the previous versions had been improving significantly, but the price was a factor. Then I got a promotion and figured I would splurge on the S24 Ultra, but it was all about AI, so I just stayed where I am... my current phone does everything anyway.
Yeah, and that is largely fueled by two things: poor/forced use of AI, and anti-AI media sentiment (which is in turn fueled by reactionary/emotional narratives that keep hitting headlines, commonly full of ignorance).
AI can still provide actual value right now and can still improve. No it's not the end-all but it doesn't have to solve humanity's problems to be worth using.
This unfortunate situation is largely a result of the rush to market, because that's the world we live in these days. Nobody gives a fuck about completing a product; they only care about completing it first. Fuck quality, that can come later. As a sr software engineer myself, I see it all too often in the companies I've worked for. AI was heralded as Christ's second coming that would magically do all of this stuff while still in relative infancy, ensuring that an immature product was rushed out the door and applied to everything possible. That's how we got here, and my first statement is where we are now.
Listen up you kids, this old fart saw this same crap in the 70s when LCDs became common and LCD clocks became the norm. They felt that EVERYTHING needed to have an LCD clock stuck in it, lamps, radios, blocks of cheese, etc. A similar thing happened in the internet boom/bust in the late 90s where everyone needed a website, even gas stations.
Now AI is the media and business darling so they are trying to stick AI in everything, partly to justify pissing away so much money on it. I can't even do a simple search on FB because it wants to force me to use the damn meta AI instead.
I occasionally use chat gpt to find info on error code handling and coding snippets but I feel like I'm in some sort of "can you phrase it exactly right?" contest. Anything with even the slightest vagueness to it returns useless garbage.
I keep thinking about how Google has implemented it. It sums up my broader feelings pretty well. They jammed this half-baked "AI" product into the very fucking top of their search results. I can't not see it there; it's huge and takes up most of my phone's screen after the search, but I always have to scroll down past it because it is wrong pretty often, or misses important details. Even if it sounds right, because I've had it be wrong before, I have to check the other links anyway. All it has succeeded at doing in practice is making me scroll down further before I get to my results (not unlike their ads, I might add). Like, if that's "AI", it's no fucking wonder people avoid it.
At this point I'm pretty sure their strategy is to take the hit in search engine quality, since they have a stranglehold there anyway, and spam everyone with AI so they can come out ahead on that front with human feedback. It's pretty shitty, and the exact reason we should be taking down big tech monopolies.
AI is a neat toy... but that's all it is. It's horrible at almost every real-world application it's been forced into, and that's before you wander into the whole shifting minefield of ethical concerns or consider how wildly untrustworthy they are.
I hate the feeling that they are continuing to dump real humans who can communicate and respond to issues outside of the rigid framework when it comes to support. AI is also only as good as its data and design. It feels like someone built a self driving car, stuck it on a freshly paved and painted highway and decided it was good to go. Then you take it on an old rural road and end up hitting a tree.
they are continuing to dump real humans who can communicate and respond to issues outside of the rigid framework when it comes to support
They have been trying to get humans to operate within their rigid framework, with varying success, for a very long time. One thing they love about "AI" is that they can program it to do exactly what they want and not worry about human emotion risking their bottom line.
We're seeing a bunch of promises made when LLM were the novel hot shit. Now that we've plateaued on how useful they are to the average consumer every AI product is just a beta test that will drop support as soon as something newer and shinier comes along.
To me AI helps me bang out small functions and classes for personal projects and act as a Google alternative for mundane stuff.
Other than that any product that uses it is no different than a digital assistant asking chat gpt to do things. Or at least that seems like the perception from a consumer level.
Besides, it's bad enough I probably burn a home's worth of energy on failing programming demos, much less ordering pizza from my watch or whatever.
I was at the optometrist recently and saw a poster for some lenses (Transitions) that somehow had "AI"... I was like, WTF, how / why / do you need to carry a small supercomputer around with you as well?
When I have no idea what I am talking about, and have no terminology or the wrong terminology, I have found Copilot and GPT-4 (separate, not the all-in-one) to be game-changing compared to flat Google.
I'm not using the data straight off the query result, but the links to the data that was provided in the result.
And embarrassingly, when I'm drunk and babbling into a microphone, Copilot finds the links to what I am looking for.
Now if you are just straight using the results and not researching the answers your mileage will vary.
Is that enough to mitigate how much worse bare Google is than it was ten years ago, back when they were winning against SEO bots? In my experience, it hasn't been, but I've not done enough AI-aided web searches to have a good sample size.
When I'm starting from zero or worse with incorrect or half baked information, I'm one or two queries from a solid start point.
By comparison using Google, I will have to wade through all the sponsored results then all the SEO results. Then who knows how many pages I'm going to have to start clicking on and reading to see if it pertains to what I'm searching for, which it usually isn't.
All that bullshit has evaporated for me.
I travel a lot and I have a lot of interactions with people from other places who work in other fields and/or disciplines and it can get hairy.
[edit] and all that covers multilingual words to boot.
I think there is potential for using AI as a knowledge base. If it saves me hours of having to scour the internet for answers on how to do certain things, I could see a lot of value in that.
The problem is that generative AI can't determine fact from fiction, even though it has enough information to do so. For instance, I'll ask Chat GPT how to do something and it will very confidently spit out a wrong answer 9/10 times. If I tell it that that approach didn't work, it will respond with "Sorry about that. You can't do [x] with [y] because [z] reasons."
The reasons are often correct but ChatGPT isn't "intelligent" enough to ascertain that an approach will fail based on data that it already has before suggesting it.
It will then proceed to suggest a variation of the same failed approach several more times. Every once in a while it will eventually pivot towards a workable suggestion.
So basically, this generation of AI is just Cliff Clavin from Cheers: able to string together coherent sentences of mostly bullshit.
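That loop of re-suggesting approaches it already admitted were failures is easy to guard against on the caller's side, at least. A hypothetical sketch (`ask_model` is a stand-in for whatever client you use, not a real API):

```python
def deduped_suggestions(ask_model, question, max_tries=5):
    """Yield only approaches we haven't already seen the model suggest.

    ask_model(question, rejected) is a stand-in callable; it should return
    a suggested approach as a string, ideally steered away from `rejected`.
    Repeats are silently dropped instead of being retried verbatim.
    """
    rejected = []
    for _ in range(max_tries):
        suggestion = ask_model(question, rejected)
        if suggestion not in rejected:
            yield suggestion
        rejected.append(suggestion)
```

It doesn't make the model smarter, but it at least stops you from manually re-reading the same failed variation several times before it pivots.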
It seems more like a niche thing that's useful for generating rough drafts or lists of ideas, but the results are hardly usable on their own and still require additional work to finesse them. In a lot of ways, it reminds me of my days working on a production line with welding robots. Supposedly these robots could do hundreds/thousands of parts without making a mistake... BUT that was never the case, and people always needed to double-check the robot's work (different tech, not "AI", just programmed movements, but a similar-ish idea). By default, I just don't trust anything branded as "AI"; it still requires a human to look over what it's done. It's just doing a monotonous task faster than a person could, but you still can't trust what it gives you.
Yeah I would expect that. Why would the average consumer pay extra for "AI", if they don't really know what to do with it. And if they can't even brag about it to their friends, because everybody knows how flawed it is.
I've sold actual zero trust, actual AI, actual DevX, etc.. I'm so tired of "yeah, everyone else just throws a label on, why the fuck do I need AI in my bank app? We have the REAL blah blah blah"