Everything costs more, everything has a stupid app that gets abandoned, an IoT backend that's on life support from the moment it's switched on. Subscriptions everywhere! Everything is built with lower quality, lower standards.
My thermostat hides no-brainer features behind an "Ai" subscription. Switching off the heating when the weather will be warm that day doesn't need Ai... that's not even machine learning, that's a simple PID controller.
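For what it's worth, the controller being referred to fits in a few lines. A minimal textbook PID sketch in Python; the gains and setpoint below are arbitrary illustration values, not anything a real thermostat ships with:

```python
class PID:
    """Classic proportional-integral-derivative controller."""

    def __init__(self, kp, ki, kd, setpoint):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint = setpoint      # target temperature
        self.integral = 0.0
        self.prev_error = None

    def update(self, measured, dt):
        # Output drives the heater: positive means "add heat".
        error = self.setpoint - measured
        self.integral += error * dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative


# With only a proportional term, 18 °C against a 20 °C setpoint
# yields an output of 2.0.
pid = PID(kp=1.0, ki=0.0, kd=0.0, setpoint=20.0)
print(pid.update(measured=18.0, dt=1.0))  # 2.0
```

Decades-old control theory, no subscription required.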
I'm so glad I switched to just Home Assistant and Zigbee devices, and my radiators are dumb, so I could replace their valves with Zigbee ones. Fuck making everything "smart" a subscription
I think I will try ESPHome, half of my appliances are Tasmota-based now, I was just too lazy to research compatible thermostats... (Painful hindsight)...
Even the supposed efficiency benefits of the Nest basically come down to "if you leave the house and forget to turn the air down, we will do it for you automatically"
I got a Christmas card from my company. As part of the Christmas greeting, they promoted AI, something to the extent of "We wish you a merry Christmas, much like the growth of AI technologies within our company" or something like that.
I was trying to take a photo of a piece of jewellery in my hand tonight and accidentally activated my phone's AI. It threw up a big Paperclip-type message, "How can I help you?" I muttered "fuck off" as I stabbed at the back button. "I'm sorry you feel that way!" it said.
Yeah, I hate it. At least Paperclip didn't give snark.
they don't care. you're not the audience. the tech industry lives on hype. now it's ai because before that they did it with nft and that failed. and crypto failed. tech needs a grift going to keep investors investing. when the bubble bursts again they'll come up with some other bullshit grift because making useful things is hard work.
Yup, you can see it in talks at annual tech conferences. Last year it was APIs, this year it’s all AI. They’ll just move on to the next trendy thing next year.
To be fair, APIs have been around since the 70s, and are not trendy; they're just required to have a common interface for applications to request and perform actions with each other.
I was ok with crypto and nft because it was up to me to decide if I want to get involved in it or not.
AI does seem to have an impact on jobs, at least employers are trying to use it to see if it will actually let them hire fewer staff; I see that for SWE. I don't think AI will do much there though.
it's not up to you, it just failed before it could be implemented. many publishers committed to in-game nfts before they had to back down because it fell apart too quickly (and some still haven't). if it had held on for just a couple more years there wouldn't be a single aaa title without nfts today.
crypto was more complicated because unlike these two you can't just add it and say "here, this is all crypto now" because it requires prior commitment and it's way too complicated for the average person. plus it doesn't have much benefit: people already give you money and buy fake coins anyway.
I'm giving examples from games because it's the most exploitative market but these would also seep into other apps and services if not for the hurdles and failures. so now we're stuck with this. everyone's doing it because it's a gold rush except instead of gold it's literal shit, and instead of a rush it's literal shit.
--- tangent ---
... and just today I realized I had to switch back to Google Assistant because literally the only thing Gemini can do is talk back to me, but it can't do anything useful, including the simplest shit like converting currency.
"I'm sorry, I'm still learning" -- why, bitch? why don't you already know this? what good are you if I ask you to do something for convenience and instead you tell me to do it manually and start explaining how I can do the most basic shit that you can't do as if I'm the fucking idiot.
Nft didn't fail, it was just the idiotic selling of jpgs for obscene amounts that crashed (most of that was likely money laundering anyway). The tech still has a use.
Wouldn't exactly call crypto a failure, either, when we're in the midst of another bull run.
But it should all be freely available & completely open sourced since they were all built with our collective knowledge. The crass commercialization/hoarding is what's gross.
For example, there's one AI that I read about a while back that was given data sets on all the known human diseases and the medications that are used to treat them.
Then it was given data sets of all the known chemical compounds (or something like that, can't remember the exact wording).
Then it was used to find new potential treatments for diseases. Like new antibiotics. Basically it gives medical researchers leads to follow.
That's fucking cool and beneficial to everyone. It's a wonderful application of the tech. Do more of that please.
Yeah. I've been interested in AI for most of my life. I've followed AI developments, and tinkered with a lot of AI stuff myself. I was pretty excited when ChatGPT first launched... but that excitement turned very sour after about a month.
I hate what the world has become. Money corrupts everything. We get the cheapest most exploitative version of every possible idea, and when it comes to AI - that's pretty big net negative on the world.
I mean you're technically correct from a copyright standpoint since it would be easier to claim fair use for non-commercial research purposes. And bots built for one's own amusement with open-source tools are way less concerning to me than black-box commercial chatbots that purport to contain "facts" when they are known to contain errors and biases, not to mention vast amounts of stolen copyrighted creative work. But even non-commercial generative AI has to reckon with its failure to recognize "data dignity", that is, the right of individuals to control how data generated by their online activities is shared and used... virtually nobody except maybe Jaron Lanier and the folks behind Brave are even thinking about this issue, but it's at the core of why people really hate AI.
You had me in the first half, but then you lost me in the second half with the claim of stolen material. There is no such material inside the AI, just the ideas that can be extracted from such material. People hate their ideas being taken by others but this happens all the time, even by the people that claim that is why they do not like AI. It's somewhat of a rite of passage for your work to become so liked by others that they take your ideas, and every artist or creative person at that point has to swallow the tough pill that their ideas are not their property, even when their way of expressing them is. The alternative would be dystopian since the same companies we all hate, that abuse current genAI as well, would hold the rights to every idea possible.
If you publicize your work, your ideas being ripped from it is an inevitability. People learn from the works they see and some of them try to understand why certain works are so interesting, extracting the ideas that do just that, and that is what AI does as well. If you hate AI for this, you must also hate pretty much all creative people for doing the exact same thing. There's even a famous quote for that before AI was even a thing. "Good artists copy, great artists steal."
I'd argue that the abuse of AI to (consider) replacing artists and other working creatives, spreading of misinformation, simplifying of scams, wasting of resources by using AI where it doesn't belong, and any other unethical means to use AI are far worse than it tapping into the same freedom we all already enjoy. People actually using AI for good means will not be pumping out cheap AI slop, but are instead weaving it into their process to the point it is not even clear AI was used at the end. They are not the same and should not be confused.
Not as bad as the IR touch screens. They had an IR field projected just above the surface of the screen that would be broken by your finger, which would register a touch at that location.
Or a fly landing on your screen and walking a few steps could drag a file into the recycle bin.
Put a curved screen on everything, microwave your thanksgiving turkey, put EVERYTHING including hot dogs, ham, and olives in gelatin. Only useful things will have AI in them in the future and I have a hard time convincing the hardcore anti-ai crowd of that.
Docker is only useful in so many scenarios. Nowadays people package basic binaries like tar into a container, stating that it's a platform-agnostic solution. Sometimes some people are just incompetent and know docker pull as the only solution.
AI is one of the most powerful tools available today, and as a heavy user, I’ve seen firsthand how transformative it can be. However, there’s a trend right now where companies are trying to force AI into everything, assuming they know the best way for you to use it. They’re focused on marketing to those who either aren’t using AI at all or are using it ineffectively, promising solutions that often fall short in practice.
Here’s the truth: the real magic of AI doesn’t come from adopting prepackaged solutions. It comes when you take the time to develop your own use cases, tailored to the unique problems you want to solve. AI isn’t a one-size-fits-all tool; its strength lies in its adaptability. When you shift your mindset from waiting for a product to deliver results to creatively using AI to tackle your specific challenges, it stops being just another tool and becomes genuinely life-changing.
So, don’t get caught up in the hype or promises of marketing tags. Start experimenting, learning, and building solutions that work for you. That’s when AI truly reaches its full potential.
I think there's specific industrial problems for which AI is indeed transformative.
Just one example that I'm aware of is the AI-accelerated Nazca Lines survey that revealed many more geoglyphs than we were previously aware of.
However, this type of use case just isn't relevant to most people, whose reliance on LLMs is "write an email to a client saying xyz" or "summarise this email that someone sent to me".
One of my favorite examples is "smart paste". Got separate address information fields? (City, state, zip, etc.) Have the user copy the full address, click "Smart paste", feed the clipboard to an LLM with a prompt to transform it into the data your form needs. Absolutely game-changing imho.
Or data ingestion from email - many of my customers get emails from their customers that have instructions in them that someone at the company has to convert into form fields in the app. Instead, we provide an email address (some-company-inbound@myapp.domain) and we feed the incoming emails into an LLM, ask it to extract any details it can (number of copies, post process, page numbers, etc) and have that auto fill into fields for the customer to review before approving the incoming details.
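The "smart paste" idea above can be sketched in a few lines. This is a rough illustration, not anyone's actual implementation: `call_llm` is a stub standing in for whatever chat-completion API you use, and the field names and prompt are invented for the example.

```python
import json

# Hypothetical prompt: ask for JSON only, so the reply can be parsed directly.
PROMPT = (
    "Extract these fields from the address below and reply with JSON only, "
    "using null for anything missing: street, city, state, zip.\n\n"
    "Address:\n{address}"
)


def call_llm(prompt: str) -> str:
    # Stub for illustration; a real version would call your chat API here.
    return ('{"street": "123 Main St", "city": "Springfield", '
            '"state": "IL", "zip": "62704"}')


def smart_paste(clipboard_text: str) -> dict:
    reply = call_llm(PROMPT.format(address=clipboard_text))
    fields = json.loads(reply)
    # Keep only the keys the form actually has, so a chatty or malformed
    # reply can't inject unexpected fields into the UI.
    allowed = {"street", "city", "state", "zip"}
    return {k: v for k, v in fields.items() if k in allowed}


print(smart_paste("123 Main St, Springfield, IL 62704"))
```

The review-before-approve step mentioned for the email workflow matters for the same reason as the key filter: the model's output is a suggestion to validate, not trusted data.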
So many incredibly powerful use-cases and folks are doing wasteful and pointless things with them.
That's really neat, thanks for sharing that example.
In my field (biochemistry), there are also quite a few truly awesome use cases for LLMs and other machine learning stuff, but I have been dismayed by how the hype train on AI stuff has been working. Mainly, I just worry that the overhyped nonsense will drown out the legitimately useful stuff, and that the useful stuff may struggle to get coverage/funding once the hype has burnt everyone out.
I think of AI like I do apps: every company thinks they need an app now instead of just a website. They don't, but they'll sure as hell pay someone to develop an app that serves as a walled garden front end for their website.
Most companies don't need AI for anything, and as you said: they are shoehorning it in anywhere they can without regard to whether it is effective or not.
In film school (25 years ago), there was a lot of discussion around whether or not commerce was antithetical to art. I think it’s pretty clear now that it is. As commercial media leans more on AI, I hope the silver lining will be a modern Renaissance of art as (meaningful but unprofitable) creative expression.
The issue is that the 8 hours people spend in "real" jobs are a big hindrance and could be spent doing the art instead, and most of those ghouls now want us to do overtime for the very basics. Worst case scenario, it'll be a creativity drought, with idea guys taking the place of real artists by using generative AI. Best case scenario is the AI boom totally collapsing and all commercial models becoming expensive to use. Seeing where the next Trump administration will take us, it's a second gilded age + heavy censorship + potential deregulation around AI.
You are exaggerating under the assumption that there will exist a social and economic system based on greed and death threats, which sounds very unreali-- Right, capitalism.
The reassuring thing is that AI actually makes sense in a washing machine. Generative AI doesn't, but that's not what they use. AI includes learning models of different sorts. Rolling the drum a few times to get a feel for weight, and using a light sensor to check water clarity after the first time water is added lets it go "that's a decent amount of not super dirty clothes, so I need to add more water, a little less soap, and a longer spin cycle".
They're definitely jumping on the marketing train, but problems like that do fall under AI.
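The kind of sensor-driven logic described above is roughly this. A toy sketch with made-up thresholds, not any manufacturer's actual algorithm:

```python
def plan_cycle(load_kg: float, turbidity: float) -> dict:
    """Pick cycle parameters from two sensor readings.

    load_kg:   weight estimated from drum-roll inertia
    turbidity: water clarity after the first fill, 0 (clear) to 1 (murky)
    """
    water_l = 10 + 6 * load_kg            # more clothes, more water
    soap_ml = 20 + 40 * turbidity         # dirtier water, more soap
    spin_min = 8 if load_kg > 5 else 5    # heavier loads spin longer
    return {"water_l": round(water_l, 1),
            "soap_ml": round(soap_ml),
            "spin_min": spin_min}


print(plan_cycle(load_kg=4.0, turbidity=0.2))
# {'water_l': 34.0, 'soap_ml': 28, 'spin_min': 5}
```

Whether a couple of thresholded sensor readings deserves the "AI" label is exactly the argument below.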
No, problems like "how dirty is this water" do not fall under AI. It's a pretty simple variable of the type software has been dealing with since forever.
Respectfully, there’s no universe in which any type of AI could possibly benefit a load of laundry in any way. I genuinely pity anyone who falls for such a ridiculous and obvious scam
My parents got a new washer and dryer and they are wifi enabled. Why tf do they need to be wifi enabled? It won't move the laundry from the washer to the dryer, so it's not like you can set the laundry and then go about your errands and come home to dry clothes ready to be folded
Honestly I find this feature of my washer/dryer super useful because it reminds me to turn the stuff over instead of forgetting and letting it sit in the washer getting mildewy
Like others mentioned, this one actually makes sense. Letting you know your washing is done so you can move it to the dryer, and letting you know it's dry already so you can fold it, is actually super helpful. I studied at a uni that had a connected laundry room, so I didn't have to go all the way there to check if the machine was done with my laundry.
A notification that your load is done is actually convenient. It's typically also paired with some sensors that can let you know if you need more detergent or to run a cleaning cycle on the washer.
Mine also lets you set the wash parameters via the app if you want, which is helpful for people who benefit from the accessibility features of the phone. Difficult to adjust the font size or contrast on a washing machine, or hear its chime if you have hearing problems.
I mean, they have Alexa-connected refrigerators with a camera inside the fridge that sees what you put in it and how much, to either let you know when you're running low on something, ask to put in an order for more of that item before you run out, tell you if something in there is about to spoil, or tell you if the fridge needs cleaning, so I imagine a washer would do something similar?
I think AI is a great tool if used properly. However, it should be a background tool. The second you advertise it to the end consumer, it's going to be dogshit.
If someone asks me to build a sort-function for their table, I'm not gonna write an email: "Yes and I actually used radix sort for the table contents which makes it extremely fast and performant!!!". I'm writing: "Done".
The end consumer doesn't give a shit how it works, as long as it works.
They're not advertising to existing consumers, they're trying to attract new ones. If someone is shopping around for sort functions and yours says it uses radix to make it faster and more performant, then it's likely going to be a good selling point. Similarly, they put "we use AI, so you know it's good" on everything because they think that's also a good selling point.
I just moved. The number of companies where I had to argue with an AI phone system that refused to let me speak to an actual person in the past month is more than 10. I'm sure that made costs cheaper, but I know that made their value go down to me.
At this point, I'm full on ready to make "thou shalt not make a machine in the likeness of a human mind" global international law and a religious commandment. At least that way, we can burn all AI grifters as witches!
Forcing AI into everything maximizes efficiency, automates repetitive tasks, and unlocks insights from vast data sets that humans can't process as effectively. It enhances personalization in services, driving innovation and improving user experiences across industries. However, thoughtful integration is critical to avoid ethical pitfalls, maintain human oversight, and ensure meaningful, responsible use of AI.
It's worse. The industry needs to entrench LLMs and other AI so that after the bubble bursts it's so grafted into everything that it can't easily be removed, so afterwards everybody still needs to go through them and pay a buck.
Basically it's like a tick that needs to dig in deep now so you can't get rid of it later.
But the companies must posture that they're on the cutting edge! Even if they only put the letters "AI" on the box of a rice cooker without changing the rice cooker.
When it comes to the marketing teams in such companies, I wonder what the ratio is between true believers and "this is stupid but if it spikes the numbers next quarter that will benefit me.”
I hate what AI has become and is being used for. I strongly believe that it could have been used way more ethically; a solid example is Perplexity, which shows you the sources being used at the top, the first thing you see when it gives a response. The opposite of this is everything else. Even Gemini, despite it being rather useful in day-to-day life when I need a quick answer to something and I'm not in a position to hold my phone, like driving, doing dishes, or yard work with my earbuds in.
Yes, you're absolutely right. The first StarCoder model demonstrated that it is in fact possible to train a useful LLM exclusively on permissively licensed material, contrary to OpenAI's claims. Unfortunately, the main concerns of the leading voices in AI ethics at the time this stuff began to really heat up were a) "alignment" with human values / takeover of super-intelligent AI and b) bias against certain groups of humans (which I characterize as differential alignment, i.e. with some humans but not others). The latter group has since published some work criticizing genAI from a copyright and data dignity standpoint, but their absolute position against the technology in general leaves no room for re-visiting the premise that use of non-permissively licensed work is inevitable. (Incidentally they also hate classification AI as a whole; thus smearing AI detection technology which could help on all fronts of this battle. Here again it's obviously a matter of responsible deployment; the kind of classification AI that UHC deployed to reject valid health insurance claims, or the target selection AI that IDF has used, are examples of obviously unethical applications in which copyright infringement would be irrelevant.)
There is a Russian phrase, "a fight between a beaver and a donkey", which loosely means a fight between two shits. Copyright is cancer and the capitalist abuse of genAI is cancer.
"AI" isn't ready for any type of general consumer market and that's painfully obvious to anyone even remotely aware of how it's developing, including investors.
...but the cost benefit analysis on being first-to-market with anything even remotely close to the universal applicability of AI is so absolutely insanely on the "benefit" side that it's essentially worth any conceivable risk, because the benefit if you get it right is essentially infinite.
CEOs get FOMO. They can get funding for their companies if they share "new, exciting innovations" for their products, and AI is that, even if it's force-fed where it doesn't fit.
No lie, I actually really love the concept of Microsoft Recall. I've got the adhd and am always trying to retrace my steps to figure out problems I solved months ago. The problem is, for as useful as it might be, it's just an attack surface.
It's Apple Intelligence which is baked into iOS 18 (disabling it gave me 20-25% of my battery back)
It's in Arc browser (easily disabled)
If you're able to avoid it altogether and not be forced to constantly disable it everywhere, I commend whatever you're doing. I see it scattered everywhere and I consider myself a niche user that runs their own Lemmy instance and doesn't actively use any of the big social networks.
People are complaining because it's permeating everything while offering little to no value to the end user. The massive divide has arrived where the value to the shareholders is all that matters, and the tech companies doing it aren't even remotely thinking about the user experience or benefit. I've been a dev in tech for 18+ years and I've never seen the field this desperate and stagnant when it comes to good ideas.
There's also the fact that it's being used to replace artists and it's basically a massive plagiarism machine. OpenAI tried to claim that their AI is the equivalent of a learning human, but actual learning humans aren't trying to convert every single thought and interaction into billions of dollars worth of profit for corporations.
Maybe I am just good at ignoring it. I don't use a whole lot of mainstream websites or, like you, big social media. I think people are just disproportionately annoyed by things they don't have to use.
Chatbots aren't anything new, if anything them being slightly better isn't really a bad thing.
I think Windows mentioned Cortana/Copilot being there, but I use Open-Shell, and outside of the day I installed Windows 11 it hasn't even mentioned it. Am I just not being targeted for ads for it? Literally not once has Windows shoved it in my face, but people complain about it frequently as if Copilot were launching a full-size ad window every time they turn on their computer.
The thing about it being little value to the end user does seem fair. I actually enjoy amusing myself with image generation but that's about all I use it for. Don't really care about whiny artists, especially since everyone complains that it isn't good enough to be real art but is also somehow good enough to replace good artists (??).
Is there a way to fight back? Like, I don't need Adobe in my Microsoft Word at work. Can I just make a script that constantly demands AI content from it that is absolute drivel, and set it running over the weekend while I'm not there? To burn up all their electricity and/or processing power?
They would probably detect that and limit your usage.
Even not using their service still leaves its pollution. IMO the best way to fight back is to support higher pollution taxes. Crypto, AI, whatever's next - it should be technology agnostic.