I’m in the sector, and there are legitimate time and effort savings when it’s used correctly. Code refactoring gets a little smarter than a dumb script, boilerplate code is instantly generated, and real educational topics can be explored and analyzed.
I don’t want to see it closed off, and I want the data used to train them made public. These LLMs have capabilities older scripted systems can never match.
Eventually they will replace workers. Our society is too self-centered to make that a good thing.
I think they're neat, their development is fascinating to me, and they have their utility. But I am sick of executive and marketing types sloppily cramming them into every corner of every service just so they can tell their shareholders that it's "powered by AI". So far, I'll use a page or app dedicated to chatting with the LLM, and I've also found that GitHub Copilot in VS Code is pretty nifty sometimes for things like quickly generating docs that I can then just proofread and edit. But in most other applications and websites I don't use them at all, or I'm forced to and the experience is worse.

Recently, I've been having to work in Microsoft's Power Platform a bit for a client (help me). Almost every page in the entire platform has an AI chatbot on the side that's supposed to do some of the work for you. Don't use it. It fucks up your shit. Ask it to do something, and it will change your flow or whatever you're working with using the wrong syntax that won't even compile 9/10 times, with no opportunity to undo, and the remaining 1/10 is logic errors. Ask it questions about the platform, and not only will it not know anything, it will literally accuse you of not speaking English.
TL;DR: I think they're neat and useful IF they're used responsibly and implemented well. Otherwise they're a nuisance and a buzzword excuse at best, or dangerous at worst.
AI is the perfect tool to generate propaganda and fake news on a massive scale for governments and secret services. Humans may end up living in bubbles divorced from reality because of it.
It also is the perfect technology for censorship, sentiment analysis/monitoring and thought-control automation.
I love it for what I use it for: research, speeding up scripting and code writing, resume building, paraphrasing stupidly long news articles, teaching me Spanish and Japanese, bypassing the bullshit that passes for search engines these days, and talking my anxiety down. It cuts through the noise and boosts my productivity.
Using AI like DeepSeek is a lot easier than sifting through 50 search results, though if the question is about a relatively new technology it usually doesn't work.
Most GenAI was trained on material they had no right to train on (including plenty of mine). So I'm doing my small part, and serving known AI agents an infinite maze of garbage. They can fuck right off.
Now, if we're talking about real AI, one that isn't just a server park of disguised Markov chains in a trenchcoat, with neural networks that weren't trained on stolen data, that's a whole different story.
I think it's fine if used in moderation. I use mine for doing the mindless day-to-day stuff like writing cover letters or business-type emails. I don't use it for anything creative though, just to free myself up to do that stuff.
I also suck at coding so I use it to write little scripts and stuff. Or at least to do the framework and then I finish them off.
It's bullshit. It's inauthentic. It can be useful for chewing through data, but even then the output can't be trusted. The only people I've met who are absolutely thrilled by it are my bosses, who are two of the most frustrating, stupid, pig-headed, petty people I've ever met. I wish it would go away. I'm quitting my job next week, taking a big paycut and barely being able to pay the bills, specifically because those two people are unbearable. They also insist that I use AI as much as possible.
Let me know when we have some real AI to evaluate rather than products labeled as a marketing ploy. Anyone remember when everything had to be called "3D" because it was cool? I missed my chance to get 3D stereo cables.
Generative AI is just an advanced chatbot, a toy that uses too much power to be efficient.
My personal experience is that any output has to be double checked and edited. It would be better to just do whatever I asked it to do from the beginning. When it can fact check itself and cite sources, then it might become useful.
An AI that can comb through vast amounts of data and return the specific data relevant to the question presented, rather than a generative AI, might be useful. But it can’t analyze data very well at the current moment. It hallucinates too much.
I personally hate the path that AI is going down. Generative AI steals art and scrapes text to create garbage on demand, using power and computing resources that could be spent on better purposes, such as simulating protein folding for disease research (see Folding@home). u/[email protected] gave some good uses of AI.
To be honest, I think it's a severe mistake that AI is continuing to improve. As long as you aren't gullible and know what to look for, you can tell when something is AI generated, but there are too many people who are easily fooled by AI-generated images and videos. When ChatGPT released, I thought it was a nice toy, but now that I know the methods by which such large-scale models obtain their training data, I can only resent it. So long as generative models continue to improve in the accuracy of their text and images, so will my hatred toward them in turn.
P.S.: don't use the term "AI art", for the love of God. Art captures human emotions and experiences; machines can't understand them, they are only silicon. Only humans can create art, nothing else.
Pretty cool technology ruined by greed. If we don't get this under control (which we probably won't), we're in for a pretty interesting age of the Internet, maybe even the last one.
I'm a layman in terms of AI, but I think it can be a useful tool if used in the proper context. I use it when I struggle to find something by regular internet search. The fact that you can search in a conversational style and specify what you need as you go is great.
I feel it is pushed into contexts where it has no place and where its usefulness is limited or counterproductive.
Then there is the question of the improper use of copyrighted material, which is terrible.
As a tool for reducing our societal need to do hard labor, I think it is incredibly useful. As it is generally used in America, I think it is an egregious form of creative theft that threatens to replace a large range of the working class in our nation.
I would probably be a bit more excited if it didn't start coming out during a time of widespread disinformation and anti-intellectualism.
I just come here to share animal facts and similar things, and the amount of reasonably realistic AI images, poorly compiled "fact sheets", and recently also passable videos of non-real animals is very disappointing. It waters down basic facts as it blends into more and more things.
Stuff like that is the lowest level of bad in the grand scheme of things. I don't even like to think of the intentionally malicious ways we'll see it used. It's going to be the robocaller of the future, not just spamming our landlines but everything. I think I could live without it.
I kinda wish we had one on Lemmy that summarized articles, since we don't have the userbase of Reddit (there's always some dude summarizing the facts without the fluff in the comments). AI is good at summaries.
AI is a tool, and a lot of fields used it successfully prior to the ChatGPT craze. It's excellent for structured extraction and comprehension, and it will slowly change the way most of us work, but there's a hell of a craze right now.
I don’t think it’s useful for a lot of what it’s being promoted for—its pushers are exploiting the common conception of software as a process whose behavior is rigidly constrained and can be trusted to operate within those constraints, but this isn’t generally true for machine learning.
I think it sheds some new light on human brain functioning, but only reproduces a specific aspect of the brain—namely, the salience network (i.e., the part of our brain that builds a predictive model of our environment and alerts us when the unexpected happens). This can be useful for picking up on subtle correlations our conscious brains would miss—but those who think it can be incrementally enhanced into reproducing the entire brain (or even the part of the brain we would properly call consciousness) are mistaken.
Building on the above, I think generative models imitate the part of our subconscious that tries to “fill in the blanks” when we see or hear something ambiguous, not the part that deliberately creates meaningful things from scratch. So I don’t think it’s a real threat to the creative professions. I think they should be prevented from generating works that would be considered infringing if they were produced by humans, but not from training on copyrighted works that a human would be permitted to see or hear and be affected by.
I think the parties claiming that AI needs to be prevented from falling into “the wrong hands” are themselves the most likely parties to abuse it. I think it’s safest when it’s open, accessible, and unconcentrated.
No joke, it will probably kill us all. The Doomsday Clock is citing Fascism, Nazis, Pandemics, Global Warming, Nuclear War, and AI as the harbingers of our collective extinction.
The only thing I would add is that AI itself will likely speed-run and coordinate these other world-ending disasters. It's both humanity's greatest invention and our assured doom.
Death. Kill 'em all. Butlerian jihad now. Anybody trying to give machines even the illusion of thought is a traitor to humanity. I know this might sound hyperbolic; it's not. I am not joking rn. I mean it.
Whatever that means, it sounds based (I've been meaning to play Stellaris for ages but haven't really gotten around to it since the one game I played back in like 2018 when I bought it)
Yes it is, simply due to the nature of the "training"/"learning" process, which is learning in name alone. If you know how this mathematical process works, you know the machine's definition of success is how well its output matches the data it was trained with. The machine is effectively trying to encode its training database onto its nodes. I would recommend you inform yourself on how the "training" process actually works, down to the mathematical level.
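To sketch the mathematical process described above in miniature (a toy one-parameter model, not any particular production system): "training" just means repeatedly nudging the parameters so that a loss, which measures how far the output is from the training data, gets smaller.

```python
# Toy gradient descent: fit y = w*x to points sampled from y = 2*x.
# The model's only notion of "success" is the squared error between
# its output and the training data, exactly as described above.

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # (input, target) pairs
w = 0.0    # single trainable parameter ("node weight")
lr = 0.01  # learning rate

for epoch in range(1000):
    for x, y in data:
        pred = w * x
        grad = 2 * (pred - y) * x  # derivative of (pred - y)**2 w.r.t. w
        w -= lr * grad             # step toward lower loss

print(round(w, 3))  # converges to 2.0, i.e. the training data memorized
```

Scaled up to billions of parameters, the loop is fancier (batches, momentum, backpropagation through many layers), but the objective is the same: match the training data as closely as possible.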
AI using as much energy as crypto, and the AI = crypto mindset in general
AI is often pushed by the same people who pushed NFTs and whatnot, so this is somewhat understandable. And yes, AI consumes a lot of energy and water. Maybe not as much as crypto, but still, not something we can afford to use for mindless entertainment in our current climate catastrophe.
AI art “having no soul”
Yup. AI "art" works by finding pixel patterns that repeat with a given token. Due to it's nature, it can only repeat patterns which it identified in it's training data. Now, we have all heard of the saying "An image in worth a thousand words". This saying is quite the understatement. For one to describe an image down to the last detail, such detail that someone who never saw the image could perfectly replicate it, one how need more than a thousand words, as evidenced by computer image files, since these are basically what was just described. The training data never has enough detail to describe the whole image in such detail and therefore it is incapable of doing anything too specific.
Art is very personal, the more of yourself you put into a piece, the more unique and "soulful" it will be. The more of the work you delegate to the machine, the less of yourself you can put into the piece, and if 100% of the image generation was made by the machine, which is in turn simply calculating an average image that matches the prompt, then nothing of you is in the piece. It is nothing more than the maths that created it.
Simple text descriptions do not give the human meaningful control over the final piece, and that is why pretty much any artist worth their title is not using it.
Also, the irony that we are automating the arts, something which people enjoy doing, instead of the soul degrading jobs nobody wants to do, should not be lost on us.
“Peops use AI to do «BAD THING» , therefour AI ISZ THE DEVILLLL ‼‼‼”
It is true that AI is being used in horrible ways that will take some time to adapt to; it is simply that the negative usages of AI have more visibility than the positive usages. As a matter of fact, this neural network technology was already in use in many fields before the ChatGPT-induced AI hype train.
can’t trust anti AI peops to actually criticise the tech
Correct. It is well known that those who stand to financially benefit from the success of AI are more than willing to lie about its true capabilities.
AI consumes a lot of energy and water. Maybe not as much as crypto, but still, not something we can afford to use for mindless entertainment in our current climate catastrophe.
Activities like eating beef use more energy than AI models , therefour they contribute more to climate change (in whatever likely negligible ways compared to corporate entities like Shell , who we shoud focus on wen reversing climate change) than use of AI models . But there's no widespread moral panic over individual's beef consumption (closest you'll get are some kinds of vegans , but they stay fringe) compared to AI use
Crypto's exceptional bcus it's literally wasting energy with very limited use cases
Peops shouldn't be calling for death of an entire medium based on some thing subjective like its outputs being "soulless"
Simple text descriptions do not give the human meaningful control over the final piece, and that is why pretty much any artist worth their title is not using it.
What about AI-assisted art , which has more "human meaningful control" than simple (txt2img prompt|inpaint)ing then ?
Also, the irony that we are automating the arts, something which people enjoy doing, instead of the soul degrading jobs nobody wants to do, should not be lost on us.
Wasz initially onef those peops that didn't think art was automatable . Turns out I was wrong . Also not every artist enjoys every part of the process . Any one who ever does art in any serious capacity knows that , am sure some would find spending hours upon hours tweaking (pose|composition|colours|lighting placement|.*) before getting to the "fun parts" "soul degrading" . While AI art models doesn't automate everything , it can automate those with varying success
can’t trust anti AI peops to actually criticise the tech
Correct. It is well known that those who stand to financially benefit from the success of AI are more than willing to lie about its true capabilities.
While I don't think you're wrong , that's not what I said
Wen I say can't trust anti AI art peops to criticise AI art tech , I'm including you . If you use AI "art" in scarequotes , you're part of the problem , most your criticisms are based on (things easily debunked|misinformation|subjectivities|.*)