Actually I use it as a starting point for fungi. Seek will usually get me to the genus, and from there I can cross reference various books to narrow it down. Hell, sometimes it'll give me an exact match, and then I just have to perform a yes or no ID with my field guides. That being said, I mostly end up with no, I'm shit scared of all amanitas and most mushrooms just aren't tasty enough to warrant the effort.
I have heard that spore prints are a reliable way of helping determine mushroom species (removing the stem, placing the cap gill-side down on paper for several hours so the spores drop out, then comparing the color and pattern of the resulting print with those of known species).
I bet an AI could analyze that data pretty well. But since there's really no market for such a product, if I want it, I would have to make it myself. In which case I highly advise against using it because I really don't trust me.
I don't actually know if it's considered a deepfake when it's just a voice, but I've been using the hell out of Speechify, which basically deepfakes voices and pairs them with a text input.
...so... nursing school, we have an absolute fuck-ton of reading assignments. Staring at a page of text makes my brain melt, but thankfully nowadays everything's digital, so I can copy entire chapters at a time and paste them into Speechify. Now suddenly I have Snoop Dogg giving me a lecture on how to manage a patient as they're coming out of general anesthesia. Gets me through the reading fucking fast, and I retain so, SO much more than when I'm just trying to cram a bunch of flavorless text.
That's also the business model behind ad localization now, they'll pay the actor once for appearing on set and then pay them royalties to keep AI editing the commercial to feature different products in different countries.
I think it comes down more to understanding what the tech is potentially good at, and executing it in an ethical way. My personal use is one thing; but Speechify made an entire business out of it, and people aren't calling for them to be burned to the ground.
As opposed to Google's take of "OMG AI! RUB EVERYONE'S NOSE IN IT, THEY'RE GONNA LOVE IT!" and just slapping it onto the internet, then pretending to be surprised when people ask for a pizza recipe and it tells them to add Elmer's Glue to it...
Two controlled inputs giving a predictable output, vs. just letting it browse 4chan and seeing what happens. The tech industry definitely seems to lean toward the latter, which is fucking tragic, but there are gems scattered throughout the otherwise pure pile of shit that LLMs are at the moment.
Do not use ai for plant identification if it actually matters what the plant is.
Just so ppl see this:
DO NOT EVER USE AI FOR PLANT IDENTIFICATION IN CASES WHERE THERE ARE CONSEQUENCES TO FAILURE.
For walking along and seeing what something is, that’s fine. No big deal if it tells you something’s a turkey oak when it’s actually a pin oak.
If you’re gonna eat it or think it might be toxic or poisonous to you, if you want to find out what your pet or livestock ate, if you in any way could suffer consequences from misidentification: do not rely on ai.
You could say the same about a plant identification book.
It's not so much that AI for plant identification is bad, it's that the higher the stakes, the more confident you need to be. Personally, I'm not going foraging for mushrooms with either an AI-based plant app or a book. Destroying angel mushrooms look pretty similar to common edible mushrooms, and the key differences can disappear depending on the circumstances. If you accidentally eat a destroying angel mushroom, the symptoms might not appear for 5 to 24 hours, and by then it's too late. Your liver and kidneys are already destroyed.
But I think you could design an app to be at least as good as a book. I don't know if normal apps do this, but if I made a plant identification app, I'd have it identify the plant and then provide a checklist for the user to confirm the ID for themselves. If you did that, it would be just like having a friend suggest checking out a certain page in a plant identification book.
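I don't know of a real app that works this way, but as a sketch of the idea (every name and checklist item here is invented for illustration), the identify-then-confirm flow could be as simple as:

```python
# Hypothetical sketch: the classifier proposes a species, but the app
# only reports a match if the user confirms every field-guide-style
# check for that species themselves.

CHECKLISTS = {
    "chanterelle": [
        "Blunt, forking false gills running down the stem?",
        "Solid (not hollow) white flesh when cut?",
        "Faint apricot smell?",
    ],
}

def identify(photo):
    """Stand-in for the real classifier call."""
    return "chanterelle"

def confirmed(species, user_answers):
    """True only if the user answered 'yes' to every checklist item."""
    checklist = CHECKLISTS.get(species)
    if not checklist or len(user_answers) != len(checklist):
        return False
    return all(ans == "yes" for ans in user_answers)

species = identify(photo=None)
print(confirmed(species, ["yes", "yes", "yes"]))  # every check confirmed
print(confirmed(species, ["yes", "no", "yes"]))   # one failed check rejects the ID
```

The point of the design is that a single "no" throws the ID out entirely, which is exactly how a cautious friend with a field guide would treat it.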
The problem with AI is that it's garbage in, garbage out. There are some AI-generated books on Amazon now for mushroom identification, and they contain some pretty serious errors. If you find a book written by an actual mycologist that has been well curated and referenced, that's going to be an actually reliable resource.
If you're using the book correctly, you couldn't say the same thing. Using a flora book to identify a plant requires learning about morphology, and having that alone already puts you significantly closer to accurately identifying most things. If a dichotomous key asks whether the terminating leaflet is sessile vs. not sessile, and you're actually looking for that on the physical plant, your quality of observation is so much better than just photographing a plant and throwing it up on iNaturalist.
The difference between a reference guide for plant identification, written and edited by experts in the field for the purpose of helping a person understand the plants around them, and the AI is that one is expressly and intentionally created with its goal in mind and at multiple points had knowledgeable, skilled people looking over its answers; the other is complex Mad Libs.
I get that it’s bad to gamble with your life when the stakes are high, but we’re talking about the difference between putting it on red and putting it on 36.
One has a much, much higher potential for catastrophe.
Like I get what you're saying but this is also hysterical to the point that people are going to ignore you.
Don't use AI ever if there are consequences? Like I can't use an AI image search to get rough ideas of what the plant might be as a jumping off point into more thorough research? Don't rely solely on AI, sure, but it can be part of the process.
The blanket term "AI" has set us back quite a lot I think.
The plant thing and the deepfakes/search engines/chatbots are two entirely different types of machine learning algorithm. One focussed on distinguishing between things, the other focussed on generating stuff.
But "AI" is the marketable term, and the only one most people know. And so here we are.
I particularly "Love" that a bunch of like, procedural generation and search things that have existed for years are now calling themselves "AI" (without having changed in any way) because marketing.
You're talking about types of machine learning algorithms. Is that a more precise term that should be used here instead of AI? And would the meme work better if it was used? I'm asking because I really don't understand these things.
There are proper words for them, but they are ~technical jargon~. It is sufficient to know that they are different types of algorithm, only really similar in that both use machine learning.
And would the meme work better if it was used
No because it is a meme, and if people had learned the proper words for things, we wouldn't need a meme at all.
Likely transformers now (I think SD3 uses a ViT for text encoding, and ViTs are currently one of the best model architectures for image classification).
It's particularly annoying because those are all AI. AI is the blanket term for the entire category of systems that are man made and exhibit some aspect of intelligence.
So the marketing term isn't wrong, but referring to everything by its most general category is error-prone and makes people who know or work with the differences particularly frustrated.
It's easier to say "I made a little AI that learned how I like my tea", but then people think of something that writes full sentences and tells me to put dogs in my tea. "I made a little machine learning based optimization engine that learned how I like my tea" conveys it much less well.
AI is the new flavor, just like 2.0, SIM-everything, VIRTUAL-everything, and CYBER-everything were before. Eventually good use cases will emerge, and the junk will be replaced by the next buzzword.
Machine Learning as an invention has already been used for good, useful things. It's just that it never got caught up in hype like the modern wave of Generative Transformers (which is apparently the proper term for those overhyped chatbots and picture generators)
We’re in that awkward part of AI where all the degenerates are using it in unethical ways, and it will take time for legislation and human culture to catch up. The early internet was a wild place too.
At least it's routing you to a department instead of trying to help you solve the issue yourself by showing you different help pages you already looked at before trying to contact support.
Could be. Classification is a type of problem. LLM is a type of model. You can use LLMs to solve classification problems. There's a good chance that's what's happening here.
I am a physicist. I am good at math, okay at programming, and not the best at using programming to accomplish the math. Using AI to help turn the math in my brain into functional code is a godsend in terms of speed, as it will usually save me a ton of time even if the code it returns isn't 100% correct on the first attempt. I can usually take it the rest of the way after the basis is created. It is also great when used to check spelling/punctuation/grammar (so using it like the glorified spellcheck it is) and formatting markup languages like LaTeX.
I just wish everyone would use it to make their lives easier, not make other people's lives harder, which seems to be the way it is heading.
With all the hot takes and dichotomies out there, it would be nice if we could have a nuanced discussion about what we actually want from AI right now.
Not all applications are good and not all are bad. The ideas that you have for AI are so interesting, I wish we could just collect those. Would be way more helpful than all the AI shills or haters rn.
nuanced discussion about what we actually want from AI right now.
👆
So on Bluesky, the non-free almost-Twitter Twitter replacement that's as anti-AI as X-Twitter is pro-AI: you see extreme anti-AI sentiment with zero room for any application of the tech, and I have to wonder if defining the tech is part of the problem.
They do want Gmail to filter spam, right?
They don’t hate plant ID apps, do they?
I’m guessing they mean “I don’t need ChatGPT, which was enabled by theft, and I don’t want chatbots in other apps either.”
But the way they talk, they effectively come out saying "don't filter spam!" At least arguably: it's not like every expert in the field would use the exact same definition, but I still doubt the average absolutist is fully aware of what their message may come across as.
I'm a good programmer but bad at math and can never remember which algorithms to use so I just ask it how to solve problem X or calculate Y and it gives me a list of algorithms which would make sense.
Yeah I've been using it to help my novice ass code stuff for my website and it's been incredible. There's some stuff I thought yeah I'm probably never gonna get around to this that I rocketed through in an AFTERNOON. That's what I want AI for. Not shitty customer service.
Great examples. The most valuable use for me has been writing SQL queries. SQL is not a part of my job description, but data informs choices I make. I used to have to ask a developer on my team all the questions I had and pull them off their core work to get answers for me, then I had to guess at interpreting the data and inevitably bug them again with all my follow-up questions.
I convinced the manager to get me read access to the databases. I can now do that stuff myself. I had very basic understanding of SQL before, enough to navigate the tables and make some sense of reading queries, but writing queries would have taken HOURS of learning.
As it is, I type in basics about the table structure and ask my questions. It spits out queries, and I run them and tweak as needed. Without AI, I probably would have used my SQL access twice in the past year and been annoyed at how little I was able to get, but as it is I’ve used it dozens of times and been able to make better informed decisions because of it.
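For what it's worth, the queries involved in this kind of workflow are usually nothing exotic: a join plus an aggregate. A toy version (table names, columns, and data all invented here, using SQLite so it's self-contained) might look like:

```python
import sqlite3

# Toy in-memory database standing in for the real tables
# (schema and data are made up for illustration).
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, region TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL);
    INSERT INTO customers VALUES (1, 'EU'), (2, 'EU'), (3, 'US');
    INSERT INTO orders VALUES (10, 1, 50.0), (11, 1, 25.0), (12, 3, 100.0);
""")

# The plain-language question would be "which regions spend the most?";
# this is the kind of query an assistant might hand back to run and tweak.
rows = conn.execute("""
    SELECT c.region, COUNT(o.id) AS n_orders, SUM(o.total) AS revenue
    FROM customers c
    JOIN orders o ON o.customer_id = c.id
    GROUP BY c.region
    ORDER BY revenue DESC
""").fetchall()

for region, n_orders, revenue in rows:
    print(region, n_orders, revenue)
```

With read-only access, the worst a tweaked query can do is return the wrong rows, which is part of why this use case is so low-risk: you can always sanity-check the output against the tables yourself.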
Old, niche videogames where the fanbase doesn't have the capacity to do it? Sure. James Cameron replacing Arnold with a UHD leather Muppet in True Lies? Not so much.
I've had to literally perform a Google search to find a customer support phone number before. Because the website of the company just kept redirecting me in circles.
Gethuman.com is my go-to. They used to be much better than they are now, but it's still routinely better than trying to navigate automated systems or find phone numbers myself
We need to strike back with an AI customer agent that sits through all the automated solutions for us and alerts us once we can finally talk or chat with a human.
I've had automated systems just straight up hang up on me when I ask for a customer service representative instead of actually linking me to one. Because it says it "couldn't understand me".
Using it for plant identification is fine as long as it's an AI designed/trained for plant ID (even then don't use it to decide if you can eat it). Just don't use an LLM for plant ID, or for anything else relating to actual reality. LLMs are only for generating plausible-sounding strings of text, not for facts or accurate info.
Best and easiest way is to reverse image search from a photo, it's easy to look through the results for yourself and see what actually matches (it's frequently not the first search result). Perhaps there's some kind of AI involved in reverse image search, but searching like this is infinitely preferable to me instead of some bot telling me an answer which may or may not be correct. It's not "convenient" if you actually care about the answer.
Expert level mushroom identification is a skill and I wouldn't recommend anyone use those apps and assume a mushroom is correct. (And especially don't eat it without absolutely verifying)
Plant identification - not concerned. Most normal people aren't ingesting a plant in the wild. And I mean if you're rubbing against a plant and get a reaction, ideally that's a lesson you only learn once.
Tbh anything that can give you a curated set of options, and some resources that can help you make the final decision is pretty incredible. But that's the thing about most AI - it needs some human vetting for good results, regardless of how powerful it is
I'd like all AI services to publish the energy used in training the model and performing inference.
"Queries use an average of X kWh of power.
A model training run requires X MWh, and the development of this model over the years required X TWh of power."
Then we could judge companies by that metric. Of course, rich people would look for the most power-draining model for the sake of it.
development of this model over the years required X TWh of power
This part is kind of hard to measure. When do you start counting? From the first work that informed the research direction eventually leading to this model? From the point where the concept of this final model first came about? Do you split the energy usage between multiple models that came from the same work?
That's something of a red herring. The source of that energy matters more than how much is used (use renewables where possible), so your ire is directed at entirely the wrong place. And how much energy is used by computers and datacentres doing other stuff? If I'm generating pictures I'm not playing games, which uses the same card, and probably more constantly.
I gotta congratulate you though, that's an argument that to my knowledge was NOT levelled against photography when that was invented. I mean, like all the other arguments, it's bollocks, but at least it's new! <pretty much every other argument against AI art was levelled at photography, and many of them at pre-mixed paints before that!>
By no means is this a new argument, and it's not aimed at individual use cases, so "if I don't AI, I game" doesn't really apply and is severely shortsighted.
I like using perplexity because the ads aren't in your face and it's pretty good at providing concise answers... And it doesn't fuck with my news feed every time I look up some random thing
I've learned that training a model to search your (company's) unmaintainable, unorganized, and continuously growing documentation storage is a godsend.
For something like Salesforce development, you've got the answer spread across their old framework docs, their new framework docs, their config settings reference page, and a couple stack overflow questions.
Copilot / Bing search has legitimately been incredibly helpful at synthesizing answers from those and providing sources so that I can verify, do more research, and ask follow up questions.
I'm still hoping for good customer support AI. If I'm going to be connected to someone who barely speaks English and is required to follow a prewritten script, or worse plays prerecorded messages to fake being fluent, I might as well talk to an AI, especially if it means shorter hold times.
AI is a bad replacement for good customer service, but it could be an improvement over bad customer service.
Glad you posted this, b/c I now have a follow up to a previous comment where I shared this from Klarna (amongst other tidbits):
So Klarna automated L1 support, did a good job at it, and saved money. Apparently they could've done it even earlier, without LLMs, and saved even more money.
Have you ever wanted L1 support? :)
Guess even if not it still could give reps more time to handle your queries if they’re not telling people to click “forgot my password” when they write in saying “hey I forgot my password”.
I just gave the chat bot that was put in place at the IT department where I work at a poke. It answered my question perfectly: "How do I print from my laptop to the library?" And it's not like the chat bot is the only route for support, but it does divert a lot of routine questions from our help desk so they can focus on questions that require a human touch. That could be people where a chat bot is not a good format or it could be a non-routine question.
Was looking into trying to find an AI to make stories from images, since I have to deal with the unfortunate reality that for a fandom I like, just about all the fanfic is unbelievably badly written to the point that an AI does a better job making interesting stories. I know they exist, just a question of where the ones that work are.
A simple ask, you'd think: find something that generates stories from images. Search engines hardly helped, so it was like, fine, I'll ask an AI about AI. Surely it'll help me find the tool I need, right?
Somehow the results it gave me were worse than the search engine itself.
Isn't GPT-4o (the multimodal model currently offered by OpenAI) supposed to be able to do things like this?
Don't get me wrong, I think you would be better served by taking this as a fun exercise to develop your imagination and writing skills. But since it's fanfic and presumably for personal, non-commercial purposes I would consider what you want to do to be a fair and generally ethical use of the free version of ChatGPT...
I am totally looking forward to AI customer support. The current model of a person reading a scripted response is painful and fucking awful and only rarely leads to a good resolution. I would LOVE an AI support where I could just describe the problem and it gives me answers and it only asks relevant follow up questions. I can't wait.
I already use LLMs to problem-solve issues that I'm having, and they're typically better than punching questions into Google. I admit that I've once had an LLM hallucinate while it was trying to solve a problem for me, but the vast majority of the time it has been quite helpful. That's been my experience at least. YMMV.
If you think LLMs suck, I'm guessing you haven't actually used telephone tech support in the past 10 years. That's a version of hell I wish on very few people.
The script doesn't go away when you replace a helpdesk operator with ChatGPT. You just get a script-reading interface without empathy and a severely hindered ability to process novel issues outside its protocol.
The humans you speak to could do exactly what you're asking for, if the business did not handcuff them to a script.
The script doesn't go away when you replace a helpdesk operator with ChatGPT. You just get a script-reading interface without empathy and a severely hindered ability to process novel issues outside its protocol.
The humans you speak to could do exactly what you’re asking for, if the business did not handcuff them to a script.
But they do handcuff them to a script.... at least 1st and 2nd level tech support. That's the point. It's so fucking awful. It's a barrier to keep you from the more highly paid tech support people who may actually be able to answer your questions. First you have to wait on hold to make sure you think it's worth wasting their time on your annoying problem, THEN it's a maze you have to navigate, and then whoops you just got hung up on.... so sorry, start all over! LLMs are (can be) so much better at this!
Going for a hike, seeing a nice plant and saying: I wonder what this plant is. And most of the time getting a correct answer.
If people are stupid enough to eat wild things based on any kind of unprofessional identification, it may just be proving that Darwin was onto something.
I realized pretty quickly that it wasn't a real person, but my elderly neighbor didn't. She told me how bad she felt just hanging up on this "person," but she just couldn't get them off the line. (I told her I had the same experience, and I'll warn her about AI Robots at a different time- I didn't want to make her feel foolish.)
AI is what we make it. That being said, there has not been proper filtering of the input for AIs' learning pools. The shotgun approach may be easiest and fastest, but it is not bestest.
The creation, curation, and maintenance of training data is a big industry in and of itself that has been around for years. Likewise, feature engineering is an entire sub-discipline of data science and engineering unto itself. I think you might be making the mistake of assuming that ChatGPT = AI.
The people downvoting you just aren't ready for a Big-G heavy future. I bet they are doing so with a purposeful grimace and a terrible sound, as they vote your banging godzilla memes down
It seems, by the comments, that everyone is quite enjoying what AI has to offer right now. I think if we phrase the question the right way, everyone is also quite excited to see where development goes in the future.
Why is it the trend right now to hate AI? Sure, the brands are pushing their half-baked products everywhere, but I think this is part of the journey. Good products can't happen without it.
Why the hate? Because 99% of what's AI now is actively harming society.
Training and running them consumes enormous amounts of energy, all the IP is within some gigantic monopolistic corporations, these corporations in turn push huge amounts of money into products that are not only bad, but dangerous (MS Recall or X's porn generator AI), other corporations use AI as excuses to fire thousands of people and letting their core products rot away.
Currently, AI has hardly any positive sides, and those positives are very very narrow. Overall it's a net negative.
Training and running them consumes enormous amounts of energy,
Right now, using GPUs and unoptimized chips. Running and building other services like Google Search and YouTube also took enormous amounts of power and still does, though it's vastly more efficient these days.
all the IP is within some gigantic monopolistic corporations,
This feels like a pretty knee jerk point instead of a well thought out one.
A) the biggest AI players are startups like OpenAI and Anthropic, which have gotten a lot of funding and attention but are neither giant nor monopolistic.
B) of the biggest monopolist companies (Apple, Google, Microsoft, and Meta), only Google and Apple are keeping their research closed, with both Microsoft and Meta publishing their models openly.
these corporations in turn push huge amounts of money into products that are not only bad, but dangerous (MS Recall or X's porn generator AI),
Literally the vast majority of software developers already use Copilot or a similar AI assistant. Bing search is genuinely useful for synthesizing answers and asking plain-language questions with sourced answers. People are finding ChatGPT useful or they wouldn't be paying for it. DeepMind has literally discovered novel protein structures that we never knew existed before. And VFX artists like Corridor Crew are using it to make wild videos way faster than they ever could before. This feels like you're just cherry-picking poor uses.
other corporations use AI as excuses to fire thousands of people and letting their core products rot away.
Capitalism does that with all forms of automation, whether it's AI based, or just normal, run of the mill, software / machines. It's how you end up producing the same products with less effort and manual labour. If you want to go back to hand milling flour you're more than welcome to, otherwise automation is going to continue. The answer to automation lies in the government and social safety nets, not blocking automation technology.
The tech is awesome already and getting developed extremely fast. Sure, there are many negatives, but many might just be growing pains.
On a wider scale: this is THE tech. This is the direction; it is inevitable. Of course it's important how the development happens; there are many nuances to it. What I don't like is these blanket hate statements against something.
Many people are using ai today and benefiting in personal life and business.
As a massive car enthusiast ChatGPT is a fucking GODSEND now that Google is completely shit.
I wanted to know what model of transmission was in a car; hours of googling only returned link after link after link of gearboxes to buy and parts for gearboxes to buy, all like "2000 - 2004 make and model". Now for those who don't know, gearboxes are often internally the same with different casings for different manufacturers or use cases, so "How much horsepower can a make and model gearbox support" is a waste of time, but "How much power can an Aisin A340e support" gets you the right info.
please use Bing Copilot instead of ChatGPT for this. it's the same language model underneath, largely, but the distinction of backing replies with actual sources and citing those sources in a way that allows you to click through and check the information you're getting is huge for a variety of important reasons.
Maybe deepfakes could teach people not to upload their faces to the internet. I pray for the day this comes and ruins all the picture-based social media, but the price could be too high.
Don't want AI bots? Stop calling 911 from the drive-thru when your fucking burger doesn't have enough ketchup on it you goddamn mouth breathing idiots. That's why they can't get to the heart attack victim, so that's why you're going to get a bot.
The sooner you see this as a reaction to a stimulus you are the source of, the sooner it goes away.
"Don't want automated looms? Stop buying clothes. Buy material and make your own, as your forefathers did. Surely your neighbors will be open to your message of time and effort instead of ease."
Stop assuming the tragedy of the commons can be avoided by scolding the people talking about wanting to avoid it.
You want a device that's only available on Starfleet ships to the crew, and probably limited to a subset of its higher-ranking members? A major difference between the holodeck and deepfakes is that what happens in the holodeck stays in the holodeck - unless it gets out, in which case it usually becomes an illustration of why it should have been kept in.
You want a device that's only available on Starfleet ships to the crew, and probably limited to a subset of its higher-ranking members?
Call me a Luddite, but I don't think going through a phase where some bad actors have the power to set every democracy back by centuries through misinformation, and other bad actors have an infinite kiddy porn machine, is worth it for what ultimately amounts to a luxury VR video game. Even if it could exist (the holodeck isn't a "technology", it's a narrative device), it would realistically be something only the ultra-rich could use (because let's face it, Star Trek's post-capitalist utopia isn't happening).