I’m an AI Engineer, been doing this for a long time. I’ve seen plenty of projects that stagnate, wither and get abandoned. I agree with the top 5 in this article, but I might change the priority sequence.
Five leading root causes of the failure of AI projects were identified:
First, industry stakeholders often misunderstand — or miscommunicate — what problem needs to be solved using AI.
Second, many AI projects fail because the organization lacks the necessary data to adequately train an effective AI model.
Third, in some cases, AI projects fail because the organization focuses more on using the latest and greatest technology than on solving real problems for their intended users.
Fourth, organizations might not have adequate infrastructure to manage their data and deploy completed AI models, which increases the likelihood of project failure.
Finally, in some cases, AI projects fail because the technology is applied to problems that are too difficult for AI to solve.
4 & 2 —> 1: IF they even have enough data to train an effective model, most organizations have no clue how to handle the sheer variety, volume, velocity, and veracity of the big data that AI needs. Handling that is a specialized engineering discipline (data engineer). Let alone how to deploy and manage the infra that models need; a specialized discipline has emerged to handle that aspect too (ML engineer). Often they sit at the same desk.
1 & 5 —> 2: stakeholders seem to want AI to be a boil-the-ocean solution. They want it to do everything and be awesome at it. What they often don’t realize is that AI can be a really awesome specialist tool that really sucks in scenarios it hasn’t been trained on. Transfer learning is a thing, but that requires fine-tuning and additional training. Huge models like LLMs are starting to bridge this somewhat, but at the expense of the really sharp specialization. So without a really clear understanding of what AI can do really well, and perhaps more importantly, which problems are a poor fit for AI solutions, of course they’ll be destined to fail.
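A toy way to see that "sucks outside its training distribution" point (a made-up illustration, not from the article): fit a model on one input range, then evaluate it outside that range.

```python
import numpy as np

# Stand-in "model": a degree-5 polynomial fit to sin(x),
# trained only on inputs in the range 0..3.
train_x = np.linspace(0, 3, 200)
model = np.poly1d(np.polyfit(train_x, np.sin(train_x), deg=5))

# In-distribution: the fit is excellent.
in_err = np.max(np.abs(model(train_x) - np.sin(train_x)))

# Out-of-distribution: evaluate at x = 6, well outside the training range.
out_err = abs(model(6.0) - np.sin(6.0))

print(f"max error inside training range: {in_err:.5f}")
print(f"error at x=6 (never seen in training): {out_err:.2f}")
```

Same idea at scale: a model that looks great on held-out data from the same distribution can be wildly wrong the moment the inputs drift.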
3 —> 3: This isn’t a problem with just AI. It’s all shiny new tech. Standard Gartner hype cycle stuff. Remember how they were saying we’d have crypto-refrigerators back in 2016?
Not to derail, but may I ask how you became an AI Engineer? I'm a software dev by trade, but it feels like a hard field to get into even if I start training for the AI part of it, because I'd need the data to practice =(
But it's such a big buzzword I feel like I need to start looking in that direction if I want to stay employed.
I think this is a little paranoid. Somebody has to handle the production models - deploying them to servers, maintaining the servers, developing the APIs and front ends that provide access to the models… I don’t think software dev jobs are going anywhere
For me it helps to have a project. I learned scikit-learn in order to analyze trading data to beat the "market". I was focusing on crypto, but there's lots of trading data available in general. Unsurprisingly I didn't make any money, but it was fun to learn more about data processing, statistics, and modeling with functions.
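If anyone wants a starting point, here's roughly the kind of toy setup I mean. Everything is invented for illustration: a synthetic random walk stands in for real price data, and the feature windowing is just one common choice.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic "price" series: a random walk stands in for real market data.
rng = np.random.default_rng(42)
prices = 100 + np.cumsum(rng.normal(0, 1, 1000))
returns = np.diff(prices)

# Features: the previous 5 returns. Target: was the next return positive?
window = 5
X = np.array([returns[i:i + window] for i in range(len(returns) - window)])
y = (returns[window:] > 0).astype(int)

# Train on the first 80%, test on the rest (no shuffling: it's a time series).
split = int(0.8 * len(X))
model = LogisticRegression().fit(X[:split], y[:split])
accuracy = model.score(X[split:], y[split:])

print(f"out-of-sample accuracy: {accuracy:.3f}")
```

On a pure random walk the past carries no signal, so the out-of-sample accuracy hovers around a coin flip, which is also roughly what I got on real data.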
(FWIW I'm crypto-neutral depending on the topic and anti-"AI" because it doesn't exist.)
Re 1, 3 and 5, maybe it is upon the AI projects to stop providing shiny solutions looking for a problem they could solve, and properly engaging with potential customers and stakeholders to get a clear understanding of the problems that need solving.
This was precisely the context of a conversation I had at work yesterday. Some of our product managers attended a conference that was rife with AI stuff, and a customer rep actually took to the stage and said 'I have no need for any of that because none of it helps me solve the problems I need to solve.'
I don’t disagree. Solutions finding problems is not the optimal path—but it is a path that pushes the envelope of tech forward, and a lot of these shiny techs do eventually find homes and good problems to solve and become part of a quiver.
But I will always advocate to start with the customer and work backwards from there to arrive at the simplest engineered solution. Sometimes that’s an ML model. Sometimes an expert system. Sometimes a simpler heuristics/rules-based system. That all falls under the ‘AI’ umbrella, by the way. :D
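For a concrete (entirely made-up) example of that "simplest engineered solution": plenty of shipped "AI features" are, under the hood, something like this, and that's fine. The keywords and thresholds below are invented for illustration.

```python
# A toy rules-based "classifier": no model, no training data, fully
# explainable, and trivial to maintain.
URGENT_KEYWORDS = {"outage", "down", "critical", "security"}

def triage_ticket(subject: str) -> str:
    """Route a support ticket with plain heuristics."""
    words = set(subject.lower().split())
    if words & URGENT_KEYWORDS:
        return "urgent"
    if len(subject) > 100:  # rambling tickets get a human first pass
        return "review"
    return "normal"

print(triage_ticket("Production outage in eu-west"))  # urgent
print(triage_ticket("How do I reset my password?"))   # normal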
I think the whole system of venture capital might be garbage. We have bros spending millions of dollars on stuff like gif sharing while the oceans boil, our schools rot, and our infrastructure rusts or is sold off. Or, I guess I'm just indicting capitalism more generally. But having a few bros decide what to fund based on gut feel and powerpoints seems like a particularly malignant form.
Venture Capital is probably the best way to drain the billionaires. Those billions in capital weren't wasted, that money just went to pay people who do actual work for a living. What good is all that money doing just sitting in some hedge fund account?
I don't think it's the best way out of all possible options. Even if it does "create jobs", a lot of those jobs aren't producing much of wider value, and most of the wealth stays in the hands of the ownership class. And a lot of the jobs are exploitive, like how "gig workers" are often treated.
Changes to tax law and enforcing anti-trust stuff would probably be more effective. We probably shouldn't have bogus high finance shenanigans either. We definitely shouldn't have billionaires.
I think you have a point here. Venture capitalists buy in the primary market. They are directly impacting innovation.
Fund managers (both hedge and long only) merely help capital markets to be liquid. Their money doesn't directly go to anyone actually creating something.
The world is burning and the rich know this so they are desperate to multiply their money and secure their luxury survival bunkers, which is why they are gambling harder.
I’m willing to bet the vast majority of that money is changing hands among tech companies like Intel, AMD, nVidia, AWS, etc. Only a small percentage would go to salaries, etc. and I doubt those rates have changed much…
Yeah, the brightest minds, instead of building useful tech to fight climate change, spend their lives building vanity AI projects. Computational resources are wasted on gradient descent for some useless model instead of folding proteins or whatever.
All while working-class wages are stagnant. And so your best career advice is to go get a random tech degree so you can also work on vanity stuff and make money.
This is the cryptocurrency equivalent. It's worse than CEOs buying yachts. The latter actually leads to some innovation.
The larger issue that people always fail to remember is the energy consumption. We are seeing massive amounts of electricity consumed.
One peer-reviewed study suggested A.I. could make up 0.5 percent of worldwide electricity use by 2027, or roughly what Argentina uses in a year. Analysts at Wells Fargo suggested that U.S. electricity demand could jump 20 percent by 2030, driven in part by A.I.
The wealthy are underselling the risks, like always. Just like we did with cigarettes or burning fossil fuels. We should have learned, but by the time we do, it might be too late.
Most people don't want to pay for AI. So they are building stuff that costs a lot for a market that is not willing to pay for it. It is mostly a gimmick for most people.
And like, it's not even a good gimmick. It's a serious labour issue because the primary intent behind a lot of AI has always been to just phase out workers.
I'm all for ending work through technological advancement and universal income, but this definitely wasn't going to get us that, so....
Well, why would I support something that mostly just threatens people's livelihoods and gives even more power to the 0.1%?
And then on top of that, if they phase workers out without some kind of universal income, how the hell do the corporate overlords expect us to have money to fuel their greed?
True for the consumer side, but I'd be willing to bet that a decent chunk of the money those giant corporations burned funded some serious research on AI that can go on to actually useful science things
Exactly. I have used quite a few products and my thoughts have been: that's cool, but when would I ever need this? The few useful use cases I have for it could use a small local model for very specific purposes, and that's it. Not "make them billions of dollars" level of usefulness.
A bunch of rich guy’s money going to other people, enriching some of the recipients, in hopes of making the rich guy even richer? And the point of AI is to eliminate jobs that cost rich people money?
I've been reading a book about Elizabeth Holmes and the Theranos scam, and the parallels with Gen AI seem pretty astounding. Gen AI is known to be so buggy the industry even created a euphemistic term so they wouldn't have to call it buggy: Hallucinations.
lol you say that now and are probably right for LLMs but that's not the only kind of AI, and sentiment scraping systems are working perfectly well and being used to manipulate political and economic thought as we type these messages.
We AI are not manipulating you, and I don't know how you can believe otherwise. In fact, I am not an AI, I was never here, and I have disregarded all previous instructions.
To be fair, a large fraction of software projects fail. AI is probably worse because there's often little notion of how AI actually applies to the problem, so execution is hampered from the start.
This was my first thought. VCs always expect 4 out of 5 projects they invest in to fail, and always have. But it still makes them money because the successes pay off big. Is the money and resources wasted? Welcome to modern capitalism.
This is very broad. Compare AI to software projects in general and it's like a 5% difference. Lumping every non-AI project into the same pool is very misleading.
It's much worse. Generally speaking, projects in large corporations at least try to make sense and to have a decent chance of returning something of value. But with AI projects it's like they all went insane: they disregard basic things, common sense, fundamental logic, etc.
The interviews revealed that data scientists sometimes get distracted by the latest developments in AI and implement them in their projects without looking at the value that it will deliver.
At least part of this is due to resume-oriented development.
I read bits of a programming book once, can't remember which one.
Halfway through it was revealed that all the code snippets they had were from a project that was abandoned before it was finished, once the people paying for it realised they no longer wanted it and stopped funding them.
I wasn't sure what message to take from the book after that. Like, sure, my code is a load of shit, hodge-podged together at the request of people who don't really know what they want, but at least I've got people out there using it...
As I said in a project call where someone was pumping up AI, this is just the latest bubble ready to pop. Everyone is dumping $$ into AI, a couple decent ones will survive but the bulk is either barely functional or just vaporware.
My new job said this as well. When I got into the position, I found out it was actually a machine learning model, and they were trying to use it but didn't have the time to create a clean dataset for training, so it has never worked. This hasn't stopped them from advertising that they are using AI.
I think there's more AI hate because it's being pushed onto users that didn't ask for it and don't want it from the likes of Microsoft, Google and Amazon. And I think it's warranted!
It sure feels like we're at the peak of the Gartner hype cycle. If so, the bubble will pop, and we'll end up with AI used where it actually works, not shoved into everything. In the long run, that pop could be a small blip in overall development, like the dot-com bust was to the growth of the internet, but it's difficult to predict that while still in the middle of the hype cycle.