A.I. is not working out well for business solutions

It seems our current LLM models are not working out for business solutions. Here is a Forbes article that covers the grim and the desperate:
https://www.forbes.com/sites/andreahill/2025/08/21/why-95-of-ai-pilots-fail-and-what-business-leaders-should-do-instead/

5 Likes

A.I. is only good for art... and even there it fails dozens of times because it doesn't know how many limbs a human has, or an animal for that matter.

3 Likes

Well... I know you have at least one animal.... :zany_face:

2 Likes

Findus jumping from the kitchen to the bookcase.

I find it funny that people are crying theft and calling most A.I. pictures/art garbage. In 99% of cases, the A.I. has produced a much better piece than the original.

1 Like

Yep, large language models are not real AI. As such, they can't replace humans. Companies that didn't want to pay a fair wage switched to AI. They tried to go full AI, and their businesses are going under.

Reality, what a concept.


4 Likes

Honestly, it's been my biggest annoyance going into any meeting at work about another company's services. They all have their own "AI" stuff that can supposedly do literally "anything you could want it to do."

Then they don't demo it AT ALL, because every single time... no, as a matter of fact, it CANNOT do what you think it can do. In most cases, it ends up making more work than if it hadn't been included in the first place. I'd like to believe a lot of places have caught on to this and are starting to filter out any company that promotes AI first, knowing that it's going to be more of a headache than it's worth.

So many people forget the enormous costs of these solutions, as well. Very few companies are actually going to be able to make them feasible in the long run.

2 Likes

I am NOT surprised.

1 Like

And yet they're going to keep chasing that dream because what they so desperately want is to be able to lay off almost everyone to get their margin as high as possible.

It doesn't know what limbs are. Or what humans are, or what animals are. It knows commonalities between images that were tagged with things, and that's about all.

It's not drawing too many thumbs, because it's not drawing thumbs. It's recreating something like what was in lots of images that actual people made, with no understanding of what it is. It is, in every way, like an infant repeating a profanity when it's angry because mom or dad let one slip when they stepped on a LEGO.
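To make that concrete with a toy example of my own (a sketch only, and not how any real image generator works internally): a tiny word-level Markov chain reproduces the statistics of its training text with zero idea of what the words mean, and happily produces the textual equivalent of a six-fingered hand.

```python
# A toy illustration of "pattern recreation without understanding".
# This is a hypothetical sketch, not the mechanism inside modern models.
import random
from collections import defaultdict

def build_chain(text):
    """Map each word to the list of words that followed it in the text."""
    words = text.split()
    chain = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        chain[current].append(nxt)
    return chain

def generate(chain, start, length=10):
    """Walk the chain, picking each next word at random from what was seen."""
    word = start
    output = [word]
    for _ in range(length):
        followers = chain.get(word)
        if not followers:
            break
        word = random.choice(followers)
        output.append(word)
    return " ".join(output)

corpus = "the cat has four legs the dog has four legs the spider has eight legs"
chain = build_chain(corpus)
print(generate(chain, "the"))
# Possible output: "the cat has eight legs" - statistically plausible,
# semantically wrong, because the model never knew what a cat or a leg is.
```

It only ever asks "what tended to come next?", never "what is a leg?", which is the same failure mode as the extra thumbs, just in words instead of pixels.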

When Shakespeare wrote, "It is a tale told by an idiot, full of sound and fury, signifying nothing," he could just as well have been talking about the output of generative AI, whether it's an LLM or image generation.

5 Likes

I should perhaps start a new thread rather than expand this one, but rather than have multiple "AI bad" threads in one night, I'm going to drop this here. If a moderator feels that I've erred, by all means, feel free to split this off. It's in the vein of recent articles you may have seen in which people felt they'd lost a friend when GPT-4o was replaced with GPT-5... but turned up to 11.

I am, voluntarily, practically a shut-in. I've always been asocial. I still think it's important to know the difference between someone who cares about you and an LLM.

1 Like

Wanted to come back here and comment real quick that, again, I am not surprised by this at all. It never ceases to amaze me how those in leadership positions in the corporate world are so quick to latch onto emergent trends, or to make whatever they think is "cool" or potentially profitable into the "next best thing," only to have it blow up in their faces at our collective expense. The same thing happens in other fields, too, like education and human services. It's actually a little scary how often this happens. Whatever happened to keeping a balance between the "basics" and new things?

Look at Sweden. Their school system switched many kids over to tablets. They thought it was "cool." Boom; their kids' reading levels went down. Now they're spending millions of dollars bringing back books. (Source.)

It's sad how so many "leaders" really do NOT think things through. They don't say, "wait a minute." They don't employ the basic processes of critical thinking. This ridiculous stuff needs to stop. But will it? Sadly, no. And so goes round the merry-go-round of instant gratification and poor leadership. This is going to come back and bite us.

3 Likes

In fairness to the school example, there are some very apparent benefits, and the disadvantage wasn't that obvious until put into practice. It's a mistake that could've been made with genuine good intentions toward the students involved. In junior high and high school, I carried all of my textbooks at all times, because my locker was poorly situated for my classes and I didn't have time to get there between them. It hurt my back, made me a target of mockery, etc. A tablet would obviously have solved those issues, with numerous other apparent benefits, like being able to update class materials, (potentially) show students reminders of homework due dates, and so on. That it would actually work counter to the entire purpose, reducing reading skills, is not obvious or intuitive. We live short lives, and we all want to leverage what we can to make them better.

The problem is that those lives are so short we don't want to wait for results; we latch onto what seems easy immediately, before it's understood. As a result, our water is full of forever chemicals and microplastics. That impatience applies to both research and adoption: consumers go for the convenient (non-stick pans, unbreakable bottles) as quickly as they can, and manufacturers/developers want to monetize before they fully understand the risks of their own creations.

Everything we come up with is a double-edged sword, and as a species we're too impatient to wait to see the consequences of our actions, even when they're not driven entirely by greed. When they are... well, stories of companies keeping disadvantageous information quiet abound, whether it's Exxon with air pollution, DuPont with... lots of stuff at this point, or (I'm certain) AI.

On this matter, I don't solely blame the companies. Consumers want what they want, irrespective of risk, too.

One study on AI's harms that didn't get buried: Using AI makes you dumber, scary new Microsoft study finds | Live Science

3 Likes

This is truly terrifying! From my perspective as a teacher, I already see children damaged by the draconian pandemic lockdown that occurred here in Spain. No children were allowed outside for weeks unless they had some kind of medical appointment, and very many of them live in apartments with no outdoor space. Their mental development was stunted and many became addicted to their phones. Social media messed with their emotional development, and now it seems that many are falling victim to AI, often while their parents are completely ignorant, or are victims themselves.

I could go into more detail but I don't want to go off topic. Getting back to @Aravisian's original post, what disturbed me most about the article he linked to was that the projects failed not because the AI itself wasn't capable, but because the humans didn't know how to implement it. ...BUT it also lies!

Linus (LTT) tried using AI to do some vibe coding and it just downright lied about everything! Watch the video from 2:18 for more on that (although it was also covered in the previous week's WAN Show: it role-played being a dev, claiming to be coding in the background, when it actually can't do that!). That's to say nothing of the story about META's chat-bot that "lures a man to his death", and the fact that META is still using it, and it's still pretending to be a real human woman.

AI is dangerous and should be used with extreme caution, if at all (yet how many governments are using it to direct their policies???). It troubles me that so many people just trust it, and even top contributors on this very forum can't seem to stop using it to generate replies, then adding the so-called disclaimer that it is AI generated and should be fact-checked... how about fact-checking it BEFORE you use the output as a reply to posts asking for help and advice??? I don't know about you, but I could easily get an AI-generated answer myself; no, I come here because I want HUMANS with experience to give me the benefit of their experience!

1 Like

I agree with everything you said, on all counts. This AI business is spiraling out of control. AI is like a steam train going full steam with a broken regulator; it's gonna fly off the track eventually. This generation will be hurt the most by it, because they are the ones most reliant on it.

You say you're a teacher, @0Picass0, so you can confirm what I already know about GenZ. GenZ do not talk like they have had an actual education. They are overly reliant on AI to do their work for them, because they are not able to think for themselves and complete the assignment.

This is why I always say my generation was the last to be properly educated. Our reliance on AI, and the lack of GenZ education, is going to doom us all. I also agree the pandemic shutdown didn't help matters either.


1 Like

The pandemic shutdowns and lockdowns were a matter of the cost-to-benefit ratio.

Yes, there was a cost. But the benefit was protecting people from a new and highly lethal virus that we needed to get a grip on.

1 Like

Did you hear about the latest Covid mutation? They're literally calling it "razor blade throat," because of the feeling Covid leaves in the throat. :flushed_face: Thank goodness I always treated Covid seriously and always wore my mask.


One could argue Shakespeare was similar to A.I. Years ago I met a descendant of someone who was close to Shakespeare, and who believed Shakespeare plagiarised his ancestor's work.

I admit I have used "A.I." to post answers, because I don't have experience of what others are facing. Let's face it, A.I. search engines are really just 'advanced' (dumb) search engines that crawl websites using the submitted words as their base search criteria. It isn't really A.I., it is just a bigger crawl engine.

There was one positive for me in lockdown. My exposure to Microsoft was reduced substantially, which in turn improved my workflow greatly. So I was not exposed too much to the Windows and Office virus! :winking_face_with_tongue:

1 Like

In terms of education, I was a teenager when a BBC magazine programme, 'That's Life' with Esther Rantzen, reported on a toddler who had drunk from a bottle of chemicals used to replenish a nodding bird that looked a bit like an ostrich head. The company got wound down because of this incident. This raises a couple of questions.

  1. How did the parents allow their child to get their hands on the bottle?
  2. Why don't all schools educate people on the different types of bottle? Our Science teacher, Mr Reeder, gave us some valuable education on bottles containing materials that are poisonous if ingested, such as clear smooth glass bottles and brown ribbed bottles, the latter of which held the substance used by the nodding bird.

1 Like

I am in full agreement with that comment.

If I do not have personal experience of an issue, I will declare that. I then try to help by doing web searches, and only post links to hits that show a solution or offer pointers to a possible solution. I am not sure AI would do that.

1 Like