I have learned this lesson so many times that I clearly never learn it.
I have a handful of specific personal examples I can point to, though I don't have "evidence" to hand as they're work-related. I typically avoid using so-called "AI" tools* as much as possible, but the company I work for is encouraging staff to experiment (with solid guidelines on data protection and confidentiality issues) to see what can benefit us in day-to-day tasks, and thus benefit the company in "efficiencies".
*so-called "AI" tools
I honestly believe that all use of the term "AI" in reference to our current LLM / NLP / ML / etc. technologies is criminal-level false advertising and a flat-out lie, intentionally propagated to leverage subconscious biases about AI as portrayed in fiction, where the tech is akin to near-human intelligence (give or take the malice). I immediately distrust any company that doesn't make a clear distinction as to what their "AI" tool actually is.
As for myself, I have an education and background in web programming and a bit of personal interest in tinkering with tech in general, though the latter is barely at hobbyist level. I currently work in the fire safety industry in a role that involves a little web work and a load of (small-scale) data manipulation and reporting projects, as well as some research and "knowledge" projects.
I have tried Bing Copilot and ChatGPT (both 3.5 and 4). Both are so unbelievably inaccurate they are plain dangerous. It's beyond a joke.
They regularly make bold statements that, at first glance, feel like they make sense, but are simply not true if you have any subject knowledge. I have asked them for some pretty simple code suggestions, and they provided working code that almost did what I asked - but not exactly. An inexperienced developer could easily fail to notice or understand this, and go down a rabbit hole of inconsistent or incorrect behaviour that they then have to trace and debug, when they were only asking for help in the first place because they didn't understand.
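To give a flavour of what I mean, here's a made-up example in the same spirit (not the actual code from work). The classic JavaScript number-sorting gotcha is exactly the kind of thing these tools will hand you with total confidence:

```ts
// Hypothetical illustration: ask for "a function that sorts numbers
// ascending" and you can plausibly get something like this back.
function sortNumbers(values: number[]): number[] {
  return values.sort(); // compiles, runs, and looks obviously correct...
}

console.log(sortNumbers([10, 9, 2, 33]));
// [10, 2, 33, 9] -- .sort() with no comparator compares values as *strings*,
// and it also mutates the input array. An experienced dev spots this
// instantly; a beginner can chase it for hours.
function sortNumbersFixed(values: number[]): number[] {
  return [...values].sort((a, b) => a - b); // copy first, numeric comparator
}

console.log(sortNumbersFixed([10, 9, 2, 33])); // [2, 9, 10, 33]
```

It "works", it even demos fine on lucky inputs, and it quietly does the wrong thing - which is precisely the failure mode that's dangerous for someone who asked because they didn't know better.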
Mozilla's MDN experienced this first-hand when they introduced an "AI" helper on their trusted technical documentation platform that regularly gave plausible, nearly-right, but subtly incorrect explanations! The backlash from that was incredible to follow in real time, I tell you. I spectated the live Q&A video call where they went through user-submitted questions (which I'd watched developing on GitHub issues and discussions beforehand), and it was amazing. Even though they had worked for months to train a technically-focused "AI" specifically on MDN content, it still failed almost immediately, and they had to turn it off because it was fundamentally flawed. They never did answer all of the questions, despite promising to, now that I think about it.
I have also asked them for information about fire safety topics - things I know are well-documented and easy to find in general searches, even if you don't know the best keywords or bits of industry terminology - and they give, at the very best, disgustingly generic output that barely answers my question. At worst, they have on several occasions given me wrong information: either citing US fire codes when I'm specifically asking about UK law, or flat-out hallucinating BS or EN Standards that cover entirely different topics. I explicitly asked ChatGPT 4 to help me write an email explaining UK domestic smoke alarm requirements to an elderly person, and it gave me information from US websites.
These tools do often provide sources, but sometimes only for small chunks of their total output, and often they have critically misunderstood those sources. This is one I do have a very specific example of, though it has since been fixed so I can't demonstrate it live. As we deal with various life-safety products adjacent to fire safety, a colleague was looking into SCA (sudden cardiac arrest) and AED (automated external defibrillator) incidents. Upon searching something like "how long was christian eriksen dead for" in Bing, the generated summary at the top of the results incorrectly stated that he was dead for over an hour, despite citing a source that clearly - in the headline - said he was resuscitated in around 5 minutes. The cited source, over halfway down the page, referenced a different incident from over 10 years earlier, involving another football player in a totally different tournament. All 10 of us in the office, myself included, tried searching similar queries to see if it was consistently mixing the two events up, and it gave the wrong information every single time.
We have also tried using (admittedly non-tailored) "AI" tools for image generation and creative inspiration. Bing Copilot's image generation, specifically, leaves so much to be desired it may as well not exist. We gave it explicit instructions to show a certain type of fire safety product outside, and at least one if not two of the four images generated per prompt would be indoors, or in some abstract setting that can't be identified. We asked it for collages of fire safety or first aid products, and it basically just used that as a colour theme - red for fire safety, green for first aid - while drawing random shapes. At least it usually gives you something approximating a fire extinguisher, and it does some pretty nice fire, but that's about the best praise I can give it.
Don't even get me started on how useless "AI" chatbots on websites are... I swear they only exist so big companies can throw them up knowing their customers will just get frustrated, give up, and live with whatever issue they're suffering, because there aren't any customer service people any more - all to cut wages. Absolute garbage. The companies who use them need boycotting, and the decision-makers responsible need sending to prison for the rest of their worthless lives. Not once has a single such chatbot come even close to helping me, even with simple problems like parcel tracking, and they always get stuck in a loop because they don't offer the help topics I need and refuse to connect me to a person no matter what option I click!
I don't use AI and I don't DIStrust it. I'm just not impressed with it. I work for a corporation that spent a lot of money on an AI security system. It is supposed to do what I used to do and make my job easier; I am now simply there as a sanity check, I suppose. However, it makes my job so much harder. It is like working with a 3-year-old child: so easy to subvert, bypass, or fool. And I wish I was exaggerating when I say that 99% of the time it gets triggered, it has incorrectly judged the situation. I spend most of my time tied up correcting its erroneous judgements, unable to keep tabs on everything I should be keeping tabs on because I'm hand-holding the darn AI. The 1% of the time it alerts me to an actual problem, it's always a situation I was already monitoring and taking measures against.

The hilarious thing is how much faith my superiors - no, everyone in the company! - place in it. It's because they don't work with it.
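The funny thing is that the maths says this is exactly what you should expect. Here's a back-of-envelope sketch - the numbers are invented purely for illustration, nothing to do with our actual system - but when real incidents are rare, even a detector with decent-sounding accuracy will be wrong almost every time it fires:

```ts
// Hypothetical rates, just to show the base-rate effect:
const incidentRate = 0.001;     // assume 0.1% of situations are real problems
const truePositiveRate = 0.95;  // detector catches 95% of real incidents
const falsePositiveRate = 0.05; // and misfires on 5% of normal situations

// P(real incident | alert), via Bayes' rule:
const pAlert =
  truePositiveRate * incidentRate + falsePositiveRate * (1 - incidentRate);
const precision = (truePositiveRate * incidentRate) / pAlert;

console.log(`Share of alerts that are real: ${(precision * 100).toFixed(1)}%`);
// ≈ 1.9% -- i.e. roughly 98% of alerts are false alarms, which lines up
// uncomfortably well with my "99% wrong" experience.
```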
So yeah, when people say AI can answer my questions about stuff or whatever... meh. I work with far more expensive AI than that, and I am not impressed.
It collects some information about you: what you are asking and what you are interested in.
AI knows best??
https://news.sky.com/story/uks-first-teacherless-ai-classroom-set-to-open-in-london-13200637
Hardly surprising at this point, to be honest. This has been the end-goal everyone imagined when this sort of technology started to become mainstream.
However, it's hard to argue that AI knows best. As more people continue to use it, and as more content created by it gets uploaded, the training data becomes polluted with its own output. Eventually, it will stagnate.
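A toy sketch of that pollution loop (entirely my own invention, nothing like a real training pipeline): fit a simple model to some data, generate new "data" from the fitted model, refit on that, and repeat. With no fresh real data coming in, the spread steadily collapses:

```ts
// Box-Muller transform: draw one sample from a normal distribution.
function sampleNormal(mean: number, std: number): number {
  const u1 = 1 - Math.random(); // in (0, 1], avoids log(0)
  const u2 = Math.random();
  return mean + std * Math.sqrt(-2 * Math.log(u1)) * Math.cos(2 * Math.PI * u2);
}

// "Train": fit a mean and standard deviation to the current dataset.
function fit(data: number[]): { mean: number; std: number } {
  const mean = data.reduce((sum, x) => sum + x, 0) / data.length;
  const variance =
    data.reduce((sum, x) => sum + (x - mean) ** 2, 0) / data.length;
  return { mean, std: Math.sqrt(variance) };
}

const n = 20;
let data = Array.from({ length: n }, () => sampleNormal(0, 1)); // "real" data

for (let gen = 1; gen <= 100; gen++) {
  const model = fit(data);
  // Each generation trains only on the previous generation's output.
  data = Array.from({ length: n }, () => sampleNormal(model.mean, model.std));
  if (gen % 25 === 0) {
    console.log(`generation ${gen}: fitted std ≈ ${model.std.toFixed(4)}`);
  }
}
// The fitted std drifts towards zero: each fit slightly underestimates the
// spread, and once the real data is gone the errors only compound.
```

Real models and real data pipelines are vastly more complicated, of course, but the degenerate feedback loop is the same shape.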
I do agree that it can be quite useful for knowledge that doesn't change very often. However, the goal is not just to embed knowledge into people's heads, but to teach them how to do things: how to review other people's content, how to speak in public, how to research, how to contrast materials, how to work as a team, how to manage deadlines, how to summarize, and so on.
There's also the matter of constructive feedback. People need to be told when they are doing something wrong and given some pointers; everyone thinks and learns differently. But chat assistants are programmed to give answers, not feedback. This is especially true of subjective skills or tasks that can be done in different ways, like writing.
On the other hand, motivation is at a record low on all fronts: students, teachers, and even parents. An AI teacher that doesn't yell, complain, or get tired could be quite effective in combating this. For instance, there would be no reason not to tailor the curriculum to each student or small group of students. That way, everyone learns what they are actually interested in, increasing engagement. At the same time, teachers who are actually worth their salt would be more motivated to do their job well.
My main concern is the same as it has always been: that people see these things as godsends that "know best".