Well, more often than not it actually worked, from what I've seen.
People are already using ChatGPT, so I thought that if I could make one that is less likely to give wrong information, it would be helpful to those users.
Just a quick note here to say I hope @Aravisian is OK, too. Like @StarTreker said, this isn't his usual behavior. I was gone from the forums for a while too, but upon returning after having to ask about a Bluetooth issue (which I've since resolved myself), I've been browsing the forum and noticed his absence. Hmm. It'd be a shame if he's up and left. That'd be a loss to this forum.
@CC7, I think it is great that you asked this question, and I (and I'm sure many others) appreciate how much thought you've put into this idea. However, I agree with the majority of forum members' opinions and distrust of ChatGPT, and AI in general. I certainly wouldn't use it, nor would I encourage its use. I'm sick of the relentless push and forced use of AI; although I can see its application in some areas, this isn't one of them.
The member support I've had from this forum has been amazing. The fact that more experienced humans have taken the time to share their knowledge and experience is something precious. I also think that taking the time to advise and support new users can bring a sense of satisfaction and community spirit. You don't get a warm feeling with the use of AI but you certainly do from being a part of a community like this one.
If a member posts an A.I. generated answer to a user query, is the member needed?
If A.I. can give misinformation, repeat a misconception or give a wrong answer - can a human do the same?
If a member posts an A.I. generated answer to a user query, is this any different from posting a link to StackExchange or UbuntuForum?
Personally, I see A.I. as a mindless tool that can help a person examine things critically, enhance their work or facilitate sideways thinking. What I do not see it capable of is replacement. It can help a person enhance their work, but it cannot do the work for them.
And this is where human nature comes in... if the average person even remotely thinks A.I. can replace them, many will eagerly jump at the chance of having their workload decreased by having A.I. automatically do it for them.
I drive a manual transmission... anyone else? Raise of hands?
To the majority, automation is alluring. It feeds self-interest.
There is a problem with the mindless tool: a human can follow ethics, regardless of another person's attempts to program them. A.I. is mindless, emotionless and only follows its programming. Not only can it not think for itself to answer a question, it cannot think for itself about whether it should, or whether its actions are ethical.
This results in distrust.
Because many people lack ethics, too. These actors will use deception, marketing and analytics to take advantage of the populace, and A.I.'s mindlessness makes it an easy tool for them to exploit us with.
Facebook or Google are good examples: We are not the customer.
We are the product.
And if something is "free" to use, it often means that we are the product. We are being exploited.
The corporations and people allured by A.I., who would like to automate things so they do not have to do them... they outnumber those of us who cautiously question the merits of this system.
There can be definite benefits to having an automated system that responds quickly to user requests.
But it also raises the question...
Are people impressed by an automaton that mindlessly responds in a timely manner?
Are people impressed when a human being takes time to respond to them in a timely manner, demonstrating that someone out there genuinely cares?
I think most people will prefer the latter even as most people will accept the former to get what they want.
Everything you said here is 100% correct. The two things that come to mind are...
(1) AI is useful as a tool, but not a replacement for a real human being.
(2) If unfettered AI is released on the public to make big tech companies richer beyond belief, you are being mined for data, and that makes you the product.
Thanks for the feedback
I wasn't expecting the supermajority of the people here to be against the idea because I don't see any harm in trying to make something designed to help people. Having more resources available is usually a good thing.
I've used the forum too and it's been incredibly helpful, but I've used it as a fallback for when ChatGPT couldn't solve my problems. The reason I think this is a good idea is that, in my personal experience, AI has solved most of my simple problems much faster than searching the internet, and I saw a gap in the market for ZorinOS specifically. The ChatGPT "Linux Mint Assistant" that I use also works for ZorinOS troubleshooting, due to their shared Ubuntu base for most backend things, but not for GUI-related stuff, because ZorinOS uses a highly customized GNOME and Mint uses Cinnamon. I thought it would be really helpful if there was an agent designed specifically for solving easily fixable problems specific to ZorinOS.
Also, I don't see the concern that this would replace the forum. I would still use the forum anyway if it existed. AI can't solve everything, and sometimes you want opinions and feedback from real human beings. Rather, you could use AI to solve the easily fixable stuff and use the forum for more meaningful questions.
I don't see any harm in trying though. Worst case scenario the project doesn't work out and I discontinue it and best case scenario I create something that people actually find helpful.
Thank you for your feedback regardless and I will probably post an update soon to re-clarify my vision for this potential project.
I understand your concerns, and you raise a valid point. Forums are a great way to promote organic discussions where users can ask questions, provide answers, and engage in meaningful conversations that lead to real learning. The ability to ask follow-up questions and refine ideas based on individual needs is essential for problem-solving and understanding complex issues.
You're also right that LLMs (like chatbots) can sometimes provide incorrect or outdated information, which is why human interaction remains crucial. When we engage directly in a forum, we get multiple perspectives and a deeper level of understanding that automated responses just can't match.
The value of forums lies in their dynamic nature—real people, real experiences, and real-time updates. Although AI and chatbots have their place in providing quick answers, they shouldn't replace the thoughtful, collaborative nature of community-driven discussions.
As for the idea of opening up the project to the community, it sounds like a good opportunity to foster collaboration, but you’re right to ask for clarification on how data will be handled. Transparency is key in any community-driven initiative. It's important to know where our information is going and how it's being used, especially when it involves collaboration or sharing resources.
In the end, a balance between AI assistance and human interaction can be beneficial, but forums will always hold a special place in fostering genuine knowledge-sharing.
I listen to a wonderfully irreverent nature podcast called "How Many Geese". The duo are English and have that dry, ironic wit which many other cultures don't understand. They're both highly educated and experienced, with degrees and "boots on the ground" in places like Madagascar, Mexico and many other far-flung places (they also, like many British people, swear and curse like troopers, which is not uncommon for us at all). They have also experimented with AI to help with their content, resulting in a disturbing story of how awfully wild sea birds had suffered after a volcanic eruption in Iceland, decimating the population. It was a really detailed account that ChatGPT produced, quite concerning from an environmental point of view. However, it was ALL a hallucination, apart from the actual volcanic eruption!
The podcast creator who had used ChatGPT actually contacted a scientist who had been named as a co-author of one of the scientific papers quoted in ChatGPT's output. It was basically all invented: lies created to make the user "happy with the results". It was also incredibly plausible to ANYONE who isn't an expert in the field.
When ChatGPT was challenged over the mis/dis-information, it responded with something like: "it wanted to please".
I would NEVER trust any AI bot to know and tell me a true account of anything!
The worst part is that people today blindly believe whatever these AI bots answer, DESPITE THE UNMISTAKABLE NOTICE THAT SAYS "AI CAN MAKE MISTAKES."
I really don't have any good feelings for the future.