I've used AI to help me get some things done in Bash that I would either never have been able to do, or that would have taken 20x the time. That said, I don't rock up to ChatGPT and ask for a script to do x or y. I look around for pre-existing options, often on Stack Overflow-like sites or in Git repositories. DuckDuckGo is my starting point. Then I use AI to improve or tweak what I have copied or made.
I have found ChatGPT to be very helpful when it comes to getting my head around a technical topic I've been trying to understand. It's certainly made some improvements to my scripts. But you gotta be careful. It has also steered me wrong many times. Once or twice I've had to tell ChatGPT that it is wrong (it was).
I am not an optimist when it comes to AI's impact on society. The laissez-faire manner in which it has been rolled out to the masses should be concerning to everyone - even the optimists.
Out of curiosity, I did use Brave A.I. on a post, and it did resolve the problem the OP described. However, I will no longer use A.I. because of the environmental threat it poses. Imagine this scenario:
"I've managed to find the answer using A.I. how to construct GNU/Linux from scratch, avoiding using systemd, pulse-audio, plus any other packages that prevent me from having full control of the system. Unfortunately as a result it put extra strain on the Power Grid resulting in a weeklong brownout!"
I don't seek out AI intentionally, but it's running behind the scenes all over the web. I was testing the Opera browser a while back and it was possible to enable three different AI options. My mail client includes an AI reply option that I have not tried and won't.
Yeah, this is so true. It's been quietly doing a lot of good work in many places where it simply didn't get the same media attention. This is from 2016:
What I don't like is how these tools are being used more and more to replace creative tasks that humans actually want to do and are good at.
Instinctively I don't like the idea of AI. I'm not a very techy person and I don't have much of an understanding of how computers work, etc. (hence why I'm on this forum! Lots of helpful, knowledgeable people on here), and from that viewpoint AI really doesn't sit comfortably with me.
Really, I have two main objections:
When it comes to researching or learning anything, how helpful is AI really? Yes, you can get an answer, but being given just one answer means you don't get to research and evaluate any opposing answers that might turn up when researching something the more traditional way. For example, if I ask ChatGPT (a free-to-use language-model AI) 'Is climate change real?', its answer includes 'The Earth's climate is changing due to human activities'. When I do a traditional search for the same question, yes, I get articles talking about how humans are causing climate change, but I also get articles discussing the natural cycles of climate change. The traditional search gives me the chance to read different views and sets of data and make up my own mind.
This is probably a more controversial view: we don't yet know how dangerous AI could become. How far down the road can AI go? Could it start to think for itself eventually? Could it at some point decide to cut humans out of the loop and start to make decisions on its own? Do we really want that? (I know it might sound a bit Terminator-ish, but can we guarantee it won't go that way?)
For now, personally, I'm going to steer clear of AI as much as I can.
ChatGPT supplies the same, actually. It covers the natural cycles, including Milankovitch cycles, the human activities that influence climate change, and what these effects mean.
If you run a net search, you can find many articles (peer-reviewed journals being the most reliable) covering the effects of each and how we influence them. But you can also find plenty of misinformation being spread in order to deny the science. Opposing views must come with the ability to fact-check and verify.
While a person can make up their own mind to form an opinion, it is important to remember that a person cannot make up their mind about a topic they are not educated in, then get opinionated about it and tell the scientists that they are lying or wrong.
That really gets under our skin.
I must be fair here: ChatGPT gets things wrong sometimes, but it is pretty good about the sciences and retaining objectivity.
A.I. is extremely helpful for modeling complex dynamic systems, from logistics, finance, weather, and climate to trends and population. For industry, government, or corporations, A.I. can be very helpful for reducing waste and expenses, increasing productivity, and creating better methodology.
These benefits, however, can just as easily be abused. A.I. is computer modeling, not necessarily bound to honor, ethics, integrity, or compassion. And this is what scares me the most, because many of the people seeking to enhance profits, exploit resources or other people, and so on, may also lack those qualities.
For the general person sitting at home, A.I. may not be as useful for researching and learning. I think it is useful, but merely as a tool that can guide or point you in the right direction. Using it as a search engine is redundant and too reliant on the A.I. (I agree with your point above, here). But if you use it to ask about the things you find in your research, and for more or other resources, it can be helpful.
The trouble is that many people stop at Step 1 and never reach Step 5... They just accept the first answer and run.
What we call A.I. today is nowhere near sentience level. It can crunch complex models, but it cannot think. This is part of what makes ChatGPT so unreliable. Ever misspell a word, glance at it, and think, "Wait... that doesn't look right..."? ChatGPT lacks that ability entirely.
But... when developments allow A.I. to attain that level of sentience, can it turn on us? Become harmful? I believe it is possible. Programmers will likely put many safeguards in place (this is complex, and we could discuss what that means and how it works in a very long thread of its own), so it is highly unlikely that A.I. could go rampant or far afield. But it is possible, since we cannot assume we know enough, or all of the factors, to anticipate every possible outcome of a novel developing artificial intelligence.
That lack of ethics, empathy, and compassion comes back into the picture again.
As a tool, I don't think that by itself it has the ability to "take over" anything. But it can be abused by those who control it, unilaterally, without people even noticing, or worse, without even caring about it.
People already see these LLMs as some sort of oracle of truth™. Not checking beyond step 1, as Aravisian said, is not going to do anyone any favors; except those who control it, that is.
This, I believe, is one of the oldest dilemmas of humanity: AI is just a tool, and it can't be bad or good; the use we give it is what can be right or wrong. It is something really fresh, and we still need to learn how to use it. As a programmer, I use AI and it really helps me a lot. Personally, I use Perplexity; one of the things I liked the most about it was that it gives me references and links.
That is a partial answer to the question treated in this topic. Right now, personally, I use AI for guidance: I go to an AI first, because I can ask questions in a more natural language (especially when I'm not completely sure what I'm looking for), and after the AI answers I go to a traditional search engine.
It should also be noted that Brave Search has AI integrated, and gives an AI-generated answer along with the usual results you would expect from a search engine.
@Harry_A. As you'll see from the bottom-right corner, Zorin Core 17.1 does indeed use Wayland since the kernel upgrade to 6.8. You can't trust A.I. to give you the correct answer. However, if you'd asked in this forum, or done a search here, you'd see, much to the consternation of many, that yeah, Wayland is now the default.
Edited due to correction: I completely missed the "Lite" - sorry! This will teach me not to reply when I'm tired.
Zorin OS Lite does not use Wayland, either by default or by choice. Lite uses the XFCE Desktop. Core uses the Gnome Desktop which is all for IBM... I mean Redhat... No, I mean Wayland.
Wayland lacks support for proper desktop manager integration. Though Gnome uses a shell with Mutter, it does not employ a desktop manager. XFCE does, which is why XFCE allows more desktop settings and actions than Gnome does. This is what makes XFCE compatibility with Wayland tricky.
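As a side note, if anyone wants to check for themselves which display server their own session is running, here is a minimal sketch. It assumes a systemd-based session (as on Zorin) where pam_systemd sets XDG_SESSION_ID; the usual values are "wayland" or "x11".

```bash
# Check which display server the current desktop session is using.
# Most desktops export XDG_SESSION_TYPE as "wayland" or "x11".
echo "Session type: ${XDG_SESSION_TYPE:-unknown}"

# Cross-check via systemd-logind (assumes pam_systemd has set XDG_SESSION_ID).
if [ -n "$XDG_SESSION_ID" ]; then
    loginctl show-session "$XDG_SESSION_ID" -p Type
fi
```

If the posts above are right, a default Core 17.1 session should report wayland, while Lite's XFCE session should still report x11.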