Zorin OS 18 is out already?!?!

Apologies in advance for the clickbait thread title, but you gotta check this out. CLICK HERE. Obviously, the author meant to say "Zorin 17," not "Zorin 18." Someone really should contact the site and let them know about their error. Incredible that anyone could make that kind of mistake, especially across an entire article.

EDIT: Now that I've re-read it, the article seems AI-written ... lazy; wow. :cry:

2 Likes

I just read it and had reached the same conclusion by the time I saw your edit.

1 Like

It's been many years since I read Slashdot, but at one time it was pretty damn reputable. Kinda disgusted that this AI-driven nonsense site is owned by Slashdot/Slashdot's holding company.

1 Like

There are many once-reputable magazines, which became websites, which are now a wasteland of AI-driven nonsense. One has to become more and more discerning. My filters are now set so high I barely read anything online. Other than on the Zorin forum, of course.

1 Like

@Aravisian Annnnd so it starts, just like the chaos of rumours leading up to Zorin OS 17. Oh, the chaos of endless "is it out yet," "when's it coming out," yada yada yada. :rofl:

4 Likes

I would also argue with its description of Zorin being familiar to Windows 10 and Windows 11 users ... back in a bit. Only Windows 11 - and that is only with Pro.

1 Like

I mean... I can understand being lazy and writing using AI (not that I approve of it), but how can you not even check the version?

3 Likes

I cannot. Using A.I. as an editor to offer refinements or suggestions, or to point out alternate views that can increase the clarity and readability of an article, is useful, in my opinion.
It can provide a first reader's response prior to publishing the article, allowing the author to tweak and modify it.

But using A.I. to write it in its entirety is only going to give you an inaccurate article.
LLMs do not "read" your prompt; they analyze the pattern of your prompt.
They then do pattern matching to supply an output. An LLM does not "think" about the output, nor does it have feelings or any existence of being; it simulates a human-like response, but has no response of its own.

Because of this, the pattern of the output can closely match in many areas but mismatch in others. In the case of this article, we see a prime example: it matches closely for most of what is written, but the mismatch of just one key element is enough to ruin the entire article.

A.I. can be helpful as a revision editor.
But not a writer.

6 Likes

This.

1 Like

This is the same site that got blasted last week for saying Arch was developing its own init system based on Rust. It was completely fake and had no facts at all. Brodie had a good laugh at just how bad it was.

3 Likes

The only Arch-based distro that offers multiple init systems is Artix Linux. It offers dinit, OpenRC, runit, and S6 instead of systemd. The downside for me was the inability to connect my Canon printer - no Canon printers listed under Printers. They also no longer require the Arch repositories; they use their own.

That reminds me, I ran into this the other day:

1 Like

What makes this interesting is that there is a small flaw in the reasoning.

It is a matter of abusing the tool, rather than using the tool.

Let's take a step back and examine the history of learning.
We can begin with the process of writing an essay as the assigned task.

Student One:
Disallowed from using any tools or outside resources; only his own brain and knowledge base may be used. This creates high brain activity, as he engages the speculation and imagination portions of his mind in order to "invent" the necessary data to fill in the essay.
Result: The essay may be lengthy, but can include misinformation, inaccuracies, highly speculative conclusions, and misspellings or grammar errors.

Student Two: Permitted to use encyclopedias and the local library, but no online tools.
This student will get more rest breaks between bursts of higher-intensity brain activity. He must engage his brain in researching data and retaining knowledge, and later employ the creative portions as he ties it together in the essay. His brain activity will be lower overall.
Result: The essay will contain more verifiable and accurate information, but may still include errors due to misremembered data, along with other small errors in grammar and spelling.

Student Three: Permitted a tutor (editor), encyclopedias, the library, and online tools, excluding A.I.:
Brain activity will be lower as the student relies on the editor/tutor to catch mistakes and offer suggestions, corrections, or on-the-spot lessons on spelling, grammar, and factual presentation. The student will spend as much time hitting the books as Student Two, but has the benefit of a tutor.
Result: Best essay yet. Cross-checked against references, spelling and grammar errors caught. Gets an A+.

Student Four: Permitted A.I., online research resources, libraries, and encyclopedias, but banned from using a live tutor:
The student tells the A.I. to write it for him.
Result: Good spelling and grammar. Factual overall, but contains a few grievous errors that can mislead.

We can all see the mistake that Student Four made. And having made that mistake... Student Four says, "Teach, look man, I messed up. Got lazy. But I'd like to take another whack at this."
Teacher says, "Ok, I will allow you a second chance."

The student boots up the A.I. LLM and begins writing a draft. He starts with an essay skeleton that outlines the opening, central content, and close. Once he has a rough draft, he posts a snippet of that portion to the A.I. It reviews it, then offers him revisions based on better word flow, readability, spelling, and grammar.
The student cross-checks the factual portions of the revisions using his other resources. According to Encyclopedia Britannica, the A.I. oversimplified an aspect of WWII, creating factual ambiguity. The student expands that section with the relevant details, then resubmits it to the A.I., refining it further.
After re-reading the draft, he is satisfied and writes out the final version.
Result: Best essay. Wins awards. Student runs for class president.

If tutors, assistants, resources and teachers created cognitive decline, then they would not have worked for the last... one hundred thousand years of human history.

The problem is laziness: telling the A.I. to write it for you, instead of using it as a reference and tutor. It's the school bully picking on the nerd, only this nerd is not alive, has no lunch money, can't say "no," and has no feelings or thoughts on the matter at all.

But lower brain activity on an EEG?
Normal.

And if you hook an EEG up to a smart doctor who knows time management and how to delegate work priorities, you will get the same result. We even have a term for it: "Work smarter, not harder."

I agree with the second sentence, but not so much with the first one. The study doesn't make any assumptions about how people use ChatGPT in the real world; it merely focuses on the effects it has when used long-term.

But I wouldn't be surprised if people were actually abusing this technology already, and would continue to do so even if they were aware of the effects it may have in the long term. Just as people prefer the path of least resistance in so many other things despite being aware of the consequences, like eating junk food instead of cooking a proper meal, or trading off control for convenience.

In that, I absolutely agree that the key is doing things responsibly.

The participants in the study had repeated, short-duration exposures to the LLM under identical conditions with one assigned task: writing a directed essay.
But in the real world, LLMs are used for brainstorming, summarizing, tutoring, fact-checking, coding, conversation, etc. The abstract's claims about "LLM reliance" rest on the assumption that its chosen essay-writing model captures the diversity of real-world LLM uses. That is a big leap.

The study authors assume that lower connectivity or less distributed network engagement equals underperformance.
In cognitive neuroscience, however, efficient processing often involves reduced activation (the "neural efficiency" hypothesis). The study authors ignore this.

They interpret weaker networks and lower recall as caused by the LLM rather than, for example, differences in effort or strategy. Yet participants may offload cognition voluntarily, changing their approach, rather than experiencing a deficit.

The study authors' judging of these essays for quality and ownership in an abstract way presumes those measures capture what participants "really" think, which is a strong inferential leap.

As I point out above, tutors and teaching assistants have worked for all of human history. Importantly, this is observable in extant non-human species: Monkeys teaching their young how to floss their teeth using fibers or blades of grass.

An independent tutor is intelligent and able to follow a routine of guided instruction. Current A.I. is not so guided.
But to say that the study did not make any assumptions about how people use it... The study makes a large number of assumptions, inferences, and logical leaps while failing to model real-world usage.

The study shows that those without access to any external tools had increased neural activity, and also better performance. I know nothing about neuroscience, but this makes sense to me.

But anyway, I don't want to hijack this thread over this. I just thought it was an interesting read.

I realize that you caution against discussing this further, but I need to draw attention to what you just said here...
I mean, think about it.

If you were assigned an essay but disallowed from using any external tools (this includes encyclopedias, dictionaries, tutors, LLMs, etc.), then you would show higher cognitive activity simply because you must rely solely upon that which you already know or must figure out on your own.

But that does not equate to better performance.
From assumptions, to flat-out misinformation, to grammar and spelling mistakes: your brain might show greater activity while you are trying to figure out how to create a paragraph, but you won't necessarily make an informative one.

Sidenote: Students who have a tutor in school consistently perform better on tests across a broad range of topics. However, students who passively copy from a tutor without engaging in interactive learning perform poorly on tests.
This has been known for a very long time, and in modern times John Dewey's educational philosophy centers on it.
https://theeducationhub.org.nz/deweys-educational-philosophy/

I understand and agree that more effort does not necessarily equal more performance. But the opposite is also true. It depends on the context and the nature of the task being performed.

For instance, two writers of different skill levels will produce essays of different qualities, while working to the best of their ability.

This study shows a clear correlation between brain activity and performance, even after switching the participants between groups 1 and 3.
Maybe in other circumstances this would've been different, but the results seem pretty consistent.

1 Like

In science, “clear correlation” usually implies a quantitative link.

Agreed.

1 Like

The study itself uses a small group of participants, so that may have something to do with it, but I still find the results when switching participants between groups 1 and 3 too consistent for there not to be any correlation at all.

I read in another interview that the authors of the study intend to do more studies on the subject. So, to be continued :smiley:

1 Like