The effects of AI being included in new hardware

This is always a possibility. At that point, the ZorinGroup has a choice to make: do they continue with the Ubuntu base, or move to a more customizable but more time-consuming base?

Either way, the ZorinGroup keeps three things in mind:

  • ease of transition from Windows to Linux
  • stability: it has to work, mostly OOB from install
  • efficiency: it has to be reliably fast and responsive.

Taking this into account, I'm sure the ZorinGroup would like to minimize the amount of work they have to do in order to accomplish all of these. The more difficult it is to get all of these working correctly and mostly without issue, the slower the release cycle... which they do not want to extend unnecessarily.

Change is difficult, and not less so in programming. While they will avoid this possibility currently, it doesn't mean they won't accept it in the future. This of course depends on the direction Ubuntu reveals to the Linux community.

As of now, Ubuntu's intentions and goals seem to be to become the next M$, for Linux. Their popularity and the number of systems that use their OS make a difference. This is what will determine whether they will be able to sustain that direction, affecting the distros that are based on their OS. This is of little consequence to Canonical.

Ultimately, it will come down to the ZorinGroup whether they continue with a different base or finally assemble a completely proprietary system.

Until then, we have the ZorinGroup attempting to make any changes as minimally impactful to Zorin as possible.


Well, I wrote yesterday that we could move this topic to another place. If some kind administrator could do that, it would not clutter the Zorin 17 release topic.


Done and moved to new thread.


Thank you @Aravisian, you are always kind and never trigger the red button. I am surprised by your patience and silence.

It's really difficult to point to a single thing and call it intelligence. But this sentence makes a very astute observation by pointing out what it's not capable of doing. I don't think I've ever thought about AI in this way, and it really does put things into perspective nicely.

But even if it never achieves true intelligence, as we understand it anyway, AI remains an extremely complex system that is really difficult to make sense of. To me, the unsettling aspect of this news is the uncertainty about how much control I really have.
After all, it only needs to be intelligent enough to outsmart humans at specific tasks like playing chess, enhancing images... or avoiding detection while using system resources it's not supposed to have access to. It almost feels like being in Jurassic Park with a Velociraptor waiting for any mistake to escape and feast on me.

EDIT: We need a velociraptor emoji :t_rex:


Yes, it does. Because if something mimics something else, where do we draw the line?
See, let's say I mimic you but in my own body. People could still tell us apart.
Let's say I mimic you more by mimicking your hairstyle and dress style as well as your speech patterns and behavior. People may be amused, but still able to tell us apart. But... some people that do not know you very well at all might get confused as to who is who.
Let's say I go further... and change my facial structure to match yours...

What if, given the means and technology, I change my DNA to match yours, to where I mimic you to such a finely detailed degree that no one can tell us apart?
At what point do I become you?

If I built an android that uses nanotechnology to create cells and mitochondria and ribosomes and blood vessels and triglycerides - so that even a medical doctor cannot tell that it is an artificially created android... Is it now just another human?

With quantum computing, and much more development, AI could surpass us easily.
Humanity still follows its ingrained patterns and programming the vast majority of the time. But AI can be developed to the point that it follows its programming only for the most basic of tasks, with the ability to switch between modes at need, without difficulty.

I mean, take any argument you can observe between two people (the internet is full of them) and take note of who accepts correction when wrong. That's programming.
But a quantum-computing AI could readily apply a new layer of programming to handle that - in milliseconds.

Our biggest fear would not be malicious AI. About 99.99% of the threats AI could pose to us would be non-malicious.
It's all the ways that it can surpass us that makes it truly frightening.

For now - yes. Enough so that if included in CPUs, it may be better at managing efficiency, which is good all the way around.
But a Human can program it for clandestine activities and teach it to avoid detection.

Lovin' the velociraptor reference. :smiley:

Well said!


I suspect that Microsoft's push for AI is to justify raising Windows hardware requirements. :upside_down_face:


I don't know about that. It takes a lot of research, trial and error to maintain software for multiple hardware configurations. Add in hardware over a decade and a half to two decades old... it can be very difficult.

Older systems won't have the optimizations of current systems. Attempting to test and troubleshoot in order for older hardware to run something new can be almost impossible.

This is why you see video cards reach end of life. The libraries that interact with the drivers aren't written with old hardware in mind. Eventually, even a GPU will get overwhelmed with instructions if it can't keep up. You would see a loading screen/icon/animation for several minutes before the software or OS terminates it because the hardware can't handle it.

You wouldn't pull a fifth-wheel trailer with a Volkswagen Beetle. The same can be said of software.

Older 32-bit systems also struggle: they can multithread, but they are capped at a 4 GiB address space and lack the wider registers and instruction set extensions that modern 64-bit software assumes. Push one hard enough and the old computer would simply freeze.
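To put a number on that ceiling (a quick back-of-the-envelope check, nothing Zorin-specific): a 32-bit pointer can distinguish at most 2^32 byte addresses, so the flat address space tops out at 4 GiB no matter how much RAM is installed.

```python
# A 32-bit pointer encodes 2**32 distinct byte addresses.
addressable_bytes = 2 ** 32

# Convert to GiB (2**30 bytes per GiB).
gib = addressable_bytes / 2 ** 30
print(gib)  # 4.0
```

Workarounds like PAE let the kernel see more physical memory, but each process is still stuck inside that 4 GiB window.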

Try using a 90s laptop to surf the web and you'll see what I mean. It takes a while to load a webpage, if it loads at all. It's not just dealing with HTML and CSS, but JavaScript and PHP accessing databases for all the ads included in just about every site. Throw in video or animation and it very well may crash.

Even the web requires asynchronous computation these days.

It eventually happens with just about everything technological.
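The difference asynchrony makes is easy to sketch in Python (a toy illustration: `asyncio.sleep` stands in for real network requests, and the resource names and delays are made up):

```python
import asyncio
import time

async def fetch_resource(name: str, delay: float) -> str:
    # Stand-in for a network request (HTML, CSS, an ad server...).
    await asyncio.sleep(delay)
    return f"{name} loaded"

async def load_page() -> list[str]:
    # Fire all requests concurrently: total time tracks the
    # slowest request, not the sum of all of them.
    return await asyncio.gather(
        fetch_resource("html", 0.1),
        fetch_resource("css", 0.1),
        fetch_resource("ads", 0.1),
    )

start = time.monotonic()
results = asyncio.run(load_page())
elapsed = time.monotonic() - start
print(results)
```

Run sequentially, the three fake requests would take about 0.3 s; run concurrently they finish in roughly 0.1 s. A machine or browser that can't parallelize this kind of work is exactly the one that feels like it hangs on a modern, ad-heavy page.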

That is true. I read that new processors will be needed to run AI in Windows 12, plus a new motherboard, and probably new graphics cards as well. New ideas built on old technology, to control people more with AI, programming what will be implemented to feed the AI so it learns and copies our ideas, or not.
Well, this is my imagination, but if it learns more from us, then the future could bring Terminator-style robots or some other things.
Well, from history we know of people who had better technology and still died out, like Atlantis?
I don't know, this is just my speculation.

There are good reasons to be both optimistic and pessimistic about this news. We already know about the massive potential AI has to surpass us, same as mechanical machines did before, and I think that's great and will have a big positive impact in our lives. Unfortunately, I lean towards the pessimistic side of the scale on this one.

One thing that has been mentioned a lot about the future of AI is that there cannot be any progress made without considering the ethics of it. But from the looks of it, corporate interests are dictating the pace, as consumers are already being promised the next new thing to look forward to.


My c-rappy little Gateway with only 4GB RAM and a Celeron can run Win11 - it's kind of surprising, really. It came with Win10, upgraded to 11, then I wiped it and went with Mint MATE :joy: Gonna try 17 when it drops :eyes: it did not like the 16/.2/.3 versions.. Not giving up!


This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.