Linux May Be the Best Way to Avoid the AI Nightmare

What do you guys and gals think about this?

The industry's excitement about AI may turn into a nightmare. Data centers around the world are consuming massive amounts of semiconductors and electricity to run machine learning. When the result is little more than glue on a pizza, the bubble will burst.

In fact, Linux is a major contributor to AI technology, while Windows does not seem to have good support for the necessary libraries. The AI boom may have prompted Nvidia to support Linux properly. Is that a dream or a nightmare?


Hey, AI has some good uses... For example: Microsoft Screenshots EVERYTHING Using Copilot


In the old days, we actively looked for malware that did that in order to remove it.


You DO know I was joking when I said it's a good thing to spy on users?


Never had a doubt.
You would not be posting here if you did think it was. :wink:

M.S. is not the first, by far. Google promotes their spying and data collection as a convenience ("good thing") that is helpful to users.
My biggest concern about Co-Pilot taking such frequent screenshots is less about immediate privacy and more about data storage. (How does data storage factor into this?)
Not only can that data be disseminated to garner a large volume of personal information about a user, but it would quickly devour large amounts of storage.
Likely, to keep users' local storage from filling up like a swimming pool hit by a tsunami, M.S. stores those screenshots remotely on a server, non-locally.
This increases costs and energy usage in a way that mere convenience for the user cannot justify.
Using A.I. data crunching, every bit of a user's activities can easily and swiftly be profiled and documented, whether it be shopping, home searching, employment or personal interests. Moreover, finances, work-related information and other sensitive data would not only be observable by M.S. but, because that data is transmitted and stored on a server, would also be at higher risk of being obtained by third parties, even without buying it from M.S.

The absolute magnitude of Co-Pilot's ramifications is staggering.


My concern is that people will act out of fear rather than evidence. Do not fall into the trap of the Little Tech trying to escape Big Tech. Do not poke holes in solid Zorin OS with dodgy security software.

Remember when the word "spyware" was a thing? Now it's called a "feature". The turns have indeed tabled.


I read somewhere that those screenshots are saved locally (not sure if they are sent to a server later, but they exist locally)... and take up to 25 GB of storage. That's going to give huge headaches to people with smaller SSDs, while also making HDDs even more unusable than they already are on w11. (Some months ago I tried installing w11 on my laptop, without meeting Microsoft's minimum requirements, just to see how it ran. It was incredibly slow and unusable, yet CPU usage was at 30% and GPU usage at less than 5%, with the proper w10 drivers installed and working fine: it was the HDD that sat at 100% usage for a few hours, until I gave up and completely removed Windows again. It was not malware, because it was a fresh install, the drivers were downloaded from this laptop's page on Acer's official website, and I had already removed all the bloatware.)

I'd say Microsoft not including an SSD in the minimum requirements of Windows 11 is false advertising, even more so now that they are adding a "feature" that constantly takes screenshots and writes to (and deletes from) the drive. And even if SSDs wouldn't suffer big performance hits here, I'm sure it will also make the drive wear out faster.

I don't think there is a big security problem in that: the PC already has a history, each app has an option to list recent files, and all the browsers have caches and cookies and suggest your previous activity to you.
I think Copilot will be fine, because it doesn't work online; it runs only on your offline PC.

Most Linux operating systems, including Zorin OS, allow the user to control the system. However, it is up to the users whether or not they can protect themselves. In other words, if you can control yourself, then you can protect yourself.

Is your fear really brought on by AI? Are you running away from confronting a more immediate problem?

Cache and cookies are fairly easy to clean up. From what I've read, since Windows gives applications broad access to system files, other applications might have access to those Recall screenshots (which are unencrypted inside the user session). You can apparently create a blocklist so certain apps are not included in those screenshots, but what if you forget to block your password manager or banking website and some other app grabs them?

If this is a misrepresentation of how it really works, please correct me, but that's what I've read and would be concerned about. Not that I would have any personal use for that whole functionality. Someone else might -- so they will have to determine the risks for themselves.

I think people are just hoping it stays that way with Linux -- that we continue to have that personal control. I don't think the issue is necessarily AI itself, but what people might do with it. Good actors today may become bad actors tomorrow. Others are trusted as good actors but may not be. Or maybe the bad actors take advantage of the good ones.

There are deepfakes created with AI. I've been "mis-summarized" by AI in work meetings already (which wasn't even a purposefully nefarious act). So AI is obviously not all rainbows and unicorns. Not to mention the warnings that have been published by those who have early access to it.

So it's the fear of having AI foisted upon us without that control or without proper safeguards. And it is being foisted because -- $$, especially by companies already notorious for using personal data for their profit. Just doesn't seem to make for a good combo -- AI and for-profit data mining and asserting control.

Sure, AI gives us a nice summary of all the product reviews on Amazon -- what could possibly go wrong? :slight_smile:


Search history isn't a big problem, because the worst that can happen is that someone (other than Microsoft) learns what you search for online. It's a bit risky because it allows for more effective phishing, by revealing the type of sites you visit and whether you keep up with cybersecurity, but as long as you are careful it's not a problem.

Session cookies, however, are not much safer than Copilot: I have seen YouTubers fall victim to malware that steals their cookies, giving the hacker full access to an account without even knowing the password.

Websites that handle more sensitive data usually log you out after inactivity or after closing the web browser to prevent this. But if you were unsure whether you typed the password correctly, clicked the eye button to show the password, and Copilot saved it as a screenshot that any program can see, doesn't that raise many red flags? Would you write your bank account password on a piece of paper that anyone who enters your house (including visitors) can see?
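Why a stolen session cookie is as good as the password can be sketched in a few lines. This is a toy illustration (all names are hypothetical, stdlib only): the server checks the password once at login, then trusts the session ID on every later request, so anyone who copies the cookie gets full access.

```python
import secrets

sessions = {}  # server-side state: session_id -> username

def login(username, password):
    # The password is checked ONCE, at login time.
    if password != "correct-horse":  # placeholder credential check
        raise PermissionError("bad password")
    sid = secrets.token_hex(16)
    sessions[sid] = username
    return sid  # handed back to the browser as a cookie

def handle_request(session_cookie):
    # Every later request is authenticated by the cookie alone;
    # the password is never asked for again.
    user = sessions.get(session_cookie)
    if user is None:
        raise PermissionError("not logged in")
    return f"account page for {user}"

# The victim logs in normally...
victim_cookie = login("alice", "correct-horse")

# ...and malware that copies the browser's cookie store can simply
# replay the cookie, never knowing the password:
stolen = victim_cookie
print(handle_request(stolen))  # -> account page for alice
```

This is also why the logout-on-inactivity behavior mentioned above helps: it deletes the server-side session entry, so a replayed cookie stops working.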


That is a valid opinion, and perhaps I am being needlessly concerned.

My concern is that some people disrupt the community out of fear of new technology (see Do NOT include A.I. or A.R. in ANY of your distros), and they may see it as an opportunity to expand the community. I don't see malice in them; herding is a natural human instinct. But if we cannot curb our instincts, we will not be able to live as human beings. People caught up in fear are easily manipulated, and that makes the community toxic.


Yeah, you're right -- you do want to avoid having things blown out of proportion. Nothing wrong with being a moderating voice.


Quote from... can't remember:

Hi! I just have been having an issue with models that cause the system to run out of VRAM. It usually does the following:

1. Attempt to run a model via the API (for example, Llama 2 70B)
2. ollama-runner tries to load the model into VRAM
3. ollama-runner runs out of VRAM and the process is killed
4. The API hangs indefinitely until it is killed (via systemctl restart, or by killing the Docker container if applicable)

I don't know why it has to be restarted to process the next request. Would it be possible to have a feature where it detects that it ran out of VRAM or crashed, and then returns an error via the API and/or auto-restarts? This is something I've been running into recently, as I only have 24 GB of VRAM.
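Until something like that is built in, the hang can be worked around on the client side with a watchdog. This is only a hedged sketch, not an Ollama feature: it assumes the default API endpoint (localhost:11434, /api/generate) and a systemd unit named "ollama", both of which may differ on your setup.

```python
import json
import subprocess
import urllib.error
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # default Ollama port

def generate_with_watchdog(model, prompt, timeout=120):
    """Call the Ollama generate API; restart the service if it hangs."""
    payload = json.dumps(
        {"model": model, "prompt": prompt, "stream": False}
    ).encode()
    req = urllib.request.Request(
        OLLAMA_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return json.load(resp)["response"]
    except (TimeoutError, urllib.error.URLError):
        # The API hung or the runner died (e.g. out of VRAM):
        # restart the service instead of waiting forever.
        try:
            subprocess.run(["systemctl", "restart", "ollama"], check=False)
        except OSError:
            pass  # systemctl not available (e.g. inside Docker)
        raise RuntimeError("ollama API hung or unreachable; service restarted")
```

Pick a timeout longer than your slowest normal generation, otherwise the watchdog will restart the service mid-response; inside Docker you would swap the systemctl call for a docker restart of the container.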