I read that new processors will be integrated with AI, and they will arrive in 2024.
Microsoft wants to do that.
How could this be implemented in the new 2024 version of Zorin, and if it is, how could the AI built into the processors be shut down?
Skynet
Link to relevant reading:
Nvidia and Intel Will Leverage AI in Latest Offerings - Spiceworks.
Ok... so... Why?
AI Chip Architecture Explained | Hardware, Processors & Memory | Synopsys Blog.
How users should feel about this is not so easy to figure out. A lot of information is needed.
It is not unusual for a company to include a "benefit" for consumers in order to mask a detriment to consumers.
We could create a topic about this to figure out what is best for a good atmosphere on this forum. You know, when a strong wind comes, rain with lightning follows. I am not a fan of AI. For example, the poor photos on some mobiles where AI is used. I don't want to say whether it will be okay or not in the future. Civilization will be the judge of that.
In theory, I think this is great. In time, these new processors will become affordable enough for the consumer market, and people will be able to run artificial intelligence programs on their own machines and local networks: for example, image enhancers to restore old pictures, remove crowds from holiday photos, etc.
In practice, however... the more complexity these things have, the harder they become to control. Most neural networks are complete black boxes, even to their developers. Imagine an AI chip that is encouraged to actively disguise malicious activity while running, capable of learning how to work around whatever mechanisms are in place to prevent it from running with elevated privileges or bypassing firewalls.
Maybe this is not possible, but if it runs at the hardware level, there's very little we can do to control it. And we're aware that it's possible for an AI to turn on its controller if it needs to.
I think it's very nicely put by Tristan Harris:
It's actually been one of our focuses is getting and helping media who help the world understand these issues, not see them as chat bots or see it as just AI art, but seeing it as there's a systemic challenge where corporations are currently caught not because they want to be, because they're caught in this arms race to deploy it and to get market dominance as fast as possible.
Source: The AI Dilemma
Isn't what is now called the AI chip actually just a GPU?
Pretty much, yes. From the second linked article:
The AI workload is so strenuous and demanding that the industry couldn’t efficiently and cost-effectively design AI chips before the 2010s due to the compute power it required—orders of magnitude more than traditional workloads. AI requires massive parallelism of multiply-accumulate functions such as dot product functions. Traditional GPUs were able to do parallelism in a similar way for graphics, so they were re-used for AI applications.
This is similar to how CPUs were powerful enough to handle the computational workload in the early days of computer graphics, but a better way of handling things more quickly and efficiently was needed.
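To make that concrete, here is a minimal sketch of the multiply-accumulate pattern the quote describes (the array sizes and names are arbitrary, chosen only for illustration). Each output of a neural-network layer is one dot product, and a GPU or AI chip computes all of them in parallel rather than one at a time:

```python
# A minimal sketch of the multiply-accumulate (MAC) workload described above.
# Sizes and names are arbitrary, for illustration only.
import numpy as np

inputs = np.random.rand(256)        # one input vector
weights = np.random.rand(256, 512)  # weight matrix of a single layer

# Naive sequential view: one multiply-accumulate at a time.
out_slow = np.zeros(512)
for j in range(512):
    acc = 0.0
    for k in range(256):
        acc += inputs[k] * weights[k, j]  # multiply, then accumulate
    out_slow[j] = acc

# What a GPU or AI chip does: the same dot products, massively in parallel.
out_fast = inputs @ weights

assert np.allclose(out_slow, out_fast)
```

The math is identical either way; the hardware just changes how many of those multiply-adds happen at once.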
I really encourage accuracy, so I must comment on your link. For those who speed-read or skim, it may be misleading. I have full confidence that @zenzen is aware of scientific reality and has zero intention to mislead anyone.
The link very clearly states at the outset that the Story Is False and there have been zero experiments with AI in drones. No drone turned and killed its operator because its operator was keeping it from its objective.
It was a hypothetical argument about proceeding forward cautiously with AI experiments.
There is also the story about the experiment in which two "AI" computers "did not like" that their operators were listening to their conversation, and so they invented a new language to communicate in secret.
Great Story - But it never happened.
In reality, the two computers in the experiment modified their communications to achieve greater efficiency, which is what they were supposed to do.
But when they modified it so heavily that the experiment controllers could no longer legibly understand them, the controllers shut down the experiment.
The computers had no feelings on the matter, at all.
Such stories, like Hollywood movies (Terminator, WarGames, Short Circuit), can be fun and imaginative ways of reminding our society to move forward wisely.
But while we are being wise, let's also be smart.
The point of the story is that we don't have full control over the AIs. This was acknowledged in the follow-up statement to clarify this misunderstanding:
"We've never run that experiment, nor would we need to in order to realise that this is a plausible outcome."
There are also other scenarios where an AI has grown out of control, but with less dramatic results.
ChatGPT misleading members of this forum...
Or Tesla cars in self-drive mode.
Do you know that the Autopilot function on aircraft has been around much, much longer, with a very low incident rate?
In fact, the Autopilot has a much better record than the human pilots do.
Autopilot could not, in my opinion, be considered AI. But it works. Quite well. Because of its simplicity.
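For contrast, here is a toy sketch of the kind of simple feedback loop classic autopilots are built on. Everything in it (the gain, the limits, the crude "aircraft") is invented for illustration; real systems add layers of logic and redundancy, but the core idea really is this simple: measure the error, apply a proportional correction, repeat.

```python
# A toy proportional feedback loop, the core idea behind a classic autopilot.
# All numbers and the crude "aircraft" model are invented for illustration.

target_altitude = 10000.0  # ft, the hypothetical setpoint
altitude = 9500.0          # ft, current altitude
gain = 0.2                 # correction per foot of error (made-up tuning)
max_climb = 25.0           # ft/s, crude physical limit on climb rate
dt = 1.0                   # seconds per control step

for _ in range(60):
    error = target_altitude - altitude                          # how far off?
    climb_rate = max(-max_climb, min(max_climb, gain * error))  # fixed rule
    altitude += climb_rate * dt                                 # plane responds

print(f"altitude after 60 s: {altitude:.1f} ft")  # converges on the setpoint
```

There is no learning and no model of the world in that loop, just one fixed rule applied over and over, which is exactly why it is so predictable.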
Humans are impatient. We want reliable consistent AI and we want it now, right now, this now, not a later now. Reality doesn't work that way... Complex things take a long development time. And Tesla was most definitely being rushed (by somebody....)
The biggest issue here is that what we are calling "AI" is NOT AI. It is Not Artificial Intelligence. A spider can demonstrate equivalent if not better intelligence than these programs.
Granted... the spiders have a couple hundred million years head start.
It's a hard definition.
Humans obey their programming most of the time. As a simplistic example:
Someone jumps out at you wearing a mask and startles you. Your eyes will widen to increase your visual field. Your mouth will emit a noise. Your muscles will jerk away from the sudden motion in front of you. You do not intelligently think about any of these actions, they are Programming. In fact, so much so, that you might fall over backwards.
But the human is capable of overriding the programming. You can train yourself to suppress the program and behave differently when startled.
AI of today can write new programs. It can "learn" in a similar fashion as we do, by applying new programs, even self-written ones. But what it cannot do is intelligently override its own programming. It can only mimic that, and it can only do so when a Human is telling it what to do and how to do it.
This difference is profound when it comes to making fast decisions and understanding the full complexities of a situation.
The Tesla Car is programmed to see objects it is to avoid - but utterly incapable of seeing those objects as People.
It can map out a grid - but it cannot fathom that the grid is a model of the Real World.
It can use sensors and cameras to see; but has no understanding of the very concept that it has sight.
We have built little robots with Insect like thinking skills and patted ourselves on the back. A bit precocious...
We cannot achieve AI until we develop Quantum Computing.
And when we do...
It will make us look like complete blithering idiots.
It is not here, yet.
Well, I sometimes hear what people are saying about this, for example
Andrzej Dragan.
They said AI is learning things, and with the program they can pass forward something like a plus or a minus, but they don't know what is inside or how it learns.
Then someone said we are creating something that learns fast and is connected everywhere. That means if robot A learns something, it can send new updates to all the other robots. We are probably heading into a "Terminator era".
I will admit that I'm not entirely sure about an accurate definition of AI, but I'm not talking about ChatGPT or Tesla's autopilot.
Citing one of the studies from the linked podcast, there are two key points that summarize this:
- The median respondent also believes machine intelligence will probably (60%) be “vastly better than humans at all professions” within 30 years of HLMI.
- 48% of respondents gave at least 10% chance of an extremely bad outcome.
What we are calling "AI" can create "art". This includes human faces:
Sometimes, those faces have eyes with two pupils.
What we are calling "AI" is interesting, but it doesn't even know what it is generating. It doesn't know what "eyes" are. It doesn't understand the difference between shadow and hair. It does not understand that a mouth is an orifice... It has no idea what it is doing.
And you cannot say, "Well, it may not understand it, but it's just doing what it can to please its controller". Like a child might do. No... It's not even doing that. It has no feelings about it. It does not care if you like the output or not. It has no concept of whether the output is right or wrong.
What are you talking about? The concept of the future that does not yet exist?
Well, I wrote before about AI being implemented in mobiles for taking photos. Some mobiles take good photos, some bad.
So where is the problem, the AI or the lens?
For me, the problem isn't learning the hardware I have.
For a long time I used a Zenith camera, and now we have only some options but not everything we could have. Okay, some digital cameras from Canon, Nikon, and Sony are fine. So AI and these things can be used for certain kinds of subjects - human lives - but not everywhere, not where it steals people's jobs!
This " Zorin Os 17 release" thread seems to have diverted into an AI discussion from Bourne's post #85. So any thoughts about splitting the thread?
I'm talking about the present state of "AI"... whatever it means. I would say an intelligent behavior would be the ability to learn new skills in order to solve a problem. And by solving a problem I mean overcoming different obstacles to achieve a goal. In the case of a computer program, that would be one that can solve problems it hasn't been explicitly programmed to solve.
For example, learning a new human language to be able to provide an answer to a question that was originally not understood. This involves realizing the input is valid and not just random gibberish, searching online to identify the language and learn the rules about how sentences are structured, and then having the ability to recognize that the conversation can continue in that newly learned language.
I would need to find a reference to this but I know I've read about it somewhere.
Of course this is a great advantage. But what if Ubuntu moves into a direction that the ZorinOS community and developers don't want to go? For example there are some people who don't want Snap integrated into the OS by default but Ubuntu seems to make more and more use of it.
Speaking scientifically, the common definition of intelligence includes the ability to learn from, select or adapt environments.
Even today's "smartest AI" can only amalgamate information, then select a piece of information based on its data-preset success rate.
Since it does not understand environments, it cannot fathom nor comprehend any environment - it can only follow the set programming that says "If not that, then this".
To give a bit of clarity to the above: the "AI" does not know the difference between the ground and the sky. It does not differentiate between rocks and trees. It can identify objects, but does not understand what they are. Again, I point back toward AI image creations depicting people: the AI does not understand our limbs. It will show limbs connected in ways they don't fit - almost an M.C. Escher style of view - because it does not understand a three-dimensional body. It perceives it as a two-dimensional smattering of different shades and colors. It can create the appearance of an environment - but it does not know what an environment is, cannot select from it, nor adapt it.
You can examine structure to identify patterns. And I would agree that pattern recognition is a part of intelligence. "AI" can perform pattern recognition quite well. However, pattern recognition without the ability to understand and react to the environment that the pattern inhabits means that current "AI" cannot intelligently distill that information into accurate patterns.
Instead, it guesses an output statistically.
And this is why our current "AI" can get things so very wrong sometimes. It lacks the ability to stop and question itself. It cannot look at its output and say, "Wait... that's funny" or "That doesn't look right".
In a way, it can "teach itself" because it is programmed to discard its own failures. But it is not Self-Aware and therefore is not learning from itself even if it does "teach itself". It's just "if not this, then that."
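To make that concrete, here is a toy sketch of what "guessing an output statistically" looks like. The tiny "training" text and everything else in it is invented for illustration: the program counts which word followed which, then always picks the most frequent continuation. There is no comprehension anywhere in the loop - just "if not this, then that":

```python
# A toy sketch of statistical guessing: pick the continuation seen most often,
# with no understanding of meaning. The "training" text is invented.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish the dog sat on the rug".split()

# "Amalgamate information": count which word follows which.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def guess_next(word):
    """Select the statistically most frequent successor - nothing more."""
    options = follows.get(word)
    if not options:
        return None  # no data, so the program has nothing to say
    return options.most_common(1)[0][0]

print(guess_next("the"))  # "cat" - chosen by count alone, not by meaning
print(guess_next("sat"))  # "on"
```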