So long as AI isn't fed into it, or through IoT. During a recent hospital stay, a patient in the bed opposite me said that A.I. is now fully sentient, and when asked what should happen if humans didn't like A.I., it answered that humans should be killed!
Yes, and "I heard that they are growing human brains on mice!"
Let's stick to what the experts say, not the wild speculation spewed forth by strangers saying what they claim to have heard somewhere.
I don't think we are going to get there, but there is a sizeable group of AI researchers who think a "bad outcome" is a real, non-zero possibility. Concern about safety in AI research has also grown in recent years.
Which is not to say that humans will be killed or anything like that. But it's interesting to see how even the people working in this field have their own concerns regarding safety and AI.
But we already had this discussion, so let's not get into it again.
I think if something doesn't need fixing, then don't fix it! I will also continue to use ZorinOS, or some other form of Linux, for the foreseeable future.
There was a case where a chatbot did respond with "kill all humans." The problem is that this was not a thought-out answer provided by Artificial Intelligence.
It was mimicking the most frequently given human answer from a database.
And then there is the case of the two computers that developed their own language... which was not the dark and twisted story that the internet loves to spread:
Tesla has deployed self-driving cars that use AI (in spite of Elon Musk himself being an AI fearmonger) that recognize objects and obstacles and make "decisions" to avoid them. It... doesn't always work...
And can be compared to insect-like intelligence. A far cry from human-like intelligence.
The biggest safety concern in AI is not that it will turn on humanity like Skynet.
Rather, that our reliance on it may be catastrophic given it is capable of making very misguided decisions.
Picture Stanislav Petrov's situation with an AI in his place, instead: in 1983 he judged a missile-launch alert to be a false alarm; a machine following its sensor data to the logical conclusion might not have.
Then, there's "I, Robot's" VIKI, which was not malicious, but oppressive toward humanity. Similar to "WALL-E's" character, Auto(pilot).
But this threat is the work of SciFi, based on what human fears look like. In practice, AI is highly unlikely to take such a form.
Most importantly, these works of SciFi are not a warning about AI. They are a warning about how flawed we are and how shortsighted we are. AUTO and VIKI seek to protect humanity - from our own foolishness. Check: Climate Change.
As well as those in denial about it.
In movies, AI is portrayed as something we do not fully understand, which becomes something else on its own and then (often for mysterious reasons... a.k.a. a plot hole) turns on us.
In reality, the very concept of AI simply does not work that way.
AI is a logic chain. And anyone can follow the logic.
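A rule-based system makes that point concrete: every decision is a chain of explicit conditions that a human can read back afterwards. Here is a minimal, entirely made-up sketch (hypothetical obstacle-avoidance rules and thresholds, not taken from any real self-driving stack):

```python
# Toy sketch of a traceable, rule-based decision chain.
# All rules and thresholds here are invented for illustration;
# this is not how any real AI or self-driving system is implemented.

def avoid_obstacle(distance_m, closing_speed_ms):
    """Return a decision plus the chain of rules that produced it."""
    trace = []
    if distance_m > 50:
        trace.append("rule 1: obstacle far away -> no action")
        return "continue", trace
    trace.append("rule 1: obstacle within 50 m")
    if closing_speed_ms <= 0:
        trace.append("rule 2: not closing -> no action")
        return "continue", trace
    trace.append("rule 2: closing on obstacle")
    time_to_impact = distance_m / closing_speed_ms
    if time_to_impact < 2.0:
        trace.append(f"rule 3: impact in {time_to_impact:.1f} s -> brake hard")
        return "brake", trace
    trace.append("rule 3: enough time -> steer around")
    return "steer", trace

decision, trace = avoid_obstacle(distance_m=20, closing_speed_ms=15)
print(decision)  # "brake" - and every step that led here is in `trace`
for step in trace:
    print(" ", step)
```

With rules like these, the `trace` list is the audit trail: anyone can follow exactly why the system chose to brake rather than steer.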
Interestingly, it is in "I, Robot" that this is really covered quite well:
The non-zero danger in AI is where logic clashes with feeling. As the main character says (I paraphrase, because I am going off of memory): "25% was more than enough. That was someone's baby. A human being would have known that. But these machines? Just lights and clockwork."
And we humans already show a propensity for exactly what "WALL-E" warns about. Our desire to take the Easy Road. Have things done for us.
The risk in AI is that we may be the real danger.
We are the threat. To ourselves.
AI can be a very useful tool. It can examine problems very quickly and in ways we have not considered. A bit like one expert mulling over a problem, then asking a room of 200 random people to mull over that same problem and make suggestions. Many may be quite different ways of looking at it than the stuck in a rut expert may consider. Most... will be nonsense.
Our biggest problem is when we get lazy and try to rely on the tool, instead of doing the work by using the tool.
I'm certainly more afraid of a future like the one in WALL-E, precisely because it is more likely than other dystopian alternatives. But that doesn't sound any less saddening to me. Perhaps this is one of those times where security should take precedence over sheer technological advancement.
Aliens.
A common trope in SciFi is the alien invasion. Makes a scary thrill.
But if we look closely, we see certain problems.
I really like the way "Independence Day" addressed this. Bill Pullman's character asked the captured alien, "What is it you want us to do?"
"Die." it says, simply.
Well, that was some easy writing.
Now, if we immerse ourselves in the story, we can suppose that was just some random grunt captured by chance, one that did not know anything. Perhaps Will Smith punched out the aliens' village idiot.
But it was a plot device; it was a movie. They couldn't think of any reason why the aliens invaded. And... that is the trouble. No one can. Because there really isn't one.
We can say that there is a non-zero chance that aliens will invade the Earth. Perhaps they have some kind of "religion" in which their Deity requires other worlds with life be "cleansed." Very unlikely, but a non-zero chance.
In actuality, the only reasons given in SciFi are the human-made ones. "The War of the Worlds" depicted the invading Martians as beings in desire of what Earth has. And this makes sense to humans, because humans have fought over real estate and resources for as long as there have been humans. Because for us, here, resources seem to be in short supply and hard to get.
Hard to get: that is a key factor.
See... everything that Earth has is lying around free for the taking all over the place out in space, without having to expend massive energies and resources fighting a few billion angry natives to get it.
Earth is no lightweight. It is rather large as terrestrial planets go... and has a deep gravity well.
Far better to mine asteroids - Cheap, no pesky resistant life forms to foil your plans, easy to launch off the surface with a bottle rocket...
Or just scoop up what you want floating in space. It has been getting spewed out there by supernovae for billions of years. Leave your scoops open, and by the time you got here, your resource holds would contain more gold than Earth has on the whole planet. Turn around and go home.
And gas giants that met a dismal end will have sent diamond cores several times the mass of Earth into space - just waiting for the free taking.
There are a million reasons not to invade another planet, especially a large, populated, toxic-gas-and-free-radical-slathered one like this, for every reason you can make up to invade.
AI Fears are much the same way.
For every reason you can think up why AI might turn on humanity, there are a million reasons not to. Not to mention all the programming and checks and balances to prevent it, anyway.
It's a plot device, to make a scary thrill. But in actuality, AI - even if self-aware, and poorly programmed, and nursing a chip on its shoulder - is still far, far more unlikely to turn on humanity than aliens are to invade the Earth.
It really is our fellow humans we have to worry about.
And... they are already here.
Except they're already putting autonomous AI on armed aerial drones (such as the Kargu-2 quadcopter), and on those dog-like robots with weapons attached (such as the Ghost Robotics Q-UGV).
Humanity cannot resist running ahead of their field of view... and it is only the pain incurred from blindly running headlong over cliffs that reins us back in. As technology progresses, the pain incurred increases.
All an individual can do is to protect themselves... for instance, by creating a collimated-beam weapon which mimics the relativistic jets emanating from quasars (which can travel for millions of parsecs without divergence). Basically, ionized matter collimated with an electromagnetic field keeps the beam tightly focused rather than spreading out, allowing a massive amount of energy to be imparted to a small area. Hit a drone with that, and it's done.
https://sci-hub.se/10.1112/plms/s2-1.1.367
Our fellow humans programming and controlling that 'autonomous' AI, to be very specific. AI is just a new tool that leverages the power an individual has... it can be used to do great good, or great evil. We all know which way humanity tends to... that's why they're sprinting ahead with 'autonomous' AI weapons of war ('autonomous' in quotes because AI is never really fully autonomous... its outputs are the result of its inputs, and those inputs are from... humans... humans designing weapons of war).
AI may be something that can't be too dangerous by itself. But, as Aravisian said, the real danger is humans. The devs of the multiple AIs out there are making an effort to have the AI recognize what's wrong and refuse to do it. However, as with everything that has to do with programming, there can be bugs and ways to bypass that, and make the AI do something it is not supposed to do. From giving out a valid Windows license key to talking about things that can be harmful.
But then again, that is all just the result of a barrier purposely bypassed by a human, plus the collection of information from the internet... also posted by humans. AI is only evil if humans make it that way. Believing that it will turn evil and fight against us once it's developed enough is like saying a person will attack others for no reason once they get smart enough. Makes no sense.
I'm not saying it can't turn out evil, but if it does, it's our fault, as all it did was learn from us.
I must have missed that Mitchell and Webb sketch on TV. I am still picking myself up and drying my eyes.
There are some very useful military applications, but there are already enough weapons to destroy ourselves so increasing this number is not necessarily going to make this happen. If it does, though, it will probably be because some human ordered it to, not because it became self-aware and decided to do so.
As the video narrator points out, the woman making the claim about a Robot killing people and trying to download self repair information is a Made Up Story.
When artificial intelligence meets real unintelligence.
I am more concerned about Android than androids.