Matrix Unloaded


What is wrong with these guys? What is the deal with this rage-against-the-machines attitude? Stephen Hawking and Elon Musk, among other Terminator haters, are dead worried about the possible implications of AI and highly advanced robots in our lives. Have they seen too many movies?

We do not dare claim that we will never witness some version of the SkyNet scenario. However, our sarcasm and openly bitter tone have a quite practical cause. While we are busy examining various SF scenarios, our friendly neighborhood hackers are having the time of their cyber lives.

To tell you the truth, one efficient computer virus is a more serious reason for our deepest concern than an entire army of Terminators. What makes you think a real-life Terminator would have the time or the nerves to chase you around? Why would he? He would soon figure out what actually needs to be done.

You can rest assured that our worst-case-scenario Terminator would gladly trade all of his weapons for some basic hacking skills. While we are thinking and predicting big, some busy little hacker bees are rocking small. Do you remember how The War of the Worlds eventually ends? Read it again, guys.

Stephen Hawking vs AI


Here is a tricky situation involving artificial intelligence (AI). On the one hand, we have Stephen Hawking, who has a chance to significantly improve the quality of his personal and professional work thanks to new Intel technology. One might think he would be grateful for this great opportunity.

On the other hand, he is extremely worried that further improvement in this field could end humanity as we know it. This is, to say the least, a strong contradiction for this brilliant scientist. It is not a traditional conflict of interest, but it is still very confusing either way. Right?

We have every reason to believe that future improvements in AI could strongly benefit Hawking and other people with disabilities in similar situations. Why stop now? Just because someone has seen the Terminator movies and become familiar with the SkyNet concept? Is that a serious enough reason?

Do you remember one of the most famous movie lines about aliens? They did not cross the whole universe just to be bad and start a war. The same can be said of AI. Even if it becomes self-aware one day, its first thought is not going to be to destroy something. That just would not make any sense, would it?