Here is a tricky situation involving artificial intelligence (AI). On the one hand, we have Stephen Hawking, who has a chance to significantly improve the quality of his personal and professional work thanks to new Intel technology. One might think he would be grateful for this great opportunity.
On the other hand, he is extremely worried that further improvement in this field could end humanity as we know it. This is, to say the least, a striking contradiction for such a brilliant scientist. It is not a traditional conflict of interest, but it is very confusing either way. Right?
We have every reason to believe that future improvements in AI could strongly benefit Hawking and other people with disabilities in similar situations. Why stop now? Just because someone has seen the Terminator movies and become familiar with the SkyNet concept? Is that a serious enough reason?
Do you remember one of the most famous movie lines about aliens? They did not cross the whole universe just to be villains and start a war. The same logic can be applied to AI. Even if it becomes self-aware one day, its first thought is not going to be to destroy something. That simply would not make any sense, would it?