“It is not necessary to change. Survival is not mandatory.”

 

An #AI Thought Piece

by George Achillias,

Chief Strategy Officer @ Braingraph

 

Since its beginnings in the 1950s, artificial intelligence has been a favourite subject of science fiction. Yet today, AI has entered the realm of fact: several studies suggest that intelligent machines will change the way we work, the way we move, and even how wars are fought. Some experts claim AI will outsmart humans at virtually everything within the next 45 years. Innovators and scientists around the world believe that now is the time to ensure that AI cannot override humanity. And even if there are strong arguments that machines could one day be more intelligent than we are, many scientists are ready to accept that challenge.

Every day we read articles and stories about AI and machine learning and how the two will shape our lives. Hardly a day passes without a warning about how risky the practical application of AI could be for our society if we do not take the appropriate precautions. Already four years ago, Elon Musk was clear on Twitter: “We need to be super careful with artificial intelligence. It is potentially more dangerous than nukes.” For more than fifty years, artificial intelligence researchers focused on giving machines linguistic and mathematical-logical reasoning skills, modelled after the classic concepts of linguistic and logical-mathematical intelligence.

Until now, we have not known machines to have emotional feelings. Recently, however, a more empathetic approach has emerged, enabling them to appear emotionally intelligent: they are being programmed to learn when and how to display emotion. While on the one hand a hands-on engineer, Elon Musk, is concerned, even terrified, by how dangerous AI can become, on the other hand Facebook’s Zuckerberg takes a more optimistic view. As he stated, “AI is going to make our lives better in the future”, and doomsday scenarios are “pretty irresponsible”. During a Facebook Live back in July 2017, Zuckerberg made clear that he was trying to counter the spread of fear surrounding the potential of artificial intelligence. “I have pretty strong opinions on this. I am optimistic,” he said. “I think you can build things and the world can get better. But with AI especially, I am really optimistic.”

 

“And I think people who are naysayers and try to drum up these doomsday scenarios — I just, I don’t understand it. It’s really negative and in some ways I actually think it is pretty irresponsible,” he said. “In the next five to 10 years, AI is going to deliver so many improvements in the quality of our lives,” Zuckerberg added. In support of Musk’s position, the world’s most famous physicist had already weighed in.

“The development of full artificial intelligence could spell the end of the human race… It would take off on its own, and re-design itself at an ever increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded,” Stephen Hawking told the BBC. These contrasting attitudes of optimism and pessimism, comfort and discomfort, create a momentum that puts us in a position not only to perceive and understand other periods in history where massive changes happened, but also to see how to avoid critical errors in the adoption of new developments.

In the nineteenth century, a great deal of scepticism and concern was raised by Flaubert, the French novelist, about everything new in his world, and about trains in particular. Flaubert was sceptical about trains because he thought (in Julian Barnes’s paraphrase) that ‘the railway would merely permit more people to move about, meet and be stupid.’ Of course, that is not how things turned out: trains helped shape the modern world and drove the expansion across the seemingly endless American West into what we call today the United States of America.

Of course, although history doesn’t exactly repeat itself, it does run in cycles. One of the most solid theories of such cycles was articulated by the economic historian Carlota Perez in her influential book Technological Revolutions and Financial Capital: The Dynamics of Bubbles and Golden Ages. It suggests that if the world’s key decision-makers act in concert, humanity can get through the current period of upheaval and economic malaise and enter a new “golden age” of broad economic growth.

Automation and artificial intelligence will have a great influence on all types of systems. The value-creation chains we are used to, the typical and common ones, will change significantly. Up to today, the feeling that humans are in charge of everything has been very strong. Until now we were the sole masters: the programmers, the data scientists, the analysts, the strategists, the people responsible for creating value for other humans, whether to make a profit or to improve other people’s quality of life and safety. AI might change this paradigm fundamentally, allowing a small number of strategic adopters almost exponential growth in AI-supported value creation – not just in easily automatable areas like transportation, production or administration, but increasingly also in seemingly “creative” areas like entertainment (think AI-driven creation of music and movies) or even human communication itself (think of our increasing delegation of social-media communication to algorithms that predict the next thing we want to say, comment on, like or react to).

This necessarily raises questions about what core elements of “humanity” will be left to us: what actually makes us human, and what is the essence of humanity that cannot be copied, automated and replicated by machines? How can we avoid “algorithmic determinism” – becoming prisoners of the digital habits we have built around our behaviour – as machines seemingly make our lives easier and easier while also replacing us in almost every field, including our very own (no longer human) communication? The question “Am I talking to a human being right now?” – the classic Turing Test – will become harder and harder to answer, until it effectively becomes irrelevant in most instances: we will be so used to talking to human-like machines that we will treat humans and machines alike in most, if not all, routine contexts.

The question, therefore – and this is critical for our human-machine co-evolution – is: how do we stay one step ahead of the machines? How do we avoid becoming their extended human interface and instead stay relevant and in control? For a technically and financially privileged few, the newly emerging “algorithmic feudalism” will open up unprecedented opportunities, bringing not just exponential growth of their wealth but exponential access to their own AI-enhanced decision-making, effectively rendering them technological demi-gods.

For everyone else, the other 99% of the world’s population, this AI-powered inequality may make life more comfortable, but most probably not more meaningful: as their work is automated, their role in life shrinks to that of a completely machine-determined consumer, losing all chance of social mobility because their skill sets have become irrelevant, and joining a new “useless class”.

Conclusion

The time to make humanity’s most critical decision is now – and every individual, as well as every organisation, has to make it for themselves. Will we stay relevant as humans, or will we capitulate to machines and surrender our human core to algorithmic control, losing any capacity for original thought and for truly creative value creation that cannot be copied by machines? Do we become the masters of the machines, or their slaves – hopefully, at least, slaves run by human masters?