Yesterday, 13th June 2017, I was able to attend Prof. Martyn Thomas’ talk about Artificial Intelligence at the Museum of London. Unsurprisingly, the venue was at full capacity half an hour before the talk even started – reflecting the high level of interest this topic currently receives from the general public. But Artificial Intelligence has been trending for a while, with a major peak in general interest appearing just last year when Google DeepMind’s AlphaGo system beat world champion Lee Sedol at Go.
Prof. Thomas started his talk with the general question of whether computers can think, followed by the key objections that people have raised to date against answering it in the affirmative. Alan Turing, one of the “founding fathers” of modern AI, addressed this very question in his seminal 1950 paper “Computing Machinery and Intelligence” and effectively refuted all of the main objections to the idea that computers could think.
After briefly referencing the well-known Turing test, Thomas made an important point by citing Turing’s definition of thinking as “mental processes that we don’t understand”. This elegantly answers the question of what thinking actually is by appealing to one of the key features of our own brain activity, namely the fact that we do not fully understand it.
Against this background it is fair to say that Artificial Intelligence has already reached a level of sophistication that fulfils this requirement of complexity beyond our understanding. This is especially true of the “black box”-like ways in which we use Machine Learning, Deep Learning and sophisticated algorithms to crawl through big data in search of predictive patterns that we fail to fully understand ourselves – but accept and use as long as “it works”.
In this context Thomas made an apt reference to Arthur C. Clarke’s observation that any sufficiently advanced technology is indistinguishable from magic.
And it seems to be exactly that kind of “magic” that happens when we apply brain-like neural network techniques to extract deeper insights from large data sets – almost an applied version of the more abstract approach of creating “neuromorphic computing” pursued by research projects like the European “Human Brain Project”, the Swiss “Blue Brain Project” or the US “BRAIN Initiative”.
And it is exactly where these multi-layered, highly complex computer-cognition approaches become an essentially incomprehensible “black box” that we begin to run up against the limitations of AI and Machine Learning.
Most concerning were the points Thomas made about systemic bias, the mostly correlational nature of results, and the lack of understanding that accompanies the use of AI and Machine Learning approaches.
As with all technologies we have invented to reach further and do more, this one somehow simultaneously prevents us from understanding exactly how we arrive at these magical results – results that seem both mentally enabling and disabling at the same time.
The other critical challenge of AI is obviously an ethical one, related to the disruptive impact it is bound to have on the economy in its current shape and form. Traditional professions involving routine tasks, automatable processes and any activity that uses data in a way that can be automated, optimised or sped up will likely suffer major cutbacks, with human jobs being done by machines more effectively and efficiently.
The resulting ethical question is obvious: if we allow this process to happen, how do we intend to compensate the millions of professionals – including highly skilled ones – who will be replaced by AI? Would it be fair to tax the use of AI, in a similar vein to Bill Gates’ suggestion that we should tax robots? And how do we avoid the emergence of a new AI feudalism, in which a very few clever people using AI for their business purposes become immensely rich while the vast majority of the population descends into a new class of “useless people” for whom there are basically no more productive opportunities or meaningful challenges, only games, illusions and entertainment?
However, hopefully not all is lost yet. If this short reflection has taught us anything at all, it is that we are all called to invest more time in thinking about how we want to apply and interact with AI-driven applications that will become more pervasive and more powerful very soon. We should not leave it solely in the hands of experts like Google DeepMind co-founder Demis Hassabis to think about what AI should do in the future; we all need to identify the challenges and the ways in which we want AI to work and be integrated into our lives – and discuss the social and ethical implications it will likely have.
Faced with the possibility of learning algorithms and of using “algorithmic inspiration” to support our quest for new knowledge, we will probably also have to redefine the way we do science. Having AIs that can think with and even for us will force us to reconsider our role in this Human-Artificial-Intelligence partnership altogether. But what will this new way of thinking and living make of us – are we bound to become cybernetic beings that outsource more and more of our mental activity to “thinking machines”? Is there a “human core”, a set of genuinely human impulses, feelings and activities, that we have to preserve because it cannot or should not be outsourced to machines? To what degree do we want to be defined solely by our ability to think, and does AI-improved thinking make us better humans? With more technical possibilities there will of course be more questions – some of which will touch upon our very nature as human beings. And whilst AI can help us find logical answers to data-related problems, it might not help us find the right way to apply those answers in order to actually make our lives better. Because “better” can only be defined in relation to a human point of reference. Any human voice out there that wants to join this conversation?