Remember when we first saw the dancing robot online? Some people found the whole spectacle incredible. How real the thing moved. How close to us it was. For others, it was a bit too close to home. Too recognisable. Where is the line between what is artificial intelligence and what is ‘real’? For a long time, people would have answered with the ability to act creatively, something that sat beyond the binary logic of a machine. That was, until robots started to make art.
It started out on Google, when visitors (almost the whole world) were asked to draw whatever was named on the screen - a house, for example - and Google’s artificial intelligence would try to identify it. Then the prompts disappeared: we would just draw something, and it ‘knew’.
That wasn’t too long ago, when machine learning was still considered to be in its infancy. As seems to happen in the world of technology, things snowballed at a rapid pace. Suddenly, machines are creating their own art and making serious sales at auction.
Initially, this art was something to laugh at. Something nightmarish that bordered on the dreamy and the surreal. Eventually, though, the art became more realistic, more impactful and - perhaps most worryingly - developed a sense of abstract depth. It is when that threshold is crossed, when the art means something, passing a kind of contemporary Turing test, that people may have due cause for concern.
Whilst the notion of a robot developing a consciousness like ours could easily be shrugged off, we’d be remiss to look past our own shortcomings when it comes to understanding the nature and processes of this most complex part of our lives. Even within this puzzle - spanning philosophy and neurology - lies a paradox. Can the riddle really solve itself? Is our mind capable of understanding its own complexities?
DeepDream was one of the first programs to push the boundaries of art created by artificial intelligence. Created by Google engineer Alexander Mordvintsev, it produces hallucinatory images by mimicking a perceptual process we undergo in everyday life, a technique that has been described as algorithmic pareidolia.
The latter part - pareidolia - is our brain’s tendency to impose meaning and form on something where none actually exists: seeing a face on the moon, or animals in the clouds.
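The mechanics are simpler than the name suggests. As a rough illustration only - not Mordvintsev’s actual implementation, which runs gradient ascent through a trained convolutional network - the core idea is to repeatedly nudge an image in whatever direction makes a network’s features respond more strongly, exaggerating patterns the network ‘sees’. Here a single hypothetical linear filter stands in for a real layer:

```python
import numpy as np

# Toy sketch of DeepDream's core loop: gradient ascent on an image to
# amplify a feature's response. A hypothetical linear "filter" stands in
# for a real network layer; everything here is illustrative, not the
# actual DeepDream code.

rng = np.random.default_rng(0)
image = rng.normal(size=(8, 8))   # stand-in for an input image
filt = rng.normal(size=(8, 8))    # stand-in for a learned feature

def activation(img):
    # How strongly the "feature" responds to the image.
    return float(np.sum(img * filt))

step = 0.1
before = activation(image)
for _ in range(20):
    # For this linear feature, the gradient of the activation with
    # respect to the image is just the filter itself; stepping along it
    # exaggerates whatever pattern the filter responds to.
    image += step * filt
after = activation(image)

print(after > before)  # prints True: the pattern has been amplified
```

In the real program, that gradient comes from backpropagation through many layers, which is why the amplified ‘patterns’ turn into eyes, dogs and swirls rather than a simple texture.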
A mouthful of a name for a fairly complex process. Still, each time Google prompted us to draw a pig or a face or a truck or a fence, in what masqueraded as a kind of game, we were actually teaching machines to see the way we do. Yes, we can only blame ourselves. Not so fast, though. Why would anybody - namely the brains behind these plans - want robots to become like us?
That’s a million-dollar question, and not one for here. What we can do, though, is appreciate some of the art that robots have created and, in that art - where we usually engage with some of the most pertinent and poignant human thought and behaviour, made manifest by artists - look closely enough to see whether there are echoes of ourselves. Blind tested, would you know you were looking at the work of a non-human? If so, how? If not, then where have the lines of difference been redrawn? To me, anyway, the gap between a DeepDream and a Dali has not quite closed yet, but it isn’t a million miles wide either.