The algorithm that can learn to copy ANY artist: Neural network can recreate your snaps in the style of Van Gogh or Picasso

  • Researchers trained their neural network to work out what makes each artist's style unique
  • They were able to feed each virtual 'artist' the same picture of a row of houses
  • Images were recreated in the style of artists including Kandinsky, Picasso and van Gogh
  • Uses technology similar to Google's 'DeepDream' art algorithm

Ever wanted to see your holiday snaps in the style of Van Gogh, or have your portrait painted by Picasso?

Researchers have revealed an artificial intelligence algorithm that can learn to paint in the style of any artist - and repaint any snap you feed it.

Researchers fed their system a series of old masters - and it turned a modern-day snap into convincing pictures in the style of some of the world's best-known paintings.

'The Starry Night' by Vincent van Gogh, 1889, and a photo of a row of houses next to a canal 'repainted' in the same style by the German algorithm. The team say it can learn the style of any artist simply by analysing a single picture.

The original image: the team fed the algorithm this photo of houses to be 'reimagined' in the style of different artists.

'In fine art, especially painting, humans have mastered the skill to create unique visual experiences through composing a complex interplay between the content and style of an image,' the researchers from the University of Tübingen wrote in a paper posted on the arXiv preprint server.

'Thus far the algorithmic basis of this process is unknown and there exists no artificial system with similar capabilities.'

They hoped to create a system to address the problem, working out what makes a painter's style unique - and then learning how to copy it. 

'However, in other key areas of visual perception such as object and face recognition near-human performance was recently demonstrated by a class of biologically inspired vision models called Deep Neural Networks.'

'Here we introduce an artificial system based on a Deep Neural Network that creates artistic images of high perceptual quality. 

'The system uses neural representations to separate and recombine content and style of arbitrary images, providing a neural algorithm for the creation of artistic images. 

'Moreover, in light of the striking similarities between performance-optimised artificial neural networks and biological vision, our work offers a path forward to an algorithmic understanding of how humans create and perceive artistic imagery.' 
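The 'separate and recombine' idea the researchers describe rests on comparing a network's activations in two different ways: raw activations capture content, while correlations between feature channels (so-called Gram matrices) capture style. A minimal NumPy sketch of those two comparisons - the function names and array shapes here are illustrative, not the authors' actual code:

```python
import numpy as np

def gram_matrix(features):
    """Style representation: correlations between feature channels.

    `features` is a (channels, height, width) activation map from
    one layer of a convolutional network.
    """
    c, h, w = features.shape
    flat = features.reshape(c, h * w)    # one row per channel
    return flat @ flat.T / (h * w)       # (channels, channels) correlations

def style_loss(gram_artwork, gram_photo):
    """How far apart two images are in style: compare Gram matrices."""
    return np.mean((gram_artwork - gram_photo) ** 2)

def content_loss(features_a, features_b):
    """How far apart two images are in content: compare raw activations."""
    return np.mean((features_a - features_b) ** 2)
```

Because style is measured only through these channel correlations, an image can match a painting's style while keeping a photo's content - which is what lets the algorithm repaint a snap of houses as 'The Starry Night'.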

Google also revealed images of how its artificial neural networks learn to recognise images earlier this year.

They showcased what the systems 'see' and what happens when the software gets it wrong.

Google trains an artificial neural network by showing it millions of training examples and gradually adjusting the network parameters until it gives the classifications the team want.
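That 'gradually adjusting the network parameters' step can be illustrated with a one-weight toy model (purely illustrative - real training adjusts millions of parameters across many layers):

```python
def train_step(weight, example, label, lr=0.1):
    """One round of adjustment: nudge the weight so the network's
    answer moves closer to the classification the team wants."""
    prediction = weight * example        # the network's current answer
    error = prediction - label          # how wrong it is
    gradient = error * example          # direction that reduces the error
    return weight - lr * gradient       # small step in that direction

# Repeated over many training examples, the weight converges
# toward the value that gives the wanted answer (here, label 2.0)
w = 0.0
for _ in range(100):
    w = train_step(w, example=1.0, label=2.0)
```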

'Seated Nude (Femme nue assise)' by Pablo Picasso, 1910.

'Composition VII' by Wassily Kandinsky, 1913

'The Shipwreck of the Minotaur' by J.M.W. Turner, 1805

The network typically consists of 10 to 30 stacked layers of artificial neurons and each image is fed into the input layer, which then talks to the next layer, until eventually the 'output' layer is reached.

The network's 'answer' comes from this final output layer.
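The stacked-layer description above can be sketched as a toy forward pass in NumPy - the layer sizes and random weights here are illustrative, not Google's actual network:

```python
import numpy as np

def layer(x, weights, bias):
    """One layer of artificial neurons: weighted sum plus a non-linearity."""
    return np.maximum(0.0, weights @ x + bias)   # ReLU activation

def forward(image, layers):
    """Feed an image into the input layer; each layer 'talks to' the
    next until the final output layer is reached."""
    x = image.flatten()
    for w, b in layers:
        x = layer(x, w, b)
    return x                                     # the network's 'answer'

# A toy 3-layer stack (real networks use 10 to 30 layers, per the article)
rng = np.random.default_rng(0)
sizes = [16, 8, 8, 4]
layers = [(rng.standard_normal((out_n, in_n)) * 0.1, np.zeros(out_n))
          for in_n, out_n in zip(sizes, sizes[1:])]
answer = forward(rng.random((4, 4)), layers)     # 4 'class' scores
```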

GOOGLE'S DEEPDREAM CODE EXPLAINED

Google revealed images of how its artificial neural networks learn to recognise images last month.

Woof! Here Leonardo DaVinci's Mona Lisa is adjusted with Animals refinement tool - slider image in centre shows image being transformed

The system has tried to learn to recognise animals - and spotted strange animals in unexpected places as a result.

As images pass through the network's layers, the software builds up an idea of what it thinks an object looks like.

Other images were created by feeding a picture into the network and then asking the software to recognise a feature of it, and to modify the picture to emphasise the feature it recognises - such as animals and eyes.

That modified picture is then fed back into the network, which is again tasked to recognise features and emphasise them, and so on.

Eventually, the feedback loop modifies the picture beyond all recognition.
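The feedback loop described above can be sketched schematically - here `feature_detector` is a stand-in for a trained network layer, not Google's actual code:

```python
import numpy as np

def emphasise(image, feature_detector, step=0.1, iterations=20):
    """DeepDream-style feedback loop (schematic): repeatedly nudge the
    image so the feature the network 'sees' gets stronger, then feed
    the modified picture back in.

    `feature_detector` is any function returning (score, gradient)
    for an image - in DeepDream it is a layer of a trained network.
    """
    img = image.copy()
    for _ in range(iterations):
        score, grad = feature_detector(img)
        # gradient ascent: change the image to boost the feature's score
        img += step * grad / (np.abs(grad).mean() + 1e-8)
    return img
```

With enough iterations the accumulated changes dominate the original pixels - which is why the loop eventually modifies the picture beyond all recognition.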

'Artificial Neural Networks have spurred remarkable recent progress in image classification and speech recognition,' wrote Alexander Mordvintsev, Christopher Olah and Mike Tyka of Google's AI team.

'But even though these are very useful tools based on well-known mathematical methods, we actually understand surprisingly little of why certain models work and others don't.'

Google's researchers also asked the system to analyse Edvard Munch's The Scream - which was turned into a portrait of a dog

The team has even given the images a name - Inceptionism.

Researchers used this technique to ask the software to create a dumbbell, for instance.

What came back was a strange image showing an arm attached to a dumbbell.

'There are dumbbells in there alright, but it seems no picture of a dumbbell is complete without a muscular weightlifter there to lift them.'

