Fun with Deep Learning and TensorFlow

So you've heard of Deep Learning, you've seen some examples of what it can do, you took a look at the math involved...

...and decided it's not for you. Don't worry: you can do fun stuff with deep learning without having to use the math itself.

Say you like art & photos and want to do something other than Snapchatting funny dog ears onto yourself.

Maybe you want to take a famous artwork like Hokusai's The Great Wave off Kanagawa

and let your holiday picture of El Pico del Teide on Tenerife:

have the same style:

The old way of doing something like this is to use a photo editor like Adobe's Photoshop and apply lots of filters, transformations, copy & pasting and magic wands. It involves a lot of work, and you need to be a Photoshop wizard to get even close to the result you want. There are other apps that make this easier, for example Topaz Impression or Synthetik's Studio Artist. Both are fun to play with, but they will cost you money and are still limited to their built-in presets.

As an alternative, you can let your computer do the hard work for you by using Deep Learning. A number of easy-to-use pre-trained neural networks can be found, like this one on GitHub by Logan Engstrom. It comes with a couple of styles to try out, and the styling of images itself is self-explanatory and pretty fast. After you set up the environment, it's easy to turn your original cow:

into different styled versions based on famous paintings:

Of course, it's more fun to make a style of your own. The first step is to select a painting, like Visione di Porto by the Italian futurist Benedetta Cappa,

and start training your neural network. It turns out training is pretty easy: all the hard work is done by your computer; you just need to be patient and wait for the result. A long time...

The first time I tried it on my 2015 MacBook Pro, with 16 GB of RAM and a 2.8 GHz Intel i7 processor, it ran for more than 3 days, fans blowing continuously to get rid of all the generated heat. When I stopped it after about 80 hours, it still had days to go.

Actually, training is not that easy; the network not only uses the one example image as input but also 82,784 other images to train on. At a very high level, training this neural network works as follows: it transforms the training images and checks how much the result looks like the painting. It then adjusts itself and tries again and again, repeating this process thousands of times until it reaches an endpoint you define. In general, the longer you let it train, the better the result (in reality it's not that simple, but stick with this for now; check out this great article (pdf) for an in-depth explanation of real-time style transfer).
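At the heart of the "checks how much the result looks like the painting" step is a style loss. A common way to measure style (used in real-time style transfer work) is to compare Gram matrices of feature maps, which capture which channels activate together, i.e. texture rather than content. Here's a minimal NumPy sketch of that idea; it's an illustration, not the repo's actual code, and a real implementation computes this over feature maps from a pre-trained network such as VGG, not raw arrays:

```python
import numpy as np

def gram_matrix(features):
    """Normalized Gram matrix of a feature map.

    features: array of shape (height, width, channels).
    Entry (i, j) says how strongly channels i and j co-activate,
    which correlates with texture/style rather than content.
    """
    h, w, c = features.shape
    flat = features.reshape(h * w, c)   # one row per spatial position
    return flat.T @ flat / (h * w * c)  # shape: (channels, channels)

def style_loss(generated, style):
    """Mean squared difference between the two Gram matrices."""
    diff = gram_matrix(generated) - gram_matrix(style)
    return float(np.mean(diff ** 2))

rng = np.random.default_rng(0)
fm = rng.standard_normal((8, 8, 3))
other = rng.standard_normal((8, 8, 3))

# Identical feature maps have zero style loss; unrelated ones do not.
print(style_loss(fm, fm))
print(style_loss(fm, other) > 0.0)
```

Training then nudges the network's weights to shrink this style loss (alongside a content loss that keeps the original scene recognizable), thousands of iterations in a row.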

The reason it takes so long is that there are enormous amounts of floating-point calculations to be done, and on a CPU that takes a lot of time. Floating-point calculations are also heavily used in graphically intensive software like computer games, and to cater for this, video card manufacturers like NVIDIA developed specialised hardware called GPUs, making it possible for gamers to enjoy high-performance realistic graphics. In an unexpected turn of events, these GPUs also work for Deep Learning (a floating-point calculation is a floating-point calculation, whatever it's used for).
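To get a feel for "enormous amounts", here's some back-of-the-envelope arithmetic for a single convolutional layer. The layer sizes below are illustrative assumptions, not the actual network from this article:

```python
def conv_flops(height, width, in_channels, out_channels, kernel_size):
    """Rough FLOP count for one conv layer with 'same' padding:
    every output position does a multiply-add over the whole kernel
    across all input channels."""
    mults = height * width * out_channels * in_channels * kernel_size ** 2
    return 2 * mults  # count the multiply and the add separately

# Assumed example layer: 256x256 image, 3 -> 32 channels, 9x9 kernel.
per_image = conv_flops(256, 256, 3, 32, 9)
print(f"{per_image:,} FLOPs for one layer on one image")  # ~1 billion

# A training run touches tens of thousands of images, many layers,
# and many passes over the data -- so totals quickly reach the trillions.
per_pass = per_image * 80_000  # ~80k training images, one layer, one pass
print(f"{per_pass:,} FLOPs for a single-layer pass over the dataset")
```

A single layer on a single image is already around a billion operations; a GPU grinds through these in parallel, which is why the same job dropped from days to hours.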

I happen to have a Windows PC with an older NVIDIA GTX 780 GPU (not used for gaming but for running Mathematica).

So I installed TensorFlow on my Windows machine, made some changes to the Python scripts and tried again. The same process took only about 8 hours, and when applied to this image of a windmill in Delft:

I got this result:

Over the next few days, I kept trying other settings, impatiently waiting for the results (kind of annoying, as you sometimes have to wait 12 hours only to conclude there's no real improvement). In general, it helps to train the network for a longer time. Here's a result from the same neural network as before, but trained for a longer period:

All in all, it's fun to do and a nice introduction to the power of Deep Learning. Will this be the future of Art? No: as you'll soon find out, the styles work well for only a small subset of your images. It's not so much the final result itself that's interesting to me, but the fact that you don't know what the neural network will learn. And the eerie thing is that whatever the result is, the network has always captured the feeling and essence of the original painting. Although the subject (Alicante's harbour) of this image:

differs from Hopper's original painting:

it still has the same feeling to it.

Of course, you're not limited to paintings; you can use other images, like this one of a nebula:

Combine it with a photo of two Dutch cows in the Wieringermeer and you get:

I really like the result, but I still wonder why the blue sky has disappeared, why the cows' noses are unchanged, and why some parts of the grass have more stars than others.
