Published in April 2019 | Deep Learning
This paper presents a deep learning approach to neural style transfer that takes a single content image and transfers the appearance of one or more reference style images onto it. Neural style transfer is an optimization technique that demonstrates the representational capabilities of neural networks: given a content image and one or more style images, an input image is transformed so that it retains the structure of the content image but appears “painted” in the style of the style images. Prisma, a popular Android application, is a well-known example of neural style transfer; it takes an input image from the phone, processes it on a server, and returns the stylized result. The main disadvantage of Prisma is that the user can apply only one style image. In our work, a content image and the corresponding style images are given as input and combined to form a final image that preserves the features and semantic details of the content image while reflecting the style representation of the style image(s). This allows unique images to be generated by combining various style images with content images. The proposed system uses one content image and multiple style reference images for processing. To convert an image into the styles of an artist, the neural artistic style transfer technique is used: for a given content image, the corresponding style and content representations are matched at intermediate layers of a convolutional neural network trained for image classification. The proposed system uses the VGG16 model to perform style transfer at various layers of the network. As preprocessing, it normalizes the images and, as required by VGG16, converts them from RGB (red, green, blue) to BGR (blue, green, red) channel order. The system transfers multiple style images onto the specified content image and produces the corresponding output using the Keras functional API and TensorFlow. The comparison results show that this system overcomes the single-style drawback of Prisma by supporting multiple style images.
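The two mechanical steps the abstract mentions can be sketched in plain NumPy: the RGB-to-BGR conversion with mean-centering that VGG16 expects, and a weighted combination of a content loss with per-style losses for multiple style images. This is a minimal illustrative sketch, not the paper's implementation: the function names `vgg16_preprocess` and `combine_losses` and the equal-weight loss formulation are assumptions, while the per-channel means are VGG16's standard ImageNet BGR means.

```python
import numpy as np

# ImageNet per-channel means used by VGG16, in BGR order (standard values).
VGG_MEAN_BGR = np.array([103.939, 116.779, 123.68], dtype=np.float32)

def vgg16_preprocess(rgb_image):
    """Convert an H x W x 3 RGB array (values in [0, 255]) to the
    BGR, mean-centered form that VGG16 expects as input."""
    bgr = rgb_image[..., ::-1].astype(np.float32)  # flip channel order RGB -> BGR
    return bgr - VGG_MEAN_BGR                      # subtract ImageNet channel means

def combine_losses(content_loss, style_losses, content_weight, style_weights):
    """Weighted total loss over one content term and several style terms,
    one style term per reference style image (assumed formulation)."""
    return content_weight * content_loss + sum(
        w * s for w, s in zip(style_weights, style_losses)
    )

# Example: preprocess a pure-white 2 x 2 image and combine two style losses.
white = np.full((2, 2, 3), 255.0, dtype=np.float32)
x = vgg16_preprocess(white)
total = combine_losses(content_loss=2.0, style_losses=[1.0, 3.0],
                       content_weight=1.0, style_weights=[0.5, 0.5])
```

In an actual Keras pipeline the preprocessing step is usually handled by `keras.applications.vgg16.preprocess_input`, which performs the same BGR flip and mean subtraction.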