Video to Cartoon Converter using style transfer and Shotstack

Hey guys, I’m on to something big. I just tried out a feature of another API that transfers the style of any image onto another, and it does a very good job of cartooning the image, better than most cartoonizer apps out there. Now, what I’d like to do is let a user convert a video into individual frames, apply the style transfer to each frame (there could be options for every other frame, etc.), then recompile all the frames into a video. Voila, we’ve made a video-to-cartoon converter!

How much of this could be done with Shotstack?

Hi Daniel,

I don’t think you can extract the frames using Shotstack.
You should use ffmpeg directly.
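For reference, dumping a video to numbered frame images is a one-liner in ffmpeg. A minimal sketch (the file names, output directory, and frame rate here are placeholders, not from this thread):

```python
import subprocess

def extract_frames_cmd(video_path, out_dir, fps=25):
    """Build an ffmpeg command that dumps each frame as a numbered PNG."""
    return [
        "ffmpeg",
        "-i", video_path,             # input video
        "-vf", f"fps={fps}",          # sample at this frame rate
        f"{out_dir}/frame_%05d.png",  # frame_00001.png, frame_00002.png, ...
    ]

cmd = extract_frames_cmd("input.mp4", "frames")
# subprocess.run(cmd, check=True)  # uncomment to run (requires ffmpeg on PATH)
print(" ".join(cmd))
```

You would then run the style transfer over each `frame_*.png` before reassembling.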

Contact me if I can help you with this:


It might work. You could create image assets with a length of about 0.04 seconds, which should be one frame at 25fps. Here is a quick example output: I used the demo here and just set the length to 0.04. It looks like some images play for two frames and some for one, though:

It can probably be done, but you would end up with a very large JSON file, and using ffmpeg might be a better option.

This article shows you how to stitch files together using ffmpeg: Merge videos using FFmpeg concat — Shotstack. The same approach should work with individual images. Of course, you’d need to set all this up yourself.
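For a numbered image sequence specifically, ffmpeg can reassemble the frames directly without a concat list. A sketch along the same lines as above (file names and frame rate are assumptions):

```python
import subprocess

def stitch_frames_cmd(frames_pattern, out_path, fps=25):
    """Build an ffmpeg command that reassembles numbered frames into a video."""
    return [
        "ffmpeg",
        "-framerate", str(fps),  # input frame rate of the image sequence
        "-i", frames_pattern,    # e.g. frames/frame_%05d.png
        "-c:v", "libx264",       # encode to H.264
        "-pix_fmt", "yuv420p",   # widest player compatibility
        out_path,
    ]

cmd = stitch_frames_cmd("frames/frame_%05d.png", "cartoon.mp4")
# subprocess.run(cmd, check=True)  # requires ffmpeg on PATH
print(" ".join(cmd))
```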


Ok, so maybe just use Shotstack in a final render to add music and effects? While we’re discussing this, here is an example I made manually. Also, this Facebook group is for discussing two pet-based apps I’ve been engaged to build. I’m trying to bring Shotstack into both apps, so users can generate videos of their pets in clever creatives, and so advertisers can dynamically swap in treats and other branded items. Maybe one of you can join, say hello, follow what we’re doing, and talk about how Shotstack can get involved. :blue_heart:

It keeps trying to embed the video instead of the link. Here, I think this is the group:

Man, I am having the hardest time getting ffmpeg working on a Google Cloud VM. There are absolutely no clear docs anywhere on the web. :dizzy_face:

Haha, yes, this is why we created Shotstack, to save you this pain.

You might need to work out which version of Linux the VM is running and then compile ffmpeg in a Docker container based on that same version, or compile ffmpeg directly on the VM.

Bro, Jeff hears me praise you guys all the time. You are an excellent example of how every backend service provider should be: a simple and well-organized API with clear documentation and strong personal support. Whoever wrote GCP’s documentation should hire your people to completely revamp theirs. They start their docs with references to 10 other links which open in the same tab, and end every section with the info you needed at the very beginning just to follow what they were saying.

I think I found a solution with the ezGif API. They have a feature which returns a zip file of the frames. Now I just have to unzip it, apply the style transfers, recompile the frames as a video somehow, then send it to Shotstack for effects, overlays, and sounds.
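The unzip step, at least, is easy with Python’s standard library. A sketch, assuming the zip just contains numbered frame images (the path names and extensions are placeholders):

```python
import zipfile
from pathlib import Path

def unzip_frames(zip_path, out_dir):
    """Extract frame images from a zip into out_dir and return them in order."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    with zipfile.ZipFile(zip_path) as zf:
        zf.extractall(out)
    # Sort by name so frame_00001 comes before frame_00002, etc.
    return sorted(
        p for p in out.iterdir()
        if p.suffix.lower() in {".png", ".jpg", ".jpeg", ".gif"}
    )
```

Each returned path could then be fed through the style transfer API before recompiling.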