Using GLB and other 3D models to render video from a zip file

I have been exploring the possibility of rendering a video from a GLB file after a user creates a Ready Player Me avatar. So far I think I must render the video from the model with another service, then send it to Shotstack to add effects, overlays, and HTML. Kind of like a character card video to share on social media, since Ready Player Me only allows an image of the avatar to be shared. I know I can send a GLB file URL to Cloudinary, then take the returned render and send it to Shotstack to add the character name and sound. What do y'all think? Or are there plans to allow conversions of GLB files in Shotstack?
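Something like this is what I'm imagining for the Shotstack half of the pipeline — a rough sketch of the edit JSON, assuming the model has already been converted to an MP4 at some URL (the URLs, title style, and soundtrack effect here are placeholders, not tested values):

```python
import json

def build_character_card_edit(video_url, character_name, soundtrack_url, length=5.0):
    """Build a Shotstack-style edit: avatar video with a name overlay and soundtrack."""
    return {
        "timeline": {
            "soundtrack": {"src": soundtrack_url, "effect": "fadeInFadeOut"},
            "tracks": [
                {   # top track: character name rendered as a title overlay
                    "clips": [{
                        "asset": {"type": "title", "text": character_name, "style": "minimal"},
                        "start": 0,
                        "length": length,
                    }]
                },
                {   # bottom track: the video rendered from the GLB model
                    "clips": [{
                        "asset": {"type": "video", "src": video_url},
                        "start": 0,
                        "length": length,
                    }]
                },
            ],
        },
        "output": {"format": "mp4", "resolution": "sd"},
    }

edit = build_character_card_edit(
    "https://example.com/avatar.mp4",   # hypothetical URL of the converted model
    "Avatar Name",
    "https://example.com/theme.mp3",    # hypothetical soundtrack URL
)
print(json.dumps(edit, indent=2)[:80])
```

The two-track layout keeps the title on top of the video; the soundtrack sits on the timeline rather than in a track.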


OK guys, so I made the tool to transform the GLB file into an animated GIF. I can also output MP4 format just by changing the URL. Now I just need to make a form that sends it to Shotstack in a render. This will be cool for letting users make videos with their avatars, don't you think?
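Roughly, the form handler would just need to POST an edit to the Shotstack render endpoint. A minimal sketch with only the standard library (the sandbox endpoint path and the dummy API key are assumptions — swap in your own):

```python
import json
import os
import urllib.request

# Shotstack sandbox endpoint; production would use a /v1/ path instead
SHOTSTACK_URL = "https://api.shotstack.io/stage/render"

def make_render_request(edit: dict, api_key: str) -> urllib.request.Request:
    """Prepare (but don't send) the POST request that queues a render."""
    return urllib.request.Request(
        SHOTSTACK_URL,
        data=json.dumps(edit).encode("utf-8"),
        headers={"Content-Type": "application/json", "x-api-key": api_key},
        method="POST",
    )

# A bare-bones edit just to show the request shape
edit = {"timeline": {"tracks": []}, "output": {"format": "mp4", "resolution": "sd"}}
req = make_render_request(edit, os.environ.get("SHOTSTACK_API_KEY", "demo-key"))
# urllib.request.urlopen(req)  # uncomment to actually queue the render
print(req.get_method(), req.full_url)
```

The request is built separately from being sent, so the form handler can log or validate the payload before queuing anything.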

Great idea. I just took a look through your post, and creating a dynamic avatar video from a 3D model is very cool. We've got keyframes on our roadmap, which would allow you to add other interesting animations as well.

Awesome! Yeah, I have been looking at some of the videos they make on Facebook, and it looks like most of them are made by professional Blender artists, and some with Unreal Engine, but I think that community would be ripe for a dynamic video maker. They even have some interesting animations and things they can do with their SDK, so I'm trying to figure out how to use it to make it easy for people to place the avatar in different positions and make dynamic videos with sound and voiceovers. I will keep y'all updated. I keep trying to get around to building more on Shotstack so I can get the renders up. This could be pretty cool 💙

I was trying to check it out, but every time I try to reach the page the login pops up and then I get redirected to the home page: Create Your 3D Avatar | Wikacy.

We have thought at a very high level about how you could set up Blender on a server and run it headless from an API. I imagine you could import models and run scripts to render a video. I am guessing Cloudinary does something along those lines.
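As a rough sketch of what that could look like (assuming Blender is installed on the server's PATH), the API layer could just build and run the headless invocation, with a script inside Blender doing the actual import and render:

```python
import subprocess

def blender_render_command(model_path, script_path, output_path):
    """Build a headless Blender invocation: -b (no GUI), -P (run a Python script).

    The script itself would import the GLB and kick off the render; anything
    after '--' is ignored by Blender and passed through to the script.
    """
    return [
        "blender", "-b",       # background / headless mode
        "-P", script_path,     # Python script executed inside Blender
        "--",                  # separator: remaining args go to the script
        model_path, output_path,
    ]

cmd = blender_render_command("avatar.glb", "render_glb.py", "out/avatar.mp4")
# subprocess.run(cmd, check=True)  # requires Blender on the server's PATH
print(" ".join(cmd))
```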

I just took the entire thing down anyway. I didn't like those guys at Ready Player Me. Plus I believe we could build a much better system ourselves.

Oh, OK, sorry to hear that didn’t work out. Always look forward to seeing what you come up with next though.

I'm really interested in what you guys could come up with for connecting Blender and doing dynamic transformations on 3D models, then rendering them as images and videos. This could be monumental.

Check out this article on the Blender Python API: how-to-create-and-render-a-scene-in-blender-using-python-api/
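For reference, a minimal version of the script Blender would run might look like this — a sketch only, assuming it executes inside Blender's bundled Python via `blender -b -P render_glb.py -- model.glb out.mp4` (the argument handling and the specific output settings are illustrative):

```python
# render_glb.py — intended to run inside Blender's bundled Python
import sys

def parse_args(argv):
    """Pull our own args from after Blender's '--' separator."""
    tail = argv[argv.index("--") + 1:] if "--" in argv else argv[-2:]
    return tail[0], tail[1]

try:
    import bpy  # only importable inside Blender
except ImportError:
    bpy = None

if bpy is not None:
    model_path, out_path = parse_args(sys.argv)
    bpy.ops.import_scene.gltf(filepath=model_path)      # import the GLB model
    scene = bpy.context.scene
    scene.render.filepath = out_path
    scene.render.image_settings.file_format = "FFMPEG"  # video container output
    scene.render.ffmpeg.format = "MPEG4"
    bpy.ops.render.render(animation=True)               # render every frame
else:
    # Outside Blender, just demonstrate the argument parsing
    print(parse_args(["blender", "-b", "--", "model.glb", "out.mp4"]))
```

A real version would also set up a camera, lighting, and the frame range before rendering; an imported GLB alone won't necessarily have any of those.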