Audio doesn't get rendered

The rendered videos don't include audio, but I don't understand what I'm doing wrong. Here is a sample of the JSON in case somebody is able to help:

{
   "output":{
      "fps":12,
      "format":"mp4",
      "resolution":"mobile"
   },
   "callback":"https://hook.integromat.com/ql0lgpxg8d03dgq0stfs4jch51k33dqt",
   "timeline":{
       "soundtrack": {
           "src": "https://yt-proj.s3.us-east-2.amazonaws.com/BjSaFDTzkGM/BjSaFDTzkGM.mp3"
       },
      "tracks":[
         {
            "clips":[
               {
                  "asset":{
                     "size":"small",
                     "text":"Gunnter - I Want",
                     "type":"title",
                     "color":"#6F6D50",
                     "style":"marker",
                     "offset":{
                        "x":0,
                        "y":0.2
                     },
                     "position":"bottom"
                  },
                  "start":0,
                  "length":381
               }
            ]
         },
         {
            "clips":[
               {
                  "asset":{
                     "src":"https://yt-proj.s3.us-east-2.amazonaws.com/21ba4c12-dff6-4614-9128-9ce7a4b9814d.jpeg",
                     "type":"image"
                  },
                  "start":0,
                  "length":381
               }
            ]
         }
      ]
   }
}

It appears to be an issue with the encoding. Running the file through the probe endpoint gives me the following:

{
    "success": true,
    "message": "ok",
    "response": {
        "metadata": {
            "streams": [{
                "index": 0,
                "codec_name": "opus",
                "codec_long_name": "Opus (Opus Interactive Audio Codec)",
                "codec_type": "audio",
                "codec_time_base": "1/48000",
                "codec_tag_string": "[0][0][0][0]",
                "codec_tag": "0x0000",
                "sample_fmt": "fltp",
                "sample_rate": "48000",
                "channels": 2,
                "channel_layout": "stereo",
                "bits_per_sample": 0,
                "r_frame_rate": "0/0",
                "avg_frame_rate": "0/0",
                "time_base": "1/1000",
                "start_pts": -7,
                "start_time": "-0.007000",
                "disposition": {
                    "default": 1,
                    "dub": 0,
                    "original": 0,
                    "comment": 0,
                    "lyrics": 0,
                    "karaoke": 0,
                    "forced": 0,
                    "hearing_impaired": 0,
                    "visual_impaired": 0,
                    "clean_effects": 0,
                    "attached_pic": 0,
                    "timed_thumbnails": 0
                },
                "tags": {
                    "language": "eng"
                }
            }],
            "chapters": [],
            "format": {
                "filename": "https://yt-proj.s3.us-east-2.amazonaws.com/BjSaFDTzkGM/BjSaFDTzkGM.mp3",
                "nb_streams": 1,
                "nb_programs": 0,
                "format_name": "matroska,webm",
                "format_long_name": "Matroska / WebM",
                "start_time": "-0.007000",
                "duration": "380.061000",
                "size": "6664632",
                "bit_rate": "140285",
                "probe_score": 100,
                "tags": {
                    "encoder": "google/video-file"
                }
            }
        }
    }
}

Despite the .mp3 extension, the probe shows a Matroska/WebM container with an Opus audio stream, and the Shotstack editor appears to have trouble with that combination.
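
For anyone who wants to check this locally, ffprobe (bundled with ffmpeg) reports the same container and codec details:

ffprobe -hide_banner -show_format -show_streams "https://yt-proj.s3.us-east-2.amazonaws.com/BjSaFDTzkGM/BjSaFDTzkGM.mp3"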

I re-encoded it with ffmpeg to use libmp3lame and it works fine now (a sketch of the command is after the JSON):

{
   "output":{
      "fps":12,
      "format":"mp4",
      "resolution":"mobile"
   },
   "timeline":{
       "soundtrack": {
           "src": "https://shotstack-customer.s3.ap-southeast-2.amazonaws.com/BjSaudio.mp3"
       },
      "tracks":[
         {
            "clips":[
               {
                  "asset":{
                     "size":"small",
                     "text":"Gunnter - I Want",
                     "type":"title",
                     "color":"#6F6D50",
                     "style":"marker",
                     "offset":{
                        "x":0,
                        "y":0.2
                     },
                     "position":"bottom"
                  },
                  "start":0,
                  "length":381
               }
            ]
         },
         {
            "clips":[
               {
                  "asset":{
                     "src":"https://yt-proj.s3.us-east-2.amazonaws.com/21ba4c12-dff6-4614-9128-9ce7a4b9814d.jpeg",
                     "type":"image"
                  },
                  "start":0,
                  "length":381
               }
            ]
         }
      ]
   }
}
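
The re-encode was something along these lines (I may have used slightly different flags, but the key part is forcing libmp3lame as the audio codec so the output is a real MP3):

ffmpeg -i BjSaFDTzkGM.mp3 -vn -codec:a libmp3lame -b:a 192k BjSaudio.mp3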

Is it possible for you to use a different audio encoding?

Did something change about that? It used to work perfectly well. I didn't change anything in the way I generate the mp3 files (I never really paid attention to the encoding), but suddenly I started getting videos with no audio … so I came here.
I don't really have a way to change the encoding of these mp3 files … is there any workaround?

Thank you

We introduce updates to the edit engine quite regularly, so it's possible a change affected this - sorry to hear it used to work and now doesn't. @Jeff.S - do you have anything on this type of encoding?

In terms of a workaround, the only thing I can think of at this point is using a tool like ffmpeg to re-encode your audio file before sending it to us, for example:

ffmpeg -i input.mp3 -ab 320k -f mp3 output.mp3

I think we'd need to see the last known working render so we can compare the audio files. I tried to import the file into Adobe Premiere and it couldn't import the file either.

Sure! Here is the last working example I have: https://cdn.shotstack.io/au/v1/1yeg2viue7/e7cf7d95-7e17-4b5d-bfc2-1f107b5a845b.mp4

OK, re-running that render there is no audio now either. How is the audio created in the first place? Is there a different format that can be used, or can it be converted before being sent to Shotstack?

I'm using a service that lets me download an mp3 from a video, but it's a black box and I have zero control over it … I'm looking to replace it but I can't find anything else for the moment.
Ideally this audio format would render with Shotstack, but I'll keep looking for alternatives.

Send me a DM with the service and I can take a look; otherwise there may be a tool that can do the conversion, or maybe we can do the conversion on our side. On our roadmap we have a normalisation/ingestion service to fix these typical UGC-type issues with different codecs/containers etc…

Sorry, but I couldn't find how to send DMs.

I use this app (deployed on Heroku): GitHub - mskian/video-dl: Video Downloader 📥 - Download Facebook Video and Youtube Video and Audio.
(The videos I download audio from belong to me.)

I use it this way: /audio/audio?url=YT-URL&quality=highest&format=mp3&filter=audioonly

I wonder if you could use that tool to download an mp4 instead of an mp3. You could then add the video as a clip with volume 1 and opacity 0, or use the audio asset type with the mp4 video file as the src - it will ignore the video and just use the audio track.
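
As a sketch, an audio asset clip pointing at the mp4 would look something like this (the src is just a placeholder for your downloaded file, and the length should match your edit):

{
   "asset":{
      "type":"audio",
      "src":"https://example.com/downloaded-video.mp4"
   },
   "start":0,
   "length":381
}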

Sounds good! Will try :slight_smile:
What happens if I use an mp4 as the soundtrack? Will it ignore the video and use the audio track too?

It worked using the mp4 as the soundtrack :+1:
Thank you so much @lucas.spielberg :pray:
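
In case it helps anyone else, the soundtrack block now just points at the mp4, something like this (placeholder URL):

"soundtrack": {
    "src": "https://example.com/downloaded-video.mp4"
}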

@lucas.spielberg Sorry, I spoke too quickly … it worked on some, but my last 2 renders don't have sound … I don't know why!
Here is one example: https://cdn.shotstack.io/au/v1/1yeg2viue7/a031d879-502c-4096-b207-10dc130b611e.mp4

Can you send the JSON for the one that worked and the one that didn't? DM me if you'd prefer.