Transcoding PCM audio to MP3 with MediaTranscoder in C#

I am trying to transcode an audio file saved in PCM format from a WebRTC call. WebRTC reports the audio stream format as 16-bit depth, 1 channel, and a 48000 Hz sample rate. I want to convert the audio to MP3 so that I can later add it as a background track to a screen recording from a Unity UWP app (using MediaComposition). I am stuck on the first part: transcoding the PCM audio file to an MP3 file. When I try to prepare the transcode, preparedTranscodeResult.CanTranscode returns false. Here is my code.

StorageFile remoteAudioPCMFile = await StorageFile.GetFileFromPathAsync(Path.Combine(Application.temporaryCachePath,"remote.pcm").Replace("/","\\"));
StorageFolder tempFolder = await StorageFolder.GetFolderFromPathAsync(Application.temporaryCachePath.Replace("/","\\"));
StorageFile remoteAudioMP3File = await tempFolder.CreateFileAsync("remote.mp3",CreationCollisionOption.ReplaceExisting);

MediaEncodingProfile profile = MediaEncodingProfile.CreateMp3(AudioEncodingQuality.Auto);
profile.Audio.BitsPerSample = 16;
profile.Audio.ChannelCount = 1;
profile.Audio.SampleRate = 48000;

MediaTranscoder transcoder = new MediaTranscoder();
var preparedTranscodeResult = await transcoder.PrepareFileTranscodeAsync(remoteAudioPCMFile,remoteAudioMP3File,profile);

if (preparedTranscodeResult.CanTranscode)
{
    await preparedTranscodeResult.TranscodeAsync();
}
else
{
    if (remoteAudioPCMFile != null)
    {
        await remoteAudioPCMFile.DeleteAsync();
    }

    if (remoteAudioMP3File != null)
    {
        await remoteAudioMP3File.DeleteAsync();
    }

    switch (preparedTranscodeResult.FailureReason)
    {
        case TranscodeFailureReason.CodecNotFound:
            Debug.LogError("Codec not found.");
            break;
        case TranscodeFailureReason.InvalidProfile:
            Debug.LogError("Invalid profile.");
            break;
        default:
            Debug.LogError("Unknown failure.");
            break;
    }
}
jhcwengbin's answer:

What I ended up doing was writing a WAV header into the FileStream before I started writing the audio data to it. I got this from this post.

private void WriteWavHeader(FileStream stream, bool isFloatingPoint, ushort channelCount, ushort bitDepth, int sampleRate, int totalSampleCount)
{
    stream.Position = 0;

    // RIFF header.
    // Chunk ID.
    stream.Write(Encoding.ASCII.GetBytes("RIFF"), 0, 4);

    // Chunk size.
    stream.Write(BitConverter.GetBytes((bitDepth / 8 * totalSampleCount) + 36), 0, 4);

    // Format.
    stream.Write(Encoding.ASCII.GetBytes("WAVE"), 0, 4);

    // Sub-chunk 1.
    // Sub-chunk 1 ID.
    stream.Write(Encoding.ASCII.GetBytes("fmt "), 0, 4);

    // Sub-chunk 1 size.
    stream.Write(BitConverter.GetBytes(16), 0, 4);

    // Audio format (floating point (3) or PCM (1)). Any other format indicates compression.
    stream.Write(BitConverter.GetBytes((ushort)(isFloatingPoint ? 3 : 1)), 0, 2);

    // Channels.
    stream.Write(BitConverter.GetBytes(channelCount), 0, 2);

    // Sample rate.
    stream.Write(BitConverter.GetBytes(sampleRate), 0, 4);

    // Byte rate.
    stream.Write(BitConverter.GetBytes(sampleRate * channelCount * (bitDepth / 8)), 0, 4);

    // Block align (write only the low 2 bytes of the little-endian int).
    stream.Write(BitConverter.GetBytes(channelCount * (bitDepth / 8)), 0, 2);

    // Bits per sample.
    stream.Write(BitConverter.GetBytes(bitDepth), 0, 2);

    // Sub-chunk 2.
    // Sub-chunk 2 ID.
    stream.Write(Encoding.ASCII.GetBytes("data"), 0, 4);

    // Sub-chunk 2 size.
    stream.Write(BitConverter.GetBytes(bitDepth / 8 * totalSampleCount), 0, 4);
}
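As a cross-check of the header layout above, here is a minimal sketch in Python that builds the same 44-byte RIFF/WAVE header (same field order, sizes, and little-endian encoding as the C# method) and then parses the result with the standard-library wave module; the function name wav_header is made up for this example.

```python
import io
import struct
import wave

def wav_header(channel_count: int, bit_depth: int, sample_rate: int,
               total_sample_count: int, is_floating_point: bool = False) -> bytes:
    """Build the same 44-byte RIFF/WAVE header the C# method writes."""
    bytes_per_sample = bit_depth // 8
    data_size = bytes_per_sample * total_sample_count
    return b"".join([
        b"RIFF",
        struct.pack("<I", data_size + 36),                  # chunk size
        b"WAVE",
        b"fmt ",
        struct.pack("<I", 16),                              # sub-chunk 1 size
        struct.pack("<H", 3 if is_floating_point else 1),   # audio format
        struct.pack("<H", channel_count),                   # channels
        struct.pack("<I", sample_rate),                     # sample rate
        struct.pack("<I", sample_rate * channel_count * bytes_per_sample),  # byte rate
        struct.pack("<H", channel_count * bytes_per_sample),                # block align
        struct.pack("<H", bit_depth),                       # bits per sample
        b"data",
        struct.pack("<I", data_size),                       # sub-chunk 2 size
    ])

# 1 channel, 16-bit, 48 kHz: the format WebRTC reported in the question.
header = wav_header(channel_count=1, bit_depth=16, sample_rate=48000,
                    total_sample_count=48000)
# Append one second of silence and let the wave module parse the file back.
buf = io.BytesIO(header + b"\x00" * (2 * 48000))
with wave.open(buf, "rb") as w:
    print(w.getnchannels(), w.getsampwidth(), w.getframerate(), w.getnframes())
```

If the header is laid out correctly, the parsed channel count, sample width, sample rate, and frame count round-trip to the values passed in.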