This demo is outdated (published in 2013). Please check this instead:

Recording Remote Audio Streams / RecordRTC

Remote audio+video recording is supported in Chrome version >= 49. You have to enable the MediaRecorder API via chrome://flags/#enable-experimental-web-platform-features

And test this demo instead: RecordRTC-and-RTCMultiConnection.html
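As a rough illustration of what the linked demo does, here is a minimal sketch of recording a remote stream with the MediaRecorder API. It assumes Chrome >= 49 with the flag above enabled, and that `remoteStream` was obtained from a peer connection (e.g. in an `onaddstream` handler); the function and callback names are illustrative, not part of RecordRTC's API.

```javascript
// Minimal sketch: record a remote MediaStream via MediaRecorder.
// `remoteStream` is assumed to come from a peer connection.
function recordRemoteStream(remoteStream, onStopped) {
    var recorder = new MediaRecorder(remoteStream);
    var chunks = [];

    recorder.ondataavailable = function(event) {
        if (event.data && event.data.size > 0) {
            chunks.push(event.data);
        }
    };

    recorder.onstop = function() {
        // Combine the recorded chunks into one playable Blob.
        onStopped(new Blob(chunks, { type: recorder.mimeType }));
    };

    recorder.start();
    return recorder; // call recorder.stop() to finish recording
}
```

Calling `recorder.stop()` later fires `onstop`, which hands the assembled Blob to the caller for playback or upload.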

issue: unable to record remote audio streams using RecordRTC.

  1. issue: Support multiple AudioProcessing modules for WebRtc media stream / Read Latest News

    Currently we only support one AudioProcessing module per WebRtc VoiceEngine. This introduces a couple of problems, for example:
    # Supporting multiple microphones.
    # Different audio tracks might have different constraints.
    # When a peer connection has multiple audio tracks, the logging in libjingle is incorrect.

    The correct fix for all of these problems is to allow a different AudioProcessing module per audio track whenever the tracks use different sources or have different constraints.

    Ref: #264611
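To make the "different constraints per track" problem concrete, here is a hedged sketch of two capture requests with conflicting audio-processing constraints. With a single shared AudioProcessing module, both resulting tracks end up processed the same way regardless of what was asked for. The function name and callback shape are illustrative only.

```javascript
// Sketch: two getUserMedia captures requesting different processing.
// With one shared AudioProcessing module, the difference is not honored.
function captureTwoMicrophones(onStreams) {
    navigator.webkitGetUserMedia(
        { audio: { mandatory: { googEchoCancellation: true } } },
        function(processedStream) {
            navigator.webkitGetUserMedia(
                { audio: { mandatory: { googEchoCancellation: false } } },
                function(rawStream) {
                    // Ideally: one echo-cancelled track, one raw track.
                    onStreams(processedStream, rawStream);
                },
                function(error) { console.error(error); }
            );
        },
        function(error) { console.error(error); }
    );
}
```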

    There is also an architectural (legacy) "problem" with the current design/implementation.
    The APM-related constraints are specified at the getUserMedia layer but are currently applied in PeerConnection, which is technically a different API. This means that APIs that consume a MediaStream from gUM (e.g. WebAudio) currently do not receive processed audio, so the same MediaStream will look (or rather, sound) different to different destination APIs.

    Ref: #264611
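The asymmetry described above can be seen by routing a gUM stream into Web Audio: the Web Audio path receives the raw capture, while the PeerConnection path gets the processed one. Below is a minimal, assumption-laden sketch of the Web Audio side; the function name is hypothetical.

```javascript
// Sketch: feed a getUserMedia stream into Web Audio. Per the issue above,
// this path receives UNprocessed audio (no echo cancellation etc.),
// unlike the same stream sent through a PeerConnection.
function connectStreamToWebAudio(stream) {
    var AudioCtx = window.AudioContext || window.webkitAudioContext;
    var context = new AudioCtx();
    var source = context.createMediaStreamSource(stream);
    source.connect(context.destination); // plays the raw capture locally
    return context;
}
```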

    Expected Solution?

    The expected fix is an upcoming (M37+) command-line flag, perhaps "--enable-audio-processor" or "--enable-audio-track-processing", or possibly constraints such as:

    var audioConstraints = {
        optional: [],
        mandatory: {
            googEchoCancellation: true
        }
    };

    navigator.webkitGetUserMedia({ audio: audioConstraints }, onSuccess, onFailure);

  2. issue: Support feeding remote WebRTC MediaStreamTrack output to WebAudio
  3. issue: Connect WebRTC MediaStreamTrack output to Web Audio API
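Issues 2 and 3 both ask for the same capability: feeding a remote WebRTC track into the Web Audio API. Historically, Chrome delivered silence when a remote stream was connected this way, which is exactly why RecordRTC cannot record remote audio. A hedged sketch of the intended usage, with a silence check, follows; the function name is illustrative and the code only works once the issues above are fixed.

```javascript
// Sketch: analyze a *remote* WebRTC stream with Web Audio. On affected
// Chrome versions this yields silence, which is the bug RecordRTC hits.
function analyzeRemoteAudio(remoteStream) {
    var AudioCtx = window.AudioContext || window.webkitAudioContext;
    var context = new AudioCtx();
    var source = context.createMediaStreamSource(remoteStream);
    var analyser = context.createAnalyser();
    source.connect(analyser);

    // Pull time-domain samples; a buffer of all 128s (the zero level for
    // byte samples) indicates the known "silent remote stream" bug.
    var samples = new Uint8Array(analyser.fftSize);
    analyser.getByteTimeDomainData(samples);
    return samples;
}
```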