TPEPlatform/WebRTC B2G

Summary

This document focuses on B2G only; cross-platform issues are not covered.

  1. WebRTC B2G META issue
  2. WebRTC workweek minutes (Jun 3 ~ Jun 7)
  3. Related wikis

In the bug tables, descriptions shown in red indicate blocker issues.

Network

On B2G, we focus on:

  1. E10S
    • Sandboxing: move the UDP socket into the chrome process on Firefox OS.
    • Packet filtering in the chrome process (see the sketch after the bug table below).
  2. Network interface enumeration and prioritization: WLAN & 3G connection transition.
  3. SDP
    • SDP parsing for video/audio frame parameters, e.g. maximum frame rate or maximum frame size.
    • Request video/audio frame parameters based on HW capability.
  4. ICE
    • Error reporting: the application is able to receive ICE error/failure callbacks.
Bug No | Description | Status | Target | Assigned
bug 825708 | We should use interface properties to determine ICE priorities | FIXED | Gecko 26 | Patrick
bug 869869 | e10s for UDP socket | Open | Gecko 26 | SC
bug 870660 | Packet filter for UDP e10s | Open | Gecko 26 | Patrick
bug 881761 | NSS for WebRTC in content process | Open | Gecko 26 | Patrick
bug 881982 | ICE: handle dynamic network interface change | Open | Gecko 26 | Patrick
bug 884196 | ICE: report error on network interface change | Open | Gecko 26 | Shian-Yow
bug 881935 | Support negotiation of video resolution | FIXED | Gecko 26 | Shian-Yow
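
To make the packet-filter item above more concrete, here is a minimal sketch (with hypothetical class and method names, not the real Gecko interfaces) of a chrome-process UDP filter that only delivers incoming datagrams coming from endpoints the content process has already sent to, or that look like STUN traffic:

 // Hypothetical sketch: a UDP packet filter living in the chrome process.
 // It forwards a datagram to the content process only if the remote endpoint
 // was previously contacted, or if the payload looks like a STUN message.
 #include <cstddef>
 #include <cstdint>
 #include <set>
 #include <string>
 #include <utility>

 class UdpPacketFilter {
 public:
   // Record an endpoint the content process explicitly sent to.
   void AllowOutgoing(const std::string& aAddr, uint16_t aPort) {
     mKnownEndpoints.insert({aAddr, aPort});
   }

   // Decide whether an incoming datagram may be delivered to content.
   bool AllowIncoming(const std::string& aAddr, uint16_t aPort,
                      const uint8_t* aData, size_t aLen) const {
     if (mKnownEndpoints.count({aAddr, aPort})) {
       return true;                        // endpoint already on the allow list
     }
     return LooksLikeStun(aData, aLen);    // let STUN binding traffic through
   }

 private:
   static bool LooksLikeStun(const uint8_t* aData, size_t aLen) {
     // STUN messages start with two zero bits and carry the magic cookie
     // 0x2112A442 at byte offset 4 (RFC 5389).
     return aLen >= 8 && (aData[0] & 0xC0) == 0 &&
            aData[4] == 0x21 && aData[5] == 0x12 &&
            aData[6] == 0xA4 && aData[7] == 0x42;
   }

   std::set<std::pair<std::string, uint16_t>> mKnownEndpoints;
 };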

Performance

Performance-wise, we need to do more optimization on B2G because of its weaker hardware: Firefox OS runs on ARM instead of x86. We need to leverage the BSP/GPU to boost overall WebRTC performance. The options for optimization on B2G are listed below.

  1. H.264/AAC codec support: create a prototype first to evaluate the performance gain from using the HW codec.
  2. Pipeline optimization: make sure there are no useless or redundant operations in the encode/decode pipeline.
    • Example: bug 873003 - duplicate video frames being processed in the encode thread.
  3. Use a reasonable timer interval in the Process Thread on B2G. Collect less statistical data; no NACK.
  4. Test Opus with lower complexity and decide whether B2G uses Opus or G.711 as the default audio codec (see the sketch after the bug table below).
  5. H.264 coding module
    • H.264 encoder/decoder with the HW codec.
    • H.264 RTP transport.


Bug No | Description | Status | Target | Assigned
bug 884365 | Audio realtime input clock mismatch | FIXED | Gecko 26 | Randell
bug 861050 | WebRTC performance issue on B2G | Open | Gecko 26 | Steven
bug 896391 | memcpy from camera preview's GraphicBuffer is slow | Open | | Steven
bug 877954 | Adapt video encode resolution & framerate according to available bandwidth and CPU use | FIXED | Gecko 28 | gpascutto
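
The Opus complexity experiment in item 4 above can be prototyped directly against the public libopus API. The sketch below is plain libopus code rather than Gecko code, and the complexity value 3 is just an example starting point for measurement:

 // Sketch: create an Opus encoder with reduced complexity for B2G-class CPUs.
 #include <opus.h>
 #include <cstdio>

 int main() {
   int err = 0;
   OpusEncoder* enc =
       opus_encoder_create(48000, 1, OPUS_APPLICATION_VOIP, &err);
   if (err != OPUS_OK) {
     std::fprintf(stderr, "opus_encoder_create failed: %d\n", err);
     return 1;
   }

   // Complexity ranges from 0 (cheapest) to 10 (best quality); lower values
   // trade quality for CPU time, which is what we want to measure on ARM.
   opus_encoder_ctl(enc, OPUS_SET_COMPLEXITY(3));

   // ... feed 20 ms frames through opus_encode() here and time the calls ...

   opus_encoder_destroy(enc);
   return 0;
 }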

gUM

Bug No | Description | Status | Target | Assigned
bug 853356 | Display camera/microphone permission acquisition prompt via ContentPermissionRequest | FIXED | Gecko 26 | Alfredo
bug 898949 | [B2G getUserMedia] Display front/back camera list on permission prompt | FIXED | Gecko 26 | S.C
bug 913896 | Display audio (microphone) permission in permission acquisition prompt | FIXED | Gecko 26 | Fred Lin

Media Resource Management

The media resources on B2G (the H/W codec, camera, and microphone) are limited and will be accessed by multiple processes. We need a centralized manager that decides how to dispatch these resources (see the sketch after the list below). We also need to define the media behavior when a process holding a media resource switches to the background.

  1. H/W codec management
  2. Camera resource management
  3. Microphone resource management
  4. User stories of media under multiple processes
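
A minimal sketch of what such a centralized manager could look like, assuming a simple exclusive-ownership policy; the class, method, and enum names here are hypothetical, not an existing Gecko API:

 // Hypothetical centralized media resource manager: hands out exclusive
 // ownership of the HW codec, camera and microphone, and revokes everything
 // a process holds when that process goes to the background.
 #include <iterator>
 #include <map>
 #include <mutex>

 enum class MediaResource { HwCodec, Camera, Microphone };

 class MediaResourceManager {
 public:
   // Returns true if the resource was free and is now owned by aProcessId.
   bool Acquire(MediaResource aRes, int aProcessId) {
     std::lock_guard<std::mutex> lock(mMutex);
     if (mOwners.count(aRes)) {
       return false;                       // already held by another process
     }
     mOwners[aRes] = aProcessId;
     return true;
   }

   void Release(MediaResource aRes, int aProcessId) {
     std::lock_guard<std::mutex> lock(mMutex);
     auto it = mOwners.find(aRes);
     if (it != mOwners.end() && it->second == aProcessId) {
       mOwners.erase(it);
     }
   }

   // Policy hook: when a process is backgrounded, drop everything it holds.
   void OnProcessBackgrounded(int aProcessId) {
     std::lock_guard<std::mutex> lock(mMutex);
     for (auto it = mOwners.begin(); it != mOwners.end();) {
       it = (it->second == aProcessId) ? mOwners.erase(it) : std::next(it);
     }
   }

 private:
   std::mutex mMutex;
   std::map<MediaResource, int> mOwners;
 };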

WebRTC Threading Model

WebRTC is composed of a capture module, a coding module, and a streaming protocol module. To address performance bottlenecks, we need to be familiar with the WebRTC threading model, including the role of each thread and the relationships between threads.

Here are the threads in WebRTC (signaling threads are excluded):

  1. (MediaStreamGraph) Media Stream Graph Run Thread: audio/video coding. (MediaStreamGraphImpl::RunThread in MediaStreamGraph.cpp)
  2. (Network) Socket transport service: sends/receives packets. (Entry point of the user-space callback function?)
  3. (Capture) Camera capture thread: on Firefox OS, video frames arrive through the MediaEngineWebRTCVideoSource::OnNewFrame callback and the source is the Camera API. On other platforms, the images come from MediaEngineWebRTCVideoSource::DeliverFrame, the callback interface of GIPS, and the source is implemented in GIPS. The MSG thread then keeps pulling the latest frame via MediaEngineWebRTCVideoSource::NotifyPull (see the sketch after this list).
  4. (Capture) Audio capture thread: receives audio frames from the microphone. All audio streams are input through MediaEngineWebRTCAudioSource::Process. In the Process function, the audio is saved to the media track. This mechanism may change since it has a clock drift problem (bug 884365).
  5. (Process) Process Thread (worker threads in GIPS): does many other tasks. The Process Thread has a task queue into which clients can inject tasks.
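
The OnNewFrame/NotifyPull interaction in item 3 is essentially a latest-frame mailbox: the capture thread overwrites a slot, and the MSG thread pulls whatever is newest. A simplified, hypothetical version is shown below; the real source works with refcounted images and track segments:

 // Simplified sketch of the push/pull pattern: the camera callback (capture
 // thread) stores the newest frame, and the MSG Run Thread pulls it later.
 #include <memory>
 #include <mutex>
 #include <utility>

 struct VideoFrame { /* pixel data, timestamp, ... */ };

 class VideoSource {
 public:
   // Called on the camera capture thread for every new frame.
   void OnNewFrame(std::shared_ptr<VideoFrame> aFrame) {
     std::lock_guard<std::mutex> lock(mMutex);
     mLatest = std::move(aFrame);   // overwrite: only the newest frame matters
   }

   // Called from the MSG Run Thread; appends the latest frame to the track.
   void NotifyPull(/* StreamTime aDesiredTime */) {
     std::shared_ptr<VideoFrame> frame;
     {
       std::lock_guard<std::mutex> lock(mMutex);
       frame = mLatest;             // grab a reference without blocking capture
     }
     if (frame) {
       // AppendToTrack(frame) would happen here in the real source.
     }
   }

 private:
   std::mutex mMutex;
   std::shared_ptr<VideoFrame> mLatest;
 };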

In a nutshell, we can divide these threads into three categories.

Encode path

  • The encode path starts from capture (getUserMedia).
  • MediaPipelineListener listens for update notifications (NotifyQueuedTrackChanges) from the MSG Run Thread and
    • encodes audio chunks on the MSG Run Thread;
    • encodes video chunks on another thread (the ViECapture thread);
      • puts the encoded media data onto the Socket Transport Service thread for sending to the network (see the task-dispatch sketch below).
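
The last hop of the encode path is a cross-thread hand-off: the encoding thread posts the packet as a task to the network thread, which performs the actual send. The sketch below is a generic single-consumer task queue used only for illustration, not the real Gecko or GIPS dispatcher:

 // Illustrative task dispatcher standing in for the Socket Transport Service
 // thread: encoder threads call Dispatch() with a send task, and the worker
 // thread drains the queue and performs the network I/O.
 #include <condition_variable>
 #include <functional>
 #include <mutex>
 #include <queue>
 #include <thread>
 #include <utility>

 class TransportThread {
 public:
   TransportThread() : mRunning(true), mThread([this] { Run(); }) {}

   ~TransportThread() {
     {
       std::lock_guard<std::mutex> lock(mMutex);
       mRunning = false;
     }
     mCond.notify_one();
     mThread.join();
   }

   // Called from the MSG Run Thread or the ViECapture thread.
   void Dispatch(std::function<void()> aTask) {
     {
       std::lock_guard<std::mutex> lock(mMutex);
       mTasks.push(std::move(aTask));
     }
     mCond.notify_one();
   }

 private:
   void Run() {
     for (;;) {
       std::function<void()> task;
       {
         std::unique_lock<std::mutex> lock(mMutex);
         mCond.wait(lock, [this] { return !mTasks.empty() || !mRunning; });
         if (!mRunning && mTasks.empty()) {
           return;
         }
         task = std::move(mTasks.front());
         mTasks.pop();
       }
       task();   // e.g. send the encoded RTP packet over the socket
     }
   }

   std::mutex mMutex;
   std::condition_variable mCond;
   std::queue<std::function<void()>> mTasks;
   bool mRunning;
   std::thread mThread;
 };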

Decode path

  • Steven, please update the whole story from the network/jitter buffer to the renderer.

Process dispatcher threads

The Process Thread is a dispatcher thread. A client can register a Module with a ProcessThread, and the ProcessThread calls back into Module::Process at a specific interval (>= 100 ms); a sketch of this contract follows at the end of this section.
The implementation of ProcessThread is located in process_thread_impl.cc.
Here are the modules that implement the Process function:
call_stats.cc, vie_remb.cc, vie_sync_module.cc, monitor_module.cc, audio_device_impl.cc, paced_sender.cc, video_capture_impl.cc, audio_conference_mixer_impl.cc, rtp_rtcp_impl.cc, audio_coding_module_impl.cc, video_coding_impl.cc

  • RTCP - NACK/statistics
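
A standalone sketch of the Module/ProcessThread contract described above, mirroring the TimeUntilNextProcess/Process pattern; this is illustrative code, not the actual process_thread_impl.cc implementation:

 // A module that wants to be driven by the dispatcher reports how long until
 // its next Process() call; the dispatcher loops over registered modules and
 // calls Process() whenever that time reaches zero.
 #include <chrono>
 #include <cstdint>

 class Module {
 public:
   virtual ~Module() = default;
   // Milliseconds until the dispatcher should call Process() again.
   virtual int64_t TimeUntilNextProcess() = 0;
   // Periodic work, e.g. sending RTCP NACKs or updating statistics.
   virtual void Process() = 0;
 };

 class RtcpModule : public Module {
 public:
   int64_t TimeUntilNextProcess() override {
     auto now = std::chrono::steady_clock::now();
     auto elapsed =
         std::chrono::duration_cast<std::chrono::milliseconds>(now - mLast)
             .count();
     return elapsed >= kIntervalMs ? 0 : kIntervalMs - elapsed;
   }

   void Process() override {
     mLast = std::chrono::steady_clock::now();
     // ... build and send RTCP (NACK / receiver reports) here ...
   }

 private:
   static constexpr int64_t kIntervalMs = 100;  // callbacks >= 100 ms apart
   std::chrono::steady_clock::time_point mLast = std::chrono::steady_clock::now();
 };

 // A dispatcher loop would then do, for each registered module:
 //   if (module->TimeUntilNextProcess() <= 0) module->Process();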

Next Step

  1. Integrate WebRTC with MediaRecorder.

FAQ

  • How do I find the current resolution captured by the camera?

Use gdb and set a breakpoint at MediaEngineWebRTCVideoSource::OnNewFrame().

Reference