
Media streaming: the driving force behind modern entertainment

Updated: Jun 10

Digital entertainment has undergone a profound transformation over the years, redefining how users consume and experience content. The traditional TV and cable era has taken on a secondary role as on-demand content for portable devices keeps growing; let’s take a few minutes to review the breakthroughs in streaming technology and the wide variety of alternatives it delivers:


Media streaming evolution

Streaming media is a method of continuous content delivery with minimal intermediate storage, enabling playback through online or offline media players. It has become a go-to approach for applications ranging from entertainment to education and beyond.


Streaming differs from traditional file downloading in that users can start enjoying digital video or audio content while it is still being transmitted, providing a more seamless and instantaneous experience.
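To make the difference concrete, here is a minimal Python sketch (with a hypothetical URL and a hypothetical player hook) that consumes a media file in small chunks as they arrive, rather than waiting for a complete download:

```python
import urllib.request

CHUNK_SIZE = 64 * 1024                      # read 64 KiB at a time
MEDIA_URL = "https://example.com/video.ts"  # hypothetical media URL

def feed_to_player(chunk: bytes) -> None:
    """Hypothetical hook: hand a chunk to the player's decode buffer."""
    print(f"buffered {len(chunk)} bytes")

# Playback can begin as soon as the first chunks arrive; there is no need
# to wait for (or store) the complete file, unlike a traditional download.
with urllib.request.urlopen(MEDIA_URL) as response:
    while chunk := response.read(CHUNK_SIZE):
        feed_to_player(chunk)
```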


Live sports broadcasts were among the earliest adopters of internet streaming in the '90s, and Flash and RTMP-based delivery matured shortly after. As video streaming protocols evolved, giants like YouTube and Netflix emerged, and streaming eventually reached social media apps with the launch of Vine.


The dynamic livestream market continues to evolve, reaching new industries like telehealth, remote learning, virtual events, and video game streaming.


A summary of the major milestones, courtesy of the folks at Wowza, with a few extra entries for the 2020s, shows the rapid evolution of media streaming and of how we consume content:




Figure 1. Historical development of media streaming


What is a streaming protocol?

A streaming protocol is a set of guidelines that defines how multimedia content (audio and video) travels from one device or system to another over a network, whether local or the Internet. These protocols, particularly in video streaming, establish a standardized approach to breaking a video stream into smaller, more manageable chunks for seamless transmission.

Working alongside a streaming protocol, a codec plays a crucial role in reducing file sizes by eliminating unnecessary information. For instance, if a video features a static background for an extended period, the codec intelligently discards the redundant visual data after the initial frame and retains only a reference, optimizing storage space. Container formats like MP4 and FLV dictate how the transmitted stream data (the video and audio tracks along with their metadata) is stored once transmission is complete.
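As a rough illustration of the codec-plus-container split, the sketch below uses GStreamer's Python bindings (assuming GStreamer, its plugins, and PyGObject are installed) to encode a synthetic test video with the H.264 codec and wrap it in an MP4 container; the pipeline is just an example, not tied to any particular streaming product:

```python
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

# videotestsrc -> x264enc (codec: removes spatial/temporal redundancy)
#              -> mp4mux  (container: wraps the encoded video plus metadata)
pipeline = Gst.parse_launch(
    "videotestsrc num-buffers=300 ! video/x-raw,width=640,height=480 "
    "! x264enc ! mp4mux ! filesink location=sample.mp4"
)

pipeline.set_state(Gst.State.PLAYING)
# Wait until the stream finishes or an error occurs, then clean up.
bus = pipeline.get_bus()
bus.timed_pop_filtered(Gst.CLOCK_TIME_NONE,
                       Gst.MessageType.EOS | Gst.MessageType.ERROR)
pipeline.set_state(Gst.State.NULL)
```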


Different flavors for your streaming needs

Amid the growth of streaming technologies, the landscape of video streaming protocols has expanded. Streaming protocols can be grouped into three main categories:

  • Legacy protocols: Older protocols, such as RTSP and RTMP, that predate HTTP-based delivery; they typically rely on dedicated streaming servers and simpler authentication mechanisms (such as username and password).

  • HTTP-based protocols: Protocols built on top of HTTP, the request-response protocol that lets clients retrieve web resources such as HTML files and forms the backbone of the web. HTTP-based streaming delivers media over this same standard web infrastructure, which makes it easy to cache and distribute through CDNs.

  • Modern protocols: Protocols that are not yet universally supported, many of them open-source oriented, designed to address shortcomings of the preceding media streaming protocols.

A summary of the preferred protocols among developers:


  1. HTTP Live Streaming (HLS): Developed by Apple as part of its effort to drop Flash from iPhone devices, HLS is widely used for streaming video and audio over the internet and is arguably the most popular streaming protocol available today. It breaks the media into small chunks and delivers them using standard HTTP. HLS is adaptive, meaning it can adjust the quality of the stream based on the viewer's network conditions.

  2. Dynamic Adaptive Streaming over HTTP (DASH): An international standard for adaptive streaming, DASH was developed by MPEG as an alternative to HLS. It operates similarly but is not tied to any specific ecosystem and is designed to be interoperable across devices and platforms.

  3. Real-Time Streaming Protocol (RTSP): RTSP is a network control protocol used for establishing and controlling media sessions between endpoints. It is often employed for live streaming and supports both on-demand and live media.

  4. RTMP (Real-Time Messaging Protocol): Developed by Adobe, RTMP is a protocol used for streaming audio, video, and data over the internet. While it has been widely used, it is gradually being replaced by newer protocols like HLS and DASH.

  5. SRT (Secure Reliable Transport): SRT is an open-source video streaming protocol designed for low-latency and reliable transmission of video streams over unreliable or unpredictable networks, such as the Internet. It was originally developed by Haivision and later released as an open-source project.

  6. WebRTC (Web Real-Time Communication): WebRTC is an open-source project that provides web browsers and mobile applications with real-time communication via an API. It enables peer-to-peer communication for voice, video, and data sharing directly in web browsers without the need for plugins or external software. Commonly used applications such as Google Meet, Discord, WhatsApp, and Facebook Messenger rely on WebRTC.

  7. RIST (Reliable Internet Stream Transport): RIST is a protocol for media content transmission, commonly used in the broadcast industry to transport video feeds over IP networks. It is meant to address the challenge of delivering high-quality, low-latency content over unreliable or unpredictable networks and is often used for real-time distribution of video content.


These streaming protocols enable users to access multimedia content seamlessly over the internet by breaking down the media into smaller chunks, adapting to varying network conditions, and providing a smooth viewing experience. The choice of protocol often depends on factors such as the platform, device compatibility, and specific requirements of the streaming application.
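To make the chunk-based, adaptive delivery concrete for HLS in particular, the sketch below fetches a hypothetical master playlist and lists the variant streams a player can switch between; the URL is a placeholder, and real playlists may carry additional attributes:

```python
import re
import urllib.request

MASTER_URL = "https://example.com/live/master.m3u8"  # placeholder URL

with urllib.request.urlopen(MASTER_URL) as resp:
    lines = resp.read().decode("utf-8").splitlines()

# In an HLS master playlist, each variant stream is announced by an
# #EXT-X-STREAM-INF tag; the next line holds the variant playlist URI.
for i, line in enumerate(lines):
    if line.startswith("#EXT-X-STREAM-INF"):
        bandwidth = re.search(r"BANDWIDTH=(\d+)", line)
        resolution = re.search(r"RESOLUTION=([^,\s]+)", line)
        uri = lines[i + 1] if i + 1 < len(lines) else "?"
        print(f"{int(bandwidth.group(1)):>9} bps  "
              f"{resolution.group(1) if resolution else 'n/a':>9}  {uri}")
```

The player then downloads the media segments of one variant and can switch to another variant between segments as network conditions change.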


Choosing the right protocol for your streaming needs

When choosing a streaming protocol, there are a number of considerations based on the specific requirements and characteristics of the streaming application. Some of the key criteria to keep in mind:

  1. Content type (video or audio): Different streaming protocols may be optimized for video or audio content; it is important to ensure the protocol supports the type of media being streamed.

  2. Latency: Some applications require minimal delay between the time the stream is initiated and the time the end user receives it; protocols like WebRTC are suitable for such real-time communication. When real-time interaction is not critical, HTTP-based protocols can deliver at standard latency.

  3. Adaptability to network conditions: Adaptive bitrate (ABR) protocols dynamically adjust the quality of the stream based on the viewer’s network conditions for a smoother playback experience (a minimal selection sketch follows this list).

  4. Device and platform support: Some protocols might be more compatible with specific operating systems or browsers; it is therefore important to verify that the protocol is supported by the devices and platforms of the target viewers.

  5. Scalability: This criterion refers to the capacity of the protocol to handle a varying number of concurrent viewers. Protocols like HLS or DASH are designed to scale effectively.

  6. Security: Depending on the type of content, security features such as encryption and other secure delivery mechanisms might need to be considered.

  7. Streaming server requirements: This relates to the infrastructure needed to support the chosen streaming protocol; some protocols may need specific server setups or additional components for optimal performance.

  8. Cost: This refers to the overall cost of implementing and maintaining the chosen streaming protocol. Some protocols may have licensing fees, whereas others are open source.

  9. Compatibility with CDNs (Content Delivery Networks): A CDN is a network of interconnected servers that speeds up content delivery for data-heavy applications; CDN support can significantly improve the delivery speed and reliability of the streamed content.

  10. Content protection: Sensitive content might require protection against piracy; some protocols support Digital Rights Management (DRM) and other mechanisms.

  11. Ease of implementation: Considerations around ease of integration and development; some protocols may have extensive documentation, community support and readily available tools.

  12. Industry standards: It is important to check if the protocol is aligned with a specific standard; standardized protocols are likely to have better support and interoperability.


Given these factors, we can summarize the PROs and CONs of some of the protocols described so far that are commonly used in streaming applications:


HLS (HTTP-based)

PROs:
- Compatibility: iOS, Android, smart TVs, and several web browsers
- Security: supports encryption
- Adaptability
- CDN integration
- Client-side control: the viewer can pause, rewind, and fast-forward the content, enhancing the user experience

CONs:
- Latency: unable to maintain low latency compared to other protocols
- Segmentation of the video into small chunks can cause buffering in low-bandwidth conditions
- Complexity: can be challenging to implement for live streaming

DASH (HTTP-based)

PROs:
- Adaptability
- Compatibility
- Interoperability: compatible with various codecs and container formats, which promotes interoperability among different devices and software
- Content protection: supports DRM

CONs:
- Latency: not optimal for low-latency applications due to its adaptive streaming characteristics
- Resource intensive on both the server and client sides

RTSP (Legacy)

PROs:
- Low latency: specifically designed for real-time streaming applications (live broadcasting, video conferencing, and other time-sensitive multimedia applications)
- Interoperability: as it relies on a standard, it promotes interoperability between different devices and software
- Client-side control: the viewer can control the stream
- Scalability

CONs:
- Prone to connectivity issues when dealing with firewalls and NAT traversal
- Security: no built-in encryption
- Limited error handling

RTMP (Legacy)

PROs:
- Low latency
- Adaptability
- Compatibility
- CDN integration
- Supports bi-directional communication, making it suitable for interactive applications
- Security: can be used with secure variants like RTMPS (RTMP over TLS/SSL)

CONs:
- Historically associated with Adobe Flash (now obsolete), which has decreased its popularity
- Prone to connectivity issues, as it uses non-standard port numbers that can be blocked by firewalls
- No native browser support, since Flash is obsolete in modern web browsers, making it difficult to use in web-based applications without extra plugins
- Not scalable
- Single point of failure (the server)

WebRTC (Modern)

PROs:
- Low latency: implements peer-to-peer communication, which reduces latency and enhances performance
- Cross-browser compatibility: supported by most popular web browsers, including Chrome, Firefox, Safari, and Microsoft Edge
- Security: built-in support for encryption
- Not limited to audio and video; WebRTC also supports data channels (file sharing or collaborative editing)

CONs:
- Prone to connectivity issues when dealing with firewalls and NAT traversal
- Complexity: the API might require a learning curve for new developers
- There might be variations between WebRTC implementations among browsers, leading to a lack of standardization

SRT (Modern)

PROs:
- Low latency
- Implements error recovery, packet loss recovery, and retransmission mechanisms
- Security: built-in support for encryption
- Adaptability
- Compatibility: device and operating system agnostic

CONs:
- Complexity
- Non-standard, which requires additional effort and infrastructure to integrate it into existing networks
- Widely supported for broadcasting applications but might not be readily available in web browsers

Table 1. Streaming protocols comparison 


RTSP vs WebRTC

RidgeRun has established itself as a pioneering company in the embedded systems development market. Consequently, its array of solutions has broadened to encompass cutting-edge media streaming solutions within the Linux ecosystem. With WebRTC and RTSP widely adopted by developers shaping their products and distributing content to end users, RidgeRun has embraced this trend. Let’s now explore some of the details of these two protocols and the off-the-shelf solutions RidgeRun has to offer to address our customers’ requirements:


Network topology
- WebRTC: Network independent. Regardless of the topology, the stream can start after a negotiation between endpoints; both endpoints must have access to the same signaling server to exchange SDPs and negotiate formats and configuration. WebRTC requires external TURN and STUN servers to perform NAT traversal over the network and deliver RTP packets from one endpoint to the other. WebRTC guarantees that, unless blocked, a connection will always be found.
- RTSP: Local network only; a VPN may be used. HTTP tunneling is also available.

Negotiation
- WebRTC: Done via SDP negotiation through a third-party server called the signaler. One of the endpoints provides an offer SDP to the receiver with information on supported formats, RTP, RTCP, general media information, ICE candidates, and DTLS information for encryption. ICE negotiation may use TURN and STUN servers to generate candidates with address and port information so the video can be sent over NATs and complex network topologies. Everything happens transparently to the user.
- RTSP: Format negotiation is performed by exchanging SDP files. Everything happens transparently to the user.

Latency
- WebRTC: Depends on the network topology and the components on the receiver and sender sides. Latency in complex network topologies is reported to be better than in other protocols.
- RTSP: For LANs, latency should be similar to WebRTC. For NAT traversal, WebRTC will likely achieve lower latency because it can still find a peer-to-peer connection.

Congestion control
- WebRTC: Web applications already provide congestion control feedback over RTCP. The receiver side estimates the bandwidth available in the network based on packet loss, latency, etc., and sends this estimation to the sender, which uses it to configure the encoder's bitrate, image size, and/or framerate.
- RTSP: Not available out of the box in the protocol; the receiver and sender applications must implement their own congestion control mechanism.

Security
- WebRTC: The negotiation is peer to peer; video is sent only to the addresses and ports negotiated in the SDP and ICE candidate exchange. WebRTC uses packet encryption through DTLS, so endpoints receiving the video stream can only decode it using the DTLS key negotiated in the SDP. Encryption is mandatory.
- RTSP: Streams may or may not be encrypted using TLS certificates. Provides simpler authentication mechanisms (like user/password).

GStreamer support
- WebRTC: Supported through the GStreamer webrtcbin element or RidgeRun’s offering (GstWebRTCWrapper, GstKinesisWebRTC). Each WebRTC session needs a dedicated pipeline; if the session is stopped, the pipeline needs to be restarted to redo the negotiation with the other endpoint. A third-party server is required to perform the signaling between peers (it may be local), and two extra third-party servers (STUN and TURN) are required if NAT traversal is needed.
- RTSP: Supported as an API through gst-rtsp-server or RidgeRun’s offering (GstRTSPSink).

Client-side implementation
- WebRTC: Supported in multiple browsers such as Google Chrome, Safari, and Firefox. Web developers have to implement a web application with the following considerations: (1) the web application must use the WebRTC API; (2) it must implement the signaler handshaking to retrieve and exchange information with the other peer, since the information negotiated through the signaler (such as the SDP and the ICE candidates) is what feeds the WebRTC API.
- RTSP: No native browser support, but a large number of open implementations are available.

Table 2. WebRTC vs RTSP implementation details
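To ground the GStreamer support entry above, here is a minimal RTSP server sketch using the upstream gst-rtsp-server Python bindings (not RidgeRun's GstRTSPSink); it assumes GStreamer, gst-rtsp-server, and PyGObject are installed, and the encoding pipeline is just an example:

```python
import gi
gi.require_version("Gst", "1.0")
gi.require_version("GstRtspServer", "1.0")
from gi.repository import Gst, GstRtspServer, GLib

Gst.init(None)

server = GstRtspServer.RTSPServer()          # listens on port 8554 by default
factory = GstRtspServer.RTSPMediaFactory()
# Example pipeline: H.264-encode a test pattern and pay it as RTP.
factory.set_launch("( videotestsrc is-live=true ! x264enc tune=zerolatency "
                   "! rtph264pay name=pay0 pt=96 )")
factory.set_shared(True)                     # share one pipeline among clients

server.get_mount_points().add_factory("/test", factory)
server.attach(None)

print("Stream ready at rtsp://127.0.0.1:8554/test")
GLib.MainLoop().run()
```

A client such as VLC, or a GStreamer playbin pipeline pointed at rtsp://127.0.0.1:8554/test, can then consume the stream.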


RidgeRun’s RTSP/WebRTC solutions offering

Finally, take a few minutes to go over the off-the-shelf solutions RidgeRun has to offer for your immediate needs; we keep analyzing common requirements from our customers and focus our efforts on releasing new products that match the latest technologies.





At this point, you might have already guessed that media streaming is a trend that moves along with the evolution of hardware resources, all intended to satisfy an ever more demanding audience. Choose wisely and let the protocol lead the way… happy streaming!


