February 22, 2024

FOSDEM ’24: Q&A With Sergio Ammirata, Ph.D.

As part of his recent presentation at FOSDEM ’24, Sergio Ammirata, Ph.D., took part in a Q&A session in which he highlighted RIST’s features, including its ability to achieve zero packet loss, the ease of defining binary payloads, and its expanding use cases beyond point-to-point transmission to broader distribution needs.

Read on below for the Q&A, or watch the full presentation, titled ‘Streaming Live Using RIST On Demand to Thousands: How You Can Have Your Cake and Eat It Too’.

Participant: What is an acceptable packet loss when pushing a contribution to a continent with poor internet connectivity?

Sergio Ammirata: To me, zero is the acceptable packet loss, and the protocol is capable of achieving zero if you give it enough buffer. If you give it a one-second buffer and the round trip is 200 milliseconds, you will get zero packet loss. We’ve done tests and transmissions between Australia and Spain. Two weeks ago, we did a demo of a transmission from Sydney to Madrid. Sixteen cameras at 10 megabits per second each were transmitted in real time using RIST, and they were used in Madrid for a production of the event. The transmission didn’t lose a single packet, and it was all done across the open internet. We used one-second buffers there because the connections were relatively good. But if you know your connection is bad, just increase your buffer, and therefore the latency, and the protocol will recover.
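As a rough illustration of the buffer sizing described above, here is a minimal receiver sketch using librist’s public C API. The calls (rist_receiver_create, rist_parse_address2, rist_peer_create, rist_receiver_data_read2) are from librist’s header, but the exact signatures and the buffer=1000 URL parameter (buffer length in milliseconds) should be verified against your installed version; the address and error handling are illustrative assumptions, not from the talk.

```c
#include <librist/librist.h>
#include <stdio.h>

int main(void) {
    struct rist_ctx *ctx = NULL;

    /* Main profile receiver; NULL logging settings keep the sketch short. */
    if (rist_receiver_create(&ctx, RIST_PROFILE_MAIN, NULL) != 0) {
        fprintf(stderr, "could not create receiver\n");
        return 1;
    }

    /* buffer=1000 requests a one-second recovery buffer: per the talk,
     * enough for zero loss on a link with a ~200 ms round trip. */
    struct rist_peer_config *peer_config = NULL;
    if (rist_parse_address2("rist://@0.0.0.0:8193?buffer=1000", &peer_config) != 0) {
        fprintf(stderr, "could not parse URL\n");
        return 1;
    }

    struct rist_peer *peer = NULL;
    if (rist_peer_create(ctx, &peer, peer_config) != 0) {
        fprintf(stderr, "could not create peer\n");
        return 1;
    }

    rist_start(ctx);

    /* Pull recovered packets; a real application would feed a decoder. */
    struct rist_data_block *block = NULL;
    while (rist_receiver_data_read2(ctx, &block, 1000) >= 0) {
        if (block) {
            /* block->payload / block->payload_len hold the UDP payload. */
            rist_receiver_data_block_free2(&block);
        }
    }

    rist_destroy(ctx);
    return 0;
}
```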

Participant: Does the protocol support simultaneous bit rates?

Sergio Ammirata: Yes, we support multiplexing. In my example, I used just one UDP input; however, you can configure the library and the command-line tools to ingest multiple UDP inputs, give each one a different ID, and then demultiplex them on the other side.
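To make the multiplexing concrete, here is a hedged sender-side sketch in C. In librist, the virt_dst_port field of rist_data_block is the per-stream ID used to distinguish multiplexed payloads in one tunnel; the specific port values (1968, 1970) and the helper function are illustrative assumptions, not from the talk.

```c
#include <librist/librist.h>
#include <stdint.h>
#include <string.h>

/* Send two independent payloads through one RIST tunnel, tagged with
 * different virtual destination ports so the receiver can demultiplex. */
static void send_two_streams(struct rist_ctx *sender_ctx,
                             const uint8_t *video, size_t video_len,
                             const uint8_t *subs, size_t subs_len) {
    struct rist_data_block block;

    memset(&block, 0, sizeof(block));
    block.payload = video;
    block.payload_len = video_len;
    block.virt_dst_port = 1968;   /* stream ID for the video payload */
    rist_sender_data_write(sender_ctx, &block);

    memset(&block, 0, sizeof(block));
    block.payload = subs;
    block.payload_len = subs_len;
    block.virt_dst_port = 1970;   /* stream ID for the second payload */
    rist_sender_data_write(sender_ctx, &block);
}
```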

Participant: Is there a platform in place for defining binary payloads in the Advanced Profile?

Sergio Ammirata: For Advanced Profile, there’s a GitHub repository that already has mappings. We have a dozen or two dozen of them. I’m one of the administrators of the repository. All you need to do is go in and submit an MR (merge request) for whatever binary payload you want to define.

Participant: Is it possible to multiplex subtitles?

Sergio Ammirata: Yes, the protocol itself doesn’t care what you put in. We consider each input a binary payload of some sort, and you’re the one that determines the format of that payload. You put in multiple UDP streams: one of them is going to be your video payload, another your closed captions, or whatever you want to put in, in whatever format you want. We don’t define or control the format of what you put in. It’s up to you to decide and multiplex.
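Continuing the sender sketch above, the receiver side could dispatch each recovered block by the same virtual destination port. The handler functions and port numbers here are illustrative assumptions; only the rist_data_block fields and read/free calls come from librist’s API, and they should be checked against your version.

```c
#include <librist/librist.h>
#include <stddef.h>

/* Hypothetical handlers; replace with your decoder / subtitle renderer. */
void handle_video(const void *buf, size_t len);
void handle_subtitles(const void *buf, size_t len);

/* Read recovered blocks and demultiplex by virtual destination port,
 * matching the IDs the sender sketch assigned (1968 video, 1970 subs). */
static void demux_loop(struct rist_ctx *receiver_ctx) {
    struct rist_data_block *block = NULL;
    while (rist_receiver_data_read2(receiver_ctx, &block, 1000) >= 0) {
        if (!block)
            continue;             /* timeout, nothing queued */
        switch (block->virt_dst_port) {
        case 1968:
            handle_video(block->payload, block->payload_len);
            break;
        case 1970:
            handle_subtitles(block->payload, block->payload_len);
            break;
        default:
            break;                /* unknown stream ID, ignore */
        }
        rist_receiver_data_block_free2(&block);
    }
}
```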

Participant: How do you handle timing when multiplexing different streams? And how do you ensure synchronization when multiplexing multiple streams within the same tunnel?

Sergio Ammirata: Because we are taking care of the multiplexing when we ingest all the different UDP streams, the timing is guaranteed. The minute we receive that UDP stream, we grab the timestamp at the network card. We know that a certain stream came in at an exact time, and then we reproduce that exact timing on the other end. We reproduce the pacing and the latency.

We make the latency fixed so that it’s not variable. That means that when you multiplex many things in the same tunnel, you’re guaranteed they arrive in sync on the other side, or at least as in sync as they were when they came in.
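One way to picture the fixed-latency replay described above: each data block carries an NTP-format arrival timestamp (ts_ntp in librist’s rist_data_block), and the receiver can release every block at that timestamp plus a constant offset, preserving the original pacing across all multiplexed streams. This is a simplified sketch of the idea, not librist’s internal implementation; the one-second offset and the clock/scheduler helpers are assumptions for illustration.

```c
#include <librist/librist.h>
#include <stdint.h>

/* 1 second in NTP 64-bit (32.32 fixed-point) format. */
#define FIXED_LATENCY_NTP ((uint64_t)1 << 32)

uint64_t now_ntp(void);              /* hypothetical clock source */
void sleep_until_ntp(uint64_t t);    /* hypothetical scheduler */
void output_block(const struct rist_data_block *b);

/* Release each block at (arrival timestamp + fixed offset) so relative
 * pacing is preserved for every stream in the tunnel. Illustrative only:
 * librist handles this internally. */
static void paced_release(struct rist_ctx *ctx) {
    struct rist_data_block *block = NULL;
    while (rist_receiver_data_read2(ctx, &block, 1000) >= 0) {
        if (!block)
            continue;
        uint64_t release_at = block->ts_ntp + FIXED_LATENCY_NTP;
        if (release_at > now_ntp())
            sleep_until_ntp(release_at);
        output_block(block);
        rist_receiver_data_block_free2(&block);
    }
}
```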

Participant: What are the current use cases of the protocol?

Sergio Ammirata: The original idea was just to do point-to-point transmissions. That was the original scope when we created the first version of the spec. That has changed: we achieved that goal, and now we have gone beyond it. We want to tackle distribution, the one-to-many case, the media servers. We have a project going with MistServer to add a lot of this functionality and scalability to the project itself, so that at least one media server already supports this in a very scalable way. It then becomes very simple for an application like VLC or GStreamer to connect to that media server and start playback immediately using this protocol.