Description
Describe the problem to be solved
For individuals who want to use a CDN/S3 bucket to serve their data, S3 egress costs are usually very high regardless of provider, so aggressive CDN caching is required. The issue with PeerTube is that, for permanent lives, HLS segments cannot be cached for long because the segment paths stay the same across restarts; only the playlist file is updated. This means that if a stream is cut off or stopped and then restarted while the .ts segments are still cached, viewers will be served stale data.
Example:
- Administrator sets HLS segment cache to 24 hours
- Streamer A experiences a network connectivity issue 5 minutes into the stream
- Stream "ends" at HLS segment 0-000099.ts
- Stream automatically restarts at 0-000001.ts
- Viewers are served the old stream's cached segments until the new stream reaches 0-000100.ts
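To make the failure mode concrete, here is a minimal sketch of a path-keyed cache, which is roughly how a CDN keys requests for HLS segments (segmentCache, fetchSegment and origin are illustrative names, not PeerTube code):

const segmentCache = new Map<string, Buffer>()

function fetchSegment (path: string, origin: (p: string) => Buffer): Buffer {
  const cached = segmentCache.get(path)
  // A cache hit returns the stored bytes, even if the origin has since
  // restarted the stream and re-created a different segment at this same path
  if (cached) return cached

  const fresh = origin(path)
  segmentCache.set(path, fresh) // kept for the configured TTL, e.g. 24 hours
  return fresh
}

Because the restarted stream re-uses paths such as 0-000001.ts, fetchSegment keeps returning the first session's bytes until each cached entry expires.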
Describe the solution you would like
I believe the best solution would be to add a unique session key to the HLS segment names, so that .ts files can be treated as immutable and are never served as stale data for permanent lives, restarted livestreams, et cetera. In my opinion, this fix is simple.
/packages/ffmpeg/src/ffmpeg-live.ts
- command.outputOption(`-hls_segment_filename ${join(outPath, '%v-%06d.ts')}`)
+ const sessionToken = Date.now() // create unique session "token"
+ command.outputOption(`-hls_segment_filename ${join(outPath, `%v-${sessionToken}-%06d.ts`)}`) // apply token to HLS segment name

The HLS segments would then be named something like *-1707500000-000001.ts. When a permanent live is restarted, a new sessionToken is generated, bypassing stale cache data without the user having to create a new Live (generating a new UUID), and enabling administrators to cache HLS segments more aggressively.
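As a rough illustration of the more aggressive caching this would enable (an assumed Express-style static handler, not PeerTube's actual serving code; the paths and durations are placeholders): token-named .ts segments never change, so they can carry long-lived immutable cache headers, while the mutable .m3u8 playlist stays short-lived.

import express from 'express'

const app = express()
const hlsDir = '/var/www/peertube/storage/streaming-playlists/hls' // placeholder path

app.use('/static/streaming-playlists/hls', express.static(hlsDir, {
  setHeaders: (res, filePath) => {
    if (filePath.endsWith('.ts')) {
      // Segment names now include a session token, so their content never changes
      res.setHeader('Cache-Control', 'public, max-age=86400, immutable')
    } else if (filePath.endsWith('.m3u8')) {
      // The playlist is still rewritten as the live progresses
      res.setHeader('Cache-Control', 'public, max-age=2')
    }
  }
}))

app.listen(9000)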