
Netflix Tech blogs



Implementing Match Cutting

Link to article

The term match cutting is explained in the terminology section.

Items in the Netflix catalogue (series/movies/shows) contain millions of frames, and to create match cuts manually, one has to label cuts and match them from memory. This method misses out on a lot of possible combinations and is very time-consuming.

To automate selecting similar shots for transitions, we make use of neural networks.

A frame can be understood as a single snapshot, and a shot is a contiguous sequence of frames.

Approaches

Simplifying the problem

  1. Frames within a single shot are visually similar, so only the middle frame of each shot was considered.
  2. Similar frames can appear in different shots, so image deduplication was performed to remove redundancies.
  3. Keep frames containing people and discard the rest, to keep things simple.

Removing redundancies

  1. Shot deduplication
    Early attempts surfaced many near-duplicate shots.
    Imagine two people having a conversation in a scene. It’s common to cut back and forth as each character delivers a line.
    Near-duplicate shots are not very interesting for match cutting. Given a sequence of shots, we identified groups of near-duplicate shots and only retained the earliest shot from each group.

  2. Identifying near-duplicate shots
    Shots are passed through an encoder model, which computes a vector representation of each shot, and similarity is calculated using cosine similarity.
    Shots with very similar vector representations are removed (a small sketch follows this list).

  3. Avoiding very small shots
    These can arise in conversation scenes where the camera cuts back and forth very frequently, falsely creating many short, redundant clips.
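
A minimal sketch of this deduplication step, assuming shot embeddings from some encoder are already available; the embedding dimension and the 0.9 similarity threshold are placeholders, not values from the article:

```python
import numpy as np

def deduplicate_shots(embeddings: np.ndarray, threshold: float = 0.9) -> list[int]:
    """Return indices of shots to keep, dropping later near-duplicates.

    embeddings: (num_shots, dim) array of shot vectors from some encoder.
    """
    # L2-normalize so that a dot product equals cosine similarity.
    normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    kept: list[int] = []
    for i, vec in enumerate(normed):
        # Compare against shots kept so far; keep only the earliest of a near-duplicate group.
        if all(float(vec @ normed[j]) < threshold for j in kept):
            kept.append(i)
    return kept

# Example: 5 random "shot embeddings", two of which are near-identical.
rng = np.random.default_rng(0)
emb = rng.normal(size=(5, 512))
emb[3] = emb[1] + 0.01 * rng.normal(size=512)  # near-duplicate of shot 1
print(deduplicate_shots(emb))  # shot 3 is dropped
```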


Rough implementation

(Figure from the original article showing the rough implementation; not reproduced here.)


Match silhouettes of people using Instance segmentation


The output of an instance segmentation model is a pixel mask indicating which pixels belong to which object.
Essentially, the similarity between character outlines is calculated:
compute the IoU of the masks from two different frames, and select pairs with high IoU as candidates.
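
A minimal sketch of computing mask IoU between two binary silhouette masks; the toy masks and the 0.5 candidate threshold mentioned in the comment are illustrative assumptions:

```python
import numpy as np

def mask_iou(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """IoU (Jaccard index) of two boolean masks of the same shape."""
    intersection = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return float(intersection) / float(union) if union > 0 else 0.0

# Toy 4x4 silhouettes: two overlapping squares.
a = np.zeros((4, 4), dtype=bool); a[0:3, 0:3] = True
b = np.zeros((4, 4), dtype=bool); b[1:4, 1:4] = True
print(mask_iou(a, b))  # 4 / 14 ≈ 0.29 -> below an example 0.5 candidate threshold
```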

Action Matching using Optical Flow

A match cut involving the continuation of motion of an object or person across the cut.

Optical flow is computed per shot; in the flow visualization, the intensity of the color represents the magnitude of the motion. Cosine similarity between flow representations is once again used here.
This brought out pairs of shots with similar camera movement.
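
A rough sketch of comparing two shots by dense optical flow using OpenCV's Farnebäck method; averaging the flow over a shot and scoring with cosine similarity are simplifying assumptions, not necessarily the exact representation used in the article:

```python
import cv2
import numpy as np

def shot_flow_descriptor(frames: list[np.ndarray]) -> np.ndarray:
    """Average dense optical flow over consecutive grayscale frames of a shot."""
    flows = []
    for prev, nxt in zip(frames[:-1], frames[1:]):
        flow = cv2.calcOpticalFlowFarneback(
            prev, nxt, None, 0.5, 3, 15, 3, 5, 1.2, 0
        )
        flows.append(flow)
    return np.mean(flows, axis=0).ravel()  # flatten (H, W, 2) into a vector

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Usage: given two shots (lists of same-sized grayscale frames), a high
# cosine similarity between their flow descriptors suggests similar motion
# and therefore a match-cut candidate:
# score = cosine_similarity(shot_flow_descriptor(shot_a), shot_flow_descriptor(shot_b))
```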


End of Article 1


Improving Video Quality with Neural Nets

Link to article

Not much detail is given in the article, so skip this section if you want.

Why ?

Since Netflix is accessed on devices with different screen resolutions and over varying network qualities, video downscaling is necessary.
A 4K source video will be downscaled to 1080p, 720p, 540p, and so on, for different users.

Approach

  1. Preprocessing block
    Prefilters the video signal prior to the subsequent resizing operation.
  2. Resizing block
    Yields a lower-resolution video signal that serves as input to an encoder (a conceptual sketch follows this list).
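
A conceptual PyTorch sketch of this two-block structure, assuming the preprocessing block is a small learned convolutional prefilter and the resizing block is a plain bilinear downscale; the article does not describe the actual architecture or filter design:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NeuralDownscaler(nn.Module):
    """Preprocessing (learned prefilter) followed by a resizing block."""

    def __init__(self, scale: float = 0.5):
        super().__init__()
        self.scale = scale
        # Placeholder prefilter: a few convolutions operating on RGB frames.
        self.prefilter = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(16, 3, kernel_size=3, padding=1),
        )

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (batch, 3, H, W) in [0, 1]
        filtered = self.prefilter(frames)      # preprocessing block
        return F.interpolate(                  # resizing block
            filtered, scale_factor=self.scale,
            mode="bilinear", align_corners=False,
        )

# Example: downscale a batch of frames to half resolution.
model = NeuralDownscaler(scale=0.5)
low_res = model(torch.rand(1, 3, 540, 960))
print(low_res.shape)  # torch.Size([1, 3, 270, 480])
```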



End of Article 2


Scaling Machine Learning

Link to article

  1. Challenges of applying machine learning to media assets
  2. Infrastructure components built to address them
  3. Case study : To optimize, scale, and solidify an existing pipeline

Infrastructure Components


Jasper for Media Access

To streamline and standardize media assets

Amber Feature Store for Media Storage

Memoizes features/embeddings tied to media entities.
This prevents recomputation of identical features for the same asset and enables different pipelines to access these features.
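
A toy sketch of the memoization idea behind such a feature store; the class name, key structure, and API are illustrative, not Amber's actual interface:

```python
from typing import Any, Callable

class ToyFeatureStore:
    """Caches computed features keyed by (asset_id, feature_name, version)."""

    def __init__(self):
        self._cache: dict[tuple[str, str, int], Any] = {}

    def get_or_compute(self, asset_id: str, feature_name: str, version: int,
                       compute_fn: Callable[[], Any]) -> Any:
        key = (asset_id, feature_name, version)
        if key not in self._cache:
            # Only computed once per asset; later pipelines reuse the value.
            self._cache[key] = compute_fn()
        return self._cache[key]

store = ToyFeatureStore()
emb = store.get_or_compute("title_123/shot_7", "shot_embedding", 1,
                           lambda: [0.1, 0.2, 0.3])  # expensive model call in reality
emb_again = store.get_or_compute("title_123/shot_7", "shot_embedding", 1,
                                 lambda: [9.9])      # cache hit, model not re-run
print(emb is emb_again)  # True
```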

Amber Compute for handling data streams

  • Models run over newly arriving assets; to handle the incoming data, various trigger mechanisms and orchestration components were developed for each pipeline.
  • Over time this became difficult to manage, so Amber Compute was developed to handle it.
  • It is a suite of infrastructure components that offers triggering capabilities to initiate the computation of algorithms, with recursive dependency resolution (a small sketch of the idea follows this list).
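
A toy sketch of recursive dependency resolution: before computing an algorithm's output, its declared dependencies are resolved (and memoized) first. The registry layout and feature names are illustrative, not Amber's API:

```python
from typing import Callable

# Each feature declares its dependencies and a compute function that
# receives the already-resolved dependency values.
REGISTRY: dict[str, tuple[list[str], Callable[..., object]]] = {
    "shot_boundaries": ([], lambda: [(0, 120), (120, 300)]),
    "shot_embeddings": (["shot_boundaries"], lambda shots: [[0.1] * 4 for _ in shots]),
    "match_cut_scores": (["shot_embeddings"], lambda embs: [0.87]),
}

def resolve(feature: str, cache: dict[str, object]) -> object:
    """Recursively compute a feature, memoizing every intermediate result."""
    if feature in cache:
        return cache[feature]
    deps, compute = REGISTRY[feature]
    dep_values = [resolve(d, cache) for d in deps]
    cache[feature] = compute(*dep_values)
    return cache[feature]

cache: dict[str, object] = {}
print(resolve("match_cut_scores", cache))  # triggers the whole chain once
print(sorted(cache))                       # intermediate features are now memoized
```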

To lower computational load

  1. Multi-GPU / multi-node distributed training. => Notes for the same
  2. Pre-compute the dataset
  3. Offload pre-processing to CPU instances (a small sketch follows this list)
  4. Optimize model operators within the framework
  5. Use the file system to resolve the data-loading bottleneck
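
A minimal PyTorch sketch of item 3, moving decoding/preprocessing off the training loop into CPU worker processes via DataLoader; the dataset and transform are placeholders:

```python
import torch
from torch.utils.data import Dataset, DataLoader

class FrameDataset(Dataset):
    """Placeholder dataset: pretends to decode and preprocess frames on the CPU."""

    def __len__(self) -> int:
        return 1024

    def __getitem__(self, idx: int) -> torch.Tensor:
        frame = torch.rand(3, 224, 224)   # stand-in for decoding a video frame
        return (frame - 0.5) / 0.5        # stand-in for CPU-side preprocessing

if __name__ == "__main__":
    loader = DataLoader(
        FrameDataset(),
        batch_size=32,
        num_workers=4,     # preprocessing runs in 4 CPU worker processes
        pin_memory=True,   # speeds up host-to-GPU transfer
    )
    for batch in loader:
        pass               # the GPU only has to run the model on ready-made batches
```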

Scaling match cutting

There is a need to control the amount of resources used per step.

Initial Approach

Step 1 : Define shots

  • Download a video file and produce shot boundary metadata.
    That is, divide the video into its shots.
  • Materialize each shot into an individual clip file (a naive shot-boundary sketch follows this list).
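
A naive sketch of shot-boundary detection via frame-to-frame histogram differences; the real pipeline uses a proper shot segmentation algorithm, and the Bhattacharyya threshold here is an arbitrary placeholder:

```python
import cv2

def detect_shot_boundaries(video_path: str, threshold: float = 0.5) -> list[int]:
    """Return frame indices where a new shot (probably) starts."""
    cap = cv2.VideoCapture(video_path)
    boundaries, prev_hist, idx = [0], None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hist = cv2.calcHist([frame], [0, 1, 2], None, [8, 8, 8],
                            [0, 256, 0, 256, 0, 256])
        hist = cv2.normalize(hist, hist).flatten()
        if prev_hist is not None:
            # Large histogram distance between consecutive frames -> likely a cut.
            if cv2.compareHist(prev_hist, hist, cv2.HISTCMP_BHATTACHARYYA) > threshold:
                boundaries.append(idx)
        prev_hist, idx = hist, idx + 1
    cap.release()
    return boundaries

# boundaries = detect_shot_boundaries("episode.mp4")
# Shots are then the frame ranges between consecutive boundaries.
```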

Step 2 : Deduplication

  1. Extract a representation/embedding of each clip using an encoder model.
    Use the embedding vectors to identify and remove duplicate shots (deduplication), as sketched earlier in this note.
  2. Surviving clips are passed on to Step 3.

Step 3 (vaguely mentioned in article)

Compute another representation per shot, depending on the flavor of match cutting (e.g., segmentation masks or optical flow).

Step 4 : Score the pairs

Enumerate all pairs and compute a score for each pair of representations. Scores are stored along with the shot metadata

Step 5 : Sort the pairs

Sort the pairs based on similarity score and use only the top k pairs, where k is the number of match cuts required by the design team.
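
A minimal sketch of Steps 4 and 5, assuming each surviving shot already has a vector representation and using cosine similarity as the pair score (the actual score depends on the match-cutting flavor):

```python
import itertools
import numpy as np

def top_k_pairs(representations: dict[str, np.ndarray], k: int) -> list[tuple[str, str, float]]:
    """Score every shot pair and return the k highest-scoring pairs."""
    scored = []
    for (id_a, vec_a), (id_b, vec_b) in itertools.combinations(representations.items(), 2):
        score = float(vec_a @ vec_b / (np.linalg.norm(vec_a) * np.linalg.norm(vec_b)))
        scored.append((id_a, id_b, score))
    # Step 5: sort by score and keep the top k for the design team.
    scored.sort(key=lambda item: item[2], reverse=True)
    return scored[:k]

rng = np.random.default_rng(1)
reps = {f"shot_{i}": rng.normal(size=64) for i in range(6)}
for pair in top_k_pairs(reps, k=3):
    print(pair)
```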

Problems with the initial approach

  1. Lack of Standardization

    The representations we extract in Steps 2 and 3 are sensitive to the characteristics of the input video files.

    In some cases, such as instance segmentation, the output representation in Step 3 is a function of the dimensions of the input file.

    Not having a standardized input file format created quality issues when representations across titles with different input files needed to be processed together (e.g. multi-title match cutting).

  2. Wasteful repeated computations

    Segmentation at the shot level is a common task used across many media ML pipelines.
    Also, deduplicating similar shots is a common step that a subset of those pipelines share.

    Memoizing these computations not only reduces waste but also allows for congruence between pipelines that share the same preprocessing step.

  3. Pipeline triggering
    Triggering logic : whenever new files land, trigger computation

    • Lack of standardization meant that the computation was sometimes re-triggered for the same video file due to changes in metadata, without any content change.
    • Many pipelines independently developed similar bespoke components for triggering computation, which created inconsistencies.

Final solution for scaling match cutting

Standardized video encoder

The entire Netflix catalog is pre-processed and stored for reuse. Match cutting benefits from this standardization, as it relies on homogeneity across videos for proper matching.

Shot segmentation and deduplication reuse

Videos are matched at the shot level.
Breaking videos into shots is a common task, so the infrastructure team provides it as a canonical feature that other algorithms can use as a dependency.
Using this feature, the values were memoized, saving compute costs and guaranteeing coherence of shot segments across algorithms.

(Figure: Match cutting pipeline. Interactions are expressed as a feature mesh.)


End of Article 3


Terminologies

Match Cutting

In filmmaking, a match cut is a transition between two shots that uses similar visual framing, composition, or action to fluidly bring the viewer from one scene to the next.

IoU

Also referred to as the Jaccard index.
Intersection over Union has a theoretical maximum value of 1, reached when both sets are equal.

$$J(A, B) = \frac{|A \cap B|}{|A \cup B|}$$

Cosine Similarity

To visualize, consider two vectors in 2D space. Their cosine similarity is simply the cosine of the angle between them.
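
In formula form (the standard definition):

$$\cos\theta = \frac{A \cdot B}{\|A\|\,\|B\|}$$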

Video explaining cosine similarity

Multi node distributed training

Notes
Using many worker nodes in parallel to speed up computation.
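
A minimal sketch of multi-node data-parallel training with PyTorch DistributedDataParallel, assuming it is launched with torchrun (which sets the RANK/LOCAL_RANK/WORLD_SIZE environment variables); the model and data are placeholders:

```python
# Launch with e.g.: torchrun --nnodes=2 --nproc_per_node=4 train.py (hypothetical values)
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    dist.init_process_group(backend="nccl")   # reads env vars set by torchrun
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    model = DDP(torch.nn.Linear(128, 1).cuda(local_rank), device_ids=[local_rank])
    optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)

    # Placeholder data; in practice each worker loads its own shard.
    x = torch.randn(32, 128).cuda(local_rank)
    y = torch.randn(32, 1).cuda(local_rank)

    for _ in range(10):
        optimizer.zero_grad()
        loss = torch.nn.functional.mse_loss(model(x), y)
        loss.backward()                        # gradients are all-reduced across workers
        optimizer.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```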

Memoization

Caching the result of an expensive computation so that repeated calls with the same input return the stored result instead of recomputing it.
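
A minimal Python example using the standard library's functools.lru_cache:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def expensive_feature(asset_id: str) -> list[float]:
    print(f"computing features for {asset_id}")  # runs only once per asset
    return [0.1, 0.2, 0.3]                       # stand-in for heavy model inference

expensive_feature("title_123")  # prints, computes
expensive_feature("title_123")  # cache hit, nothing printed
```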

Orchestration

Orchestration coordinates multiple microservices to achieve a common goal using a central platform like Kubernetes

Congruency

In this context, congruence means agreement or consistency between pipelines: those that share the same memoized preprocessing step operate on identical intermediate results.

Coherence

In this context, coherence means that shot segments are consistent across algorithms, since they all consume the same memoized shot-segmentation feature.