
Encoding Job Processing

An encoding job is a long-running process (it can take minutes or even hours).

When a job is submitted, the service validates the request body (JSON). It also extracts, decrypts, and validates the supplied Content Keys for encryption, or, in the DRM Managed Mode, acquires the necessary keys from the Key Service. It then returns to the client a response containing the assigned JobID and the ID(s) of the Content Keys which will be used for encryption.
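For illustration only, a submission response might carry roughly the following information. The property names below are placeholders chosen for this sketch, not the exact response schema; consult the Encoding API reference for the real shape.

```json
{
  "JobId": "<assigned-job-id>",
  "ContentKeys": [
    { "KeyId": "<content-key-id-used-for-video>" },
    { "KeyId": "<content-key-id-used-for-audio>" }
  ]
}
```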

Request Processing

Job Processing Phases Overview

The actual job execution happens asynchronously and involves several distinct phases:

Job processing phases

note

The phases shown above are conceptual. In reality, some of the processing steps happen simultaneously, e.g. encryption and packaging.

The table below briefly describes each processing phase, including the configuration sections and events for each phase.

| Phase | Description | Configuration section(s) | Events |
| --- | --- | --- | --- |
| Pre-Validation | Validating the submitted job description and the credentials supplied for the external storage locations. | | JobCreated |
| Acquisition | Downloading the content from the input storage to a temporary storage used by the subsequent operations. | ContentAcquisition | AcquistionProgress, ContentAcquired |
| Media Mapping | The input storage is a folder and can contain multiple files. Based on the supplied settings, the Encoding Service decides which file contains the video stream and which files contain the audio and subtitle streams in which language. | MediaMappings | ContentMapped, ContentPreProcessed |
| Encoding | Encoding the video and audio according to the supplied settings. | ContentProcessing | VideoEncodingStarted, EncodingProgress, EncodingFinished |
| DRM Protection | If required, encrypting the content according to the supplied settings. | ContentProcessing | |
| Packaging | Packaging the encoded video, audio, and subtitles as DASH, HLS, or CMAF, depending on the settings and, optionally, archiving the output into one or more TAR files. | ContentProcessing | |
| Image Extraction | Extracting frames from the video stream at specified time indexes as JPEG images and/or generating thumbnails at regular intervals for inclusion in the DASH representation. | ImageExtraction, Thumbnails | ImagesExtracted |
| Publishing | Publishing the produced output to the specified output storage. | ContentPublishing | ContentPublished, JobSuccess, FinalError |

Once the Publishing phase finishes, the Encoding Service doesn’t keep any data related to the job (except the log files).
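Putting the configuration sections from the table together, a job description is a single JSON document. The skeleton below only names the sections listed above, with their contents elided; it is a structural sketch, not a complete or valid job description.

```json
{
  "ContentAcquisition": {},
  "MediaMappings": {},
  "ContentProcessing": {},
  "ImageExtraction": {},
  "Thumbnails": {},
  "ContentPublishing": {}
}
```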

Encoding Job Phases

Pre-Validation

To enable faster feedback in case wrong credentials are supplied ("fail fast"), the Encoding Service tries to connect to the specified input storage and output storage before it does any further processing. For the output storage, it also uploads a small dummy file to ensure it has write privileges. The job fails immediately if access doesn’t work.

note

The location specified for Image Extraction is not validated here. The job will also not fail if Image Extraction is configured but the specified location is not accessible; the extracted images will simply not be uploaded.

Acquisition

In the Acquisition phase, the content is downloaded from the input storage specified by the Storage Provider in the job description’s ContentAcquisition section to a temporary storage. The specified credentials must allow reading the content. If the option "DeleteFilesFromSourceWhenDone" is used, write/delete privileges are also required.

Interactions between the Encoding Service and other systems during the Acquisition Phase

See also: ContentAcquisition section, AcquistionProgress event, ContentAcquired event.
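As a rough illustration of the input storage definition (the field names below are placeholders, not the actual ContentAcquisition schema; see the linked section reference for the real fields), with the optional delete-after-download behavior enabled:

```json
{
  "ContentAcquisition": {
    "Provider": "<storage-provider-type>",
    "UriPath": "<uri-or-path-of-the-input-folder>",
    "CredentialsName": "<credentials-with-read-access>",
    "CredentialsSecret": "<secret>",
    "DeleteFilesFromSourceWhenDone": true
  }
}
```

With DeleteFilesFromSourceWhenDone set to true, the supplied credentials also need write/delete privileges.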

Media Mapping

To decide which input files represent which streams, the Encoding Service uses the settings in the MediaMappings section. There are three layers of settings which enable increasingly focused filtering of the input files (a sketch follows the list):

  • General regular expressions to match the files containing video, audio, subtitles, and captions
  • Exact mapping of the specific files to their content type and language
  • Filter for the list of accepted languages.
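As a purely illustrative sketch of these three layers (the field names are placeholders, not the actual MediaMappings schema):

```json
{
  "MediaMappings": {
    "VideoFileExpression": "<regex matching the video file>",
    "AudioFileExpressions": ["<regex matching audio files and their language>"],
    "SubtitleFileExpressions": ["<regex matching subtitle and caption files>"],
    "ExactMappings": [
      { "FileName": "<input file>", "Type": "<video|audio|subtitle>", "Language": "<language tag>" }
    ],
    "AcceptedLanguages": ["en", "de"]
  }
}
```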

Read more: Media Mapping

See also: MediaMappings section, ContentMapped event, ContentPreProcessed event

Encoding

In the Encoding phase, the video and audio are encoded using a specific codec. Audio can be encoded using AAC or AC3. For video encoding, H.264/AVC or H.265/HEVC can be selected along with their optimization settings. It is also possible to skip encoding using a packaging-only mode if the video is already provided in the desired format.

Read more: Encoding

See also: ContentProcessing section, VideoEncodingStarted event, EncodingProgress event, EncodingFinished event
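A hypothetical ContentProcessing fragment selecting the codecs mentioned above might look roughly like this; the property names are placeholders, not the real schema:

```json
{
  "ContentProcessing": {
    "VideoFormat": "H264",
    "AudioFormat": "AAC",
    "OptimizationPreset": "<speed|balanced|quality>"
  }
}
```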

DRM Protection

Axinom Encoding protects videos for use with the major DRM technologies, such as Widevine, PlayReady, and FairPlay. All DRM technologies use AES encryption with a 128-bit Content Key. The Content Key can be handled either in the Direct mode (the content key is supplied as part of the job) or in the Managed mode (credentials for the Key Service are supplied, and the Encoding Service acquires the necessary keys on its own). Moreover, the Encoding Service supports multiple keys, which means that you can encrypt different streams with different Content Keys.

Read more: DRM Protection

See also: ContentProcessing section
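Conceptually, the two modes differ only in what the job supplies. The sketch below illustrates the Direct mode with placeholder field names (not the actual schema); in the Managed mode, Key Service credentials would be supplied instead of the key material.

```json
{
  "ContentProcessing": {
    "Drm": {
      "Mode": "Direct",
      "ContentKeys": [
        { "KeyId": "<key-id-for-video>", "EncryptedKey": "<encrypted-128-bit-key>" },
        { "KeyId": "<key-id-for-audio>", "EncryptedKey": "<encrypted-128-bit-key>" }
      ]
    }
  }
}
```

Using two entries illustrates the multi-key case, where the video and audio streams are encrypted with different Content Keys.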

Packaging

The encoded video, audio, and subtitles are further packaged as MPEG-DASH, HLS, or CMAF, depending on the settings in the job description’s ContentProcessing section. For DASH, both the Live and OnDemand profiles are supported. DASH and HLS can be produced simultaneously for better compatibility with end-user devices. Alternatively, CMAF can be used to supply the video content only once while still supporting both DASH- and HLS-compatible players.

While the Encoding Service chooses the bitrates and resolutions to generate for an optimal experience, it is also possible to override the defaults and request an exact set of bitrates.

Read more: Packaging

See also: ContentProcessing section
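As a rough illustration only (placeholder field names, not the actual ContentProcessing schema), a job requesting DASH and HLS simultaneously with an explicit set of video bitrates might be sketched like this:

```json
{
  "ContentProcessing": {
    "OutputFormats": ["DASH", "HLS"],
    "DashProfile": "OnDemand",
    "VideoRepresentations": [
      { "BitrateInKbps": 400, "Height": 288 },
      { "BitrateInKbps": 1500, "Height": 576 },
      { "BitrateInKbps": 3500, "Height": 1080 }
    ],
    "ArchiveToTar": false
  }
}
```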

Image Extraction

The Encoding Service can extract frames from the video stream at specified time indexes and store them as JPEG images in a specified location. External systems (like Content Management) can use the generated images.

The Encoding Service can also generate thumbnails (used as preview images) at regular intervals and include them as a part of the DASH representation in accordance with the DASH IF IOP 4.3, section 6.2.6 "Tiles of thumbnail images". Any player supporting this DASH standard can display the thumbnails for any time index.

Read more: Image Extraction

See also: ImageExtraction section, Thumbnails section, ImagesExtracted event
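The sketch below illustrates how the two features relate, using placeholder field names rather than the actual ImageExtraction and Thumbnails schemas:

```json
{
  "ImageExtraction": {
    "TimeIndexes": ["00:00:10", "00:05:00", "00:20:00"],
    "Location": "<writable storage location for the JPEG images>"
  },
  "Thumbnails": {
    "IntervalInSeconds": 10
  }
}
```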

Publishing

In the Publishing phase, the encoded and protected content is published to the output storage specified by the Storage Provider in the ContentPublishing section of the job description. The specified credentials must allow writing the content.

Interactions between the Encoding Service and other systems in the Publishing Phase

See also: ContentPublishing section, ContentPublished event, JobSuccess event, FinalError event
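As with acquisition, the output storage is described by a Storage Provider; the sketch below uses placeholder field names only, and the referenced credentials must have write privileges (checked during Pre-Validation):

```json
{
  "ContentPublishing": {
    "Provider": "<storage-provider-type>",
    "UriPath": "<uri-or-path-of-the-output-folder>",
    "CredentialsName": "<credentials-with-write-access>",
    "CredentialsSecret": "<secret>"
  }
}
```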

Job Status and Reporting

The Encoding Service informs the client about the job progress every time it reaches a certain phase, so you can easily keep track of it. You are notified of the events via the message publishers defined in the job description.

You can also acquire the job status using the Encoding API endpoint GET /reporting/<job_id>.

It is also possible to acquire a list of all jobs in a month in the same format using the Encoding API endpoint GET /reporting/<year>/<month>.
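For example, GET /reporting/<job_id> returns the current state of a single job. The response might carry roughly the following kind of information; the fields shown here are placeholders, with only the phase and event names taken from this page, so refer to the Reporting documentation for the actual schema.

```json
{
  "JobId": "<job-id>",
  "Phase": "Encoding",
  "ProgressPercent": 42,
  "Events": ["JobCreated", "ContentAcquired", "ContentMapped", "VideoEncodingStarted"]
}
```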

Read more about progress tracking and reporting on the Reporting page.