US Pat. No. 9,451,379

SOUND FIELD ANALYSIS SYSTEM

Dolby Laboratories Licens...

1. A sound field mapping method comprising:
generating electrical signals in response to sound in the sound field;
applying the generated electrical signals to a spatial angle and diffusivity information extraction module operable to extract
spatial angle and diffusivity information from the electrical signals; and

applying the extracted spatial angle and diffusivity information to a mapping module;
using the mapping module to map the spatial angle and diffusivity information for representation in the form of a Riemann
sphere, wherein spatial angle varies longitudinally and diffusivity varies latitudinally along the sphere.
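The claimed mapping can be sketched as a coordinate conversion. The conventions below are assumptions (the claim fixes only which quantity varies along which axis, not the scaling): spatial angle maps to longitude, and diffusivity in [0, 1] maps to latitude, with 0 on the equator and 1 at the pole.

```python
import math

def map_to_riemann_sphere(spatial_angle, diffusivity):
    """Map (spatial angle, diffusivity) to a point on the unit sphere.
    Assumed convention: spatial angle -> longitude; diffusivity in
    [0, 1] -> latitude (0 = equator, 1 = pole, fully diffuse)."""
    lon = spatial_angle
    lat = diffusivity * math.pi / 2
    return (math.cos(lat) * math.cos(lon),
            math.cos(lat) * math.sin(lon),
            math.sin(lat))
```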

US Pat. No. 9,460,728

METHOD AND APPARATUS FOR ENCODING MULTI-CHANNEL HOA AUDIO SIGNALS FOR NOISE REDUCTION, AND METHOD AND APPARATUS FOR DECODING MULTI-CHANNEL HOA AUDIO SIGNALS FOR NOISE REDUCTION

Dolby Laboratories Licens...

1. A method for encoding multi-channel Higher Order Ambisonics (HOA) audio signals for noise reduction, comprising steps of
decorrelating the channels using an inverse adaptive Discrete Spherical Harmonics Transform (DSHT), the inverse adaptive DSHT
comprising a rotation operation and an inverse DSHT, with the rotation operation rotating the spatial sampling grid of the
iDSHT, wherein the spherical sample grid is rotated such that the logarithm of the term

[equation not reproduced in the source]

is minimized, wherein σ_lj are the absolute values of the elements of Σ_WSd with a row index l and a column index j, and σ_ll are the diagonal elements of Σ_WSd, where Σ_WSd=W_Sd W_Sd^H and W_Sd is a matrix having a size of number of audio channels by number of block processing samples, and W_Sd is the result of the inverse adaptive DSHT;
perceptually encoding each of the decorrelated channels;
encoding rotation information, wherein the rotation information is a spatial vector Ω_rot with three components defining said rotation operation; and

transmitting or storing the perceptually encoded audio channels and the encoded rotation information.
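The minimized term itself is not reproduced in the claim text above; a plausible criterion of the kind described (and an assumption here, not the patent's exact formula) compares the off-diagonal energy of Σ_WSd = W W^H against its diagonal, so that a rotation yielding better-decorrelated channels scores lower:

```python
import numpy as np

def decorrelation_cost(W):
    """One plausible form of the rotation-search criterion: penalize
    off-diagonal energy of Sigma_WSd = W W^H relative to its diagonal,
    in the log domain. W: channels x block samples."""
    sigma = np.abs(W @ W.conj().T)           # |elements| of Sigma_WSd
    off_diag = sigma.sum() - np.trace(sigma)
    diag_prod = np.prod(np.diag(sigma))
    return np.log((off_diag + 1e-12) / (diag_prod + 1e-12))
```

A perfectly decorrelated signal set (orthogonal rows) gives a strongly negative cost; fully correlated channels give a larger one.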

US Pat. No. 9,397,771

METHOD AND APPARATUS FOR ENCODING AND DECODING SUCCESSIVE FRAMES OF AN AMBISONICS REPRESENTATION OF A 2- OR 3-DIMENSIONAL SOUND FIELD

DOLBY LABORATORIES LICENS...

1. A method for carrying out an encoding on received successive frames of a higher-order Ambisonics representation of a 2-
or 3-dimensional sound field, denoted as HOA coefficients, said method comprising:
transforming a number of O=(N+1)2 input HOA coefficients of a frame into a number of O spatial domain signals representing a regular distribution of reference
points on a sphere, wherein N is an order of said input HOA coefficients and is greater or equal to 3, and each one of said
O spatial domain signals represents a set of plane waves which come from associated directions in space;

encoding each one of said O spatial domain signals using perceptual compression encoding steps or stages, thereby using encoding
parameters selected such that a coding error is inaudible; and

multiplexing the encoded spatial domain signals of the frame into a joint bit stream for providing improved lossy compression
of HOA representations of audio scenes.
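The coefficient count in the claim follows directly from the Ambisonics order:

```python
def num_hoa_coeffs(order):
    """O = (N + 1)^2 HOA coefficients / spatial-domain signals for a
    3-dimensional sound field of order N (the claim requires N >= 3,
    i.e. at least 16 signals to perceptually encode)."""
    return (order + 1) ** 2
```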

US Pat. No. 9,411,882

INTERACTIVE AUDIO CONTENT GENERATION, DELIVERY, PLAYBACK AND SHARING

Dolby Laboratories Licens...

1. A method, comprising:
generating, based on user input, independent of a plurality of audio elements, one or more control data templates, the user
input relating to a plurality of parameter values and a plurality of control inputs for a plurality of operations, the one
or more control data templates comprising the plurality of parameter values and the plurality of control inputs for the plurality
of operations;

wherein the one or more control data templates are generated, before the one or more control data templates are applied to
the plurality of audio elements to create a plurality of audio objects storing audio sample data representing the plurality
of audio elements;

after generating the one or more control data templates, receiving the plurality of audio elements; and
in response to receiving the plurality of audio elements,
creating the plurality of audio objects to store the audio sample data representing the plurality of audio elements;
generating control data based on the plurality of parameter values and the plurality of control inputs for the plurality of
operations, the control data specifying the plurality of operations to be performed while rendering the plurality of audio
objects; and

storing the control data separately from the audio sample data in the plurality of audio objects;
wherein the method is performed by one or more computing devices.

US Pat. No. 9,373,341

METHOD AND SYSTEM FOR BIAS CORRECTED SPEECH LEVEL DETERMINATION

Dolby Laboratories Licens...

11. A system for determining speech level, said system including:
at least one computer processor with a memory;
a voice detection stage, coupled to receive an audio signal and configured to identify at least one voice segment of the audio
signal, and for each said voice segment, to generate frequency banded, frequency-domain data indicative of the voice segment;

a model determination stage, coupled to receive the frequency banded, frequency-domain data indicative of each said voice
segment, and configured to generate, in response to the data, a Gaussian parametric spectral model of each said voice segment,
and to determine, for each said voice segment, from the parametric spectral model of the voice segment an estimated mean speech
level and a standard deviation value for each frequency band of the data indicative of the voice segment;

a correction stage, coupled and configured to generate, for each said voice segment, speech level data indicative of a bias
corrected mean speech level for said each frequency band of the data indicative of the voice segment, including by using at
least one correction value to correct the estimated mean speech level for the frequency band, wherein each said correction
value has been predetermined using a reference speech model; and

a speech level signal generation stage, coupled and configured to generate, in response to the speech level data generated
in the correction stage for each said voice segment, a speech level signal indicative, for each said voice segment, of a level
of speech indicated by the voice segment.
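The model and correction stages can be sketched as below. The per-band corrections are hypothetical stand-ins for values predetermined from a reference speech model, and the dB framing is an assumption:

```python
import numpy as np

def bias_corrected_level(band_frames, corrections):
    """band_frames: (frames, bands) banded magnitudes of one voice
    segment. corrections: hypothetical per-band offsets predetermined
    from a reference speech model. Returns the bias corrected mean
    level (dB) and the per-band standard deviation of the Gaussian
    parametric spectral model."""
    db = 20.0 * np.log10(np.maximum(band_frames, 1e-12))
    mean = db.mean(axis=0)    # estimated mean speech level per band
    std = db.std(axis=0)      # per-band standard deviation
    return mean + corrections, std
```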

US Pat. No. 9,373,178

HIGH DYNAMIC RANGE DISPLAYS HAVING WIDE COLOR GAMUT AND ENERGY EFFICIENCY

Dolby Laboratories Licens...

1. A display system comprising:
an emitter, the emitter comprising an array of light emitting elements of a same color controlled in a locally dimmed configuration
and emanating light into an optical path;

a diffuser, the diffuser diffusing the light emanating from the emitter;
a color layer configured to receive light from the diffuser and mitigate crosstalk by emitting different passband-like color
bands of distinct colors and provide a specific color gamut;

wherein the emitter as locally dimmed comprises a first modulator configured to transmit light through the diffuser and the
color layer;

wherein the color layer is placed in the optical path for convolving light together with the first modulator and a second modulator
to produce a desired color gamut of the display;

a second modulator, comprising an LCD panel, the second modulator modulating light from the first modulator in a manner without
color overlap such that each color filter of the LCD panel acts independent of adjacent color filters; and

a controller, the controller receiving input image data and sending control signals to the first and the second modulators.

US Pat. No. 9,479,680

TWO-DIMENSIONAL COLOR TRANSFORMATIONS FOR WIDE GAMUT WORKFLOWS IN DIGITAL CAMERAS

Dolby Laboratories Licens...

1. A method to generate color output signals in response to input signals from an image capture device, the method comprising:
receiving first, second, and third input signals from the image capture device;
generating an input scale factor (?) in response to the input signals;
generating first and second chromaticity signals (p, q) in response to the input signals;
mapping the first and second chromaticity signals to first and second preliminary color signals (x, y), wherein the mapping
involves two-dimensional transformation of the first and second chromaticity signals to the first and second preliminary color
signals respectively;

mapping the first and second chromaticity signals to a preliminary scale factor (?out), wherein the mapping involves a two-dimensional transformation of the first and second chromaticity signals to the preliminary
scale factor;

generating an output scaling factor (?out) by multiplying the preliminary scale factor with the input scale factor; and

generating a set of output color signals (X, Y, Z) in response to the output scaling factor and the first and second preliminary
color signals.

US Pat. No. 9,368,128

ENHANCEMENT OF MULTICHANNEL AUDIO

Dolby Laboratories Licens...

1. A method for enhancing an audio signal, wherein the audio signal comprises two or more channels of audio content, the method
comprising:
examining a portion of the audio signal to determine whether the portion contains one or more characteristics of speech, and
if the portion contains one or more characteristics of speech, classifying the portion as a speech portion, said examining
including:

applying a first portion of the audio signal to a speech versus other sound (SVO) detector,
applying a second portion of the audio signal to a voice activity detector (VAD), the second portion overlapping the first
portion and being smaller than the first portion, and

biasing a decision by the VAD based on the SVO output;
calculating a gain for the speech portion; and
applying the calculated gain to the audio signal.
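The SVO-biasing step can be sketched as follows; the linear bias and the names are illustrative assumptions, not the patent's formula:

```python
def biased_vad(frame_energy, svo_speech_score, base_threshold=1.0):
    """Bias the VAD decision by the SVO output: a high speech-vs-other
    score lowers the energy threshold, so borderline frames are more
    readily classified as speech."""
    threshold = base_threshold * (1.5 - svo_speech_score)
    return frame_energy > threshold
```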

US Pat. No. 9,478,176

RAPID ESTIMATION OF EFFECTIVE ILLUMINANCE PATTERNS FOR PROJECTED LIGHT FIELDS

Dolby Laboratories Licens...

1. A method for estimating an effective luminance pattern representing a distribution of projected light in a display apparatus,
the method comprising:
determining driving values for one or more light sources arranged to project light;
determining an effective luminance pattern for the projected light by:
for each of the one or more light sources determining a contribution to the effective luminance pattern for each of a plurality
of components of a point spread function for the light source; and,

combining the contributions of the components of the effective luminance pattern to yield an estimated effective luminance pattern;
wherein each of the components comprises a set of point spread function values, a first one of the plurality of components
comprises a plurality of higher order bits of the point spread function values and excludes a plurality of lower order bits
of the point spread function values and a second one of the plurality of components comprises the plurality of lower order
bits of the point spread function values and excludes the plurality of higher order bits of the point spread function values.
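The bit-splitting of the point spread function values can be sketched directly; summing the two components' contributions reproduces the full contribution:

```python
import numpy as np

def split_psf(psf_fixed, low_bits=4):
    """Split fixed-point PSF values into the two components the claim
    describes: one keeping only the high-order bits (low-order bits
    excluded) and one keeping only the low-order bits (high-order bits
    excluded). low_bits=4 is an illustrative split point."""
    mask = (1 << low_bits) - 1
    low = psf_fixed & mask       # low-order bits only
    high = psf_fixed & ~mask     # high-order bits only
    return high, low
```

Because the split is exact, a light source's contribution (driving value times PSF) can be accumulated per component and then combined without error.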

US Pat. No. 9,390,660

IMAGE CONTROL FOR DISPLAYS

Dolby Laboratories Licens...

1. A method for adapting image data to a high dynamic range display, wherein the image data is specified for an input gamut
with a range that is lower than the range of a display gamut of the high dynamic range display, the method comprising:
applying a boost to the image data, wherein the image data is expanded to the range of the high dynamic range display, and
wherein the image data comprises pixels and applying a boost to the image data comprises scaling the pixels according to their
brightness by a boost factor, the boost factor being a function of the pixel values such that the boost factor increases for
increasing brightness of the pixel values;

dithering the boosted image data, comprising:
applying a variation to values of a plurality of the pixels wherein a pixel value changes differently compared to a neighboring
pixel value so as to reduce artifacts along boundaries within the image data, wherein the artifacts between neighboring pixels
are introduced by applying the boost to neighboring pixel values to achieve the expanded high dynamic range of the display;
and

rounding the value of the one or more pixels;
color correcting the image data specified for the input gamut to a display gamut by performing a transformation on the color
values and performing an affine transformation on color values according to the expression O=sMi+c; wherein O is an output
color vector, i is an input color vector, s is a scaling value, M is a transformation matrix, and c is a color shift vector;

constraining the image data to the display gamut, wherein the constrained image data does not specify an intensity for any
color channel that is greater than an intensity that can be reproduced by the high dynamic range display.
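The boost and the affine color correction can be sketched as below; the linear boost shape is an assumption (the claim only requires the factor to increase with brightness):

```python
import numpy as np

def boost(pixels, strength=0.5):
    """Brightness-dependent boost: the boost factor (1 + strength*p)
    increases with pixel value, as the claim requires."""
    return pixels * (1.0 + strength * pixels)

def color_correct(i, s, M, c):
    """Affine gamut transform from the claim: O = s * M @ i + c."""
    return s * (M @ i) + c
```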

US Pat. No. 9,445,121

OVERLAPPED BLOCK DISPARITY ESTIMATION AND COMPENSATION ARCHITECTURE

Dolby Laboratories Licens...

1. A method for motion compensation of images with overlapped block disparity compensation (OBDC), the method comprising the
steps of:
determining that OBDC is enabled in a video bit stream;
in response to determining that OBDC is enabled in a video bit stream, determining that OBDC is enabled for one or more transform
coded macroblocks that (i) neighbor a particular macroblock within the video bit stream, and (ii) are ordered within the video
bit stream to be transform coded before the particular macroblock;

in response to determining that OBDC is enabled for the one or more transform coded macroblocks in the video bitstream, determining
one or more regions of the particular macroblock for generating, using OBDC, residual information, wherein the one or more
regions include one or more overlapping regions between the particular macroblock and the one or more transform coded macroblocks,
and exclude one or more overlapping regions between the particular macroblock and one or more future macroblocks that (i)
neighbor the particular macroblock, and (ii) are ordered within the video bit stream to be transform coded after the particular
macroblock, and wherein the one or more excluded overlapping regions do not overlap with any one of the one or more transform
coded macroblocks;

for each respective region of the one or more regions:
performing OBDC prediction for the respective region of the particular macroblock using the particular macroblock and only
the transform coded macroblocks that overlap with the respective region;

generating residual information for the respective region of the particular block based on the OBDC prediction; and
transform coding the residual information.

US Pat. No. 9,373,334

METHOD AND SYSTEM FOR GENERATING AN AUDIO METADATA QUALITY SCORE

Dolby Laboratories Licens...

1. A method, comprising the steps of:
receiving an audio bitstream including at least two metadata parameters,
assessing the at least two metadata parameters, including by
determining metadata parameter quality values, including a metadata parameter quality value for each of the at least two metadata
parameters, wherein the audio bitstream is indicative of audio content of a program, the metadata parameters are indicative
of at least one of playback level, playback dynamic range, mixing level, or channel configuration of the audio content, and
at least one of the metadata parameters is specifically intended for use in changing sound of the program as delivered to
a listening environment, and each said metadata parameter quality value indicates whether or not the respective metadata parameter:

has been set correctly by a content creator, or
has been generated correctly during an encoding of the audio bitstream, and has not changed during a distribution and a transmission
of the audio bitstream; and

generating a metadata score based on a combination of the metadata parameter quality values, wherein at least two of the metadata
parameter quality values on which the metadata score is based correspond to the same segment of the audio bitstream.
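The claim only requires "a combination" of the quality values; a weighted mean is one reasonable choice, sketched here with illustrative semantics (1.0 = parameter set correctly and unchanged through distribution, 0.0 = invalid):

```python
def metadata_score(quality_values, weights=None):
    """Combine per-parameter metadata quality values into one score
    via a weighted mean (the combination rule is an assumption)."""
    if weights is None:
        weights = [1.0] * len(quality_values)
    return sum(w * q for w, q in zip(weights, quality_values)) / sum(weights)
```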

US Pat. No. 9,462,399

AUDIO PLAYBACK SYSTEM MONITORING

Dolby Laboratories Licens...

26. A system for monitoring status of N speakers in a playback environment, where N is a positive integer, said system including:
M microphones positioned in the playback environment, where M is a positive integer; and
a processor coupled to each of the M microphones, wherein the processor is configured to process audio data to perform a status
check on each speaker of the N speakers, including by comparing, for each said speaker and each of at least one microphone
in the M microphones, a status signal captured by the microphone and a template signal,

wherein the template signal is indicative of response of a template microphone to playback by the speaker, in the playback
environment at an initial time, of a channel of the soundtrack corresponding to said speaker, and

wherein the audio data are indicative of a status signal captured by each microphone of the M microphones during playback
of an audiovisual program whose soundtrack has N channels,

wherein said playback of the program includes emission of sound determined by the program from the speakers in response to
driving each speaker of the N speakers with a speaker feed for a different one of the channels of the soundtrack.
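The per-speaker status check (captured status signal vs. stored template) can be sketched as a normalized correlation; the comparison metric and threshold are assumptions:

```python
import numpy as np

def speaker_ok(status_signal, template_signal, threshold=0.8):
    """Compare the microphone's captured status signal with the
    template recorded at the initial time; a low normalized
    correlation peak flags a degraded or silent speaker."""
    c = np.correlate(status_signal, template_signal, mode="full")
    denom = np.linalg.norm(status_signal) * np.linalg.norm(template_signal)
    return bool(c.max() / (denom + 1e-12) >= threshold)
```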

US Pat. No. 9,451,232

REPRESENTATION AND CODING OF MULTI-VIEW IMAGES USING TAPESTRY ENCODING

Dolby Laboratories Licens...

1. A method for generating at least one view of a scene from a tapestry image, each of the at least one view being associated
with one desired viewpoint of the scene, the method comprising:
providing the tapestry image; wherein the tapestry image comprises a 2D array of pixels comprising information from a plurality
of views associated with the scene;

providing a coordinates map associated with the tapestry image; wherein position data associated with the pixels of the tapestry
image comprise depth data for each pixel in the tapestry image; and wherein the position data associated with the pixels of
the tapestry image further comprise horizontal disparity data and/or vertical disparity data for each pixel in the tapestry
image;

deriving one or more views of the scene based on the tapestry image and the coordinates map.
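View derivation from the tapestry plus coordinates map can be sketched in one dimension: each pixel is forward-warped by its disparity scaled by the desired viewpoint offset. Occlusion and hole filling, which a real tapestry decoder must handle, are omitted:

```python
import numpy as np

def derive_view(tapestry_row, disparity_row, view_offset):
    """Toy 1-D view synthesis: warp each tapestry pixel horizontally
    by its disparity times the viewpoint offset."""
    out = np.zeros_like(tapestry_row)
    for x, value in enumerate(tapestry_row):
        tx = int(round(x + view_offset * disparity_row[x]))
        if 0 <= tx < len(out):
            out[tx] = value
    return out
```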

US Pat. No. 9,343,071

RECONSTRUCTING AN AUDIO SIGNAL WITH A NOISE PARAMETER

Dolby Laboratories Licens...

1. A method for generating a reconstructed audio signal having a baseband portion and a highband portion, the method comprising:
deformatting an encoded audio signal into a first part and a second part;
obtaining a decoded baseband audio signal by decoding the first part, wherein the first part includes spectral components
of the baseband portion and does not include spectral components of the highband portion, wherein the number of the spectral
components of the baseband portion may vary dynamically;

extracting, from the second part, a noise parameter and an estimated spectral envelope of the highband portion;
obtaining a plurality of subband signals by filtering the decoded baseband audio signal;
generating a high-frequency reconstructed signal by copying a number of consecutive subband signals of the plurality of subband
signals;

obtaining an envelope adjusted high-frequency signal by adjusting, based on the estimated spectral envelope of the highband
portion, a spectral envelope of the high-frequency reconstructed signal, wherein a frequency resolution of the estimated spectral
envelope is adaptive;

generating a noise component based on the noise parameter, wherein the noise parameter indicates a level of noise contained
in the highband portion;

obtaining a combined high-frequency signal by adding the noise component to the envelope adjusted high-frequency signal; and
obtaining a time-domain reconstructed audio signal by combining the decoded baseband audio signal and the combined high-frequency
signal;

wherein the method is implemented by an audio decoding device comprising one or more hardware elements.
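The copy-up, envelope adjustment, and noise addition steps can be sketched as below; the gain rule and names are illustrative, not the codec's exact arithmetic:

```python
import numpy as np

def reconstruct_highband(baseband_subbands, n_copy, envelope, noise_level, rng):
    """High-frequency reconstruction sketch: copy the top n_copy
    consecutive baseband subband signals upward, scale them to the
    transmitted spectral envelope, then add noise at the decoded
    noise level."""
    copied = baseband_subbands[-n_copy:]              # consecutive subbands
    current = np.abs(copied).mean(axis=1, keepdims=True)
    shaped = copied * (envelope / (current + 1e-12))  # envelope adjustment
    return shaped + noise_level * rng.standard_normal(shaped.shape)
```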

US Pat. No. 9,460,736

MEASURING CONTENT COHERENCE AND MEASURING SIMILARITY

Dolby Laboratories Licens...

1. A method of measuring content similarity between two audio segments, comprising:
extracting first feature vectors from the audio segments, wherein all the feature values in each of the first feature vectors
are non-negative and normalized so that the sum of the feature values is one;

generating statistical models for calculating the content similarity based on Dirichlet distribution from the feature vectors;
and

calculating the content similarity based on the generated statistical models, wherein the extracting comprises:
extracting second feature vectors from the audio segments; and
for each of the second feature vectors, calculating an amount for measuring a relation between the second feature vector and
each of reference vectors, wherein all the amounts corresponding to the second feature vectors form one of the first feature
vectors, wherein the reference vectors are determined through one of the following methods:

random generating method where the reference vectors are randomly generated;
unsupervised clustering method where training vectors extracted from training samples are grouped into clusters and the reference
vectors are calculated to represent the clusters respectively;

supervised modeling method where the reference vectors are manually defined and learned from the training vectors; and
eigen-decomposition method where the reference vectors are calculated as eigenvectors of a matrix with the training vectors
as its rows.
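The construction of a first feature vector from the "amounts" relating a second feature vector to the reference vectors can be sketched as below. The softmax over negative distances is a modeling choice (the claim does not fix the relation measure), but it yields exactly the required property: non-negative values summing to one, suitable for a Dirichlet model:

```python
import numpy as np

def to_first_feature_vector(second_vec, reference_vectors):
    """Amounts relating second_vec to each reference vector, here a
    softmax over negative Euclidean distances; the result is
    non-negative with unit sum."""
    d = np.linalg.norm(reference_vectors - second_vec, axis=1)
    w = np.exp(-d)
    return w / w.sum()
```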

US Pat. No. 9,445,110

VIDEO COMPRESSION AND TRANSMISSION TECHNIQUES

Dolby Laboratories Licens...

1. A method for rate control, comprising:
determining a bit target, and a number of bits used;
determining a rate factor associated with each of a plurality of previously coded frames;
for each rate factor associated with each of the plurality of previously coded frames, determining an absolute difference
between the bit target and a number of bits that would have been used if each rate factor was used to obtain the quantization
parameters for each of the previously coded frames;

selecting a minimum value from the absolute differences to obtain a current rate factor;
obtaining a current quantization parameter based on the current rate factor and a complexity value of a current frame; and
encoding the current frame using the current quantization parameter.
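The rate-factor selection is an argmin over absolute differences, sketched here with illustrative inputs:

```python
def select_rate_factor(bit_target, candidates, bits_if_used):
    """bits_if_used[i] is the number of bits that would have been spent
    on the previously coded frames had candidates[i] been used to derive
    their quantization parameters; pick the candidate whose total is
    closest to the bit target."""
    diffs = [abs(bit_target - b) for b in bits_if_used]
    return candidates[diffs.index(min(diffs))]
```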

US Pat. No. 9,374,589

HDR IMAGES WITH MULTIPLE COLOR GAMUTS

Dolby Laboratories Licens...

1. A method to encode with a processor a high dynamic range (HDR) image, the method comprising:
receiving an input HDR image (HDR) with an input color gamut and an input dynamic range;
applying a tone mapping and a gamut mapping function to the input HDR image to generate a first base image in a first dynamic
range and a first color space, wherein the first color space comprises a first color gamut, wherein the first dynamic range
is lower than the input dynamic range and the first color gamut is narrower than the input color gamut;

applying a color transform process to the first base image to generate a second base image in a second color space with a
second color gamut, wherein the second color gamut is different than the first color gamut;

generating HDR metadata in response to the input HDR image and the second base image; and
generating an output coded HDR image based on the first base image and the HDR metadata.

US Pat. No. 9,075,897

STORING AND SEARCHING FINGERPRINTS DERIVED FROM MEDIA CONTENT BASED ON A CLASSIFICATION OF THE MEDIA CONTENT

DOLBY LABORATORIES LICENS...

1. A method comprising:
determining whether each of one or more perceptible characteristics is present in a first media content, wherein each of one
or more attributes corresponds to a respective perceptible characteristic in the one or more perceptible characteristics,
wherein each of the one or more perceptible characteristics is one of a specific perceptible audio characteristic or a specific
perceptible visual characteristic;

computing an attribute value for each attribute in the one or more attributes, wherein the attribute value comprises a binary
indicator to indicate a presence or absence of a respective perceptible characteristic in the one or more perceptible characteristics
in the first media content;

combining, from the one or more attributes, one or more attribute values into a first classification value of the first media
content, wherein the one or more attribute values comprise the attribute value computed for the each attribute in the one
or more attributes; and

storing a first fingerprint derived from the first media content at a location within a database, the location within the
database to store the first fingerprint being determined by the first classification value;

wherein the method is performed by one or more computing devices.
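Combining the binary attribute values into a classification value can be sketched as bit-packing, which is one natural combination (the claim does not mandate this encoding); the resulting value indexes the database partition that stores the fingerprint:

```python
def classification_value(attribute_values):
    """Pack binary presence/absence indicators for the perceptible
    characteristics into a single integer classification value."""
    value = 0
    for bit in attribute_values:   # each entry is 0 or 1
        value = (value << 1) | bit
    return value
```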

US Pat. No. 9,100,591

SYSTEMS AND METHODS OF MANAGING METAMERIC EFFECTS IN NARROWBAND PRIMARY DISPLAY SYSTEMS

Dolby Laboratories Licens...

1. A method for driving a display system, said display system comprising at least one plurality of narrowband emitters, said
at least one plurality of narrowband emitters emitting lights selected in frequency substantially near a first primary color
such that said at least one plurality of narrowband emitters emit a substantially even power distribution in a desired band
when said at least one plurality of narrowband emitters are turned on, the steps of said method for driving a display system comprising:
inputting image data;
calculating input image data color saturation values;
calculating input image luminance values; and
selectively determining which narrowband emitters to turn on, depending upon the calculated saturation and luminance values
of the input image data;

wherein said step of selectively determining which narrowband emitters to turn on further comprises:
selecting an initial set of narrowband emitters to turn on to an initial luminance value, depending upon the color saturation
of the input image data;

comparing the luminance of said initial set of narrowband emitters to the luminance value of the input image data;
driving the initial set of narrowband emitters to a luminance value higher than the initial luminance value, if the sum of
the initial luminance value of the initial set of narrowband emitters is lower than the luminance value of the input image
data.

US Pat. No. 9,311,921

AUDIO DECODER AND DECODING METHOD USING EFFICIENT DOWNMIXING

Dolby Laboratories Licens...

1. A method of operating an audio decoder to decode audio data that includes encoded blocks of N.n channels of audio data
to form decoded audio data that includes M.m channels of decoded audio, M≥1, n being the number of low frequency effects channels
in the encoded audio data, and m being the number of low frequency effects channels in the decoded audio data, the method
comprising:
accepting the audio data that includes blocks of N.n channels of encoded audio data encoded by an encoding method, the encoding
method including transforming N.n channels of digital audio data, and forming and packing frequency-domain exponent and mantissa
data; and

decoding the accepted audio data, the decoding including:
unpacking and decoding the frequency-domain exponent and mantissa data;
determining transform coefficients from the unpacked and decoded frequency-domain exponent and mantissa data;
ascertaining whether M.m&lt;N.n; upon ascertaining that M.m&lt;N.n and upon determining for a particular block to apply frequency-domain downmixing, downmixing in the frequency domain according
to downmixing data such that the frequency-domain data is data after downmixing;

inverse transforming the frequency-domain data and applying further processing to determine sampled audio data; and
for the case M.m&lt;N.n, if it is determined for a particular block not to apply frequency-domain downmixing, downmixing the sampled audio data according to downmixing data.

US Pat. No. 9,313,593

RANKING REPRESENTATIVE SEGMENTS IN MEDIA DATA

Dolby Laboratories Licens...

1. A method for ranking candidate representative segments within media data, comprising:
creating one or more media fingerprints each of which comprises a plurality of hash bits generated from the media data;
extracting features from the media data;
detecting a plurality of scenes within the media data based at least in part on the one or more media fingerprints and a distance
analysis for the features extracted from the media data;

assigning a plurality of ranking scores to a plurality of candidate representative segments in the media data, each individual
candidate representative segment in the plurality of candidate representative segments comprises at least one scene of the
plurality of scenes in the media data, each individual ranking score in the plurality of ranking scores being assigned to
an individual candidate representative segment in the plurality of candidate representative segments;

selecting from the plurality of candidate representative segments, based on the plurality of ranking scores, a representative
segment;

wherein the method is performed by one or more computing devices.

US Pat. No. 9,311,923

ADAPTIVE AUDIO PROCESSING BASED ON FORENSIC DETECTION OF MEDIA PROCESSING HISTORY

Dolby Laboratories Licens...

1. A method, comprising the steps of: accessing a media signal that is generated with one or more first processing operations
that occurred prior to the media signal access to change at least one characteristic of the media signal and generate one
or more sets of artifacts comprising unintended traces of the one or more first processing operations on the media signal
and characterize, at least in part, a processing history of the media signal prior to the accessing;
extracting one or more features from the accessed media signal, wherein the extracted features each respectively correspond
to the one or more sets of artifacts;

computing from the extracted features a conditional probability value relating a probability of observing the extracted features
given the one or more first processing operations; and

if the conditional probability value is above a defined threshold,
adapting one or more second processing operations respectively corresponding to each of the one or more first processing operations
such that each of the one or more second processing operations economizes computational resources and prevents formation of
further artifacts.

US Pat. No. 9,357,197

MULTI-LAYER BACKWARDS-COMPATIBLE VIDEO DELIVERY FOR ENHANCED DYNAMIC RANGE AND ENHANCED RESOLUTION FORMATS

Dolby Laboratories Licens...

1. A method comprising:
accessing a standard dynamic range (SDR) video signal (902), a first view of a first 3D enhanced dynamic range (EDR) video signal (904), and a second view of a second 3D EDR video signal (906), wherein the second view of the second 3D EDR video signal has a higher resolution than the first view of the first 3D EDR
video signal;

applying a normalization process to the SDR video signal and the first view of the first EDR video signal to generate a normalized
SDR* signal (912);

encoding the normalized SDR* signal with an encoder (920) to generate a coded base layer (BL) stream (922);

applying an inverse normalization process and an image registration process to a signal based on the normalized SDR* signal
to generate an estimate EDR video signal with the same resolution as the second view of the second 3D EDR video signal; and

encoding using a dual-view-dual-layer (DVDL) encoder the estimate EDR video signal and the second view of the second EDR video
signal to generate a coded enhancement layer (EL) stream (942).

US Pat. No. 9,226,048

VIDEO DELIVERY AND CONTROL BY OVERWRITING VIDEO DATA

Dolby Laboratories Licens...

1. A method of providing video data to a display subsystem, comprising:
capturing a sequence of video frames to provide video data;
editing on a reference display an image provided by the video data;
generating metadata identifying configuration parameters of the reference display and characteristics of the edited image,
wherein the metadata corresponds to a new scene in the video data and wherein the metadata is capable of controlling the display
subsystem according to a director's creative intent for displaying the video data;

locating black video frames in the video data preceding the new scene;
embedding the metadata in one or more chrominance portions of pixels in the black video frames, if available;
if not available, embedding the metadata in alternative portions of the video data;
delivering the video data including the embedded metadata to the display subsystem;
extracting the metadata at the display subsystem; and
configuring the display subsystem or processing the video data for the display subsystem based at least in part on the metadata.

US Pat. No. 9,313,597

SYSTEM AND METHOD FOR WIND DETECTION AND SUPPRESSION

Dolby Laboratories Licens...

1. A pickup system comprising:
a wind detector configured to receive first and second input signals, the wind detector including:
a plurality of analyzers each configured to analyze the first and second input signals; and
a combiner configured to combine outputs of the plurality of analyzers and issue, based on the combined outputs, a wind level
indication signal indicative of wind activity; and

a wind suppressor including:
a ratio calculator configured to generate a ratio of sub-band powers of the first and second input signals; and
a mixer configured to select one of the first or second input signals and to apply to said selected input signal one of first
or second panning coefficients based on the wind level indication signal and on the ratio, the other of the first or second
input signals being unselected, wherein:

application of the first or second panning coefficients is a function of a ratio of the first and second input signals; and
one of the first or second panning coefficients α is defined as
α = 10^(2*WindLevel*(Ratio-RatioTgt)/20), for Ratio-RatioTgt < 0

where WindLevel is the wind detector output signal provided to the wind suppressor, Ratio is a current ratio of the sub-band powers
(in dB) for the first and second input signals, and RatioTgt is a pre-selected ratio value for the sub-band powers (in dB)
of the first and second input signals.
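Read literally, the claimed panning coefficient can be evaluated directly. A minimal sketch in Python, assuming the coefficient is only applied when Ratio − RatioTgt is negative and defaults to unity otherwise (both assumptions; the claim fixes only the formula):

```python
def panning_coefficient(wind_level, ratio_db, ratio_tgt_db):
    """Evaluate alpha = 10^(2*WindLevel*(Ratio - RatioTgt)/20) for
    Ratio - RatioTgt < 0, per the claim; unity gain otherwise (assumed)."""
    diff_db = ratio_db - ratio_tgt_db
    if diff_db >= 0:
        return 1.0  # target ratio met: no wind-dependent attenuation (assumption)
    return 10 ** (2 * wind_level * diff_db / 20)
```

With WindLevel = 1 and a sub-band ratio 10 dB below target, this gives alpha = 0.1, i.e. 20 dB of attenuation; with WindLevel = 0 the coefficient stays at unity regardless of the ratio.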

US Pat. No. 9,251,740

STEREOSCOPIC DUAL MODULATOR DISPLAY DEVICE USING FULL COLOR ANAGLYPH

Dolby Laboratories Licens...

1. A method of displaying three-dimensional image data on a display system, the display system having a light source modulation
layer and a display modulation layer, the method comprising:
receiving a frame of image data comprising left eye image data and right eye image data;
based at least in part on the left eye image data, determining a first plurality of light source modulator control values
for driving a first plurality of light sources of the light source modulation layer to provide spatially modulated light having
a first spectral composition;

based at least in part on the right eye image data, determining a second plurality of light source modulator control values
for driving a second plurality of light sources of the light source modulation layer to provide spatially modulated light
having a second spectral composition which is complementary to the first spectral composition;

determining a first effective luminance pattern of light received on the display modulation layer from the first plurality
of light sources of the light source modulation layer, based at least in part on the first plurality of light source modulator
control values;

determining a second effective luminance pattern of light received on the display modulation layer from the second plurality
of light sources of the light source modulation layer, based at least in part on the second plurality of light source modulator
control values;

based at least in part on the left eye image data and the first effective luminance pattern, determining a first plurality
of display modulation layer control values for driving a plurality of pixels of the display modulation layer;

based at least in part on the right eye image data and the second effective luminance pattern, determining a second plurality
of display modulation layer control values for driving the plurality of pixels of the display modulation layer;

applying the first plurality of light source modulator control values to the light source modulation layer and the first plurality
of display modulation layer control values to the display modulation layer; and

applying the second plurality of light source modulator control values to the light source modulation layer and the second
plurality of display modulation layer control values to the display modulation layer;

comparing the first and second pluralities of light source modulator control values over an image area to determine a comparison
value; and

if the comparison value is above or equal to a threshold value,
selecting one of the first and second pluralities of the light source modulator control values and using the selected light
source modulator control values for both the first and second pluralities of light source modulator control values in said
steps of

applying the first plurality of light source modulator control values,
applying the second plurality of light source modulator control values,
determining the first effective luminance pattern, and
determining the second effective luminance pattern.
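The claim leaves the comparison metric open; a similarity score derived from the mean absolute difference over the image area is one plausible choice. A sketch, with the metric and the arbitrary choice of which set to reuse both being assumptions:

```python
def select_control_values(first, second, threshold):
    """If a similarity-style comparison value (assumed metric) is at or above
    the threshold, one set of light source modulator control values drives
    both the left-eye and right-eye fields, as in the claim's branch."""
    mad = sum(abs(a - b) for a, b in zip(first, second)) / len(first)
    comparison = 1.0 / (1.0 + mad)  # 1.0 when the two sets are identical
    if comparison >= threshold:
        return first, first  # selected set used for both pluralities
    return first, second
```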

US Pat. No. 9,078,002

INTERPOLATION OF VIDEO COMPRESSION FRAMES

Dolby Laboratories Licens...

1. A decoder for video image decoding, the decoder providing a sequence of predicted and bidirectional predicted frames each
comprising pixel values arranged in picture regions, wherein
at least one picture region within a bidirectional predicted frame is determined using an interpolative motion vector prediction
mode based on one or more motion vectors from each of a plurality of referenceable frames, wherein the at least one picture
region is determined using an unequal weighting of pixel values corresponding to the plurality of referenceable frames, and
wherein the unequal weighting of the pixel values comprises weights received in a bitstream by the decoder.

US Pat. No. 9,374,652

CONFERENCING DEVICE SELF TEST

Dolby Laboratories Licens...

1. A method of operating a plurality of acoustic sensors arranged in a non-anechoic environment, the method comprising the
steps of:
exciting each acoustic sensor by producing, in the non-anechoic environment, an acoustic stimulus comprising a noise burst
of finite duration, and concurrently sensing an output from each acoustic sensor only during a time period beginning a finite
interval after the end of the acoustic stimulus;

computing a frequency-dependent magnitude response function of each acoustic sensor based on at least a non-initial portion
of the output from each acoustic sensor, said computation being based on the sensed output from the acoustic sensor and a
spectral composition of the noise burst; and

deriving a frequency-dependent calibration function based on a comparison of the magnitude response functions thus obtained
and a target response function which is common to all acoustic sensors.
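The derivation of the calibration function can be sketched as a per-frequency gain that moves each measured magnitude response toward the common target; the division-based form is an assumption, since the claim only requires a comparison:

```python
def calibration_function(measured_mags, target_mags):
    """Frequency-dependent calibration gains: target / measured per bin."""
    return [t / m for t, m in zip(target_mags, measured_mags)]
```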

US Pat. No. 9,306,524

AUDIO SIGNAL LOUDNESS DETERMINATION AND MODIFICATION IN THE FREQUENCY DOMAIN

Dolby Laboratories Licens...

1. A method of using a signal processing apparatus to determine an objective measure of the perceived loudness of an audio
signal, the method comprising:
accepting blocks of frequency-domain audio data into the signal processing apparatus, each accepted block of frequency-domain
audio data having a block size of a set of block sizes, each accepted block of frequency-domain audio data being the transform
of a corresponding block of time samples of an audio signal, the set of block sizes comprising a smallest block size and a
largest block size, each block size being the smallest block size or an integer multiple of the smallest block size, each block
size of the set being no smaller than the smallest block size and no larger than the largest block size;

for each block size smaller than the largest block size, combining a plurality of accepted blocks having the same size as
each other and having a size which is one of the plurality of block sizes, the combining of the plurality forming a formed
block of frequency-domain data having the largest block size;

determining and/or accepting one or more perceived loudness parameters of the accepted blocks or of delayed versions of the
accepted blocks,

wherein each one of the one or more perceived loudness parameters comprises a respective set of parameter values, with one
parameter value of the set for each critical frequency band of a set of critical frequency bands,

wherein the critical frequency bands are at a frequency resolution that corresponds to the largest block size, and
wherein the one or more perceived loudness parameters provide an objective measure of perceived loudness, and include at least
one of:

a critical-band power spectrum of the accepted blocks or of delayed versions of the accepted blocks, the critical-band power
spectrum comprising spectrum values at the set of critical frequency bands and

a specific loudness of the accepted blocks or of delayed versions of the accepted blocks, the specific loudness comprising
specific loudness values at the set of critical frequency bands; and

determining at least one perceived loudness modification to the audio signal, the modification applicable in the frequency
domain at the frequency resolution that corresponds to the largest block size, the determining of the at least one perceived
loudness modification using the one or more perceived loudness parameters.
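One of the claimed loudness parameters, the critical-band power spectrum, can be sketched by summing bin powers within each band of a frequency-domain block at the largest-block resolution; the band-edge table here is hypothetical:

```python
def critical_band_power(spectrum, band_edges):
    """Sum |X[k]|^2 over each critical band of a frequency-domain block."""
    power = [abs(x) ** 2 for x in spectrum]
    return [sum(power[lo:hi])
            for lo, hi in zip(band_edges[:-1], band_edges[1:])]
```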

US Pat. No. 9,129,445

EFFICIENT TONE-MAPPING OF HIGH-BIT-DEPTH VIDEO TO LOW-BIT-DEPTH DISPLAY

Dolby Laboratories Licens...

1. A system for mapping input image data having a first dynamic range, to output image data having a second dynamic range,
said second dynamic range being a lower dynamic range than said first dynamic image range, said system comprising:
a tone mapping module, said tone mapping module inputting a luminance signal (Y′) from said input image data and outputting
a tone-mapped luminance signal (y′) for said output image data; and

a color mapping module, said color mapping module inputting chroma signals from said input image data and inputting said tone-mapped
luminance signal from said tone mapping module, said tone-mapped luminance signal being associated with said chroma signals,
and said color mapping module outputting chroma signals for said output image data; and wherein

said input image data and said output image data are YCbCr image data;
said input image data has a high dynamic range (HDR) image format for an HDR image capture device;
said output image data has a low dynamic range (LDR) image format for an LDR display device;
said color mapping module is capable of mapping said chroma signals (CB and CR) of said input image data to said chroma signals (cb and cr) of said output image data using a function dependent upon (y′, s, Γ, γ) where y′ comprises the tone-mapped luminance signal,
s comprises a saturation adjustment term, Γ comprises the gamma value for said HDR image capture device, and γ comprises the
gamma value for said LDR display device; and

said function comprises

wherein cb=cr=128 when Y′=0, T is an offset for said HDR image format, t is an offset for said LDR image format, and MB, mb, MR, and mr are scaling constants.

US Pat. No. 9,324,328

RECONSTRUCTING AN AUDIO SIGNAL WITH A NOISE PARAMETER

Dolby Laboratories Licens...

1. A method for reconstructing an audio signal having a baseband portion and a highband portion, the method comprising:
obtaining a decoded baseband audio signal by decoding an encoded audio signal, wherein the encoded audio signal includes spectral
components of the baseband portion and does not include spectral components of the highband portion, wherein the number of
the spectral components of the baseband portion is capable of varying dynamically;

obtaining a plurality of subband signals by filtering the decoded baseband audio signal;
generating a high-frequency reconstructed signal by copying a number of consecutive subband signals of the plurality of subband
signals;

obtaining an envelope adjusted high-frequency signal by adjusting, based on an estimated spectral envelope of the highband
portion, a spectral envelope of the high-frequency reconstructed signal, wherein the estimated spectral envelope is extracted
from the encoded audio signal, and wherein a frequency resolution of the estimated spectral envelope is adaptive;

generating a noise component based on a noise parameter, wherein the noise parameter is extracted from the encoded audio signal,
and wherein the noise parameter indicates a level of noise contained in the highband portion;

obtaining a combined high-frequency signal by adding the noise component to the envelope adjusted high-frequency signal;
obtaining a time-domain reconstructed audio signal by combining the decoded baseband audio signal and combined high-frequency
signal;

wherein the method is implemented by an audio decoding device comprising one or more hardware elements.

US Pat. No. 9,313,505

METHOD AND SYSTEM FOR SELECTIVELY BREAKING PREDICTION IN VIDEO CODING

Dolby Laboratories Licens...

1. A decoder, comprising:
one or more processors; and
a storage medium coupled to the one or more processors having instructions stored thereon which, when executed by the one
or more processors, cause the one or more processors to perform operations comprising:

obtaining a coded video picture including a first slice and a second slice that are divided by at least one slice boundary
and a first tile and a second tile that are divided by a tile boundary, wherein the first tile and the second tile do not
have tile headers;

obtaining, from the coded video picture, a parameter set including a first flag for controlling a deblocking filter operation
across the tile boundary;

determining that the first flag indicates an application of the deblocking filter operation across the tile boundary; and
in response to determining that the first flag indicates the application of the deblocking filter operation across the tile
boundary, applying the deblocking filter operation across the tile boundary;

obtaining, from the coded video picture, a slice header including a second flag for controlling a deblocking filter operation
across the at least one slice boundary;

determining that the second flag indicates an application of the deblocking filter operation across the at least one slice
boundary;

in response to determining that the second flag indicates the application of the deblocking filter operation across the at
least one slice boundary, applying the deblocking filter operation across the at least one slice boundary; and

wherein applying a deblocking filter operation across a boundary comprises:
deriving a boundary strength of the boundary based on a prediction mode of a block, values of transform coefficients of the
block, and a motion vector of the block; and

filtering the boundary based on the derived boundary strength.
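The claim's boundary strength derivation resembles the HEVC deblocking scheme; a simplified HEVC-style sketch, where the exact inputs and thresholds are assumptions:

```python
def boundary_strength(intra_coded, has_nonzero_coeffs, mv_diff_large):
    """Derive a deblocking boundary strength from the prediction mode,
    transform coefficients, and motion vectors of a block, HEVC-style."""
    if intra_coded:
        return 2          # strongest filtering at intra block boundaries
    if has_nonzero_coeffs or mv_diff_large:
        return 1          # moderate filtering
    return 0              # boundary left unfiltered
```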

US Pat. No. 9,298,018

EYEWEAR WITH LENS RETENTION FEATURE AND METHOD OF MANUFACTURE

Dolby Laboratories Licens...

1. Eyewear comprising:
at least one lens;
a frame adapted to receive and support said at least one lens;
at least one arm;
a primary attachment feature adapted to attach said arm to said frame; and
a secondary attachment feature different from said primary attachment feature and being adapted to attach said arm to said
frame,

wherein said secondary attachment feature includes a fastener receiving structure, said at least one arm is coupled to said
frame using one or the other of said primary attachment feature and said secondary attachment feature, and said fastener receiving
structure is at least partially concealed.

US Pat. No. 9,154,102

SYSTEM FOR COMBINING LOUDNESS MEASUREMENTS IN A SINGLE PLAYBACK MODE

Dolby Laboratories Licens...

1. A method for providing perceptually relevant loudness related values for loudness normalization to a media player, the
method comprising:
providing a first loudness related value associated with an audio signal; wherein the first loudness related value has been
determined according to a first procedure; wherein the first procedure comprises processing the audio signal in accordance
to human loudness perception;

converting the first loudness related value into a second loudness related value using a model comprising a reversible relation;
wherein the second loudness related value is associated with a second procedure for determining loudness related values; wherein
the reversible relation is an approximation of the actual relationship between the first and second loudness related values,
and is given by either:

L2 = A + B·L1
wherein L2 is the second loudness related value measured in dB, L1 is the first loudness related value measured in dB, and either:

−17 ≤ A ≤ −15 and −0.7 ≥ B ≥ −0.9;
or:
−19 ≤ A ≤ −18 and B = −1.0;
or:
L2 = A + B·L1 + C·L1^2
wherein L2 is the second loudness related value measured in dB, L1 is the first loudness related value measured in dB, and A, B, and C are real numbers;

storing the second loudness related value in metadata associated with the audio signal; and
providing the metadata to the media player to enable the media player to render the audio signal using the second loudness
related value.
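Because the claimed relation is reversible, a player holding the second loudness value can recover the first. A sketch of the linear form, with illustrative coefficients (the specific A and B used here are assumptions):

```python
A, B = -18.5, -1.0  # illustrative model coefficients (assumed)

def to_second(l1_db):
    """L2 = A + B*L1 (both in dB)."""
    return A + B * l1_db

def to_first(l2_db):
    """Invert the relation: L1 = (L2 - A) / B, valid because B != 0."""
    return (l2_db - A) / B
```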

US Pat. No. 9,317,561

SCENE CHANGE DETECTION AROUND A SET OF SEED POINTS IN MEDIA DATA

Dolby Laboratories Licens...

1. A method for scene change detection in media data, comprising:
deriving a set of filtered values from the media data;
identifying a plurality of seed time points among time points at which the set of filtered values derived from the media data
reach extremum values;

determining one or more statistical patterns of media features in a plurality of time-wise intervals around the plurality
of seed time points of the media data using one or more types of features extractable from the media data, at least one of
the one or more types of features comprising a type of features that captures structural properties, tonality including harmony
and melody, timbre, rhythm, loudness, stereo mix, or a quantity of sound sources as related to the media data;

detecting, based on the one or more statistical patterns, a plurality of beginning scene change points and a plurality of
ending scene change points in the media data for the plurality of seed time points in the media data;

wherein the method is performed by one or more computing devices.

US Pat. No. 9,386,321

INTERPOLATION OF VIDEO COMPRESSION FRAMES

Dolby Laboratories Licens...

1. An apparatus comprising a video bitstream stored on one or more non-transitory machine-readable media, the video bitstream
characterized by:
data representing predicted and bidirectional predicted picture frames that include picture regions in a compressed format,
wherein a portion of the data that represent at least one picture region in the compressed format within a bidirectional predicted
picture frame comprises:

a signal indicating whether a bidirectional prediction mode for the at least one picture region is used;
a signal indicating whether an interpolative motion vector prediction mode for the at least one picture region based on two
referenceable frames in the video bitstream is used;

a signal indicating whether a pixel interpolation mode using pixel weighting from the two or more referenceable frames in
the video bitstream is used; and

two pixel weight values for the pixel weighting, wherein the two pixel weight values are unequal.

US Pat. No. 9,357,230

BLOCK DISPARITY ESTIMATION AND COMPENSATION ARCHITECTURE

Dolby Laboratories Licens...

1. A method for decoding a video bit stream that adaptively utilizes block motion compensation, the method comprising:
receiving, by a decoder having one or more processors, a video bit stream including a first block partition, a second block
partition, a third block partition, and an explicit signal indicating a mode for the first block partition, wherein the first
block partition is edge adjacent to the second block partition, and wherein the first block partition and the second block
partition have a prediction type of inter-prediction;

determining, by the decoder, whether the mode for the first block partition includes a first mode and whether the mode for
the first block partition includes a second mode;

in response to determining that the mode for the first block partition includes the first mode, utilizing, by the decoder,
motion vector information of the second block partition for performing block motion compensation for the first block partition;
and

in response to determining that the mode for the first block partition includes the second mode, utilizing, by the decoder,
motion vector information of the third block partition for performing block motion compensation for the first block partition
without utilizing the motion vector information of the second block partition.

US Pat. No. 9,373,335

PROCESSING AUDIO OBJECTS IN PRINCIPAL AND SUPPLEMENTARY ENCODED AUDIO SIGNALS

Dolby Laboratories Licens...

1. A method for generating an encoded audio output signal, wherein the method comprises:
receiving a principal encoded signal encoded in the Dolby TrueHD format, the principal encoded signal including encoded data
representing discrete audio content and spatial location for each of one or more principal audio objects;

receiving a supplementary encoded signal that includes encoded data representing discrete audio content and spatial location
for each of one or more supplementary audio objects; and

assembling the encoded data from the principal encoded signal with the encoded data from the supplementary encoded signal
to generate the encoded audio output signal, wherein said assembling comprises either:

adding the encoded data from the supplementary encoded signal to the encoded data from the principal encoded signal, including
by identifying an access unit of the principal encoded signal, expanding the access unit to include space for a new substream
in the principal encoded signal and placing the encoded data from the supplementary encoded signal into the new substream;
or

modifying the principal encoded signal to include the encoded data from the supplementary encoded signal, including by using
control data of the principal encoded signal to locate an existing section of the principal encoded signal, the existing section
being of a size large enough to accommodate the encoded data from the supplementary encoded signal, and by placing the encoded
data from the supplementary encoded signal into the existing section; or

modifying the principal encoded signal to include the encoded data from the supplementary encoded signal, including by using
control data of the principal encoded signal to locate and determine the size of an existing section of the principal encoded
signal, by determining whether the size of the existing section is large enough to accommodate the encoded data from the supplementary
encoded signal, by expanding the existing section if it is not large enough to accommodate the encoded data from the supplementary
encoded signal, and by placing the encoded data from the supplementary encoded signal into the existing section.

US Pat. No. 9,325,976

DISPLAYS, INCLUDING HDR AND 3D, USING BANDPASS FILTERS AND OTHER TECHNIQUES

Dolby Laboratories Licens...

9. A 3D display, comprising:
a backlight comprising a plurality of light sources;
a controller configured to energize the plurality of light sources to produce a low resolution version of a desired image
based on image data; and

a display panel comprising a modulator configured to further modulate the low resolution image to produce the desired image;
and wherein

the backlight comprises a first set of a plurality of primary light sources forming a first channel of a 3D image to be displayed,
and a second set of a plurality of primary light sources forming a second channel of the 3D image to be displayed, and a third
set of a plurality of primary light sources forming a common channel of the 3D image to be displayed;

the first set of a plurality of primary light sources emits light at first wavelengths in the red, green, and blue regions
of the visible spectrum;

the second set of a plurality of primary light sources emits light at second wavelengths in the red, green, and blue regions
of the visible spectrum, the second wavelengths being different than the first wavelengths; and

the third set of a plurality of primary light sources emits light at third wavelengths in the red, green, and blue regions
of the visible spectrum.

US Pat. No. 9,225,951

GENERATING ALTERNATIVE VERSIONS OF IMAGE CONTENT USING HISTOGRAMS

Dolby Laboratories Licens...

1. A method for generating additional versions of video content, the method comprising:
obtaining first histogram data for a first version of the video content adapted to be viewed in a first ambient lighting condition;
obtaining a second version of the video content adapted to be viewed in a second ambient lighting condition; wherein the first
and second ambient lighting conditions are different;

obtaining second histogram data for the second version of the video content;
generating a mapping from the second version of the video content to an additional version of the video content by performing
a number of iterations of a progressive histogram matching algorithm, the number of iterations being fewer than a maximum
number of iterations of the progressive histogram matching algorithm; wherein the number of iterations is determined based
upon a signal from an ambient light sensor representing a third ambient light condition; wherein generating a mapping comprises
determining a histogram transformation which morphs the second histogram data to become more like the first histogram data;
and

applying the mapping to generate the additional version of the video content from the second version of the video content;
wherein the additional version is a recreation of the first version.
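A single iteration of the histogram-matching idea can be sketched as a cumulative-distribution lookup from source bins to reference bins; the progressive algorithm in the claim would repeat a refinement like this a sensor-determined number of times (this simplification is an assumption):

```python
def match_step(src_hist, ref_hist):
    """Map each source bin to the reference bin with the nearest
    cumulative-distribution value (one histogram-matching step)."""
    def cdf(hist):
        total, run, out = float(sum(hist)), 0, []
        for count in hist:
            run += count
            out.append(run / total)
        return out
    src_cdf, ref_cdf = cdf(src_hist), cdf(ref_hist)
    return [min(range(len(ref_cdf)), key=lambda k: abs(ref_cdf[k] - s))
            for s in src_cdf]
```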

US Pat. No. 9,589,571

METHOD AND DEVICE FOR IMPROVING THE RENDERING OF MULTI-CHANNEL AUDIO SIGNALS

Dolby Laboratories Licens...

1. A method for encoding audio data, comprising:
detecting for the audio data an audio data type out of at least three different types, the types comprising a first Higher-Order
Ambisonics (HOA) format, a microphone recording with a given setup of a plurality of microphones and a multichannel audio
stream mixed according to a specific panning;

transforming coefficients of the audio data of a first HOA format based on an inverse Discrete Spherical Harmonics Transform
(iDSHT) to coefficients of a second HOA format based on a determination that the audio data has the first HOA format;

encoding the coefficients of the spatial domain of the second HOA format and auxiliary data that indicate at least metadata
about virtual or real loudspeaker positions and mixing information about the audio data, the mixing information comprising
details of at least one of the first HOA format, the given setup of the plurality of microphones, and said specific panning.

US Pat. No. 9,094,771

METHOD AND SYSTEM FOR UPMIXING AUDIO TO GENERATE 3D AUDIO

Dolby Laboratories Licens...

1. A method for generating 3D output audio comprising N+M full range channels, where N and M are positive integers and the
N+M full range channels are intended to be rendered by speakers including at least two speakers at different distances from
a listener, said method including the steps of:
(a) providing N channel input audio, comprising N full range channels;
(b) upmixing the input audio to generate the 3D output audio, and
(c) providing source depth data indicative of distance from the listener of at least one audio source,
wherein step (b) includes a step of upmixing the N channel input audio to generate the 3D output audio using the source depth
data,

wherein the N channel input audio is a soundtrack of a stereoscopic 3D video program comprising left and right eye frame images,
and step (c) includes generating the source depth data, including by identifying at least one visual image feature determined
by the 3D video program, and generating the source depth data to be indicative of determined depth of each said visual image
feature,

wherein generating the source depth data comprises measuring a disparity of the at least one visual image feature of the left
and right eye frame images, using the disparity to create a visual depth map, and using the visual depth map to generate the
source depth data.
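The disparity-to-depth step can be sketched with the classic pinhole-stereo relation depth = focal·baseline/disparity, applied per feature to build the visual depth map; the pinhole model and parameter names are assumptions, since the claim only requires deriving depth from disparity:

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Pinhole-stereo depth for one image feature, in the baseline's units."""
    if disparity_px == 0:
        return float("inf")  # zero disparity: feature at infinity
    return focal_px * baseline_m / disparity_px
```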

US Pat. No. 9,373,287

REDUCED POWER DISPLAYS

Dolby Laboratories Licens...

1. A backlight for a display, the backlight comprising:
a plurality of groups of light emitters, each group comprising at least one light emitter;
a power supply, the power supply configured to supply power to the groups of light emitters; and
one or more controllers, the controllers configured to receive a new set of image data to render upon the display and control
brightness levels of the light emitters within the groups by applying pulse width modulation (PWM) driving signals to each
light emitter, each PWM driving signal having an initial period, a duty cycle proportional to the brightness level, and a
phase offset that varies between the groups;

wherein the brightness levels depend upon the received new image data and wherein the brightness levels of at least two light
emitters are independently controllable;

wherein the duration of a first PWM cycle of the PWM driving signals for the new image is controllable to be different from
the duration of subsequent PWM cycles for the new image;

wherein the durations of the first and subsequent PWM cycles are derived from image data so as to illuminate a primary modulator
with a low resolution version of an image represented by the image data; and

further wherein the total power of the backlight is configured to be ramped up from zero to a desired value for the first
PWM cycle and fluctuate for subsequent PWM cycles for the new image to be displayed.
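The per-group phase offsets in the claim stagger when each group's PWM pulse is high, which smooths the backlight's total power draw. A sketch of one PWM driving signal (the parameterization is an assumption):

```python
def pwm_high(t, period, duty_cycle, phase_offset):
    """True while a PWM driving signal with the given duty cycle and
    phase offset is high at time t."""
    return ((t - phase_offset) % period) < duty_cycle * period
```

Two groups at 50% duty with phases 0 and period/2 are never high simultaneously, halving the peak supply current relative to unphased drive.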

US Pat. No. 9,374,576

FUSED REGION-BASED VDR PREDICTION

Dolby Laboratories Licens...

1. An image prediction method comprising:
receiving an input image that comprises pixels;
dividing the input image into a plurality of non-overlapping regions;
assigning one of the non-overlapping regions as a current region;
(a) for the current region, predicting first output data, wherein the first output data is predicted with a first prediction
function, input image data specific to the current region, and input prediction parameter data for the first prediction function;

(b) determining a neighbor region that is adjacent to the current region, wherein the neighbor region represents another of
the one or more non-overlapping regions;

defining a border portion of the current region that is proximate to a border of the neighbor region, wherein the border portion
of the current region lies outside the neighbor region;

for the pixels in the border portion, predicting second output data, wherein the second output data is predicted with a second
prediction function, input image data from the border portion and input prediction parameter data from the neighbor region,
wherein the neighbor region comprises previously predicted output prediction values from a previously fused output;

fusing the first output prediction data with the second output data into a fused output; and
predicting a final set of output prediction values from the fused output.

US Pat. No. 9,269,312

RAPID ESTIMATION OF EFFECTIVE ILLUMINANCE PATTERNS FOR PROJECTED LIGHT FIELDS

Dolby Laboratories Licens...

1. A method for estimating an effective luminance pattern representing a distribution of projected light in a display apparatus,
the method comprising:
determining driving values for one or more light sources arranged to project light;
determining an effective luminance pattern for the projected light by:
for each of the one or more light sources determining a contribution to the effective luminance pattern for each of a plurality
of components of a point spread function for the light source; and,

combining the contributions of the components to the effective luminance pattern to yield an estimated effective luminance pattern;
wherein one of the plurality of components consists essentially of a higher-order part of a set of point spread function values
and a second one of the components consists essentially of a lower-order part of the set of point spread function values and
the set of point spread function values comprises 16-bit words and the higher-order and lower-order parts of the set of point
spread function values each comprises 8-bit words.
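The byte-split decomposition in this claim, a 16-bit point spread function handled as separate higher-order and lower-order 8-bit components, can be sketched as follows; the function and parameter names are illustrative, not the patent's:

```python
import numpy as np

def estimate_luminance(driving_values, psf_16bit):
    """Accumulate each light source's contribution from the high- and
    low-order 8-bit parts of a 16-bit point spread function.

    driving_values: per-source drive levels.
    psf_16bit: (num_sources, H, W) uint16 point spread function values.
    """
    high = (psf_16bit >> 8).astype(np.float64)    # higher-order 8-bit part
    low = (psf_16bit & 0xFF).astype(np.float64)   # lower-order 8-bit part
    pattern = np.zeros(psf_16bit.shape[1:])
    for d, h, l in zip(driving_values, high, low):
        # per-component contributions; combined, h * 256 + l == psf value
        pattern += d * (h * 256.0)
        pattern += d * l
    return pattern
```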

US Pat. No. 9,300,714

UPSTREAM SIGNAL PROCESSING FOR CLIENT DEVICES IN A SMALL-CELL WIRELESS NETWORK

Dolby Laboratories Licens...

1. A method of processing media data for quality enhancement processing, the method comprising:
receiving the media data at a network node of a wireless network for upstream signal processing on one or both of audio data
and video data, the network node having a media rendering device wirelessly coupled thereto, the network node including or
coupled to processing hardware that is external to the media rendering device, the media rendering device being battery operated,
computational-power-limited, or both battery operated and computational-power-limited;

receiving at the network node to which the media rendering device is wirelessly coupled, one or more environmental quantities
determined from one or more sensors located in the vicinity of but not in or on the wireless media rendering device, the environmental
quantities related to at least one or both of acoustic noise and lighting in the environment in the vicinity of the media
rendering device;

data processing of the received media data at the network node using the processing hardware included in or coupled to the
network node, the data processing being based on the one or more received environmental quantities, the data processing generating
processed data; and

transmitting wirelessly the processed data to the media rendering device for rendering by the media rendering device, or for
further processing and rendering by the media rendering device, thereby limiting the use of electric and processing power
at the media rendering device,

wherein the data processing is carried out on one or both of the audio data included in the media data and the video data
included in the media data,

wherein when performing the data processing includes performing the data processing on the audio data included in the media
data, the one or more environmental quantities include at least one quantity indicative of an acoustic profile of acoustic
noise in the environment of the media rendering device, including a spectral estimate of the acoustic noise sensed in the
environment of the media rendering device, the processing the media data for quality enhancement includes the data processing
of the audio data by the processing hardware, and comprises generating modification parameters for modifying the audio data,
the modifying generating quality-enhanced audio data, and the quality enhancement of the audio includes noise compensation,
and

wherein when performing the data processing includes performing the data processing on the video data included in the media
data, the one or more environmental quantities include one or more parameters indicative of lighting in the environment of
the media rendering device, the media rendering device includes a flat panel display device that has a display panel and location
dependent backlighting elements each backlighting element being configured to provide a variable amount of back-light to a
corresponding region of the display panel according to modulation data for the corresponding region while a scene is being
displayed, the processing the media data for quality enhancement includes the data processing of the video data by the processing
hardware and comprises generating the modulation data according to at least one of the one or more parameters indicative of
lighting in the environment of the media rendering device, the generated modulation data is further configured to modulate
the intensity of the backlighting elements according to one or more image properties in the corresponding region of the scene
being displayed, the one or more image
properties including a contrast-related property, a brightness-related property, or both a contrast-related property and a
brightness-related property,
such that at least some of the processing of the media data for quality enhancement is carried out by the processing hardware
included in or coupled to the network node.
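The audio branch of this claim (noise compensation driven by a sensed noise spectrum) might look roughly like this per-band gain rule; the specific gain formula is an assumption of the sketch, not the patent's:

```python
import numpy as np

def noise_compensation_gains(signal_spectrum, noise_estimate, max_gain_db=12.0):
    """Per-band linear gains lifting the audio above the sensed noise
    spectrum: more boost where the signal-to-noise ratio is poor,
    capped at max_gain_db, no boost where SNR is already good."""
    snr = signal_spectrum / np.maximum(noise_estimate, 1e-12)
    gain_db = np.clip(10.0 - 10.0 * np.log10(np.maximum(snr, 1e-12)),
                      0.0, max_gain_db)
    return 10.0 ** (gain_db / 20.0)
```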

US Pat. No. 9,291,830

MULTIVIEW PROJECTOR SYSTEM

Dolby Laboratories Licens...

1. A projector system comprising:
a reflective screen;
an imaging device operable to direct light from a light source toward the screen so that the light is reflected by the screen
to a viewing area;

a light steering element positioned in a light path between the imaging device and the screen, the steering element operable
to steerably direct the light from the light source to selected locations on the screen; and

a resolution management system operable to control the light steering element so as to adjust at least one of spatial, temporal,
color and viewpoint resolutions of images displayed by the imaging device,

wherein the imaging device is operable to display images for multiple viewpoints, and for each image the steering element
is operable to scan the light to locations on the screen such that the light is reflected by the screen to a corresponding
viewpoint location.

US Pat. No. 9,224,363

METHOD AND APPARATUS FOR IMAGE DATA TRANSFORMATION

Dolby Laboratories Licens...

1. An apparatus comprising one or more processors and at least one memory storing instructions that are executable by the
one or more processors, the apparatus further comprising:
an image data input device, implemented at least partially in hardware, that receives image metadata for a color-timing display
and image data comprising pixel values;

a pixel coordinate mapping device, implemented at least partially in hardware, that transforms the image data into transformed
image data according to a parameterized transfer function characterized by a plurality of anchor points and a free parameter,
the parameterized transfer function having a mid-range slope controlled by the free parameter, wherein transformations at
the anchor points are not affected by the free parameter; wherein the anchor points comprise black level and white level anchor
points and a mid-tone anchor point, the black level anchor point corresponding to a transformation from a black level of the
color-timing display used in color-timing or approving the image data to a black level of a target display used for display
of the transformed image data, the white level anchor point corresponding to a transformation from a white level of the color-timing
display to a white level of the target display; wherein the mid-tone anchor point affects a target brightness of the image
data on the target display; wherein the black level of the color-timing display differs from the black level of the target
display and/or wherein the white level of the color-timing display differs from the white level of the target display;

wherein at least one of the black level of the color-timing display or the white level of the color-timing display is a value
specified by the image metadata;

an image data output device, implemented at least partially in hardware, that outputs the transformed image data.
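A minimal sketch of such a parameterized transfer function, assuming cubic Hermite segments so that the free mid-range slope parameter never moves the three anchor points (the segment construction and end-slope choice are assumptions of this sketch):

```python
def hermite(x, x0, y0, m0, x1, y1, m1):
    """Cubic Hermite segment between (x0, y0) and (x1, y1) with end slopes."""
    h = x1 - x0
    t = (x - x0) / h
    return ((2*t**3 - 3*t**2 + 1) * y0 + (t**3 - 2*t**2 + t) * h * m0
            + (-2*t**3 + 3*t**2) * y1 + (t**3 - t**2) * h * m1)

def transfer(x, black, mid, white, mid_slope):
    """black/mid/white: (input_level, output_level) anchor points;
    mid_slope: the free parameter controlling the mid-range slope.
    The curve interpolates all three anchors for any mid_slope."""
    (bx, by), (mx, my), (wx, wy) = black, mid, white
    # end slopes taken as secant slopes (an assumption of this sketch)
    if x <= mx:
        return hermite(x, bx, by, (my - by) / (mx - bx), mx, my, mid_slope)
    return hermite(x, mx, my, mid_slope, wx, wy, (wy - my) / (wx - mx))
```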

US Pat. No. 9,083,979

INTERPOLATION OF VIDEO COMPRESSION FRAMES

Dolby Laboratories Licens...

1. A non-transitory computer-readable medium storing a bitstream, the bitstream comprising:
data in a compressed format corresponding to a plurality of picture regions of predicted and bidirectional predicted frames,
wherein the data in the compressed format for at least one picture region within a bidirectional predicted frame comprises:

a flag indicating an interpolative motion vector prediction mode for the at least one picture region based on two or more
referenceable frames;

a signal indicating a pixel interpolation mode using an unequal pixel weighting from the two or more referenceable frames;
and

two or more pixel weight values for the unequal pixel weighting.
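The unequal pixel weighting from two referenceable frames reduces, in the simplest reading, to a weighted average; a sketch with illustrative names:

```python
import numpy as np

def weighted_biprediction(ref_a, ref_b, w_a, w_b):
    """Pixel interpolation with unequal weights from two reference
    frames: prediction = w_a * ref_a + w_b * ref_b, rounded to 8 bits."""
    pred = w_a * ref_a.astype(np.float64) + w_b * ref_b.astype(np.float64)
    return np.clip(np.rint(pred), 0, 255).astype(np.uint8)
```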

US Pat. No. 9,078,008

ADAPTIVE INTER-LAYER INTERPOLATION FILTERS FOR MULTI-LAYERED VIDEO DELIVERY

Dolby Laboratories Licens...

1. A multi-layered video system adapted to receive an input video signal, the input video signal having an original base layer
video signal and at least one original enhancement layer video signal, the multi-layered video system comprising:
a base layer, comprising a base layer video encoder, wherein the base layer video encoder is adapted to receive the original
base layer video signal;

at least one enhancement layer, associated with the base layer, the at least one enhancement layer comprising an enhancement
layer video encoder, wherein each enhancement layer video encoder is adapted to receive one or more of the at least one original
enhancement layer video signal; and

an encoder processing module comprising at least one adaptive filter to filter an output of the base layer video encoder to
form a processed encoder output and input the processed encoder output into at least one of the enhancement layer video encoders
as an enhancement layer estimate signal,

wherein:
the original base layer video signal and the at least one enhancement layer video signal are adapted to be segmented into
a plurality of partitions,

the at least one adaptive filter is selected for applying to a full set or subset of partitions in the plurality of partitions,
the at least one adaptive filter comprises filter coefficients, wherein the filter coefficients of the at least one adaptive
filter are adjusted, prior to the applying of the at least one adaptive filter, based on at least one of image information
in each partition in the full set or subset of partitions and image information in one or more partitions neighboring each
partition in the full set or subset of partitions,

the neighboring partitions comprise partitions from temporal neighbors of the full set or subset of partitions, partitions
from spatial neighbors of the full set or subset of partitions, and partitions from inter-layer neighbors of the full set
or subset of partitions, and
the image information in the inter-layer neighbors comprises texture and motion information from the at least one enhancement
layer associated with the full set or subset of partitions, wherein the enhancement layer further comprises temporal predictors
of a current enhancement layer video based on a previously coded enhancement layer video, wherein a pre-analysis is performed
to identify the enhancement video-blocks likely to be predicted using temporal predictors, and wherein the adaptive filters
are calculated by eliminating from calculation, at least in part, the enhancement video blocks likely to be predicted using
temporal predictors.

US Pat. No. 9,369,712

BUFFERED ADAPTIVE FILTERS

Dolby Laboratories Licens...

1. A method for encoding or decoding a current picture or slice of a video signal, the method comprising the steps of:
establishing at least one filter buffer in one or more of a video encoder or a video decoder;
buffering a plurality of adaptive filters in the at least one filter buffer, wherein the plurality of adaptive filters are
used to perform one or more of motion interpolation, motion estimation, motion compensation, motion estimation interpolation,
motion compensation interpolation, motion prediction interpolation or motion prediction of an encoding or decoding step;

encoding or decoding the current picture or slice with at least two (2) reference pictures, wherein one or more of the adaptive
filters for a first of the at least two reference pictures are signaled independently from at least one of the adaptive filters
for at least a second of the at least two reference pictures; and

based on the encoding or decoding of the current picture or slice, tracking a usage count of each filter of the plurality
of filters; and

based on the tracking, managing the at least one filter buffer, wherein the managing step comprises the step of:
arranging an order with which the plurality of adaptive filters are buffered in the at least one filter buffer wherein the
order is arranged according to the usage count of each filter in the at least one filter buffer, and by sending an ordering
syntax in a video bitstream from the encoder to the decoder, and by periodically re-ordering the plurality of adaptive filters
in the at least one filter buffer dependent upon the usage count of each filter in the at least one filter buffer and upon
the ordering syntax.
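The usage-count-based buffer management can be sketched as a least-frequently-used store; the eviction policy and tie-breaking below are assumptions of the sketch, not the claim:

```python
class FilterBuffer:
    """Usage-count-ordered adaptive filter buffer. Filters are any
    hashable objects here; a real codec would store coefficient sets."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.counts = {}          # filter -> usage count

    def use(self, filt):
        """Track one use of a filter; evict the least-used on overflow."""
        self.counts[filt] = self.counts.get(filt, 0) + 1
        if len(self.counts) > self.capacity:
            victim = min(self.counts, key=self.counts.get)
            del self.counts[victim]

    def ordering(self):
        """Re-ordered buffer contents: most-used filters first."""
        return sorted(self.counts, key=self.counts.get, reverse=True)
```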

US Pat. No. 9,262,951

STEREOSCOPIC 3D DISPLAY SYSTEMS AND METHODS FOR IMPROVED RENDERING AND VISUALIZATION

Dolby Laboratories Licens...

1. A method of three-dimensional imaging comprising:
determining, from image data, a first left-eye frame corresponding to a first media time, wherein the first left-eye frame
is among a sequence of left-eye frames respectively corresponding to a sequence of media times;

determining, from the image data, a first right-eye frame corresponding to the first media time, wherein the first right-eye
frame is among a sequence of right-eye frames respectively corresponding to the sequence of media times;

creating a first composite frame of a first type, the first composite frame of the first type comprising an upper-left portion
of left pixel values from an upper-left portion of the first left-eye frame and a lower-right portion of right pixel values
from a lower-right portion of the first right-eye frame;

wherein the first composite frame of the first type comprises scan lines each of which comprises both left and right pixel
values;

outputting the first composite frame of the first type to a display area;
determining, from the image data, a second left-eye frame corresponding to a second media time, wherein the second left-eye
frame is among the sequence of left-eye frames, and wherein the second media time immediately follows the first media time;

creating a first composite frame of a second type, the first composite frame of the second type comprising an upper-left portion
of right pixel values from an upper-left portion of the first right-eye frame and a lower-right portion of left pixel values
from a lower-right portion of the second left-eye frame;

wherein the first composite frame of the second type comprises scan lines each of which comprises both left and right pixel
values; and

outputting the first composite frame of the second type, subsequent in time to the first composite frame of the first type,
to the display area;

wherein the method is performed by one or more computing devices.
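The composite frames of this claim, where an upper-left portion comes from one eye and a lower-right portion from the other while every scan line carries pixels from both eyes, suggest a diagonal split; the exact boundary rule below is an assumption of the sketch:

```python
import numpy as np

def composite_type1(left, right):
    """First-type composite frame: upper-left portion from the left-eye
    frame, lower-right portion from the right-eye frame, split along a
    diagonal kept strictly inside each line so every scan line carries
    both left and right pixel values."""
    h, w = left.shape[:2]
    out = right.copy()
    for r in range(h):
        b = max(1, min(w - 1, round(w * (h - r) / (h + 1))))
        out[r, :b] = left[r, :b]
    return out
```

The second-type composite (upper-left from the right eye, lower-right from the *next* left-eye frame) would swap the arguments with the time-shifted frames.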

US Pat. No. 9,450,551

AUDIO CONTROL USING AUDITORY EVENT DETECTION

Dolby Laboratories Licens...

1. A method for processing an audio signal in an audio processing apparatus, the method comprising:
monitoring a characteristic of the audio signal;
identifying a change in the characteristic;
establishing an auditory event boundary to identify the change in the characteristic, wherein an audio portion between consecutive
auditory event boundaries constitutes an auditory event; and

applying a modification to the audio signal based in part on an occurrence of an auditory event,
wherein the audio processing apparatus is implemented at least in part in hardware.
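A toy version of the event-boundary step, assuming the monitored characteristic is a per-block level in dB and a fixed change threshold (both assumptions of the sketch):

```python
def auditory_event_boundaries(levels, threshold=3.0):
    """Mark an auditory event boundary wherever the monitored
    characteristic (per-block level in dB) changes by more than
    `threshold`; the audio between consecutive boundaries is an event."""
    return [i for i in range(1, len(levels))
            if abs(levels[i] - levels[i - 1]) > threshold]
```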

US Pat. No. 9,210,322

3D CAMERAS FOR HDR

Dolby Laboratories Licens...

1. A method for creating one or more high dynamic range (HDR) images using a stereoscopic camera, the method comprising:
capturing a scene using a stereoscopic camera with a synopter;
generating from the captured scene a first input image and a second input image, wherein the second input image uses a different
exposure than the first input image; and

combining said two input images into a single output image.
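Combining the two differently exposed captures into one output image can be sketched as exposure-aligned averaging with saturated pixels down-weighted; the weighting rule is an assumption of the sketch, not the claim:

```python
import numpy as np

def merge_exposures(img_short, img_long, exposure_ratio):
    """Merge two differently exposed captures (e.g. from a stereoscopic
    camera with a synopter, so both see the same view) into one image.
    The longer exposure is scaled back to the short exposure's radiance
    scale, and its clipped pixels are ignored."""
    short_lin = img_short.astype(np.float64)
    long_lin = img_long.astype(np.float64) / exposure_ratio  # align scales
    w_long = np.where(img_long >= 250, 0.0, 1.0)  # drop saturated pixels
    return (short_lin + w_long * long_lin) / (1.0 + w_long)
```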

US Pat. No. 9,076,224

IMAGE PROCESSING FOR HDR IMAGES

Dolby Laboratories Licens...

1. A method to encode a high dynamic range (HDR) image with a processor, the method comprising:
computing a histogram of logarithmic (log) luminance pixel values in the HDR image;
generating a tone mapped curve based on the histogram;
computing a log global tone-mapped luminance image based on the log luminance pixel values in the HDR image and the tone mapped
curve;

computing a downscaled log global tone-mapped luminance image based on the log global tone-mapped luminance image;
computing a log ratio image based on the log luminance pixel values in the HDR image and the log global tone-mapped luminance
image;

performing multi-scale resolution filtering to the log ratio image to generate a log multi-scale ratio image;
generating a second log tone-mapped image based on the log multi-scale ratio image and the log luminance pixel values in the
HDR image;

normalizing the second log tone-mapped image to generate an output tone-mapped image based on the downscaled log global tone-mapped
luminance image and the second log tone-mapped image;

generating a second ratio image based on the input HDR image and the output tone-mapped image; and
quantizing the second ratio image to generate a quantized second ratio image.
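The first two steps, a log-luminance histogram driving a global tone curve, admit a classic cumulative-histogram construction; the claim does not specify the curve, so this is only one plausible reading:

```python
import numpy as np

def global_tone_curve(log_lum, n_bins=64):
    """Histogram-based global tone mapping in the log domain: the
    cumulative histogram allocates output range to populated luminance
    bins, so dense regions of the histogram get more output levels."""
    hist, edges = np.histogram(log_lum, bins=n_bins)
    cdf = np.cumsum(hist).astype(np.float64)
    cdf /= cdf[-1]                                # normalize to [0, 1]
    centers = 0.5 * (edges[:-1] + edges[1:])
    # tone-mapped log luminance per pixel, interpolated from the CDF
    return np.interp(log_lum, centers, cdf)
```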

US Pat. No. 9,075,806

ALIGNMENT AND RE-ASSOCIATION OF METADATA FOR MEDIA STREAMS WITHIN A COMPUTING DEVICE

DOLBY LABORATORIES LICENS...

1. A method for re-associating dynamic metadata with media data:
creating, with a first media processing stage, binding information comprising dynamic metadata and a time relationship between
the dynamic metadata and media data, the binding information being derived from the media data;

while the first media processing stage delivers the media data to a second media processing stage in a first data path, passing,
by the first media processing stage, the binding information to the second media processing stage in a second data path; and

re-associating, with the second media processing stage, the dynamic metadata and the media data using the binding information;
wherein creating binding information includes:
deriving, with the first media processing stage, one or more fingerprints from a first version of the media data;
creating the time relationship between the dynamic metadata and the media data as one or more time correspondences between
one or more units of the dynamic metadata and the one or more fingerprints derived from the first version of the media data;

storing the one or more fingerprints in the binding information along with the dynamic metadata and the time relationship;
and

wherein re-associating the dynamic metadata and the media data includes:
regenerating, with the second media processing stage, one or more second fingerprints from a second version of the media data
delivered in the first data path;

time aligning the one or more fingerprints received in the second data path with the one or more second fingerprints regenerated
with the second media processing stage;

wherein the method is performed by one or more computing devices.
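The time-alignment step can be sketched as finding the offset where the binding-path fingerprints best match the regenerated ones (a brute-force exact match; real systems use robust fingerprint distances):

```python
def align_offset(fp_binding, fp_regen):
    """Find the time offset aligning the fingerprints received in the
    second data path with fingerprints regenerated from the media
    delivered in the first data path."""
    best, best_score = 0, -1
    for off in range(len(fp_regen) - len(fp_binding) + 1):
        score = sum(a == b for a, b in zip(fp_binding, fp_regen[off:]))
        if score > best_score:
            best, best_score = off, score
    return best
```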

US Pat. No. 9,299,167

IMAGE PROCESSING METHODS AND APPARATUS USING LOCALIZED GAMUT DEFINITIONS

Dolby Laboratories Licens...

1. A method of encoding image data for transmission to a downstream device, the method comprising:
receiving image data, the image data having a format and a content gamut;
determining one or more gamut characteristics of the image data;
based at least in part on the gamut characteristics, determining a localized gamut definition for the image data; and
applying a gamut transform to the image data wherein color coordinates specified by the image data are mapped to corresponding
color coordinates of the localized gamut definition; and wherein

a gamut of the image data having the format is a range of colors capable of being represented by the format;
the content gamut of the image data is a range of colors present in the image data; and
the localized gamut definition defines fewer color coordinate values than the gamut of the image data having the format.

US Pat. No. 9,282,343

MULTIVIEW AND BITDEPTH SCALABLE VIDEO DELIVERY

Dolby Laboratories Licens...

1. A method of encoding input video data into bitstreams, the input video data comprising a plurality of video data categories
being two or more views or being two views and a depth map indicating multi-view depth information, the method comprising:
providing a first basic dynamic range (BDR) layer, the first BDR layer comprising a frame-compatible BDR representation of
the plurality of video data categories from the input video data; and

providing a layer grouping, the layer grouping comprising at least one second higher dynamic range (HDR) layer and one third
HDR layer,

the second HDR layer comprising a second frame-compatible HDR representation of the plurality of video data categories from
the input video data, and

the third HDR layer comprising a third frame-compatible HDR representation of the plurality of video data categories from
the input video data,

the third frame-compatible HDR representation being complementary with respect to the second frame-compatible HDR representation;
providing a further layer grouping, the further layer grouping comprising at least one fourth BDR layer comprising a fourth
frame-compatible BDR representation of the plurality of video data categories from the input video data, the fourth frame-compatible
BDR representation being complementary with respect to the first frame-compatible BDR representation;

encoding the first BDR layer to generate a first layer bitstream;
generating a first layer filtered reconstructed image based on the first layer bitstream in a first layer reference picture
buffer;

encoding the fourth BDR layer to generate a fourth layer bitstream, wherein the fourth layer bitstream is generated by considering
at least one selected from the group consisting of: a difference between the fourth frame-compatible BDR representation and
the first layer filtered reconstructed image, inter prediction of temporally decoded pictures of the fourth BDR layer, and
intra prediction of the fourth BDR layer;

generating a fourth layer filtered reconstructed image based on the fourth layer bitstream in a fourth layer reference picture
buffer;

encoding the second HDR layer to generate a second layer bitstream, wherein the second layer bitstream is generated by considering
at least one selected from the group consisting of: a difference between the second frame-compatible HDR representation and
the first layer filtered reconstructed image, inter prediction of temporally decoded pictures of the second HDR layer, and
intra prediction of the second HDR layer;

generating a second layer filtered reconstructed image based on the second layer bitstream in a second layer reference picture
buffer; and

encoding the third HDR layer to generate a third layer bitstream, wherein the third layer bitstream is generated by considering
at least one selected from the group consisting of: a difference between the third frame-compatible HDR representation and
the second layer filtered reconstructed image, inter prediction of temporally decoded pictures of the third HDR layer, intra
prediction of the third HDR layer, and a difference between the third frame-compatible HDR representation, the second layer
filtered reconstructed image, and the fourth layer filtered reconstructed image;

wherein the first BDR layer is a base layer, and the fourth BDR layer, the second HDR layer, and the third HDR layer are enhancement
layers; and

wherein the method further comprises:
reference processing one or more of the first layer filtered reconstructed image, the fourth layer filtered reconstructed
image, and the second layer filtered reconstructed image with a plurality of reference processing units,

the reference processing comprising at least one of de-multiplexing, up-sampling, de-interlacing, frequency filtering, and
interpolating the filtered reconstructed images,

wherein reference processing is performed on reference pictures at the base layer picture buffer for enhancing inter-layer
prediction for one or more of the enhancement layers and on reference pictures in the reference picture buffers of higher-priority
enhancement layers belonging to a same layer grouping having at least two enhancement layers for enhancing inter-layer prediction
for one or more of the enhancement layers in that same layer grouping.

US Pat. No. 9,299,317

LOCAL MULTISCALE TONE-MAPPING OPERATOR

Dolby Laboratories Licens...

1. A method comprising:
converting luminance values in an input high dynamic range (HDR) image into a logarithmic domain;
applying a global ratio operator to the luminance values in the logarithmic domain derived from the input high dynamic range
(HDR) image to generate a high resolution gray scale ratio image in the logarithmic domain comprising luminance ratio values;

generating, based at least in part on the high resolution gray scale ratio image in the logarithmic domain, at least two different
gray scale ratio images, merging the at least two different gray scale ratio images to generate a local multiscale gray scale
ratio image that comprises a weighted combination of the at least two different gray scale ratio images, each being of a different
spatial resolution level;

wherein the local multiscale gray scale ratio image is obtained using recursive processing;
using ratio values in the local multiscale gray scale ratio image with the luminance values in the logarithmic domain derived
from the input HDR image to generate a tone-mapped gray scale image;

wherein the tone-mapped gray scale image comprises a fixed-point grayscale image with the same spatial dimensions as the spatial
dimensions of the input HDR image; and

computing a reference maximum over the tone-mapped gray scale image such that a percentage of pixels that lie outside a color
gamut are not more than a threshold.
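A two-level stand-in for the multiscale merge: the fine scale is the log-domain ratio image itself, the coarse scale a cheap blur of it, combined with a weight. The blur kernel and weighting are assumptions of the sketch; the claim's recursion would repeat this across more levels:

```python
import numpy as np

def multiscale_merge(ratio_log, weight_coarse=0.5):
    """Weighted combination of two spatial-resolution levels of a
    log-domain gray scale ratio image."""
    coarse = ratio_log.copy()
    # separable 1-2-1 blur as a cheap lower-resolution level
    for axis in (0, 1):
        coarse = (np.roll(coarse, 1, axis) + 2 * coarse
                  + np.roll(coarse, -1, axis)) / 4.0
    return weight_coarse * coarse + (1.0 - weight_coarse) * ratio_log
```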

US Pat. No. 9,275,605

CONTENT METADATA ENHANCEMENT OF HIGH DYNAMIC RANGE IMAGES

Dolby Laboratories Licens...

1. A method for distributing image data, the method comprising:
determining a range of the image data;
mapping the image data to a reduced bit-depth format to produce a lower bit-depth representation of the image data with a
mapping such that a ratio of a range of the lower bit depth representation to a maximum range of the lower bit depth representation
is greater than a ratio of the range of the image data to a maximum range of the image data;

generating metadata characterizing the mapping;
associating the metadata with the lower bit depth representation; and
identifying a portion of the range of the image data containing greater tone detail and generating the mapping such that the
identified portion maps to a portion of the lower bit depth representation that occupies a larger proportion of the maximum
range of the lower bit depth representation than the proportion of the range of the image data occupied by the identified
portion.
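The mapping property claimed, with the detail-rich portion of the input range receiving a disproportionate share of the output codes, can be realized with a piecewise-linear curve whose breakpoints double as the metadata characterizing the mapping. The names and three-segment form are assumptions of this sketch:

```python
import numpy as np

def map_to_low_bitdepth(img, lo, hi, detail_lo, detail_hi, detail_share=0.6):
    """Map image range [lo, hi] onto 8 bits so the detail-rich sub-range
    [detail_lo, detail_hi] occupies `detail_share` of the output codes.
    Returns the 8-bit image and the breakpoints as mapping metadata."""
    x = np.clip(img, lo, hi).astype(np.float64)
    # share of output codes spent below the detail region
    f0 = (1 - detail_share) * (detail_lo - lo) / ((hi - lo)
                                                  - (detail_hi - detail_lo))
    f1 = f0 + detail_share
    xp = [lo, detail_lo, detail_hi, hi]
    fp = [0.0, 255 * f0, 255 * f1, 255.0]
    mapped = np.rint(np.interp(x, xp, fp)).astype(np.uint8)
    return mapped, (xp, fp)
```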

US Pat. No. 9,191,682

VIDEO CODECS WITH INTEGRATED GAMUT MANAGEMENT

Dolby Laboratories Licens...

1. An image decoder comprising:
a motion compensation stage;
a gamut transformation unit upstream from the motion compensation stage, the gamut transformation unit configured to modify
pixel values in image data being decoded;

a frequency domain to spatial domain converter, wherein the gamut transformation unit is configured to modify pixel values
in regions outputted by the frequency domain to spatial domain converter; and

first and second parallel decoding pipelines upstream from the motion compensation stage wherein the gamut transformation
unit is configured to modify values in regions in the second decoding pipeline,

wherein the second decoding pipeline is connected to decode a difference signal representing a difference between a first
video signal processed by the first decoding pipeline and a second video signal and the image decoder comprises a combining
stage configured to receive and combine the difference signal and the first video signal to yield the second video signal.

US Pat. No. 9,460,726

METHOD AND DEVICE FOR DECODING AN AUDIO SOUNDFIELD REPRESENTATION FOR AUDIO PLAYBACK

Dolby Laboratories Licens...

1. A method for decoding an audio soundfield representation for audio playback, comprising:
calculating, for each of a plurality of loudspeakers, a function using a method based on the positions of the loudspeakers
and a plurality of source directions;

calculating a mode matrix Ψ from the source directions;

calculating a pseudo-inverse mode matrix Ψ⁺ of the mode matrix Ψ; and

decoding the audio soundfield representation, wherein the decoding is based on a decode matrix that is obtained from at least
the function and the pseudo-inverse mode matrix Ψ⁺.
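The mode-matrix pseudo-inverse step can be sketched in 2-D, with circular harmonics standing in for the spherical harmonics of the patent (a simplification of this sketch); the pseudo-inverse and the direction-to-loudspeaker mixing function play the roles named in the claim:

```python
import numpy as np

def mode_matrix_2d(source_angles, order):
    """Circular-harmonic mode matrix: one column of harmonics per
    source direction, shape (num_coeffs, num_sources)."""
    cols = []
    for phi in source_angles:
        col = [1.0]
        for n in range(1, order + 1):
            col += [np.cos(n * phi), np.sin(n * phi)]
        cols.append(col)
    return np.array(cols).T

def decode(ambisonics_coeffs, source_angles, order, mix):
    """The pseudo-inverse of the mode matrix recovers per-direction
    signals; `mix` maps source directions to loudspeaker feeds."""
    psi = mode_matrix_2d(source_angles, order)
    psi_pinv = np.linalg.pinv(psi)            # pseudo-inverse mode matrix
    return mix @ (psi_pinv @ ambisonics_coeffs)
```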

US Pat. No. 9,420,372

METHOD AND APPARATUS FOR PROCESSING SIGNALS OF A SPHERICAL MICROPHONE ARRAY ON A RIGID SPHERE USED FOR GENERATING AN AMBISONICS REPRESENTATION OF THE SOUND FIELD

Dolby Laboratories Licens...

1. A method for processing microphone capsule signals of a spherical microphone array on a rigid sphere, said method comprising:
converting said microphone capsule signals representing a pressure on the surface of said microphone array to a spherical
harmonics or Ambisonics representation Anm(t);

computing per wave number k an estimation of the time-variant signal-to-noise ratio SNR(k) of said microphone capsule signals,
using the average source power |P0(k)|² of the plane wave recorded from said microphone array and the corresponding noise power |Pnoise(k)|² representing the spatially uncorrelated noise produced by analog processing in said microphone array;

computing per wave number k the average spatial signal power at the point of origin for a diffuse sound field, using reference,
aliasing and noise signal power components, and forming the frequency response of an equalization filter from the square root
of the fraction of a given reference power and said average spatial signal power at the point of origin,

and multiplying per wave number k said frequency response of said equalization filter by a transfer function, for each order
n at discrete finite wave numbers k, of a noise minimizing filter derived from said estimation of the time-variant signal-to-noise
ratio estimation SNR(k), and by an inverse transfer function of said microphone array, in order to get an adapted transfer
function Fn,array(k);

applying said adapted transfer function Fn,array(k) to said spherical harmonics or Ambisonics representation Anm(t) using a linear filter processing, resulting in adapted directional time domain coefficients dnm(t), wherein n denotes the Ambisonics order and index n runs from 0 to a finite order and m denotes the degree and index m
runs from −n to n for each index n.

US Pat. No. 9,271,011

ENCODING, DECODING, AND REPRESENTING HIGH DYNAMIC RANGE IMAGES

Dolby Laboratories Licens...

1. A method to decode a high dynamic range (HDR) image with a processor, the method comprising:
parsing an image file to generate a base image and HDR reconstruction data, wherein the HDR reconstruction data comprises
a quantized luma ratio image and one or more sets of quantized residual chroma values;

extracting quantization parameters from the HDR reconstruction data;
generating a dequantized luma ratio image and one or more sets of dequantized residual chroma values based on the quantization
parameters, the quantized luma ratio image, and the one or more sets of quantized residual chroma values;

linearizing the dequantized luma ratio image to generate a linearized luma ratio image; and
generating an output HDR image based on the one or more sets of dequantized residual chroma values, the linearized luma ratio
image, and the base image.
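The decode path above (dequantize, linearize the luma ratio, recombine with the base image) can be sketched per pixel. The log2 coding of the luma ratio and the linear chroma quantizer are assumptions for illustration; the patent does not fix these forms here.

```python
def decode_hdr(base, q_luma_ratio, q_chroma, luma_step, chroma_step):
    """Sketch of the HDR decode path: dequantize, linearize the luma ratio
    (log2 coding assumed), and recombine with the base image per pixel."""
    out = []
    for b, ql, qc in zip(base, q_luma_ratio, q_chroma):
        ratio = 2.0 ** (ql * luma_step)   # dequantize + linearize the luma ratio
        chroma = qc * chroma_step         # dequantize the residual chroma value
        out.append(b * ratio + chroma)    # reconstruct the output HDR pixel
    return out
```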

US Pat. No. 9,667,964

REDUCED COMPLEXITY MOTION COMPENSATED TEMPORAL PROCESSING

Dolby Laboratories Licens...

1. A method, implemented by a video coder including a processor, for motion analysis of a video signal, the method comprising
steps of:
receiving, by the video coder, the video signal, wherein the video signal comprises at a selected time at least one of the
following input pictures: a current picture, one or more past pictures, and one or more future pictures;

temporal down sampling, by the video coder, the video signal to generate one or more sampled input pictures at a temporal
resolution;

spatial down sampling, by the video coder, the input pictures at a selected spatial resolution;
selecting, by the video coder, a number of reference pictures to generate one or more reference pictures for a sampled input
picture;

calculating, by the processor, motion parameters based on a number of the reference pictures or a combination of the one or
more reference pictures and a selection of input pictures;

determining, by the processor, if the calculated motion parameters have a desired motion accuracy;
if the calculated motion parameters do not have the desired motion accuracy based on a prediction error threshold, repeating
the temporal down sampling, the spatial down sampling, and the selecting steps by using a binary search algorithm to reselect
the one or more reference pictures, calculating motion parameters, and determining if the calculated motion parameters have
the desired motion accuracy, wherein the steps are repeated until the desired motion accuracy is reached;

designating, by the processor, the calculated motion parameters with the desired motion accuracy as the final calculated motion
parameters;

predicting, by the processor, one or more selected sampled input pictures based on the final calculated motion parameters,
wherein motion analysis comprises the predictions of the one or more selected sampled input pictures; and
outputting, from the video coder, an output video signal of output pictures based on the predicting.
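The accuracy-driven refinement loop above can be sketched as a binary search over candidate reference-picture counts. The selection policy and the assumption that prediction error falls as more references are used are illustrative, not taken from the patent.

```python
def refine_motion(candidates, predict_error, threshold):
    """Binary-search refinement sketch: narrow the candidate range of
    reference-picture counts until the prediction error meets the
    threshold (assumes error is monotonic over the candidates)."""
    lo, hi = 0, len(candidates) - 1
    best = candidates[hi]
    while lo <= hi:
        mid = (lo + hi) // 2
        if predict_error(candidates[mid]) <= threshold:
            best = candidates[mid]     # accurate enough: remember it
            hi = mid - 1               # try a smaller candidate
        else:
            lo = mid + 1               # not accurate enough: grow the candidate
    return best
```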

US Pat. No. 9,136,881

AUDIO STREAM MIXING WITH DIALOG LEVEL NORMALIZATION

Dolby Laboratories Licens...

1. A method for mixing two input audio signals into a single, mixed audio signal while maintaining a perceived sound level
of the mixed audio signal, the method comprising:
receiving a main input audio signal;
receiving an associated input audio signal; wherein the associated input audio signal is coupled with the main input audio
signal;

receiving mixing metadata, which contains scaling information for scaling the main input audio signal and which specifies
how the main input audio signal and the associated input audio signal should be mixed, in order to generate a mixed audio
signal at the perceived sound level; wherein the scaling information from the mixing metadata comprises a metadata scale factor
for the main input audio signal, for scaling the main input audio signal relative to the associated input audio signal;

receiving a mixing balance input, which denotes an adjustable balance between the main input audio signal and the associated
input audio signal, wherein the mixing balance input comprises scaling information which allows a deviation from a weighting
of the main input audio signal and the associated input audio signal in the mixed audio signal as specified in the mixing
metadata;

identifying a dominant signal as either the main input audio signal or the associated input audio signal from the scaling
information provided by the mixing metadata and from the mixing balance input, wherein the respective other input audio signal
is then identified as a non-dominant signal; and wherein the dominant signal is identified by comparing the mixing balance
input with the metadata scale factor for the main input audio signal;

scaling the non-dominant signal in relation to the dominant signal; and
combining the scaled non-dominant signal with the dominant signal to yield the mixed audio signal.
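The dominant-signal mixing steps above can be sketched per sample. Identifying the dominant signal by comparing the balance input with the metadata scale factor follows the claim; the specific scaling law is an assumption for illustration.

```python
def mix_with_normalization(main, associated, metadata_scale, balance):
    """Mixing sketch: compare the balance input with the metadata scale
    factor to identify the dominant signal, scale the non-dominant signal
    relative to it, then combine (scaling law is illustrative)."""
    if balance >= metadata_scale:
        dominant, other = associated, main
        scale = metadata_scale / balance   # attenuate the non-dominant main signal
    else:
        dominant, other = main, associated
        scale = balance / metadata_scale   # attenuate the non-dominant associated signal
    return [d + scale * o for d, o in zip(dominant, other)]
```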

US Pat. No. 9,495,970

AUDIO CODING WITH GAIN PROFILE EXTRACTION AND TRANSMISSION FOR SPEECH ENHANCEMENT AT THE DECODER

Dolby Laboratories Licens...

1. An audio encoding system for producing, based on an audio signal, a gain profile to be distributed with said audio signal,
the gain profile comprising a time-variable voice activity gain and a time-variable and frequency-variable cleaning gain,
wherein the audio encoding system comprises:
a voice activity detector adapted to determine the voice activity gain by at least determining voice activity in the audio
signal; and

a noise estimator adapted to determine the cleaning gain by at least estimating noise in said audio signal,
wherein the cleaning gain is separable from the voice activity gain in the gain profile.

US Pat. No. 9,264,681

EXTENDING IMAGE DYNAMIC RANGE

Dolby Laboratories Licens...

1. A method for processing a video signal, comprising the steps of:
(a) normalizing a first image, with which a scene is encoded in the video signal with a first luminance dynamic range, with
a second image, with which the scene is renderable or displayable with a second luminance dynamic range;

wherein the second luminance dynamic range is greater than the first luminance dynamic range;
wherein the normalizing step comprises the steps of:
receiving the first image;
receiving the second image;
normalizing the first image with the second image to output a third image, a fourth image, and an invertible mapping function;
the third image having the second luminance dynamic range and the fourth image having the first luminance dynamic range;
the fourth image having a higher precision than the first image;
wherein the mapping function is losslessly invertible in relation to the first luminance dynamic range and color space and
the second luminance dynamic range and color space; and

(b) subsequent to the normalizing step (a), converting an input video signal that is represented in a first color space with
a first color gamut which is related to the first luminance dynamic range, the input video signal being provided by the fourth
image, to a video signal that is represented in a second color space of a second color gamut, wherein the second color space
is associated with the second luminance dynamic range, by mapping at least two color-related components of the converted video
signal over the second luminance dynamic range using the invertible mapping function.

US Pat. No. 9,269,309

DUAL MODULATION USING CONCURRENT PORTIONS OF LUMINANCE PATTERNS IN TEMPORAL FIELDS

Dolby Laboratories Licens...

1. A method of generating an image, the method comprising:
receiving input image data;
predicting a first luminance pattern and a second luminance pattern for a first spectral power distribution and a second spectral
power distribution, respectively, said first luminance pattern and said second luminance pattern being generated from said input
image data and comprising a low resolution version of said input image data;

distributing portions of the first luminance pattern and portions of the second luminance pattern in one or more temporal
fields;

interlacing the portions of the first luminance pattern and the portions of the second luminance pattern in the one or more
temporal fields; and

modulating light intensities of the first luminance pattern of the first spectral power distribution and the second luminance
pattern of the second spectral power distribution to produce the image with other luminance patterns.

US Pat. No. 9,100,660

GUIDED IMAGE UP-SAMPLING IN VIDEO CODING

Dolby Laboratories Licens...

1. A method comprising:
receiving a first image of a first spatial resolution and a guide image of a second spatial resolution, wherein both the first
image and the guide image represent similar scenes and the second spatial resolution is higher than the first spatial resolution,
wherein the first image comprises a first color component and a second color component;

selecting an up-scaling filter to up-scale the first image to a third image with a spatial resolution equal to the second
spatial resolution;

computing filtering coefficients for the up-scaling filter wherein the filtering coefficient computation is based, at least
in part, on minimizing an error measurement between pixel values of the guide image and the third image, wherein the up-scaling
filter comprises a first set of filtering coefficients and a second set of filtering coefficients, wherein generating the
third image comprises combining the result of filtering the first color component of the first image with the first set of
filtering coefficients with the result of filtering the second color component of the first image with the second set of filtering
coefficients; and

transmitting the filtering coefficients to a decoder.
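The coefficient computation above is a least-squares fit against the guide image. A real encoder would fit full filter taps; this sketch reduces each set of filtering coefficients to a single gain per color component so the 2×2 normal equations can be solved in closed form (a simplification, not the patented filter).

```python
def fit_guided_coeffs(up1, up2, guide):
    """Least-squares fit of two scalar coefficients (one per color
    component of the up-scaled first image) minimizing the squared
    error against the guide image; third image = a*up1 + b*up2."""
    s11 = sum(x * x for x in up1)
    s22 = sum(x * x for x in up2)
    s12 = sum(x * y for x, y in zip(up1, up2))
    g1 = sum(x * g for x, g in zip(up1, guide))
    g2 = sum(x * g for x, g in zip(up2, guide))
    det = s11 * s22 - s12 * s12           # 2x2 normal-equation determinant
    a = (g1 * s22 - g2 * s12) / det
    b = (g2 * s11 - g1 * s12) / det
    return a, b
```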

US Pat. No. 10,027,963

PRE-DITHERING IN HIGH DYNAMIC RANGE VIDEO CODING

Dolby Laboratories Licens...

1. A method for pre-dithering images to be coded by an encoder, the method comprising:
receiving an input low dynamic range (LDR) image in a first bit depth to be encoded by the encoder at a target bit rate;
generating a random noise image;
filtering the random noise image with a spatial filter to generate a spatial-filtered noise image;
storing the spatial-filtered noise image in a ring buffer;
repeating the generation of the spatial-filtered noise images and storing the same in the ring buffer;
applying a temporal filter to images in the ring buffer to generate a temporal-filtered noise image;
adding the temporal-filtered noise image to the input LDR image to generate a noise-enhanced LDR image; and
quantizing the noise-enhanced image to a second bit-depth to generate an output dithered LDR image, wherein the second bit-depth is lower than the first bit depth.
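The pipeline above can be sketched end to end on a 1-D "image". The box blur standing in for the spatial filter, the mean standing in for the temporal filter, and the truncating requantizer are all illustrative stand-ins, not the patented filters.

```python
import random
from collections import deque

def pre_dither(ldr, shift, ring_len=4, size=8, seed=0):
    """Pre-dithering sketch: box-filtered random noise images are kept in
    a ring buffer, temporally averaged, added to the LDR image, and the
    sum is requantized to a lower bit depth."""
    rng = random.Random(seed)
    ring = deque(maxlen=ring_len)              # ring buffer of filtered noise images
    for _ in range(ring_len):
        noise = [rng.uniform(-1, 1) for _ in range(size)]
        blurred = [(noise[i - 1] + noise[i] + noise[(i + 1) % size]) / 3
                   for i in range(size)]       # spatial (box) filter
        ring.append(blurred)
    temporal = [sum(img[i] for img in ring) / ring_len
                for i in range(size)]          # temporal (mean) filter
    noisy = [p + t for p, t in zip(ldr, temporal)]
    return [int(p) >> shift for p in noisy]    # quantize to the lower bit depth
```

Dropping two bits of a dithered 8-bit value 128 lands on 31 or 32 depending on the noise sample.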

US Pat. No. 9,420,200

3D CAMERAS FOR HDR

Dolby Laboratories Licens...

1. A method comprising:
receiving a left input frame created with a first luminance range and a right input frame created with a second luminance
range, the left input frame comprising a plurality of left input scanlines, the right input frame comprising a plurality of
right input scanlines, each individual left input scanline in the plurality of left input scanlines corresponding to an individual
right input scanline in the plurality of right input scanlines, and the first luminance range being different from the second
luminance range;

determining a first left input scanline in the plurality of left input scanlines and a first right input scanline in the plurality
of right input scanlines, the first left input scanline corresponding to the first right input scanline, the first left input
scanline comprising a first set of left input pixels, and the first right input scanline comprising a first set of right input
pixels;

selecting a first subset of left input pixels among the first set of left input pixels;
selecting a first subset of right input pixels among the first set of right input pixels;
computing, with the first subset of left input pixels and the first subset of right input pixels, first input scanline disparity
between the first subset of left input pixels and the first subset of right input pixels;

generating, based at least in part on the first input scanline disparity, a first left output scanline for a left high dynamic
range (HDR) output frame and a first right output scanline for a right HDR output frame;

wherein the first subset of left input pixels is selected, based on one or more criteria, from the set of left input pixels;
wherein the one or more criteria comprise an upper luminance level and wherein zero or more pixels, in the set of left input
pixels, that are above the upper luminance level are excluded from being included in the first subset of left input pixels.

US Pat. No. 9,357,326

EMBEDDING DATA IN STEREO AUDIO USING SATURATION PARAMETER MODULATION

Dolby Laboratories Licens...

1. A method for embedding data in a stereo audio signal comprising a sequence of frames, said method comprising:
modifying the stereo audio signal to generate a modulated stereo audio signal comprising a sequence of modulated frames having
modulated saturation values indicative of the data; and

embedding one data bit in each frame of the stereo audio signal by modifying said frame to produce a modulated frame whose
modulated saturation value matches a target value indicative of the data bit,

wherein the modification of each said frame includes steps of applying a gain to a first modification signal to produce a
first scaled signal, adding the first scaled signal to a first channel signal indicative of a first channel of the frame to
determine a first channel of the modulated frame, applying the gain to a second modification signal to produce a second scaled
signal, and adding the second scaled signal to a second channel signal indicative of a second channel of the frame to determine
a second channel of the modulated frame.
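The per-frame modification recited above (one gain applied to two modification signals, each added to one channel) can be sketched directly. The gain magnitude and the bit-to-sign mapping are placeholders; the patent derives the gain so the frame's saturation value hits the bit's target, which is not reproduced here.

```python
def embed_bit(left, right, mod_left, mod_right, bit):
    """Embedding sketch: a single gain scales both modification signals,
    which are added to the frame's two channels so a saturation measure
    moves toward the data bit's target (gain law is illustrative)."""
    gain = 0.1 if bit else -0.1            # placeholder gain toward the target value
    new_left = [l + gain * m for l, m in zip(left, mod_left)]
    new_right = [r + gain * m for r, m in zip(right, mod_right)]
    return new_left, new_right
```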

US Pat. No. 9,628,887

TELECOMMUNICATIONS DEVICE

Dolby Laboratories Licens...

1. A telecommunications device comprising:
means to capture sound from one or more users, wherein the capturing means comprises a plurality of sound capturing elements,
wherein an angle between a first sound capturing element and a second and adjacent sound capturing element is approximately
120 degrees and wherein the plurality of sound capturing elements comprises one or more sound microphones, and

means to render sound to one or more users, wherein the rendering means comprises a plurality of sound rendering elements
and wherein an angle between a first sound rendering element and a second and adjacent sound rendering element is approximately
120 degrees.

US Pat. No. 9,464,769

TECHNIQUES FOR USING QUANTUM DOTS TO REGENERATE LIGHT IN DISPLAY SYSTEMS

Dolby Laboratories Licens...

1. A display system, comprising:
a plurality of light source components that are configured to emit a first light; and
a light converter configured to be illuminated by the first light to convert the first light into second light, the second
light comprising a mixture of two or more primary color components, and to illuminate pixels in one or more light modulation
layers with the mixture of two or more primary color components;

wherein the mixture of two or more primary color components comprises light of a first primary color and light of one or more
second primary colors;

wherein the light of the first primary color in the second light is generated by a first type of light conversion materials
in the light converter that convert a first portion of the first light emitted by a first set of light source components in
the plurality of light source components;

wherein the light of the one or more second primary colors in the second light is generated by one or more second types of
light conversion materials in the light converter that convert one or more second portions of the first light emitted by a
second set of light source components in the plurality of light source components;

wherein the first type of light conversion materials in the light converter is separate in space from the one or more second
types of light conversion materials in the light converter;

wherein an intensity of the first portion of the first light emitted by the first set of light source components in the plurality
of light source components to illuminate the first type of light conversion materials is controlled separately from intensities
of the one or more second portions of the first light emitted by the second set of light source components in the plurality
of light source components to illuminate the one or more second types of light conversion materials;

wherein the light converter represents a layer separate from the one or more light modulation layers.

US Pat. No. 9,338,445

METHOD AND APPARATUS FOR FULL RESOLUTION 3D DISPLAY

Dolby Laboratories Licens...

1. A method for displaying full resolution, high definition (HD) three-dimensional (3D) images in a liquid crystal display
(LCD) device, the method comprising:
spatially dividing pixels of said LCD device into left and right views;
modulating said pixels of said LCD device based on image data to simultaneously generate a full left view and a full right
view;

analyzing light from a first set of pixels displaying said left view in full resolution with an analyzer of a first type;
analyzing light from a second set of pixels displaying said right view in full resolution with an analyzer of a second type;
and

analyzing light from a third set of pixels associated with a common view with an analyzer of a third type; and wherein
said analyzer of said first type includes a first set of color filters and a polarizer, said first set of color filters being
aligned with said first set of pixels and configured to pass a first set of wavelengths comprising at least three bands;

said analyzer of said second type includes said polarizer and a second set of color filters different from said first set
of color filters, said second set of color filters being aligned with said second set of pixels and configured to pass a second
set of wavelengths comprising at least three bands different than said bands of said first set of wavelengths;

said analyzer of said third type includes said polarizer and a third set of color filters aligned with said third set of pixels,
said third set of color filters being configured to pass a third set of wavelengths comprising at least three bands;

said first and second sets of wavelengths facilitate separation of said left view and said right view by different lenses
of eyewear;

said third set of wavelengths facilitates said common view by passing through each of said different lenses of eyewear;
pixel data common to said left view and said right view are displayed using said third set of pixels; and
each pixel belongs to only one of said first set of pixels, said second set of pixels, and said third set of pixels.

US Pat. No. 9,935,599

TECHNIQUES FOR DISTORTION REDUCING MULTI-BAND COMPRESSOR WITH TIMBRE PRESERVATION

Dolby Laboratories Licens...

1. A method for audio presentation, the method comprising:
splitting an audio signal into a plurality of frequency bands;
determining a time-varying threshold for a frequency band of the plurality of frequency bands, the time-varying threshold determined from a level of the audio signal and a fixed signal level threshold corresponding to each of a plurality of neighboring frequency bands, but not all of the plurality of frequency bands; and
providing the time-varying threshold to a compression function element dedicated to the frequency band; and
audibly presenting an audio signal outputted from the compression function element.
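The threshold computation above can be sketched for one band. Combining the signal level with the fixed thresholds of only the neighboring bands follows the claim; the min-of-mean combination rule is an assumption for illustration.

```python
def time_varying_threshold(level, fixed_thresholds, band, radius=1):
    """Threshold sketch for one band: combine the signal level with the
    fixed thresholds of neighboring bands only, not all bands
    (combination rule is illustrative)."""
    lo = max(0, band - radius)
    hi = min(len(fixed_thresholds), band + radius + 1)
    neighbors = fixed_thresholds[lo:hi]    # the neighboring bands' fixed thresholds
    return min(level, sum(neighbors) / len(neighbors))
```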

US Pat. No. 9,412,337

PROJECTION DISPLAYS

Dolby Laboratories Licens...

1. A display comprising:
a projector arranged to project light which has been spatially modulated according to image data onto a screen comprising
a diffuser, the projector comprising separate light sources for each of a plurality of colors;

a monochrome light modulator arranged in a path of the light projected by the projector;
a control system configured to adjust luminance of the light at points on the diffuser by adjusting the modulation of the
light by the projector and the modulation of the light by the monochrome light modulator.

US Pat. No. 9,368,087

DISPLAY BACKLIGHT NORMALIZATION

Dolby Laboratories Licens...

1. A method for rendering images on a high dynamic range display system comprising:
receiving a first image;
determining, based on the first image, a first dynamic range of luminous intensity;
normalizing the first dynamic range of luminous intensity to a first configured dynamic range of luminous intensity;
rendering the first image on the display system using the first configured dynamic range of luminous intensity;
outputting the first image on the display system;
receiving a second image, wherein the second image is different from the first image;
determining, based on the second image, a second dynamic range of luminous intensity, the second dynamic range being
determined from the second image and differing from the first dynamic range;

wherein the first dynamic range represents a wider dynamic range than the second dynamic range;
normalizing the second dynamic range of luminous intensity to the second configured dynamic range of luminous intensity, wherein the
first configured dynamic range is less than the second configured dynamic range; and

rendering the second image on the display system using the second configured dynamic range of luminous intensity;
outputting the second image on the display system;
wherein at least one of the first image and the second image comprises a low dynamic range (LDR) image, one of the first and
second configured dynamic range comprises a high dynamic range (HDR), and at least one of the first dynamic range and the
second dynamic range is smaller than one of the first and second configured dynamic range;

wherein the steps of rendering the images comprises rendering the images on a locally dimmed display and normalizing the backlights
of the display to a range of values having an upper limit that is less than the maximum light output level to which a backlight
can be set but which comprises a light output level to produce an upper limit of a dynamic range selected for display of the
images; and

wherein the display system further comprises a dynamic range control to a user; the dynamic range control configured to accept
a selection by the user of a configured dynamic range from at least two configurable dynamic ranges as supported by the display
system.

US Pat. No. 9,324,337

METHOD AND SYSTEM FOR DIALOG ENHANCEMENT

Dolby Laboratories Licens...

1. A method for enhancing dialog determined by an audio input signal, said method including the steps of:
(a) analyzing the input signal to generate filter control values without use of feedback; and
(b) providing at least one of the control values to a peaking filter, filtering a speech channel determined by the input signal
in the peaking filter in a manner steered by said at least one of the control values to generate a dialog-enhanced speech
channel, and attenuating non-speech channels determined by the input signal in ducking circuitry steered by at least a subset
of the control values to generate attenuated non-speech channels, where the control values are distinct from the speech channel,
the control values are distinct from the non-speech channels, the peaking filter is distinct from the ducking circuitry, the
peaking filter is coupled and configured to filter the speech channel but not the non-speech channels, the ducking circuitry
is coupled and configured to attenuate the non-speech channels but not the speech channel, the peaking filter is configured
to emphasize frequency components of the speech channel in a frequency range critical to intelligibility of speech, relative
to frequency components of the speech channel outside the frequency range, and said frequency range has a center frequency,

wherein the step of attenuating the non-speech channels includes reducing gain application to the non-speech channels in response
to a change in said at least a subset of the control values indicative of increase of power of the speech channel relative
to combined power of the non-speech channels, and the step of filtering the speech channel includes applying more gain to
the frequency components of the speech channel at the center frequency in response to a change in said at least one of the
control values indicative of an increase in power of the speech channel relative to power of at least one of the non-speech
channels.

US Pat. No. 9,185,507

HYBRID DERIVATION OF SURROUND SOUND AUDIO CHANNELS BY CONTROLLABLY COMBINING AMBIENCE AND MATRIX-DECODED SIGNAL COMPONENTS

Dolby Laboratories Licens...

1. A method for obtaining two surround sound audio channels from two input audio signals, wherein said audio signals comprise
components generated by matrix encoding, comprising
obtaining ambience signal components from said audio signals,
obtaining matrix-decoded direct signal components from said audio signals using a matrix decoder that receives the input audio
signals having direct and ambient signal components, and that are denoted Lo/Lt and Ro/Rt, and

controllably combining ambience signal components and matrix-decoded signal components using gain scale factors obtained in
a time-frequency domain by:

splitting each of the input signals Lo/Lt and Ro/Rt into three paths comprising a control path that computes front/back ratio
gain scale factors and direct/ambient gain scale factors, a direct signal path, and an ambience signal path;

calculating front and back gain scale factors for each path, and direct and ambient gain scale factors for each path; and
blending each direct signal path with a respective ambience signal path using the direct/ambient gain scale factors to provide
right and left surround channel signals by applying the calculated back, direct and ambient gain scale factors to the ambience
signal components and the matrix-decoded signal components.

US Pat. No. 9,164,293

ADJUSTABLE EYEWEAR WITH FIXED TEMPLE AND METHOD OF MANUFACTURE

Dolby Laboratories Licens...

1. Adjustable eyewear, comprising:
a frame having a first side and a second side;
a first temple piece coupled to said first side of said frame, said first temple piece having a length and including a first
adjustment mechanism capable of adjusting both the length of said first temple piece and an angle of said first temple piece
with respect to a line extending between said first side of said frame and said second side of said frame; and

a second temple piece coupled to said second side of said frame, said second temple piece having a length and including a
second adjustment mechanism capable of adjusting both the length of said second temple piece and an angle of said second temple
piece with respect to said line extending between said first side of said frame and said second side of said frame; and wherein

said eyewear is fixed-temple eyewear; and
each of said first temple piece and said second temple piece includes a first temple portion rigidly affixed to said frame
and extending away from said line extending between said first side of said frame and said second side of said frame, and

a second temple portion slidably engaged with said first temple portion and adapted to engage the ear of a wearer of said
eyewear, and

whereby sliding said second temple portion with respect to said first temple portion facilitates adjusting the length of said
temple piece.

US Pat. No. 9,076,449

MULTISTAGE IIR FILTER AND PARALLELIZED FILTERING OF DATA WITH SAME

Dolby Laboratories Licens...

1. An audio encoder configured to generate encoded audio data in response to input audio data, said encoder including at least
one multistage filter coupled and configured to filter the audio data, wherein the multistage filter includes:
a buffer memory;
at least two biquad filter stages, including a first biquad filter stage and a subsequent biquad filter stage; and
a controller, coupled to the biquad filter stages and configured to assert a single stream of instructions to both the first
biquad filter stage and the subsequent biquad filter stage, wherein said first biquad filter stage and said subsequent biquad
filter stage operate independently and in parallel in response to the stream of instructions,

wherein the first biquad filter stage is coupled to the memory and configured to perform biquadratic filtering on a block
of N input samples in response to the stream of instructions to generate intermediate values, and to assert the intermediate
values to the memory, wherein the intermediate values include a filtered version of each of at least a subset of the input
samples, and

wherein the subsequent biquad filter stage is coupled to the memory and configured to perform biquadratic filtering on buffered
values retrieved from the memory in response to the stream of instructions to generate a block of output values, wherein the
output values include an output value corresponding to each of the input samples in the block of N input samples, and the
buffered values include at least some of the intermediate values generated in the first biquad filter stage in response to
the block of N input samples, wherein the first biquad filter stage is configured to perform a multiplication on at least
one of the input samples in a step of a processing loop determined by the instructions, and the subsequent biquad filter stage
is coupled and configured to perform a multiplication on at least one of the intermediate values in said step of the processing
loop;

wherein the multistage filter is configured to perform multistage filtering of the block of N input samples in a single processing
loop with iteration over a sample index but without iteration over a biquadratic filter stage index.
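The single-loop structure above (iteration over the sample index only, with the stages pipelined) can be sketched with two direct-form biquads. Stage 2 consumes the buffered intermediate from the previous iteration, so one extra loop step drains the pipeline; the one-sample pipeline depth is an assumption for illustration.

```python
def cascade_biquads(samples, stage1, stage2):
    """Two biquad stages in one loop over the sample index: each iteration
    stage 1 filters sample i while stage 2 consumes the buffered
    intermediate from iteration i-1 (a one-sample pipeline delay)."""
    def biquad_step(state, coeffs, x):
        b0, b1, b2, a1, a2 = coeffs
        x1, x2, y1, y2 = state
        y = b0 * x + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2
        return (x, x1, y, y1), y

    s1 = s2 = (0.0, 0.0, 0.0, 0.0)
    buffered = None                        # intermediate value asserted to "memory"
    out = []
    for x in samples + [0.0]:              # one extra step drains the pipeline
        if buffered is not None:
            s2, y = biquad_step(s2, stage2, buffered)   # stage 2: buffered value
            out.append(y)
        s1, buffered = biquad_step(s1, stage1, x)       # stage 1: current sample
    return out
```

With identity coefficients (1, 0, 0, 0, 0) in both stages the cascade passes the block through unchanged, which makes the pipelining easy to verify.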

US Pat. No. 9,357,307

MULTI-CHANNEL WIND NOISE SUPPRESSION SYSTEM AND METHOD

Dolby Laboratories Licens...

1. A wind noise suppression device for suppressing wind noise in one or more of at least first and second channels, the device
comprising:
a differencing module configured to obtain a magnitude difference of signals in the first and second channels;
a summing module configured to obtain a magnitude sum of signals in the first and second channels;
a ratioing module configured to obtain a ratio of the magnitude difference to the magnitude sum;
a first attenuator associated with the first channel and a second attenuator associated with the second channel;
an attenuation generator configured to generate an attenuation value based on the ratio from the ratioing module; and
an attenuation steering module configured to select the first or second attenuator based on the magnitude difference, the
selected attenuator operative to attenuate the signal in the associated channel by the attenuation value.
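The module chain above maps cleanly onto a per-frame function. The difference-to-sum ratio and the steering by magnitude difference follow the claim; the specific attenuation law (1 minus the ratio) is an assumption for illustration.

```python
def suppress_wind(ch1, ch2):
    """Wind-suppression sketch: the ratio of the magnitude difference to
    the magnitude sum drives an attenuation value, which is steered to
    the louder channel (attenuation law is illustrative)."""
    diff = abs(ch1) - abs(ch2)             # magnitude difference
    mag_sum = abs(ch1) + abs(ch2) or 1e-12 # magnitude sum (guard against zero)
    ratio = abs(diff) / mag_sum            # near 1 => uncorrelated, wind-like
    atten = 1.0 - ratio                    # attenuation value from the ratio
    if diff > 0:                           # steer to the channel with larger magnitude
        return ch1 * atten, ch2
    return ch1, ch2 * atten
```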

US Pat. No. 9,269,360

USING MULTICHANNEL DECORRELATION FOR IMPROVED MULTICHANNEL UPMIXING

Dolby Laboratories Licens...

1. A method performed by a device for deriving M output audio signals from one or more input audio signals, the method comprising:
receiving the one or more input audio signals;
analyzing the one or more input audio signals to derive one or more non-diffuse audio signals and N diffuse audio signals,
wherein M is greater than N and is greater than two;

processing the one or more non-diffuse audio signals to derive M processed non-diffuse audio signals;
deriving K intermediate audio signals from the N diffuse audio signals such that each of the K intermediate audio signals
is psychoacoustically decorrelated with each of the N diffuse audio signals and, if K is greater than one, is psychoacoustically
decorrelated with all other of the K intermediate audio signals, wherein K is greater than or equal to one and is less than
or equal to M−N;

mixing the N diffuse audio signals and the K intermediate audio signals to derive M diffuse audio signals, wherein the mixing
is performed according to a system of linear equations with coefficients of a matrix that specify a set of N+K vectors in
an M-dimensional space, and wherein at least K of the N+K vectors are substantially orthogonal to all other vectors in the
set of N+K vectors; and

combining the M processed non-diffuse audio signals and the M diffuse audio signals to generate the M output audio signals.
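The mixing step of claim 1 can be illustrated with a small numpy sketch. The QR-based construction of the K orthogonal columns, and all names and parameters here, are illustrative assumptions, not the patented method:

```python
import numpy as np

# Sketch: N diffuse signals and K decorrelated intermediate signals are mixed
# into M diffuse signals by a matrix whose N+K columns are vectors in
# M-dimensional space, with the K columns orthogonal to all other columns.
def build_mixing_matrix(M, N, K, seed=0):
    assert M > 2 and M > N and 1 <= K <= M - N
    rng = np.random.default_rng(seed)
    V = rng.standard_normal((M, N))              # vectors for the N diffuse signals
    Q, _ = np.linalg.qr(np.hstack([V, rng.standard_normal((M, K))]))
    return np.hstack([V, Q[:, N:N + K]])         # K columns orthogonal to span(V)

def mix_diffuse(W, diffuse, intermediate):
    # diffuse: (N, samples), intermediate: (K, samples) -> (M, samples)
    return W @ np.vstack([diffuse, intermediate])
```

The QR factorization yields an orthonormal basis whose trailing K columns are orthogonal to the span of the first N input vectors, which satisfies the claim's orthogonality condition on the set of N+K vectors.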

US Pat. No. 9,270,956

IMAGE DISPLAY

Dolby Laboratories Licens...

1. A display comprising:
a first light-modulation stage combining a light source and light modulation in the form of a plurality of independently-controllable
pixels each arranged to emit light having an intensity that is variable, the first light-modulation stage operative to produce
light modulated according to image data;

a second light-modulation stage in series with the first light-modulation stage;
an optical system comprising one or more mirrors and a diffuser in an optical path between the first and second light-modulation
stages, the optical system configured to image the light modulated by the first light-modulation stage onto the second light-modulation
stage such that light from different pixels of the first light-modulation stage overlaps on the second light-modulation
stage to provide light having a smooth variation in light intensity with position for further modulation by the second light-modulation
stage;

a controller connected to control the pixels of the first light-modulation stage and controllable elements of the second light-modulation
stage to supply image features having a high spatial resolution.

US Pat. No. 9,787,268

AUDIO CONTROL USING AUDITORY EVENT DETECTION

Dolby Laboratories Licens...

1. A method for processing an audio signal in an audio processing apparatus, the method comprising:
receiving the audio signal, the audio signal comprising at least one channel of audio content;
dividing the audio signal into a plurality of subband signals with an analysis filterbank, each of the plurality of subband
signals including at least one subband sample;

deriving a characteristic of the audio signal, wherein the characteristic is a power measure of the audio signal;
smoothing the power measure to generate a smoothed power measure of the audio signal;
detecting a location of an auditory event boundary by monitoring the smoothed power measure, wherein an audio portion between
consecutive auditory event boundaries constitutes an auditory event;

generating a gain vector based on the location of the auditory event boundary;
applying the gain vector to a version of the plurality of subband signals to generate modified subband signals; and
synthesizing the modified subband signals with a synthesis filterbank to produce a modified audio signal,
wherein the detecting further includes applying a threshold to the smoothed power measure to detect the location of the auditory
event boundary,

wherein the detecting further includes comparing the smoothed power measure with a second smoothed power measure of the audio
signal, and

wherein the audio processing apparatus is implemented at least in part with hardware.
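The detection steps of claim 1 can be sketched as follows. The one-pole smoothing constants, the use of a second slower smoothing as the comparison measure, and the threshold rule are illustrative assumptions, not the claimed implementation:

```python
import numpy as np

def event_boundaries(power, alpha=0.9, threshold=0.2):
    """Smooth a per-block power measure, compare it against a second, slower
    smoothed power measure, and mark an auditory event boundary wherever the
    two differ by more than a threshold."""
    fast, slow = [], []
    s1 = s2 = 0.0
    for p in power:
        s1 = alpha * s1 + (1 - alpha) * p        # smoothed power measure
        s2 = 0.99 * s2 + 0.01 * p                # second (slower) smoothed measure
        fast.append(s1)
        slow.append(s2)
    f, s = np.array(fast), np.array(slow)
    return np.flatnonzero(np.abs(f - s) > threshold)
```

A step change in signal power then shows up as a cluster of indices shortly after the step, which would delimit an auditory event.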

US Pat. No. 9,521,263

LONG TERM MONITORING OF TRANSMISSION AND VOICE ACTIVITY PATTERNS FOR REGULATING GAIN CONTROL

Dolby Laboratories Licens...

1. A method for leveling an audio signal using a leveling gain, the method comprising:
updating the leveling gain for a current segment of the audio signal based on a target level;
wherein the audio signal comprises a sequence of segments that include the current segment;
applying the leveling gain to the current segment of the audio signal; and
repeating the updating and applying for the sequence of segments of the audio signal; wherein the updating of the leveling
gain is suspended, subject to determining a pre-determined number of aberrant voice bursts within the audio signal.

US Pat. No. 9,462,240

LOCALLY DIMMED CINEMA PROJECTION SYSTEM WITH REFLECTIVE MODULATION AND NARROWBAND LIGHT SOURCES

Dolby Laboratories Licens...

1. A cinema projector, comprising:
a plurality of laser light sources;
an optical fiber connecting the laser light sources to a light integrator;
wherein the light integrator is disposed such that light output from the light integrator generates an array of light
emissions, each having a Point Spread Function (PSF), positioned such that the array of light emissions and their corresponding
PSFs illuminate a modulator; and

wherein the PSFs of the array of light emissions are controlled so as to mix together in a manner where adjacent light emissions
of the array of light emissions mix heavily at a rate of greater than 45% of peak value within an area covered by a full width
at half maximum of the PSFs.

US Pat. No. 9,426,453

METHODS AND APPARATUS FOR 3D SHUTTER GLASSES SYNCHRONIZATION

Dolby Laboratories Licens...

1. A method for generating a signal to synchronize 3D glasses comprising:
at a display, driving a spatial light modulator illuminated by a light source to display a sequence of images, the images
comprising left images and right images;

between display of left and right images, refreshing the spatial light modulator, the refreshing comprising, in a sequence,
changing driving signals for a plurality of parts of the spatial light modulator starting with a first-to-be-updated part
and completing with a last-to-be-updated part; and

controlling transmission of a synchronization signal at least in part by the driving signals for the last-to-be-updated part;
wherein:
controlling transmission of the synchronization signal comprises setting the synchronization signal to a first state having
a first electromagnetic radiation level to control a left shutter of the 3D glasses to be open;

controlling transmission of the synchronization signal comprises setting the synchronization signal to a second state having
a second electromagnetic radiation level to control a right shutter of the 3D glasses to be open;

controlling transmission of the synchronization signal comprises setting the synchronization signal to a third state to control
the right shutter and the left shutter of the 3D glasses to be open;

wherein the first electromagnetic radiation includes a frequency not included in the second electromagnetic radiation;
wherein:
the display comprises one or more radiation sources operable to emit spectra of radiation which comprise one or more frequencies
that are not included in a spectrum of electromagnetic radiation emitted by the light source; and

the synchronization signal comprises electromagnetic radiation from the one or more radiation sources and the synchronization
signal is at a frequency not included in the spectrum of light over which the left and right images are displayed.

US Pat. No. 9,319,652

METHOD AND APPARATUS FOR MANAGING DISPLAY LIMITATIONS IN COLOR GRADING AND CONTENT APPROVAL

Dolby Laboratories Licens...

1. A color grading tool for producing color-graded display video data from input video data, the color grading tool comprising:
a color grading adjustment module operable to color-grade the input video data to yield color-graded video data while displaying
the color-graded video data on a color-grading display; and

a display conformer configured to modify the color-graded video data for display on a target display having a greater
fidelity range than the color-grading display, by:

modeling the display of all or part of the color-graded video data on the color-grading display by a computational display
model of the target display to determine one or more modified pixel values that represent the appearance of one or more corresponding
pixels of the color-graded video data when displayed on the color-grading display;

identifying at least one pixel having values outside the fidelity range of the color-grading display; and
modifying the color-graded video data by replacing such pixel values of one or more identified pixels of the color-graded
video data for which the identified pixel values are outside a fidelity range of the color-grading display with the corresponding
ones of the modified pixel values, wherein the modified pixel values include substitute chromaticities and/or luminances related
to the target display, and wherein

the display conformer is configured to apply the input video data to the computational display model of the target display,
the computational display model configured to determine said substitute chromaticities and/or luminances with which the target
display would display such pixels of the input video data which are outside said fidelity range of the color-grading display.

US Pat. No. 9,218,821

MEASURING CONTENT COHERENCE AND MEASURING SIMILARITY

Dolby Laboratories Licens...

1. A method of measuring content coherence between a first audio section and a second audio section, comprising:
for each of audio segments in the first audio section,
determining a predetermined number of audio segments in the second audio section, wherein content similarity between the audio
segment in the first audio section and the determined audio segments is higher than that between the audio segment in the
first audio section and all the other audio segments in the second audio section; and

calculating an average of the content similarity between the audio segment in the first audio section and the determined audio
segments; and

calculating first content coherence as an average, the minimum or the maximum of the averages calculated for the audio segments
in the first audio section.
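Given a precomputed similarity matrix, the claimed coherence measure reduces to a top-n average per segment followed by an aggregate. This numpy sketch assumes such a matrix as input; the similarity computation itself is outside the claim text shown:

```python
import numpy as np

def content_coherence(sim, n=3, aggregate="mean"):
    """sim[i, j]: content similarity between segment i of the first audio
    section and segment j of the second. For each segment i, average its n
    highest similarities, then aggregate the per-segment averages as the
    mean, minimum, or maximum."""
    top = np.sort(sim, axis=1)[:, -n:]       # n most similar segments per row
    per_segment = top.mean(axis=1)
    return {"mean": np.mean, "min": np.min, "max": np.max}[aggregate](per_segment)
```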

US Pat. No. 9,204,236

SYSTEM AND TOOLS FOR ENHANCED 3D AUDIO AUTHORING AND RENDERING

Dolby Laboratories Licens...

1. An apparatus, comprising:
an interface system; and
a logic system configured for:
receiving, via the interface system, audio reproduction data comprising one or more audio objects and associated metadata;
wherein the associated metadata includes trajectory data for at least one of the one or more audio objects indicating a time-variable
audio object position of the at least one audio object within the three-dimensional space; wherein the audio object position
is constrained to a two-dimensional surface; wherein the audio reproduction data has been created with respect to a virtual
reproduction environment comprising a plurality of speaker zones at different elevations;

receiving, via the interface system, reproduction environment data comprising an indication of a number of reproduction speakers
of an actual three-dimensional reproduction environment and an indication of the location of each reproduction speaker within
the actual reproduction environment;

mapping the audio reproduction data created with reference to the plurality of speaker zones of the virtual reproduction environment
to the reproduction speakers of the actual reproduction environment; and

rendering the one or more audio objects into one or more speaker feed signals based, at least in part, on the associated metadata,
wherein each speaker feed signal corresponds to at least one of the reproduction speakers within the actual reproduction environment.

US Pat. No. 9,197,888

OVERLAPPED RATE CONTROL FOR VIDEO SPLICING APPLICATIONS

Dolby Laboratories Licens...

1. A method, comprising:
dividing an input video sequence into a plurality of final splices to be coded in a final coding pass;
performing one or more non-final coding passes before the final coding pass, at least one of the one or more non-final coding
passes comprising a non-final splice that corresponds to a final splice in the plurality of splices to be coded in the final
coding pass, and the non-final splice comprising more frames than frames in the final splice;

allocating one or more rate control related budgets for the final splice based on information derived from the non-final splice
in the one or more non-final coding passes; and

coding the final splice in the final coding pass using the one or more rate control related budgets.

US Pat. No. 9,159,270

AMBIENT BLACK LEVEL

Dolby Laboratories Licens...

1. A method, comprising:
detecting, on a display panel, a luminance level of ambient light, the display panel being illuminated by a plurality of light
sources, and each individual light source in the plurality of light sources being individually settable to an individual light
output level;

determining whether the luminance level of the ambient light is above a minimum ambient luminance threshold;
wherein if the luminance level of the ambient light is at or below the minimum ambient luminance threshold the display panel
is to operate with an intrinsic black level;

wherein the intrinsic black level is determined at least in part based on a minimal luminance in a portion of the display panel
as produced by light leakage from neighboring light sources, in the plurality of light sources, that are illuminating other
portions of the display panel; and

in response to determining that the luminance level of the ambient light is above the minimum luminance threshold:
calculating an ambient black level using the luminance level of the ambient light as an input variable, wherein the ambient
black level is at or near the luminance level of the ambient light; and

elevating one or more light output levels of one or more light sources in the plurality of light sources to first light output
levels, the one or more light sources in the plurality of light sources being designated to illuminate one or more dark portions
of an image, and the first light output levels creating a new black level at the ambient black level in the display panel
for the image.
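The branch structure of this claim is compact enough to restate as code. Setting the ambient black level exactly equal to the ambient luminance is one reading of "at or near"; the claim also elevates light-source output levels, which this sketch omits:

```python
def black_level(ambient_luminance, min_threshold, intrinsic_black):
    """At or below the minimum ambient luminance threshold, the panel keeps
    its intrinsic black level; above it, an ambient black level at (or near)
    the ambient luminance is used instead."""
    if ambient_luminance <= min_threshold:
        return intrinsic_black
    return ambient_luminance
```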

US Pat. No. 9,842,596

ADAPTIVE PROCESSING WITH MULTIPLE MEDIA PROCESSING NODES

Dolby Laboratories Licens...

1. A method for processing audio data, comprising:
determining, by a first audio data processing device in a media processing chain, whether a type of audio data processing
has been performed on an output version of audio data;

in response to determining, by the first audio data processing device, that the type of audio data processing has been performed
on the output version of the audio data, performing:

creating or modifying, by the first audio data processing device, a state of the audio data, the state specifying the type
of audio data processing performed on the output version of the audio data, and comprising a hash-based message authentication
code determined by applying a cryptographic hash function to a message comprising a combination of the audio data and the
state of the audio data, wherein the hash-based message authentication code is to be authenticated by a second audio data
processing device downstream in the media processing chain;

communicating, from the first audio data processing device to the second audio data processing device downstream in the media
processing chain, the output version of the audio data and the state of the audio data.
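The state-authentication step maps directly onto the standard library's HMAC primitives. The byte layout of the message (audio data concatenated with the state) and the choice of SHA-256 are assumptions for illustration; the claim only requires a hash-based message authentication code over a combination of the audio data and the state:

```python
import hashlib
import hmac

def make_state(audio_bytes, processing_type, key):
    """First device: record the type of processing performed and bind it to
    the audio data with an HMAC for a downstream device to authenticate."""
    mac = hmac.new(key, audio_bytes + processing_type.encode(),
                   hashlib.sha256).hexdigest()
    return {"type": processing_type, "hmac": mac}

def verify_state(audio_bytes, state, key):
    """Second (downstream) device: recompute the HMAC and compare in
    constant time to authenticate the received state."""
    expected = hmac.new(key, audio_bytes + state["type"].encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, state["hmac"])
```

Any modification of the audio data or of the recorded processing type after the state was created makes verification fail.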

US Pat. No. 9,552,827

ECHO CONTROL THROUGH HIDDEN AUDIO SIGNALS

Dolby Laboratories Licens...

1. A method for determining an estimate of an echo path property of an echo path of an electronic device configured to render
a total audio signal using a loudspeaker, thereby yielding a rendered audio signal, and configured to record an echo of the
rendered audio signal using a microphone, thereby yielding a recorded audio signal; the electronic device comprising an acoustic
echo cancellation unit; the method comprising:
inserting, in an inaudible manner, an auxiliary audio signal into the total audio signal to be rendered by the loudspeaker;
wherein the auxiliary audio signal comprises a tonal audio signal at a first frequency;

isolating the echo of the auxiliary audio signal from the recorded audio signal;
determining the estimate of the echo path property based on the inserted auxiliary audio signal and based on the isolated
echo of the auxiliary audio signal, the estimate of the echo path property being suitable for detecting a discontinuity occurring
on the echo path;

using the estimate of the echo path property to determine at least one of
an estimate of a rate of occurrence of discontinuities occurring on the echo path, and
an estimate of the extent of discontinuities occurring on the echo path; and
bypassing the acoustic echo cancellation unit if the estimate of the rate of occurrence of discontinuities exceeds a pre-determined
rate threshold or if the estimate of the extent of discontinuities exceeds a pre-determined discontinuity threshold.

US Pat. No. 9,451,363

METHOD AND APPARATUS FOR PLAYBACK OF A HIGHER-ORDER AMBISONICS AUDIO SIGNAL

Dolby Laboratories Licens...

1. Method for playback of an original Higher-Order Ambisonics audio (HOA) signal assigned to a video signal that is to be
presented on a current screen but was generated for an original and different screen, said method including:
decoding an input vector Ain of input HOA coefficients of said HOA signal so as to provide decoded audio signals sin in a space domain for regularly positioned loudspeaker positions by calculating sin=Ψ1−1 Ain using the inverse Ψ1−1 of an HOA mode matrix Ψ1;

receiving or establishing reproduction adaptation information derived from the difference between said original screen and
said current screen in their widths and possibly their heights and possibly their curvatures;

adapting said decoded audio signals by warping and encoding them in the space domain into an output vector Aout of adapted output HOA coefficients by calculating Aout=Ψ2 sin, wherein mode vectors of a mode matrix Ψ2 are modified with respect to mode matrix Ψ1 according to a warping function by which the angles of the original loudspeaker positions for said original screen are in
the HOA coefficients output vector Aout mapped into the target angles of the target loudspeaker positions for the current screen and remaining angles of the original
loudspeaker positions are shifted accordingly, and wherein said reproduction adaptation information controls said warping
function; and

rendering and outputting for loudspeakers the adapted HOA signals, wherein said rendering includes an HOA decoding.

US Pat. No. 9,369,722

METHOD AND SYSTEM FOR SELECTIVELY BREAKING PREDICTION IN VIDEO CODING

Dolby Laboratories Licens...

1. An apparatus comprising a non-transitory computer-readable medium for storing data accessible by a decoder, the computer-readable
medium comprising:
a bitstream stored in the computer-readable medium, the bitstream including data representing:
a first slice and a second slice of a coded video picture,
wherein the first slice and the second slice are divided by at least one slice boundary,
wherein at least the first slice is divided by a tile boundary to belong to two tiles, and
wherein the two tiles do not have tile headers;
a parameter set including a first flag that enables the decoder to determine whether to apply a sample adaptive offset operation
across the tile boundary; and

a slice header of the coded video picture including a second flag that enables the decoder to determine whether to apply a
sample adaptive offset operation across the at least one slice boundary.

US Pat. No. 9,208,789

REDUCED COMPLEXITY CONVERTER SNR CALCULATION

Dolby Laboratories Licens...

1. An audio encoder configured to encode at least one frame of an audio signal at a first target data rate in accordance with
an E-AC-3 codec system, thereby generating a bitstream indicative of encoded audio content including quantized mantissas,
said encoder comprising:
a transform subsystem coupled and configured to determine spectral coefficients indicative of audio content of the frame of
the audio signal;

a floating-point encoding subsystem configured to determine mantissas and encoded exponents based on the spectral coefficients;
a bit allocation and quantization subsystem configured to determine a first control parameter indicative of an allocation
of available bits for quantizing the mantissas in accordance with the E-AC-3 codec system, and to quantize the mantissas in
accordance with the first control parameter to determine the quantized mantissas;

a bitstream packing subsystem coupled and configured to generate the bitstream at the first target data rate such that said
bitstream is indicative of a second control parameter and encoded audio content of the frame, said encoded audio content including
the quantized mantissas; and

a transcoding simulation subsystem configured to simulate transcoding, said transcoding including decoding of the encoded
audio content of the frame to generate decoded data including de-quantized mantissas and re-encoding of the decoded data at
a second target data rate in accordance with an AC-3 codec system to generate a second bitstream indicative of re-encoded
audio content including re-quantized mantissas, wherein the transcoding simulation subsystem is configured to execute an iterative
bit allocation process to determine the second control parameter such that said second control parameter is indicative of
an allocation of available bits for quantizing the de-quantized mantissas to generate said re-quantized mantissas during generation
of the second bitstream in accordance with the AC-3 codec system at the second target data rate, wherein each bit allocation
iteration of the iterative bit allocation process assumes a candidate allocation of available bits determined by a different
candidate second control parameter of a set of candidate second control parameters, the set of candidate second control parameters
having been predetermined by statistical analysis of results of bit allocation processing of audio data in accordance with
the E-AC-3 codec system assuming the first target data rate, and results of bit allocation processing of the audio data in
accordance with the AC-3 codec system assuming the second target data rate.

US Pat. No. 9,202,255

IDENTIFYING MULTIMEDIA OBJECTS BASED ON MULTIMEDIA FINGERPRINT

Dolby Laboratories Licens...

1. An apparatus for identifying a multimedia object, comprising:
an acquiring unit, implemented at least in part by one or more computing processors, that acquires query fingerprints fq,1 to fq,T which are derived from the multimedia object according to fingerprint algorithms F1 to FT respectively, where the fingerprint algorithms F1 to FT are different from each other, and T>1;

a plurality of classifying units, wherein each fingerprint algorithm Ft corresponds to at least one of the classifying units, and each of the classifying units, implemented at least in part by one
or more computing processors, calculates decisions through a classifier based on the query fingerprint fq,t and reference fingerprints derived from a plurality of reference multimedia objects according to the fingerprint algorithm
Ft, each of the decisions indicating a possibility that the query fingerprint and the reference fingerprint for calculating
the decision are not derived from the same multimedia content; and

a combining unit, implemented at least in part by one or more computing processors, that, for each of the reference multimedia
objects, calculates a distance D as a weighted sum of the decisions relating to the reference fingerprints derived from the
reference multimedia object according to the fingerprint algorithms F1 to FT respectively; and

an identifying unit, implemented at least in part by one or more computing processors, that identifies the multimedia object
as matching the reference multimedia object with the smallest distance which is less than a threshold THc;

wherein for each of at least one of the classifiers, the corresponding classifying unit further calculates the decisions through
the classifier based on the query fingerprint and the reference fingerprints by:

searching a tree to find at least one leaf node having a bit error rate between the query fingerprint and the reference fingerprint
represented by the leaf node less than a maximum tolerable error rate; and

calculating the decisions by deciding that only the reference fingerprint represented by the at least one leaf node and the
query fingerprint are derived from the same multimedia content;

wherein the reference fingerprints have a fixed length L=S×K bits, and S and K are positive integers;
wherein the tree is a 2^K-ary tree having S levels, and each node in the l-th level, 0≤l≤S, represents a bit sequence of K×l bits;

wherein each level among the S levels has a look-up table defining an estimated bit error rate between the query fingerprint
and its closest reference fingerprint under a reached node of the level, such that the probability of observing at least E
errors between b bits represented by the reached node and first b bits of the query fingerprint is greater than a threshold
pt.
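The combining and identifying units reduce to a weighted sum and a thresholded arg-min. This sketch takes the per-classifier decisions as given (the tree search that produces them is not reproduced); names and the return convention are assumptions:

```python
import numpy as np

def identify(decisions, weights, threshold):
    """decisions[r][t]: decision of the classifier for fingerprint algorithm
    t against reference object r, each expressing the possibility that query
    and reference do NOT share content. The distance D per reference is the
    weighted sum over algorithms; the match is the reference with the
    smallest D below the threshold, or None if no reference qualifies."""
    D = np.asarray(decisions) @ np.asarray(weights)
    best = int(np.argmin(D))
    return best if D[best] < threshold else None
```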

US Pat. No. 9,131,097

METHOD AND SYSTEM FOR BLACK BAR IDENTIFICATION

Dolby Laboratories Licens...

1. A method for identifying a boundary of at least one black bar in an image determined by lines of pixels, where the pixels
determine the black bar and at least one non-black image region, and said black bar is between the non-black image region
and an edge of the image, said method including steps of:
(a) for each line of at least a subset of the lines, determining a standard deviation value and a difference value, where
the standard deviation value is indicative of the standard deviation of pixel values for the line, and the difference value
is the absolute value of the difference between the standard deviation value for said line and the standard deviation value
for one of the lines adjacent to said line;

(b-1) determining whether any line of the at least a subset of the lines satisfies a criterion that its difference value exceeds
a predetermined threshold but the difference value for each other line of the at least a subset of lines to be displayed nearer
to a first edge of the image does not exceed the threshold; and

(b-2) identifying one of the lines as a black bar boundary in response to the step of determining.
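Steps (a) through (b-2) translate naturally to a per-line scan. Scanning from index 0 (taken here as the first edge) and returning the first line whose difference value exceeds the threshold is one reading of the criterion; the function name and the scan direction are assumptions:

```python
import numpy as np

def black_bar_boundary(image, threshold):
    """Step (a): per-line standard deviation of pixel values and the absolute
    difference between each line's value and its neighbor's. Steps (b-1)/(b-2):
    the first line whose difference exceeds the threshold, with every line
    nearer the first edge staying below it, is the bar boundary."""
    std = image.std(axis=1)                      # one value per line of pixels
    diff = np.abs(np.diff(std, prepend=std[0]))  # |std[l] - std[l-1]|
    for l, d in enumerate(diff):
        if d > threshold:
            return l                             # all earlier lines were below
    return None                                  # no boundary found
```

A uniform black bar has near-zero per-line standard deviation, so the first textured image line produces the jump the criterion looks for.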

US Pat. No. 9,076,391

HIGH DYNAMIC RANGE DISPLAY WITH REAR MODULATOR CONTROL

Dolby Laboratories Licens...

1. A method comprising the steps of:
receiving an input image having a plurality of pixels;
grouping the pixels to corresponding sample locations;
determining a maximum intensity and a mean intensity of the pixels in each sample location;
generating, for each sample location, a weighted sum intensity from the maximum intensity and the mean intensity of the pixels
in the sample location;

grouping the sample locations such that each group of sample locations corresponds to one or more modulating elements of the
rear modulator;

determining a modulation value for each group of sample locations based on the weighted sum intensity of the sample locations
of the corresponding group of sample locations, except that, when at least one pixel in a group of pixels is associated with
an intensity that exceeds a threshold intensity, and when other pixels in the group of pixels are associated with intensities
that are below the threshold intensity, the modulation value associated with the group of pixels is set to a minimum intensity;

applying a signal indicative of the modulation value to control one or more modulating elements of the rear modulator to generate
an image at the rear modulator;

determining a mismatch in spatial resolution between the one or more modulating elements and either the group of pixels or
the group of sample locations; and

reconfiguring a first arrangement to match a second arrangement, the first arrangement being associated with either the group
of pixels or the group of sample locations, the first arrangement having a first spatial resolution, the second arrangement
being associated with the one or more modulating elements and having a second spatial resolution, mismatched locations being
inserted into the first arrangement when reconfigured,

thereby resolving the mismatch in spatial resolution to illuminate an image portion associated with the group of pixels.
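The per-sample-location statistic of claim 1 can be sketched with non-overlapping square tiles standing in for sample locations; the tiling, the equal weighting, and the names are assumptions:

```python
import numpy as np

def modulation_values(image, block=8, w=0.5):
    """Group pixels into block x block sample locations and return, per
    location, the weighted sum of the maximum and mean pixel intensity."""
    h, w_px = (s - s % block for s in image.shape)       # crop to whole tiles
    tiles = image[:h, :w_px].reshape(h // block, block, w_px // block, block)
    tiles = tiles.transpose(0, 2, 1, 3).reshape(h // block, w_px // block, -1)
    return w * tiles.max(axis=2) + (1 - w) * tiles.mean(axis=2)
```

The claim's exception (forcing a minimum value when one pixel exceeds a threshold while the rest sit below it) would be applied on top of this map before driving the rear modulator.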

US Pat. No. 9,419,577

TECHNIQUES FOR DISTORTION REDUCING MULTI-BAND COMPRESSOR WITH TIMBRE PRESERVATION

Dolby Laboratories Licens...

1. An apparatus comprising:
a multi-band filterbank configured to split an audio signal for a plurality of frequency bands;
a compression function element dedicated to a frequency band of the plurality of frequency bands;
a timbre preservation element coupled to the compression function element, the timbre preservation element configured to provide
a time-varying threshold for the frequency band; and

at least one audio transducer configured to audibly present an audio signal outputted from the compression function element,
wherein the time-varying threshold for the frequency band is determined by levels of the audio signal from neighboring frequency
bands but not all of the plurality of frequency bands.
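The distinguishing limitation is that each band's time-varying threshold depends on neighboring band levels but not on all bands. A minimal sketch, in which the base threshold, the coupling factor, and the two-neighbor rule are assumptions:

```python
import numpy as np

def band_thresholds(band_levels, base=1.0, alpha=0.1):
    """Compression threshold per band, raised in proportion to the signal
    levels of the immediately neighboring bands only."""
    levels = np.asarray(band_levels, dtype=float)
    neighbors = np.zeros_like(levels)
    neighbors[1:] += levels[:-1]     # lower neighbor
    neighbors[:-1] += levels[1:]     # upper neighbor
    return base + alpha * neighbors
```

Raising a band's threshold when its neighbors are loud reduces gain differences between adjacent bands, which is what preserves timbre.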

US Pat. No. 9,350,311

CALCULATING AND ADJUSTING THE PERCEIVED LOUDNESS AND/OR THE PERCEIVED SPECTRAL BALANCE OF AN AUDIO SIGNAL

Dolby Laboratories Licens...

7. An apparatus for controlling a perceived loudness of a digital audio signal, comprising:
a subsystem, implemented at least partially in hardware, that derives scale factors from the digital audio signal to reduce
a difference between the perceived loudness and a target loudness, each scale factor respectively operating in a frequency
band of a plurality of frequency bands;

a subsystem, implemented at least partially in hardware, that applies the scale factors to the digital audio signal to reduce
the difference between the perceived loudness and the target loudness;

wherein the perceived loudness varies both with frequency and acoustic level.

US Pat. No. 9,341,887

DISPLAYS WITH A BACKLIGHT INCORPORATING REFLECTING LAYER

Dolby Laboratories Licens...

1. A display comprising:
a light source,
a spatial light modulator; and
a light control layer in an optical path between the light source and the spatial light modulator, the light control layer
comprising:

an enhanced specular reflector layer, and
a first optical layer in optical contact with a first side of the enhanced specular reflector layer, the first optical layer
at least one of substantially transparent and substantially translucent, wherein

an optical transmission coefficient for the light control layer is greater than an optical transmission coefficient for the
enhanced specular reflector layer on its own; and

wherein the first optical layer is configured such that the light control layer increases the ratio of the amount of light energy
in a central portion of the point spread function to an amount of light energy in tails of the point spread function by a
factor A as follows:

A=(ECF/ETF)/(ECW/ETW)

where:
ECF is the optical energy within one full-width at half maximum of the point spread function in the presence of the light control
layer;

ETF is the optical energy outside of twice the full width at half-maximum of the point spread function in the presence of the
light control layer;

ECW is the optical energy within one full-width at half maximum of the point spread function in the absence of the light control
layer; and

ETW is the optical energy outside of twice the full width at half-maximum of the point spread function in the absence of the light
control layer.
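From the prose of the claim (the layer increases the center-to-tail energy ratio of the PSF by a factor A) and the four energy terms defined above, A is the ratio of the with-layer ratio to the without-layer ratio. A one-line helper, with the formula shape stated as an inference from those definitions:

```python
def psf_concentration_factor(ecf, etf, ecw, etw):
    """A = (ECF/ETF) / (ECW/ETW): how much the light control layer increases
    the ratio of PSF energy within one FWHM to energy outside twice the FWHM."""
    return (ecf / etf) / (ecw / etw)
```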

US Pat. No. 9,343,076

METHODS AND SYSTEMS FOR GENERATING FILTER COEFFICIENTS AND CONFIGURING FILTERS

Dolby Laboratories Licens...

1. A method, performed by an audio encoding device, for encoding an audio input signal using a prediction filter including
an infinite impulse response (IIR) filter and a finite impulse response (FIR) filter, the prediction filter configured with
a predetermined palette of IIR coefficient sets, said method including the steps of:
(a) for each of the IIR coefficient sets in the palette, generating configuration data indicative of an output signal generated
by applying the IIR filter configured with said each of the IIR coefficient sets to an audio signal derived in response to
the audio input signal, the audio signal comprising a stream of audio signal samples received by the prediction filter, and
identifying as a selected IIR coefficient set one of the IIR coefficient sets which configures the IIR filter to generate
configuration data that satisfy a predetermined criterion;

(b) determining an optimal FIR filter coefficient set by performing a recursion operation on test data indicative of an output
signal generated by applying the prediction filter to an audio signal derived in response to the audio input signal, the audio
signal comprising a stream of audio signal samples received by the prediction filter, with the IIR filter configured with
the selected IIR coefficient set;

(c) configuring the FIR filter with the optimal FIR coefficient set and configuring the IIR filter with the selected IIR coefficient
set, thereby configuring the prediction filter;

(d) generating a prediction filtered audio signal by filtering an audio signal derived in response to the audio input signal
with the configured prediction filter;

(e) generating an encoded audio signal in response to the prediction filtered audio signal; and
(f) asserting, at least one output of the audio encoding device, the encoded audio signal and filter coefficient data indicative
of the selected IIR filter coefficient set, wherein at least one of the steps is implemented, at least in part, by one or
more hardware devices within the audio encoding device.
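Steps (a) and (b) amount to a search over a fixed palette of IIR coefficient sets followed by an FIR fit. A minimal sketch of step (a) follows; the "predetermined criterion" is taken, purely as an assumption for illustration, to be minimal filtered-output energy (the claim does not specify the criterion):

```python
import numpy as np

def apply_iir(a, x):
    """All-pole IIR: y[n] = x[n] - sum_k a[k] * y[n-k] (a excludes the leading 1)."""
    y = np.zeros(len(x))
    for n in range(len(x)):
        acc = x[n]
        for k, ak in enumerate(a, start=1):
            if n - k >= 0:
                acc -= ak * y[n - k]
        y[n] = acc
    return y

def select_iir_set(palette, x):
    """Step (a): pick the palette entry whose filtered output best satisfies
    the criterion -- here, minimal output energy (an assumption, not the patent's)."""
    energies = [np.sum(apply_iir(a, x) ** 2) for a in palette]
    return int(np.argmin(energies))
```

With the selected set fixed, step (b) would fit the FIR coefficients by a recursion (e.g. least squares on the residual), which is omitted here.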

US Pat. No. 9,335,614

PROJECTION SYSTEMS AND METHODS USING WIDELY-SPACED PROJECTORS

Dolby Laboratories Licens...

1. A projection system, comprising a screen and at least two widely spaced digital projectors for projecting images defined
by image data, wherein the at least two projectors are angled so that they illuminate the screen,
wherein the image data are pre-shifted before delivery to the projection system in order to compensate for a trapezoidal effect
caused by projecting images from the at least two widely spaced digital projectors,

wherein the compensation for the trapezoidal effect comprises shifting where pixels of an image to be projected are modulated
on modulators of at least one of the projectors such that the pixels of the at least two projectors are projected onto a same
portion of the screen, and

wherein the at least two projectors are spaced apart as widely as possible within a theater in which the projection system
is installed and within the limitations of the compensation for the trapezoidal effect, thereby maintaining the projectors'
ability to correct for a trapezoidal effect associated with the projected images to ensure a target resolution of the projected
images.

US Pat. No. 9,060,168

OVERLAPPED BLOCK DISPARITY ESTIMATION AND COMPENSATION ARCHITECTURE

Dolby Laboratories Licens...

1. A method for decoding a video bit stream that adaptively utilizes overlapped block motion compensation, the method comprising:
determining whether a mode for a first block of the video bit stream includes a first mode and whether the mode for the first
block of the video bit stream includes a second mode, the mode being determined by an explicit signal in the video bit stream,
wherein a prediction type of the first block is inter-prediction;

conditioned on the mode being determined to include the first mode, utilizing motion vector information of a neighboring partition
for performing overlapped block motion compensation for the first block, the neighboring partition being edge adjacent to
the first block and located above the first block,

wherein the first block and the neighboring partition are a same prediction type for the first mode, the prediction type being
inter-prediction, and

wherein a region of the first block overlaps with a region of the neighboring partition; and
conditioned on the mode being determined to include the second mode, utilizing motion vector information of a different partition
in the video bit stream for the first block without utilizing the motion vector information of the neighboring partition for
performing the overlapped block motion compensation.

US Pat. No. 9,497,471

METHOD AND SYSTEM FOR IMPROVING COMPRESSED IMAGE CHROMA INFORMATION

Dolby Laboratories Licens...

1. A computer-implemented method for a decoder, the computer-implemented method comprising:
receiving, at the decoder, at least a luminance QP (quantization parameter) value and a first chroma QP bias value for a bi-directionally
interpolated macroblock, wherein the first chroma QP bias value was determined at an encoder and signaled from the encoder
to the decoder, wherein the decoder comprises a luminance channel, a first chroma channel and a second chroma channel, and
wherein the encoder comprises a chroma channel having one-half of a horizontal resolution of the luminance channel of the
decoder;

utilizing, with the decoder, the luminance QP value and the first chroma QP bias value to determine a first chroma QP value
for the bi-directionally interpolated macroblock by adding the first chroma QP bias value to the luminance QP value; and

decompressing an image region of a video image using the luminance QP value and the determined first chroma QP value,
wherein the determined first chroma QP value is less than a predetermined maximum value and greater than a predetermined minimum
value, and

wherein the determined first chroma QP value is determined for the first chroma channel.
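The decode-side derivation in this claim is a single addition plus a range check; a sketch, where the bound values are hypothetical placeholders (the claim says only "predetermined"):

```python
def first_chroma_qp(luma_qp, chroma_qp_bias, qp_min=0, qp_max=51):
    """Per the claim: chroma QP = luma QP + signaled bias, and the result
    must lie strictly between the predetermined min and max (the bounds
    here are illustrative, borrowed from common QP ranges)."""
    qp = luma_qp + chroma_qp_bias
    if not (qp_min < qp < qp_max):
        raise ValueError(f"chroma QP {qp} outside ({qp_min}, {qp_max})")
    return qp
```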

US Pat. No. 9,489,956

AUDIO SIGNAL ENHANCEMENT USING ESTIMATED SPATIAL PARAMETERS

Dolby Laboratories Licens...

1. A method, comprising:
receiving audio data comprising a first set of frequency coefficients and a second set of frequency coefficients;
estimating, based on at least part of the first set of frequency coefficients, spatial parameters for at least part of the
second set of frequency coefficients; and

applying the estimated spatial parameters to the second set of frequency coefficients to generate a modified second set of
frequency coefficients,

wherein the first set of frequency coefficients corresponds to a first frequency range and the second set of frequency coefficients
corresponds to a second frequency range;

wherein the audio data comprises data corresponding to individual channels and a coupled channel, and wherein the first frequency
range corresponds to an individual channel frequency range and the second frequency range corresponds to a coupled channel
frequency range;

wherein the audio data comprises frequency coefficients in the first frequency range for two or more channels; and
wherein the estimating process involves:
creating a composite coupling channel based on audio data of the individual channels in the first frequency range, which involves
calculating combined frequency coefficients of the composite coupling channel based on frequency coefficients of the two or
more channels in the first frequency range; and

computing, for at least a first channel, cross-correlation coefficients between frequency coefficients of the first channel
and the combined frequency coefficients.
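The estimating process described above (composite coupling channel, then per-channel cross-correlation) can be sketched as follows. The simple mean used for the composite is an assumption, since the claim only requires the combined coefficients be "based on" the individual channels:

```python
import numpy as np

def estimate_spatial_parameters(channels):
    """channels: (n_channels, n_coeffs) frequency coefficients in the
    individual-channel (first) frequency range.
    Returns the normalized cross-correlation of each channel against a
    composite coupling channel formed from all channels."""
    channels = np.asarray(channels, dtype=float)
    composite = channels.mean(axis=0)  # combined frequency coefficients
    num = channels @ composite
    den = np.linalg.norm(channels, axis=1) * np.linalg.norm(composite) + 1e-12
    return num / den
```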

US Pat. No. 9,479,766

MODIFYING IMAGES FOR A 3-DIMENSIONAL DISPLAY MODE

Dolby Laboratories Licens...

1. A method, comprising:
receiving a first image for display in a 3-dimensional display mode, wherein the first image is not encoded for the 3-dimensional
display mode;

receiving a third image for display in the 3-dimensional display mode, wherein the third image is encoded for the 3-dimensional
display mode, and where the third image comprises a set of side-by-side images further comprising a right view image and a
left view image and further wherein the first image is to be merged with the third image to result in a 3-dimensional composite
image;

detecting that the first image has not been encoded for the 3-dimensional display mode based on an analysis of the first image,
wherein the detecting that the first image has not been encoded for the 3-dimensional mode comprises comparing two different
portions of the first image and determining a degree of similarity between the two different portions of the first image;

detecting that the third image has been encoded for the 3-dimensional display mode based on an analysis of the third image,
wherein the analysis of the third image generates 3-dimensional display mode information;

modifying the first image to generate a second image encoded for the 3-dimensional display mode, wherein the second image
comprises a right view image and a left view image that, when composed with the third image, renders a 3-dimensional image
substantially depicting the 2-dimensional first image in 3-dimensions, and wherein the modifying the first image to generate
the second image is responsive to the detecting that the first image has not been encoded for the 3-dimensional display mode
and is based on the 3-dimensional display mode information; and

displaying the second image in the 3-dimensional display mode;
wherein the method is performed by one or more devices that comprise a processor.

US Pat. No. 9,466,098

LOCAL DEFINITION OF GLOBAL IMAGE TRANSFORMATIONS

Dolby Laboratories Licens...

1. A method for adjusting image data defining an image, the method comprising, by a data processor executing software and/or
firmware instructions:
generating a saliency map for the image wherein generating the saliency map comprises one or more of:
assigning saliency to pixels based on the locations of the pixels in the image; and
receiving information regarding the saliency of various parts of the image by way of a user interface;
establishing a global transformation based on the image data wherein establishing the global transformation comprises:
ignoring parts of the image data corresponding to image areas indicated by the saliency map as having low saliency; or
weighing parts of the image data corresponding to image areas indicated by the saliency map as having low saliency less heavily
than other parts of the image data corresponding to image areas indicated by the saliency map as having higher saliency;

and one or more of:
applying the global transformation to the image data to yield altered image data and displaying the altered image data on
a display;

applying the global transformation to the image data to yield altered image data and storing the altered image data on a non-transitory
medium; and

storing a definition of the global transformation on a non-transitory medium.

US Pat. No. 9,401,152

SYSTEM FOR MAINTAINING REVERSIBLE DYNAMIC RANGE CONTROL INFORMATION ASSOCIATED WITH PARAMETRIC AUDIO CODERS

Dolby Laboratories Licens...

1. A decoding system configured to reconstruct an n-channel audio signal on the basis of a bitstream, the decoding system
comprising:
a parametric-mode demultiplexer for receiving the bitstream and outputting, based thereon and in a parametric coding mode
of the system, an encoded core signal and multichannel coding parameters;

a core signal decoder for receiving the encoded core signal and outputting, based thereon, an m-channel core signal, where 1≤m&lt;n;
a parametric synthesis stage for receiving the core signal and the multichannel coding parameters and outputting, based thereon,
the n-channel signal,

wherein the parametric-mode demultiplexer is further configured to output, based on the bitstream, pre-processing dynamic
range control, DRC, parameters quantifying an encoder-side dynamic range limiting of the core signal, and

wherein the decoding system is operable to cancel the encoder-side dynamic range limiting based on the pre-processing DRC
parameters.
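The final limitation, cancelling the encoder-side dynamic range limiting from the transmitted pre-processing DRC parameters, reduces in the simplest reading to inverting the gains the encoder applied. A sketch under the assumption that the DRC parameters are per-sample linear gains:

```python
def cancel_drc(core_samples, drc_gains):
    """Invert encoder-side limiting: each decoded core sample is divided by
    the gain the encoder applied to it (the gain representation is assumed)."""
    return [s / g for s, g in zip(core_samples, drc_gains)]
```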

US Pat. No. 9,386,313

MULTIPLE COLOR CHANNEL MULTIPLE REGRESSION PREDICTOR

Dolby Laboratories Licens...

1. A method to approximate using a processor an image having a first dynamic range in terms of an image having a second dynamic
range, the method comprising:
receiving a first image and a second image, wherein the second image has a different dynamic range than the first image;
selecting a multi-channel, multiple-regression (MMR) prediction model from one or more MMR models;
determining values of prediction parameters of the selected MMR model;
computing an output image approximating the first image based on the second image and the determined values of the prediction
parameters of the selected MMR prediction model, wherein pixel values of at least one color component in the output image
are computed based on pixel values of at least two color components in the second image; and

outputting the determined values of the prediction parameters and the computed output image,
wherein selecting the MMR prediction model from the one or more MMR prediction models further comprises an iterative selection
process comprising:

(a) selecting and applying an initial MMR prediction model;
(b) computing a residual error between the first image and the output image;
(c) selecting the initial MMR model if the residual error is smaller than an error threshold and no further MMR prediction
model is selectable; otherwise,
selecting a new MMR prediction model from the variety of MMR prediction models, the new MMR prediction model being different
from the previously selected MMR prediction model; and returning to step (b).
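The iterative selection in steps (a)-(c) can be sketched with a toy MMR family, polynomial cross-channel regressors of increasing order, standing in for the patent's palette of prediction models (the family and error threshold below are illustrative assumptions):

```python
import numpy as np

def fit_mmr(src, tgt, order):
    """One candidate MMR model: least-squares regression of a target color
    component on powers of all source color components (cross-channel)."""
    cols = [np.ones(len(src))]
    for o in range(1, order + 1):
        cols.extend(src[:, c] ** o for c in range(src.shape[1]))
    A = np.stack(cols, axis=1)
    params, *_ = np.linalg.lstsq(A, tgt, rcond=None)
    return A @ params, params

def select_mmr(src, tgt, max_order=3, tol=1e-6):
    """Steps (a)-(c): keep the current model if its residual error is below
    the threshold; otherwise select the next (different) model and repeat."""
    for order in range(1, max_order + 1):
        pred, params = fit_mmr(src, tgt, order)
        if np.mean((tgt - pred) ** 2) < tol or order == max_order:
            return order, params
```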

US Pat. No. 9,324,278

AMBIENT BLACK LEVEL

Dolby Laboratories Licens...

1. A method, comprising:
detecting, on a display panel, a luminance level of ambient light, the display panel being illuminated by a plurality of light
sources and each individual light source in the plurality of light sources being individually settable to an individual light
output level;

determining whether the luminance level of the ambient light is above a maximum ambient luminance threshold;
wherein if the luminance level of the ambient light is above the maximum ambient luminance threshold, the display panel is
to operate with an intrinsic black level;

wherein the intrinsic black level is independent of the luminance level of the ambient light; and
in response to determining that the luminance level of the ambient light is not above the maximum luminance threshold, operating,
by the display panel, with an ambient black level calculated based on the luminance level of the ambient light as an input
variable;

wherein the method is performed by one or more computing devices.
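The threshold logic of the claim is compact enough to state directly. The specific intrinsic level, threshold, and the linear ambient model below are placeholders, since the claim only requires that the ambient black level be "calculated based on" the ambient luminance:

```python
def black_level(ambient_lum_nits, max_ambient=100.0,
                intrinsic_black=0.05, gain=0.001):
    """Above the maximum ambient threshold: the intrinsic black level,
    independent of ambient light. Otherwise: an ambient black level
    computed from the measured ambient luminance (linear model assumed)."""
    if ambient_lum_nits > max_ambient:
        return intrinsic_black
    return intrinsic_black + gain * ambient_lum_nits
```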

US Pat. No. 9,311,922

METHOD, APPARATUS, AND STORAGE MEDIUM FOR DECODING ENCODED AUDIO CHANNELS

Dolby Laboratories Licens...

1. A method for decoding M encoded audio channels representing N audio channels, where N is two or more, and a set of one
or more spatial parameters, wherein one or more of said spatial parameters are differentially encoded, the method comprising:
a) receiving said M encoded audio channels and said set of spatial parameters,
b) applying a differential decoding process to the differentially encoded spatial parameters,
c) deriving N audio signals from said M encoded channels, wherein each audio signal is divided into a plurality of frequency
bands, wherein each band comprises one or more spectral components,

d) generating a multichannel output signal from the N audio signals and the spatial parameters, and
e) synthesizing, by an audio reproduction device, the multichannel output signal, wherein:
M is two or more,
at least one of said N audio signals is a correlated signal derived from a weighted combination of at least two of said M encoded audio channels,
said set of spatial parameters includes a first parameter indicative of the amount of an uncorrelated signal to mix with a correlated signal, and
step d) includes deriving at least one uncorrelated signal from said at least one correlated signal, and controlling the proportion
of said at least one correlated signal to said at least one uncorrelated signal in at least one channel of said multichannel
output signal in response to one or ones of said spatial parameters, wherein said controlling is at least partly in accordance
with said first parameter.

US Pat. No. 9,288,499

DEVICE AND METHOD OF IMPROVING THE PERCEPTUAL LUMINANCE NONLINEARITY-BASED IMAGE DATA EXCHANGE ACROSS DIFFERENT DISPLAY CAPABILITIES

DOLBY LABORATORIES LICENS...

1. A method comprising:
receiving, by a data encoder, image data to be encoded;
accessing, via the data encoder, a reference data conversion function which determines a mapping between a set of reference
digital code values and a set of reference levels based on contrast sensitivity of human vision;

based on the receiving and the accessing, encoding, via the data encoder, the received image data into reference encoded image
data; and

outputting, via the data encoder, the reference encoded image data,
wherein:
color component values in the received image data are represented by the set of reference digital code values, the reference
digital code values having a bit depth of 10 or 12 bits,

at least a lowest three and a highest three reference digital code values are excluded from the mapping,
the mapping is based at least in part on a functional model of:

 and
n, m, c1, c2, and c3 are predetermined values.
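The claim's functional model was an image lost from this extract. Given the parameter names n, m, c1, c2, c3 and the perceptual-quantizer lineage of this patent family, the model is presumably of the form below, offered as a reconstruction and not a quotation, with D a normalized reference digital code value and Y a normalized reference gray level:

```latex
D = \left( \frac{c_1 + c_2\,Y^{n}}{1 + c_3\,Y^{n}} \right)^{m}
```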

US Pat. No. 9,509,935

DISPLAY MANAGEMENT SERVER

Dolby Laboratories Licens...

1. A display management unit configured to provide a modified video signal substantially in real-time for display on a target
display, the display management unit in communications with the target display and a second target display over an electronic
distribution network, the electronic distribution network connected to both the target display and the second target display,
and the target display and the second target display having a first set of display characteristics and a second set of display
characteristics respectively, the display management unit configured to access information regarding the target display and
the second target display and at least one input video signal, the display management unit comprising:
a database interface configured to retrieve display characteristics corresponding to the information regarding the target
display from a display characteristics database, the target display in two-way electronic communications during the course
of video signal transmission with the display management unit over the electronic distribution network;

a mapping unit configured to map at least one of tone and color values from the at least one input video signal to corresponding
mapped values based at least in part on the retrieved display characteristics to produce the modified video signal;

wherein the mapping unit is configured to selectively map the at least one of tone and color values from the at least one
input video signal to corresponding mapped values based at least in part on locations of pixels corresponding to the at least
one of tone and color values in image frames of the input video signal;

wherein the display management unit is configured to access metadata characterizing at least one aspect of a creative intent
affecting how the video content embodied in the at least one input video signal ought to be displayed, wherein the mapping
unit is configured to map the at least one of tone and color values from the input video signal to corresponding mapped values
based at least in part on the metadata; and

wherein further the display management unit is configured to distribute the modified video signals substantially in real-time
to the target display over the electronic distribution network and further wherein the modified video signals are modified
in accordance with display information sent from the target display to the display management unit during the course of video
signal transmission.

US Pat. No. 9,348,082

ILLUMINATOR FOR REFLECTIVE DISPLAYS

Dolby Laboratories Licens...

1. A display, comprising an illuminator, the illuminator comprising:
a light guide having substantially transparent front and rear planar surfaces sized to overlap a viewing surface of the display
when the planar surfaces are substantially parallel and adjacent to the viewing surface;

a light source optically coupled to emit light into the light guide, the light source having an adjustable chromaticity;
a plurality of light redirecting structures distributed on the rear surface of the light guide, each one of the structures
shaped to redirect through the light guide toward the viewing surface light rays which encounter the structures;

an optical sensor having an output operable to provide an output signal representative of a chromaticity of ambient light;
and

a controller configured to adjust the chromaticity of the light source in response to the output signal of the optical sensor;
wherein:
most light rays which are emitted into the light guide by the light source and which do not encounter any of the structures
are confined within the light guide by total internal reflection;

most light rays which are emitted into the light guide by the light source and which encounter any of the structures are redirected
through the light guide toward the viewing surface and illuminate the display; and

the structures are formed within the light guide.

US Pat. No. 9,338,389

METHOD AND SYSTEM FOR VIDEO EQUALIZATION

Dolby Laboratories Licens...

1. A video equalization method, including the step of:
performing equalization on input video to generate equalized video, such that a sequence of images determined by the equalized
video have dynamic range that is constant to a predetermined degree, where the input video includes high dynamic range video
and standard dynamic range video, the images include at least one image determined by the high dynamic range video and at
least one image determined by the standard dynamic range video, the equalization is performed with a common anchor point for
the input video and the equalized video, and the equalization is performed such that the images determined by the equalized
video have at least substantially the same average luminance as images determined by the input video; and

wherein the high dynamic range video is encoded with code values having a full range, and the equalization includes a step
of mapping code values of the standard dynamic range video to code values in a subrange of the full range of code values that
encode the high dynamic range video, such that the mapping expands the dynamic range of the standard dynamic range video.
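The subrange mapping in the final limitation can be sketched as a linear re-scaling of SDR code values into a slice of the HDR code range; all endpoint values below are illustrative assumptions:

```python
def map_sdr_code(sdr_code, sdr_max=255, hdr_lo=64, hdr_hi=512):
    """Map an SDR code value into the subrange [hdr_lo, hdr_hi] of the full
    HDR code range, expanding the standard-dynamic-range signal as claimed."""
    t = sdr_code / sdr_max
    return round(hdr_lo + t * (hdr_hi - hdr_lo))
```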

US Pat. No. 9,294,771

QUANTIZATION CONTROL FOR VARIABLE BIT DEPTH

Dolby Laboratories Licens...

1. A method of processing an encoded bitstream of digital symbols representing a moving image, the encoded bitstream being entropy
encoded by an encoder, the encoder identifying a bit depth M, the method comprising:
receiving, by a decoder, the bitstream, the bitstream including a quantization parameter QP, quantized digital symbols, and
data identifying the bit depth M; and

determining a quantization step size QSM for the bit depth M, the quantization step size QSM equaling 2^(M−8)*QS8 when the bit depth M is greater than 8,

wherein QS8 is a quantization step size for the bit depth of 8 according to the quantization parameter QP, and

wherein QS8 is proportional to 2^(QP/6).
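The step-size rule in this claim is fully specified up to the proportionality constant; taking that constant to be 1 gives:

```python
def quant_step(qp, bit_depth):
    """QS8 is proportional to 2^(QP/6) (constant assumed to be 1 here);
    for bit depth M > 8 the claim scales it by 2^(M-8)."""
    qs8 = 2.0 ** (qp / 6.0)
    return (2.0 ** (bit_depth - 8)) * qs8 if bit_depth > 8 else qs8
```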

US Pat. No. 9,190,014

DATA TRANSMISSION USING OUT-OF-GAMUT COLOR COORDINATES

Dolby Laboratories Licens...

1. A method for encoding additional information in image data by a video encoder, the image data comprising pixel values defining
a plurality of in-gamut points within a gamut for a plurality of pixels, the method comprising:
selecting one or more of the pixels; and
mapping one or more pixel values of each of the selected pixels to one or more corresponding mapped values;
wherein at least one of the one or more mapped values defines an out-of-gamut point and at least one of the one or more mapped
values corresponds to one or more bits of the additional information;

wherein at least one of the one or more mapped values defining the out-of-gamut point corresponds to at least one of the one
or more pixel values;

wherein mapping the at least one pixel value mapped to the at least one mapped value defining the out-of-gamut point comprises
copying a plurality of lower order bits of the at least one pixel value to the at least one mapped value.
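The embedding described by the last two limitations, a mapped value that is out of gamut yet carries the pixel's lower-order bits, can be sketched as follows; the bit width and the out-of-range prefix are hypothetical, not from the patent:

```python
def map_out_of_gamut(pixel_value, n_low=4, oog_prefix=0x300):
    """Copy the n_low lower-order bits of the pixel value into a mapped
    value whose upper bits (oog_prefix) place it outside the legal gamut;
    the copied bits are the additional information a decoder recovers."""
    low_bits = pixel_value & ((1 << n_low) - 1)
    return oog_prefix | low_bits
```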

US Pat. No. 9,538,176

PRE-PROCESSING FOR BITDEPTH AND COLOR FORMAT SCALABLE VIDEO CODING

Dolby Laboratories Licens...

1. A method of processing input video data comprising:
receiving a first input of video data;
generating a second input of video data by tone mapping and color formatting the first input of video data, wherein the second
input of video data is generated based on the first input and has a lower dynamic range than the first input;

pre-processing the first input video data with a first Motion Compensated Temporal Filter (MCTF) to generate a first pre-processed
signal;

pre-processing the second input video data with a second Motion Compensated Temporal Filter (MCTF) to generate a second pre-processed
signal;

encoding the second pre-processed signal using a base layer encoder to generate a base layer stream; and
encoding the first pre-processed signal using an enhancement layer encoder to generate an enhancement layer stream.

US Pat. No. 9,426,300

MATCHING REVERBERATION IN TELECONFERENCING ENVIRONMENTS

Dolby Laboratories Licens...

1. A method of matching acoustics in a telecommunications system, comprising:
receiving, in a near end environment, a first audio signal from a far end environment and a second audio signal from the near
end environment;

determining acoustic parameters for at least one of the first audio signal and the second audio signal, wherein the acoustic
parameters correspond to at least one of the far end environment for the first audio signal and the near end environment for
the second audio signal;

performing filtering on at least one of the first audio signal and the second audio signal in order to match the acoustic
parameters of at least one of the first audio signal and the second audio signal to acoustic parameters of a reference environment;
and

outputting in the near end environment an output signal including the filtered signal,
wherein the reference environment corresponds to one of the far end environment, the near end environment, and a third environment
that differs from the far end environment and the near end environment.

US Pat. No. 9,412,383

HIGH FREQUENCY REGENERATION OF AN AUDIO SIGNAL BY COPYING IN A CIRCULAR MANNER

Dolby Laboratories Licens...

1. A method for generating a reconstructed audio signal having a baseband portion and a highband portion, the method comprising:
deformatting an encoded audio signal into a first part and a second part;
obtaining a decoded baseband audio signal by decoding the first part, wherein the first part includes spectral components
of the baseband portion and does not include spectral components of the highband portion, wherein the number of the spectral
components of the baseband portion may vary dynamically;

extracting, from the second part, a noise parameter and an estimated spectral envelope of the highband portion;
obtaining a plurality of subband signals by filtering the decoded baseband audio signal;
generating a high-frequency reconstructed signal by copying in a circular manner a number of consecutive subband signals of
the plurality of subband signals;

obtaining an envelope adjusted high-frequency signal by adjusting, based on the estimated spectral envelope of the highband
portion, a spectral envelope of the high-frequency reconstructed signal, wherein a frequency resolution of the estimated spectral
envelope is adaptive;

generating a noise component based on the noise parameter, wherein the noise parameter indicates a level of noise contained
in the highband portion;

obtaining a combined high-frequency signal by adding the noise component to the envelope adjusted high-frequency signal; and
obtaining a time-domain reconstructed audio signal by combining the decoded baseband audio signal and the combined high-frequency
signal;

wherein the method is implemented by an audio decoding device comprising one or more hardware elements.
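The distinctive step, "copying in a circular manner a number of consecutive subband signals", is a wrap-around copy of the baseband subbands into the highband slots:

```python
def copy_circular(baseband_subbands, n_high):
    """Fill n_high high-frequency subbands by copying consecutive baseband
    subbands, wrapping back to the first when the baseband is exhausted."""
    n = len(baseband_subbands)
    return [baseband_subbands[i % n] for i in range(n_high)]
```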

US Pat. No. 9,324,250

HIGH DYNAMIC RANGE DISPLAYS COMPRISING MEMS/IMOD COMPONENTS

Dolby Laboratories Licens...

1. A display system comprising:
a backlight source, said backlight source providing light into an optical path;
a first modulator, receiving light from said backlight source and modulating said light on said optical path, wherein the
first modulator is one of a group, the group consisting of: a MEMS modulator and an IMOD modulator;

a second modulator, receiving light from said first modulator, further modulating the light from said first modulator, and
transmitting said light;

a controller, said controller inputting image data to be rendered upon said display system and sending signals to said first
modulator and said second modulator.

US Pat. No. 9,247,269

INTERPOLATION OF VIDEO COMPRESSION FRAMES

Dolby Laboratories Licens...

1. An apparatus comprising a video bitstream stored on one or more non-transitory machine-readable media, the video bitstream
characterized by:
picture areas of predicted and bidirectional predicted frames, wherein at least one picture area of a bidirectional predicted
frame is in a compressed format, and wherein the compressed format comprises:

a signal indicating an interpolative motion vector prediction mode for the at least one picture area based on one or more
referenceable frames;

a signal indicating a pixel interpolation mode using an unequal pixel weighting of the one or more referenceable frames; and
two or more pixel weight values representing the unequal pixel weighting.

US Pat. No. 9,202,438

IMAGE FORMATS AND RELATED METHODS AND APPARATUSES

DOLBY LABORATORIES LICENS...

1. A method for displaying an image on a target display comprising:
receiving image data at the target display, the image data comprising a predetermined image value of the image data corresponding
to a predetermined mid-tone value displayed on a reference display; and

mapping the image data to the target display so that the mid-tone value when viewed on the target display matches the predetermined
mid-tone value when viewed on the reference display,

wherein a dynamic range of the target display differs from a dynamic range of the reference display,
wherein the mapping sets a slope of a mid-tone region of a response function when viewed on the target display to match a
corresponding mid-tone region of a response function of the reference display, and

wherein the mapping comprises a mapping function given by:
where LOUT is an output of the mapping, LIN is an input of the mapping, and c1, c2, c3, and n are numeric parameters.
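The mapping function itself was an image lost from this extract. With parameters c1, c2, c3, and n and the mid-tone slope matching described above, it is presumably a sigmoid of the form below, offered as a reconstruction and not a quotation:

```latex
L_{OUT} = \frac{c_1 + c_2\,L_{IN}^{\,n}}{1 + c_3\,L_{IN}^{\,n}}
```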

US Pat. No. 9,135,864

SYSTEMS AND METHODS FOR ACCURATELY REPRESENTING HIGH CONTRAST IMAGERY ON HIGH DYNAMIC RANGE DISPLAY SYSTEMS

Dolby Laboratories Licens...

1. A dual-panel display system comprising:
a backlight;
a first image-generating panel;
a second contrast-improving panel;
a control module for selecting a codeword (CW) pair for driving said first image-generating panel and said second contrast-improving
panel respectively according to input image data; and

further wherein said control module is capable of selecting the CW pair to improve final image rendering presented to a viewer.

US Pat. No. 9,117,440

METHOD, APPARATUS, AND MEDIUM FOR DETECTING FREQUENCY EXTENSION CODING IN THE CODING HISTORY OF AN AUDIO SIGNAL

Dolby International AB, ...

1. A method for detecting frequency extension coding in the coding history of an audio signal, the method comprising
providing a plurality of subband signals in a corresponding plurality of subbands comprising low and high frequency subbands,
the plurality of subband signals generated using a filter bank comprising a plurality of filters; wherein the plurality of
subband signals corresponds to a time/frequency domain representation of the audio signal;

determining a degree of relationship between subband signals in the low frequency subbands and subband signals in the high
frequency subbands; wherein the degree of relationship is determined based on the plurality of subband signals;

wherein determining the degree of relationship comprises determining a set of cross-correlation values, wherein the set of cross-correlation
values comprises a subset of elements of a K x K similarity matrix, wherein the K x K similarity matrix comprises cross-correlation
values corresponding to all pairs of subband signals from the plurality of subband signals;

wherein determining a cross-correlation value comprises determining an average over time of products of corresponding samples
of a first and a second subband signal at zero time lag; and
determining frequency extension coding history if the degree of relationship is greater than a relationship threshold.
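
The detection steps of this claim can be sketched as follows; the low/high subband split point, the averaging over the cross block, and all names are illustrative assumptions, not the patented algorithm.

```python
import numpy as np

def detect_frequency_extension(subbands, threshold):
    """Illustrative sketch of the claimed detection.

    subbands: K x T array, one subband signal per row.
    A K x K similarity matrix holds zero-lag cross-correlations (time
    averages of sample products); a degree of relationship between low
    and high subbands is compared against a threshold.
    """
    K, T = subbands.shape
    # Zero-lag cross-correlation: average over time of sample products.
    similarity = subbands @ subbands.T / T
    # Degree of relationship: a subset pairing low (first half) and high
    # (second half) subbands -- the split point is an assumption.
    half = K // 2
    degree = np.abs(similarity[:half, half:]).mean()
    return degree > threshold
```

When high subbands are copies of low ones (as spectral band replication produces), the cross block of the similarity matrix is large and the test fires.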

US Pat. No. 9,084,070

SYSTEM AND METHOD FOR AUTOMATIC SELECTION OF AUDIO CONFIGURATION SETTINGS

Dolby Laboratories Licens...

1. An apparatus including a circuit for automatically adjusting an output of an audio device, the circuit comprising:
a memory circuit that is configured to store configuration information, wherein the configuration information includes one
of balance information and surround sound information;

a detector circuit that is configured to detect environment information related to an environment in which the apparatus is
present;

a control circuit that is configured to select selected configuration information from the memory circuit according to the
environment information detected by the detector circuit; and

an output circuit that is configured to receive an input audio signal and the selected configuration information, that is
configured to modify the input audio signal according to the selected configuration information, and is configured to generate
an output audio signal corresponding to the input audio signal as modified according to the selected configuration information.

US Pat. No. 9,077,910

MULTI-FIELD CCD CAPTURE FOR HDR IMAGING

Dolby Laboratories Licens...

1. A method comprising:
generating first measurable imagery responses of a scene, the first measurable imagery responses being obtained by exposing
a first field of a multi-field image sensor of an image processing system for a first time duration;

generating second measurable imagery responses of the scene, the second measurable imagery responses at least in part being
obtained by exposing a second field of the multi-field image sensor of the image processing system for a second time duration
that contains the first time duration;

wherein the second measurable imagery responses further comprise measurable imagery responses in the first field for a time
duration equal to a difference between the first time duration and the second time duration;

generating, based on the first measurable imagery responses only, a first image of the scene;
generating, based on both (1) the first measurable imagery responses and (2) the second measurable imagery responses, a second
image of the scene;

combining the first image and the second image into an output image of the scene, the output image having a higher dynamic
range than either of the first image and the second image.
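
The exposure-combining step can be sketched as below; the saturation-based blend rule, the clip level, and the names are assumptions for illustration, not the patented method.

```python
import numpy as np

def combine_exposures(short_img, long_img, t_short, t_long, clip=1.0):
    """Hedged sketch: merge a short- and a long-exposure image of the
    same scene into one higher-dynamic-range image.
    """
    # Normalize each image by its exposure time to a common radiance scale.
    short_rad = short_img / t_short
    long_rad = long_img / t_long
    # Prefer the long exposure except where it saturates.
    saturated = long_img >= clip
    return np.where(saturated, short_rad, long_rad)
```

The short exposure supplies highlight detail where the long exposure clips, while the long exposure supplies the cleaner shadows — the source of the extended dynamic range.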

US Pat. No. 9,818,433

VOICE ACTIVITY DETECTOR FOR AUDIO SIGNALS

Dolby Laboratories Licens...

1. A method for determining voice activity in an audio signal, the method comprising:
receiving a frame of an input audio signal, the input audio signal having a sample rate;
dividing the frame into a plurality of subbands based on the sample rate, the plurality of subbands including at least a lowest
subband and a highest subband;

filtering the lowest subband with a linear filter to reduce an energy of the lowest subband;
estimating a noise level for at least some of the plurality of subbands;
calculating a signal to noise ratio value for at least some of the plurality of subbands; and
determining a speech activity level based at least in part on an average of the calculated signal to noise ratio values and
an average of an energy of at least some of the plurality of subbands,

wherein the method is performed with one or more computing devices.
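
The final decision step — combining an average subband SNR with an average subband energy — can be sketched as follows; the weighting, the dB scale, and the names are assumptions, not the patented rule.

```python
import numpy as np

def speech_activity_level(subband_energies, noise_levels, w=0.5):
    """Illustrative sketch of the claimed decision: combine the average
    per-subband SNR with the average subband energy into one score.
    The weight w is an assumed free parameter.
    """
    snr = 10.0 * np.log10(np.maximum(subband_energies, 1e-12) /
                          np.maximum(noise_levels, 1e-12))
    return w * snr.mean() + (1.0 - w) * subband_energies.mean()
```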

US Pat. No. 9,420,109

CLUSTERING OF AUDIO STREAMS IN A 2D / 3D CONFERENCE SCENE

Dolby Laboratories Licens...

1. A conference controller configured to place L upstream audio signals associated with respective L conference participants
within a 2D or 3D conference scene to be rendered to a listener; L being an integer, L>1; wherein the conference controller
is configured to
set up a X-point conference scene with X different spatial talker locations within the conference scene, X being an integer,
X>0;

assign the L upstream audio signals to the X talker locations;
determine N downstream audio signals from the L assigned upstream audio signals based on the L upstream audio signals, N being
an integer, N
determining a degree of activity of the L upstream audio signals; and
initiating a mixing of at least two upstream audio signals having the at least two lowest degrees of activity, thereby yielding
a first of the N downstream audio signals;

determine N updated talker locations for the N downstream audio signals, respectively, based on the talker locations which
the L upstream audio signals are assigned to;

assign the first downstream audio signal to a first of the N updated talker locations; and
generate metadata identifying the updated talker locations and enabling an audio processing unit to generate a spatialized
audio signal based on the N downstream audio signals; wherein when rendering the spatialized audio signal to the listener,
the listener perceives the N downstream audio signals as coming from the N updated talker locations, respectively.
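
The core clustering step — mixing the two least active upstream signals into one downstream signal — can be sketched as below; the list-of-samples data model and sample-wise-sum mix are illustrative assumptions.

```python
def mix_least_active(streams, activity):
    """Sketch of the claimed clustering step.

    streams: list of per-participant sample lists; activity: matching
    list of activity scores.  The two least active streams are mixed
    (sample-wise sum) into the first downstream signal.
    """
    order = sorted(range(len(streams)), key=lambda i: activity[i])
    a, b = order[0], order[1]
    mixed = [x + y for x, y in zip(streams[a], streams[b])]
    kept = [s for i, s in enumerate(streams) if i not in (a, b)]
    return [mixed] + kept
```

This is how L upstream signals collapse to N < L downstream signals while the most active talkers keep distinct spatial locations.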

US Pat. No. 9,420,311

CODING AND DECODING OF INTERLEAVED IMAGE DATA

Dolby Laboratories Licens...

1. A method to decode an encoded video signal with at least one processor, the method comprising:
decoding, by the at least one processor, the encoded video signal, the encoded video signal comprising samples from more than
one image in a frame, the frame comprising interleaved groups of samples from a first image and a second image, and the decoding
comprising:

identifying a de-interleaving format from a plurality of interleaving formats for the interleaved groups, the de-interleaving
format identified by a de-interleaving identifier encoded in the video signal;

determining, based on the de-interleaving identifier, that the interleaved groups of samples from the first image and the
second image are top-bottom interleaved;

identifying at least one region of the frame for de-interleaving, the at least one region identified by a sub-sampling arrangement
map identifier encoded in the video signal; and

determining, based on the sub-sampling arrangement map identifier, that the first image and the second image are quincunx-sampled.

US Pat. No. 9,282,419

AUDIO PROCESSING METHOD AND AUDIO PROCESSING APPARATUS

Dolby Laboratories Licens...

1. An audio processing method comprising:
transforming a mono-channel audio signal into a plurality of first subband signals;
estimating proportions of a desired component and a noise component in each of the subband signals;
generating second subband signals corresponding respectively to a plurality of channels from each of the first subband signals,
wherein each of the second subband signals comprises a first component and a second component obtained by assigning a spatial
hearing property and a perceptual hearing property different from the spatial hearing property to the desired component and
the noise component in the corresponding first subband signal respectively, based on a multi-dimensional auditory presentation
method; and

transforming the second subband signals into signals for rendering with the multi-dimensional auditory presentation method.

US Pat. No. 9,247,246

COMPLEXITY SCALABLE MULTILAYER VIDEO CODING

Dolby Laboratories Licens...

1. A multi-layer video system, comprising:
a first layer encoder that encodes a first layer of video information;
at least one second layer encoder that encodes at least one second layer of video information; and
an encoder side reference processing unit (RPU) that:
estimates one or more of an optimal filter or an optimal process that applies on a reference picture that is reconstructed
from the first video information layer, and

processes a current picture of the second video information layer, based on a correlation between the first layer reconstructed
reference picture and the current picture;

wherein the correlation relates to a complexity characteristic that scaleably corresponds to the first video information layer
reconstructed reference picture and the second video information layer current picture, the complexity characteristic comprises
one of a plurality of complexity levels, the at least one second video information layer comprises a plurality of levels,
the encoder side RPU sends processed pictures selectively to one level of the plurality thereof and supports a plurality of
complexity modes; and

wherein a scalable video bitstream is outputted, the video bitstream scalable in relation to one or more of:
spatial scalability;
temporal scalability;
quality scalability;
bit-depth scalability;
aspect ratio scalability;
view scalability;
chroma sampling scalability; or
color gamut scalability.

US Pat. No. 9,222,629

N-MODULATION FOR WIDE COLOR GAMUT AND HIGH BRIGHTNESS

Dolby Laboratories Licens...

1. A method comprising:
deriving, based on an image frame, a plurality of color-specific frames, each specifying luminance levels of a different color
in a plurality of colors for pixels in a display unit on an individual pixel basis;

outputting one or more first portions of a first color-specific frame, in the plurality of color-specific frames, to the display
unit, the first color-specific frame specifying luminance levels of a first color in the plurality of colors for the pixels
in a display unit on an individual pixel basis; and

turning on, for a first time interval, one or more first light sources, the one or more first light sources illuminating one
or more first portions of the display unit with light of the first color;

wherein each pixel of the display unit comprising at least a part of a black-and-white light valve from two or more stacked
monochromatic liquid crystal display panels, and

wherein the method is performed by one or more computing devices.

US Pat. No. 9,189,995

SYSTEMS AND METHODS FOR CONTROLLING DRIVE SIGNALS IN SPATIAL LIGHT MODULATOR DISPLAYS

Dolby Laboratories Licens...

1. A method for processing control values for a backlight and drive signals for a display modulation layer, the backlight
having an array of light sources, the method comprising, for each light source:
receiving image data for a new frame;
determining a new control value for the backlight based at least in part on the new frame of image data;
determining a difference between the new control value for the backlight and an old control value for the backlight for an
old frame of image data;

generating an intermediate control signal for the backlight based at least in part on the difference in control values and
a desired ramping pattern, if the difference between the old control value and the new control value is greater than a threshold;

generating a blanking pattern for the backlight, the blanking pattern based in part on the identification of motion over a
set of image frames; and

outputting a final control signal to the backlight based on the intermediate control signal and the blanking pattern.
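
The threshold-and-ramp logic of this claim can be sketched as follows; the linear ramp shape is an assumption (the claim only requires "a desired ramping pattern"), and the names are illustrative.

```python
def backlight_control(old_value, new_value, threshold, steps):
    """Hedged sketch: if the frame-to-frame change in the backlight
    control value exceeds a threshold, ramp toward the new value over
    several intermediate steps instead of jumping.
    """
    diff = new_value - old_value
    if abs(diff) <= threshold:
        return [new_value]
    return [old_value + diff * (k + 1) / steps for k in range(steps)]
```

Ramping large backlight changes over several sub-frame updates suppresses visible flicker that an abrupt step would cause.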

US Pat. No. 9,171,549

AUTOMATIC CONFIGURATION OF METADATA FOR USE IN MIXING AUDIO PROGRAMS FROM TWO ENCODED BITSTREAMS

Dolby Laboratories Licens...

1. A method, performed by one or more devices, for encoding audio signals, wherein the method comprises:
receiving one or more main audio signals that represent a main audio program and receiving one or more associated audio signals
that represent an associated audio program;

encoding the one or more main audio signals to generate a main encoded audio signal and encoding the one or more associated
audio signals to generate an associated encoded audio signal;

generating audio mixing metadata in response to estimated loudness of the main audio program and estimated loudness of the
associated audio program, wherein one or more audio signals to be decoded from the main encoded audio signal and one or more
audio signals to be decoded from the associated audio signal are to be mixed according to the audio mixing metadata; wherein
the audio mixing metadata specify an attenuation level for the one or more audio signals to be decoded from the main audio
program prior to mixing; and

assembling the main encoded audio signal, the associated encoded audio signal and the audio mixing metadata into an output
encoded signal.

US Pat. No. 9,136,810

AUDIO GAIN CONTROL USING SPECIFIC-LOUDNESS-BASED AUDITORY EVENT DETECTION

Dolby Laboratories Licens...

1. An audio processing method to generate a second signal by applying dynamic gain modifications to a first signal, the method
comprising:
detecting changes in loudness with respect to time in the first signal,
identifying as auditory event boundaries changes greater than a threshold in loudness with respect to time in the first signal,
wherein an audio segment between consecutive boundaries constitutes an auditory event, and

applying a dynamic gain modification to the first signal based on said auditory event to generate the second signal;
wherein the audio processing method is performed by one or more processors.
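
The boundary-detection step can be sketched as below; the per-instant loudness list and its units are illustrative assumptions.

```python
def auditory_event_boundaries(loudness, threshold):
    """Sketch of the claimed boundary detection: mark a boundary
    wherever loudness changes by more than `threshold` between
    consecutive analysis instants.  Segments between consecutive
    boundaries are the auditory events that gate gain changes.
    """
    return [i for i in range(1, len(loudness))
            if abs(loudness[i] - loudness[i - 1]) > threshold]
```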

US Pat. No. 9,066,070

NON-LINEAR VDR RESIDUAL QUANTIZER

Dolby Laboratories Licens...

1. A method for the coding of High Dynamic Range (HDR) images, the method comprising:
accessing an input HDR image and an input Standard Dynamic Range (SDR) image, wherein the input HDR and SDR images represent
the same scene but at a different dynamic range;

generating a predicted HDR image based on the input SDR image and the input HDR image;
generating a residual HDR input image based on the input HDR image and the predicted HDR image;
for limiting the dynamic range of the residual HDR input image, applying a non-linear quantization to the residual HDR input
image to output a quantized residual image; and

coding the quantized residual image using a residual encoder,
wherein said residual HDR input image has a bit-depth that is higher than the bit-depth supported by the residual encoder,
wherein the method step of non-linear quantization comprises:

transforming pixel values of the residual HDR input image to corresponding quantized pixel values according to a non-linear
transfer function; said transfer function characterized by one or more function parameters, said function parameters comprising
an offset parameter and an output dynamic range parameter representative for a desired maximum dynamic range of the non-linear
quantization;
wherein said transfer function has a mid-range slope controlled by one or more of the function parameters, and wherein said
parameters of the non-linear transfer function are set by a method comprising:
receiving from the residual encoder quantizer information related to one or more of the function parameters, the offset parameter,
and the output dynamic range parameter; and

adjusting said one or more parameters based on said received quantizer information, wherein the non-linear transfer function
comprises computing

wherein O denotes the offset parameter, Lmax denotes the output dynamic range parameter, x is a residual HDR input pixel value, c(x) is the quantized output value, xmax is the maximum absolute value of the residual HDR input pixel values, and ? denotes a function parameter controlling the mid-range slope.
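
The claim's transfer function equation is not reproduced in the text, and the Greek symbol for the slope parameter did not survive extraction. Purely as an illustration, a sign-preserving power-law form with the named parameters O, Lmax, xmax, and a slope exponent (here called alpha) is sketched below — an assumption, not the patented function.

```python
import numpy as np

def nlq(x, offset, l_max, x_max, alpha):
    """Hypothetical non-linear quantizer transfer function.

    Assumed form only: `offset` (O) recenters the output, `l_max`
    bounds the output dynamic range, `x_max` normalizes the residual,
    and `alpha` controls the mid-range slope.
    """
    return offset + l_max * np.sign(x) * (np.abs(x) / x_max) ** alpha
```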

US Pat. No. 9,602,940

AUDIO PLAYBACK SYSTEM MONITORING

Dolby Laboratories Licens...

1. A method for monitoring audience reaction to an audiovisual program played back by a playback system including a set of
M speakers in a playback environment, where M is a positive integer, wherein the program has a soundtrack comprising M channels,
said method including steps of:
(a) playing back the audiovisual program in the presence of an audience in the playback environment, including by emitting
sound, determined by the program, from the speakers of the playback system in response to driving each of the speakers with
a speaker feed for a different one of the channels of the soundtrack;

(b) obtaining audio data indicative of at least one microphone signal generated by at least one microphone in the playback environment
during emission of the sound in step (a); and

(c) processing the audio data to extract audience data from said audio data, and analyzing the audience data to determine
audience reaction to the program, wherein the audience data are indicative of audience content indicated by the microphone
signal, and the audience content comprises sound produced by the audience during playback of the program.

US Pat. No. 9,503,757

FILTERING FOR IMAGE AND VIDEO ENHANCEMENT USING ASYMMETRIC SAMPLES

Dolby Laboratories Licens...

1. A method for processing samples of an image or sequence of images, the method comprising:
encoding a first set of samples with a first encoding process;
encoding a second set of samples with a second encoding process;
filtering reconstructed samples of the first set based on information associated with at least one of the first encoding process
and the second encoding process; and

combining the filtered reconstructed samples of the first set with reconstructed samples of the second set to obtain a new
representation of the reconstructed samples of the second set; and

wherein the filtering comprises a multi-hypothesis filtering and a confidence value comprising a difference between each filtered
reconstructed sample of the first set and samples lying within a filter support;

wherein the multi-hypothesis filtering comprises generating, for each reconstructed sample of the first set, a plurality of
different filter outputs; and

the combining includes selecting one of the different filter outputs as the filtered reconstructed samples of the first set;
wherein the selecting includes selecting one of the different filter outputs as the filtered reconstructed samples of the
first set based on the confidence value in relation to a threshold.
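
The confidence-gated selection among multiple filter hypotheses can be sketched as follows; treating a lower difference-based confidence value as better, and the tie-breaking fallback, are assumptions.

```python
def select_hypothesis(filter_outputs, confidences, threshold):
    """Sketch of the claimed selection: among several filter outputs
    for one reconstructed sample, keep the first whose confidence
    value passes the threshold, else fall back to the most confident.
    """
    for out, conf in zip(filter_outputs, confidences):
        if conf <= threshold:  # lower difference-based value = more confident
            return out
    best = min(range(len(confidences)), key=lambda i: confidences[i])
    return filter_outputs[best]
```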

US Pat. No. 9,349,378

HAPTIC SIGNAL SYNTHESIS AND TRANSPORT IN A BIT STREAM

Dolby Laboratories Licens...

1. A method comprising:
generating a low frequency information audio signal based on a multichannel audio signal;
synthesizing a haptic signal from the low frequency information audio signal;
wherein synthesizing the haptic signal comprises synthesizing, from the low frequency information audio signal, one or more
haptic effect parameters;

providing the one or more haptic effect parameters to a tool for adjusting the one or more haptic effect parameters;
receiving haptic effect metadata reflecting one or more user adjustments made using the tool to at least one of the one or
more haptic effect parameters;

embedding the haptic signal and the haptic effect metadata in a multichannel audio codec bit stream that encodes at least
the multichannel audio signal;

wherein the method is performed by a computing device.

US Pat. No. 9,338,470

GUIDED COLOR TRANSIENT IMPROVEMENT FILTERING IN VIDEO CODING

Dolby Laboratories Licens...

1. A method for guided filtering by an encoder, the method comprising:
receiving a target image to be encoded by an encoder and a guide image, wherein both the target image and the guide image
represent similar scenes and each comprises a first color component and a second color component;

encoding the target image with the encoder to generate a coded image;
decoding the coded image with a decoder to generate a decoded image, the decoded image comprising a decoded first color component
and a decoded second color component;

selecting a color transient improvement (CTI) filter to filter pixels of the decoded image to generate an output color component
image;

computing filtering coefficients for the color transient improvement (CTI) filter, wherein the filtering coefficient computation
is based on minimizing an error measurement between pixel values of the output color component image and corresponding pixel
values of the second color component in the guide image, wherein the CTI filter comprises a first set of filtering coefficients
and a second set of filtering coefficients, wherein generating the output color component image comprises combining the result
of filtering the first color component of the decoded image with the first set of filtering coefficients with the result of
filtering the second color component of the decoded image with the second set of filtering coefficients; and

transmitting the CTI filtering coefficients to a decoder.
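
The coefficient computation — minimizing the error between the filtered decoded components and the guide image's second component — is a least-squares problem. The sketch below reduces each set of filtering coefficients to a single scalar per color component for brevity; the real CTI filter has spatial taps, so this is an illustration of the optimization, not the patented filter.

```python
import numpy as np

def cti_coefficients(dec_c1, dec_c2, guide_c2):
    """Hedged sketch: solve, in the least-squares sense, for one weight
    per decoded color component so that w1*dec_c1 + w2*dec_c2
    approximates the guide image's second color component.
    """
    A = np.column_stack([dec_c1.ravel(), dec_c2.ravel()])
    w, *_ = np.linalg.lstsq(A, guide_c2.ravel(), rcond=None)
    return w  # [first-set, second-set] filtering coefficients
```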

US Pat. No. 9,264,690

CONVERSION OF INTERLEAVED DATA SETS, INCLUDING CHROMA CORRECTION AND/OR CORRECTION OF CHECKERBOARD INTERLEAVED FORMATTED 3D IMAGES

Dolby Laboratories Licens...

1. A method, comprising the step of correcting a sample error in at least one pixel of a first image in a video stream, wherein
the video stream comprises a checkerboard arrangement of two views of 3D video content, said video stream comprising a first
colorspace and the sample error comprises color bleeding from a pixel of a second image in the video stream, wherein the color
bleeding comprises bleeding induced by an up-conversion of a frame containing the first image and the second image, the method
comprising:
performing a color conversion to a second colorspace;
applying a correction schema by which at least one incorrect color sample in the first image is replaced by a replacement
color sample derived from at least one correct color sample of the same image;

wherein the correction schema comprises a schema retrieved from a database of schemas; and
wherein the step of correcting comprises substituting an incorrect color sample in the first image by a correct neighboring
color sample of the same image.

US Pat. No. 9,224,320

PROJECTION DISPLAY PROVIDING ADDITIONAL MODULATION AND RELATED METHODS

Dolby Laboratories Licens...

1. A projection display system comprising:
a projector comprising a projection lens arranged to focus an image from the projector onto a viewing screen, the viewing
screen forming the projected image that is viewed by users;

a spatial modulator, the spatial modulator comprising controllable elements, disposed between the projection lens and the
viewing screen, wherein the spatial modulator is substantially disposed in a plane that is not at the focus of the lens and
located at a position such that the effect of the controllable elements of the spatial modulator is substantially blurred
at the viewing screen;

a controller connected to control transmissivity of a plurality of regions of the spatial modulator based directly or indirectly
on image data; and

wherein further the controller is capable of sending control signals to the spatial modulator to compensate for flare produced
by the projection lens.

US Pat. No. 9,113,165

QUANTIZATION CONTROL FOR VARIABLE BIT DEPTH

Dolby Laboratories Licens...

1. An encoder apparatus for encoding and storing an image as an encoded bitstream dependent on a bit depth M, the apparatus comprising:
at least one processor;
a non-transitory digital memory; and
the encoded bitstream stored on the non-transitory digital memory, the encoded bitstream including data identifying:
the bit depth M that is greater than 8;
a signed quantization parameter QP that determines a step size QSM for the bit depth M, wherein QSM equals 2^(M-8)*QS8; and

quantized digital symbols;
wherein QS8 is a step size for bit depth 8 that is a function of exponential mapping, and

wherein a syntax range of the signed quantization parameter QP extends to a negative value, and
wherein QS8 equals 2^(QP/6-L).
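
The step-size relations can be sketched as below. The exponent grouping in QS8 did not survive extraction cleanly; the reading 2**(QP/6 - L), following the usual double-every-6-QP rule, is an assumption.

```python
def step_size(qp, m, l_const):
    """Sketch of the claimed step-size relations (exponent grouping in
    QS8 is an assumed reconstruction):
      QS8 = 2**(qp/6 - l_const)       # exponential mapping of QP
      QSM = 2**(m - 8) * QS8          # scaling for bit depth m > 8
    """
    qs8 = 2.0 ** (qp / 6.0 - l_const)
    return (2.0 ** (m - 8)) * qs8
```

Scaling the step size by 2**(M-8) keeps quantization noise proportionally constant as the sample bit depth grows beyond 8.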

US Pat. No. 9,099,046

APPARATUS FOR PROVIDING LIGHT SOURCE MODULATION IN DUAL MODULATOR DISPLAYS

Dolby Laboratories Licens...

1. A display comprising:
a light source layer comprising light sources;
a display modulation layer comprising an array of modulation elements;
a controller configured to receive image data and to determine first drive signals for the light source layer based at least
in part on the image data, the light source layer configured to emit a first spatially varying light pattern in response to
the first drive signals;

a conversion layer interposed in an optical path between the light source modulation layer and the display modulation layer
and configured to receive the first spatially varying light pattern, the conversion layer comprising one or more phosphorescent
materials configured to cause a second spatially varying light pattern in response to receiving the first spatially varying
light pattern, the second spatially varying light pattern having a spectral distribution different from that of the first
spatially varying light pattern;

wherein the controller is configured to determine second drive signals for the modulation elements of the display modulation
layer based at least in part on the image data and estimated expected characteristics of the second spatially varying light
pattern made using a transfer function model relating the first spatially varying light pattern received at the conversion
layer to the second spatially varying light pattern affected by the phosphorescent materials in the conversion layer and received
at the display modulation layer;

wherein the controller is configured to determine an estimate of the expected characteristics of the second spatially varying
light pattern received at the display modulation layer based at least in part on the first drive signals and modified point
spread functions of the light sources, the modified point spread functions incorporating the transfer function of the phosphorescent
materials; and

wherein the conversion layer comprises a patterned plurality of regions, each region comprising a plurality of sub-regions,
and the plurality of sub-regions within each region comprise a red sub-region which emits light having a generally red central
wavelength, a green sub-region which emits light having a generally green central wavelength and a blue sub-region which emits
light having a generally blue central wavelength.

US Pat. No. 9,049,413

MULTIPLE STAGE MODULATION PROJECTOR DISPLAY SYSTEMS HAVING EFFICIENT LIGHT UTILIZATION

Dolby Laboratories Licens...

1. A multi-modulation projector display system, said display system comprising:
a light source, the light source capable of being modulated in luminance;
a controller;
a first modulator, said first modulator being illuminated by said light source and said first modulator comprising a plurality
of analog mirrors to modulate light from the light source;

a second modulator, said second modulator being illuminated by light from said first modulator and capable of modulating light
from said first modulator, and said second modulator comprising a plurality of mirrors;

said controller further comprising:
a processor;
a memory, said memory associated with said processor and said memory further comprising processor-readable instructions, such
that when said processor reads the processor-readable instructions, the instructions cause the processor to perform the following:

receiving image data, said image data comprising at least one highlight feature, the highlight feature representing a proper
subset of the area of the image to be rendered, the highlight feature further comprising a luminance not less than the highest
luminance of the remaining area of the image to be rendered;

sending control signals to said first modulator such that said first modulator may allocate a desired proportion of the light
from said light source during a desired sub-frame period of time onto said second modulator to form said highlight feature,
without substantially throwing away any excess light from the light source during the desired sub-frame period of time; and

sending control signals to said second modulator such that said desired proportion of the light from said light source is
modulated to form said highlight feature.

US Pat. No. 9,497,475

PIECEWISE CROSS COLOR CHANNEL PREDICTOR

Dolby Laboratories Licens...

1. In a decoder, a method for inter-layer prediction using a computer, the method comprising: accessing by the decoder a first
image comprising one or more color channels, the first image comprising a plurality of pixels, wherein at least one color
channel of the first image is segmented into two or more non-overlapping color channel segments; accessing metadata comprising
prediction parameters for a piece-wise cross-color channel (PCCC) prediction model, wherein for a color channel segment, a
predicted pixel value of a pixel of a predicted image in one color channel is expressed as a combination of at least the respective
pixel values for all color channels of the pixels within the predicted image having the same pixel coordinates as the pixel
of the first image; and generating a color channel segment of a second image based on the first image, the PCCC prediction
model, and the prediction parameters.

US Pat. No. 9,473,791

INTERPOLATION OF VIDEO COMPRESSION FRAMES

Dolby Laboratories Licens...

1. An apparatus comprising a bitstream stored on one or more non-transitory machine-readable media, the bitstream characterized
by:
data representing picture frames that include picture regions in a compressed format, wherein a portion of the data that represents
at least one picture region of the picture regions in the compressed format comprises:

a signal indicating whether a bidirectional prediction mode for the at least one picture region is used;
a signal indicating whether an interpolative motion vector prediction mode for the at least one picture region based on two
referenceable frames in the bitstream is used, wherein in the interpolative motion vector prediction mode, at least one motion
vector associated with the at least one picture region is determined at a decoder based on a motion vector associated with
the two referenceable frames;

a signal indicating whether a pixel interpolation mode using pixel weighting from the two or more referenceable frames in
the bitstream is used; and

two pixel weight values for the pixel weighting, wherein the pixel weight values are unequal.

US Pat. No. 9,445,199

METHOD AND APPARATUS FOR DETERMINING DOMINANT SOUND SOURCE DIRECTIONS IN A HIGHER ORDER AMBISONICS REPRESENTATION OF A SOUND FIELD

Dolby Laboratories Licens...

1. A method for determining dominant sound source directions in a Higher Order Ambisonics representation denoted HOA of a
sound field, said method comprising:
from a current time frame of HOA coefficients, estimating a directional power distribution with respect to dominant sound
sources;

from said directional power distribution and from an a-priori probability function for dominant sound source directions, computing
an a-posteriori probability function for said dominant sound source directions;

depending on said a-posteriori probability function and on dominant sound source directions for the previous time frame of
said HOA coefficients, searching and assigning dominant sound source directions for said current time frame of said HOA coefficients,

wherein said a-priori probability function is computed from a set of estimated sound source movement angles and from said
dominant sound source directions for the previous time frame of said HOA coefficients, and wherein said set of estimated sound
source movement angles is computed from said dominant sound source directions for the previous time frame of said HOA coefficients
and from dominant sound source directions for the penultimate time frame of said HOA coefficients.
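
The central Bayesian step — combining the directional power distribution with the a-priori probability function — can be sketched as follows; the discretized direction grid and the normalization are illustrative assumptions.

```python
import numpy as np

def posterior_directions(power, prior):
    """Sketch of the claimed step: treat the directional power
    distribution as a likelihood over candidate directions and
    combine it with the a-priori probabilities into an a-posteriori
    probability function.
    """
    post = power * prior
    return post / post.sum()
```

Directions that were both powerful in the current frame and plausible given the previous frame's tracked directions dominate the posterior.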

US Pat. No. 9,407,869

SYSTEMS AND METHODS FOR INITIATING CONFERENCES USING EXTERNAL DEVICES

Dolby Laboratories Licens...

1. A method comprising:
initiating, by a conference device, an ultrasound broadcast signal within proximity to an external device to establish a communication
pathway between said conference device and said external device;

generating and transmitting to said external device, by said conference device, a pilot sequence for synchronizing said communication
pathway between said conference device and said external device; and

receiving, by said conference device, conference information from said external device, said conference device using said
conference information to initiate a conference call to a plurality of external devices;

wherein said method further comprises the step of generating an acoustic signature;
wherein the step of generating an acoustic signature further comprises the steps of:
generating said pilot sequence;
pulse shaping said pilot sequence;
coding a data bit stream;
pulse shaping said data bit stream;
appending said pilot sequence at the beginning of a transmission;
modulating said appended bit stream with an ultrasonic carrier frequency; and
transmitting said modulated bit stream.
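
The listed signature-generation steps (code bits, pulse-shape, prepend the pilot, modulate on an ultrasonic carrier) can be sketched as follows; the rectangular pulse shape, carrier frequency, and symbol length are illustrative assumptions, not values from the patent:

```python
import numpy as np

def acoustic_signature(bits, fs=48000.0, fc=20000.0, sym_len=48):
    # Hypothetical sketch of the claimed steps: code a data bit stream
    # as +/-1 symbols, pulse-shape (rectangular here, for simplicity),
    # append the pilot sequence at the beginning of the transmission,
    # and modulate with an ultrasonic carrier frequency.
    pilot = np.ones(4 * sym_len)                          # pilot sequence
    symbols = np.repeat(2 * np.array(bits) - 1, sym_len)  # pulse shaping
    baseband = np.concatenate([pilot, symbols])           # pilot first
    t = np.arange(baseband.size) / fs
    return baseband * np.cos(2 * np.pi * fc * t)

sig = acoustic_signature([1, 0])
```

The receiver would correlate against the known pilot to synchronize before demodulating the data symbols.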

US Pat. No. 9,253,566

VECTOR NOISE CANCELLATION

Dolby Laboratories Licens...

1. A method comprising:
receiving a sample pair comprising a first input sample in a first input signal and a second input sample in a second input
signal, the first input sample being in a plurality of first input samples derived from the first input signal, and the second
input sample being in a plurality of second input samples derived from the second input signal;

wherein at least one of the first input signal or the second input signal comprises responses to one or more of physical force,
pressure, sound, electromagnetic wave, electric current, radiation, or light;

calculating (a) a first magnitude value based on the first input sample, and (b) a second magnitude value based on the second
input sample;

selecting, based on a plurality of thresholds and a magnitude-dependent value computed from the first magnitude value and
the second magnitude value, a specific selection region in a finite number of non-overlapping selection regions, wherein each
selection region in the finite number of non-overlapping selection regions is located in between two corresponding neighboring
thresholds in the plurality of thresholds;

using a weight factor that is fixed within the specific selection region to derive an intermediate sample as a weighted combination
of the first input sample and the second input sample;

determining a phase difference from complex-domain representations of the first input sample and the second input sample;
and

applying an amplification or attenuation to the intermediate sample to generate an output sample in an output signal, the
output sample being in a plurality of output samples in the output signal, the amplification or attenuation being monotonically
related to the phase difference;

wherein the method is performed by one or more processors comprised in one or more computing devices.
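
The claim's pipeline (magnitude-dependent region selection, a region-fixed mixing weight, then a phase-difference-dependent gain) can be sketched as below; the magnitude ratio, the linear phase-to-gain map, and the specific thresholds and weights are illustrative assumptions:

```python
import numpy as np

def cancel(x1, x2, thresholds, weights):
    # Hedged sketch of the claimed steps: a magnitude-dependent value
    # selects one of the non-overlapping regions between neighboring
    # thresholds; that region's fixed weight mixes the two samples;
    # a gain monotonically related to the phase difference then
    # amplifies or attenuates the intermediate sample.
    m1, m2 = abs(x1), abs(x2)
    r = m1 / (m1 + m2 + 1e-12)                    # magnitude-dependent value
    region = int(np.searchsorted(thresholds, r))  # selection region
    w = weights[region]                           # fixed within the region
    mid = w * x1 + (1 - w) * x2                   # intermediate sample
    dphi = abs(np.angle(x1) - np.angle(x2))
    gain = max(0.0, 1.0 - dphi / np.pi)           # monotone in phase diff
    return gain * mid

y_same = cancel(1 + 0j, 1 + 0j, [0.4, 0.6], [0.0, 0.5, 1.0])   # in phase
y_opp = cancel(1 + 0j, -1 + 0j, [0.4, 0.6], [0.0, 0.5, 1.0])   # anti-phase
```

In-phase (correlated signal) samples pass at full gain; anti-phase (noise-like) samples are driven toward zero.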

US Pat. No. 9,232,232

INTERPOLATION OF VIDEO COMPRESSION FRAMES

Dolby Laboratories Licens...

1. An apparatus comprising a video bitstream stored on one or more non-transitory machine-readable media, the video bitstream
characterized by:
square blocks of predicted and bidirectional predicted frames, wherein at least one square block of a bidirectional predicted
frame is in a compressed format, and wherein the compressed format comprises:

a signal indicating an interpolative motion vector prediction mode for the at least one square block based on one or more
referenceable frames;

a signal indicating a pixel interpolation mode using an unequal pixel weighting of one or more referenceable frames; and
two or more pixel weight values for the unequal pixel weighting.

US Pat. No. 9,179,042

SYSTEMS AND METHODS TO OPTIMIZE CONVERSIONS FOR WIDE GAMUT OPPONENT COLOR SPACES

Dolby Laboratories Licens...

1. A method to perform opponent color space conversion, the method comprising:
generating, by a computer, one or more codewords in a first opponent color space;
performing a color space conversion, thereby converting the one or more codewords in the first opponent color space to a second
opponent color space; and

optimizing, by the computer, the color space conversion in order to increase a number of colors in the second opponent color
space that can be perceived by a human visual system.

US Pat. No. 9,179,236

SYSTEM AND METHOD FOR ADAPTIVE AUDIO SIGNAL GENERATION, CODING AND RENDERING

Dolby Laboratories Licens...

1. A system for processing audio signals, comprising an authoring component configured to:
receive a plurality of audio signals;
generate an adaptive audio mix comprising a plurality of monophonic audio streams and one or more metadata sets associated
with each of the plurality of monophonic audio streams and specifying a playback location of a respective monophonic audio
stream, wherein at least some of the plurality of monophonic audio streams are identified as channel-based audio and wherein
the others of the plurality of monophonic audio streams are identified as object-based audio, and wherein the playback location
of the channel-based audio comprises speaker designations of speakers in a speaker array, and the playback location of the
object-based audio comprises a location in three-dimensional space relative to a playback environment containing the speaker
array; and further wherein a first metadata set is applied to one or more of the plurality of monophonic audio streams for
a first condition of the playback environment, and a second metadata set is applied to the one or more of the plurality of
monophonic audio streams for a second condition of the playback environment; and

encapsulate the plurality of monophonic audio streams and the at least two metadata sets in a bitstream for transmission to
a rendering system configured to render the plurality of monophonic audio streams to a plurality of speaker feeds corresponding
to speakers in the playback environment in accordance with the at least two metadata sets based on a condition of the playback
environment.

US Pat. No. 9,148,645

CROSSTALK CANCELLATION IN 3D DISPLAYS

Dolby Laboratories Licens...

1. A method for preparing 3D images comprising left-eye and right-eye views for display, the method comprising:
based on image data, determining amounts to increase pixel values to allow complete subtractive crosstalk cancellation;
determining a maximum of the amounts;
globally increasing the pixel values by one or both of addition and scaling in an amount based on the maximum;
wherein the 3D images comprise video frames in a video sequence, the method comprises applying a temporal low-pass filter
to the maximum or an amount derived from the maximum, and the temporal filter comprises a bidirectional filter.
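
The headroom computation in this claim reduces to: for each pixel, how far would subtracting the leaked other-eye contribution push the value below zero? A sketch under an assumed linear leakage model (the `leak` coefficient and the symmetric-leak formulation are illustrative assumptions):

```python
import numpy as np

def global_lift(left, right, leak):
    # Amount to add globally so that subtracting leak * other-eye
    # pixel value never goes below zero, i.e. complete subtractive
    # crosstalk cancellation is possible everywhere in the frame.
    need_l = np.maximum(leak * right - left, 0.0)  # lift needed in left eye
    need_r = np.maximum(leak * left - right, 0.0)  # lift needed in right eye
    return float(np.maximum(need_l, need_r).max())

left = np.array([0.0, 0.5])
right = np.array([1.0, 0.5])
lift = global_lift(left, right, leak=0.1)
```

Across a video sequence, the per-frame `lift` values would be smoothed with the claim's bidirectional temporal low-pass filter before being applied, to avoid visible brightness pumping.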

US Pat. No. 9,503,818

METHOD AND APPARATUS FOR PROCESSING SIGNALS OF A SPHERICAL MICROPHONE ARRAY ON A RIGID SPHERE USED FOR GENERATING AN AMBISONICS REPRESENTATION OF THE SOUND FIELD

Dolby Laboratories Licens...

1. A method for processing microphone capsule signals of a spherical microphone array on a rigid sphere, said method comprising:
converting said microphone capsule signals representing the pressure on the surface of said microphone array to a spherical
harmonics or Ambisonics representation Anm(t);

computing, per wave number k, an estimation of the time-variant signal-to-noise ratio SNR(k) of said microphone capsule signals,
using the average source power |P0(k)|² of the plane wave recorded from said microphone array and the corresponding noise power |Pnoise(k)|² representing the spatially uncorrelated noise produced by analog processing in said microphone array;

by using a time-variant Wiener filter for each order n designed at discrete finite wave numbers k from said estimation of
the time-variant signal-to-noise ratio SNR(k), multiplying a transfer function of said Wiener filter by an inverse
transfer function of said microphone array in order to get an adapted transfer function Fn,array(k);

applying said adapted transfer function Fn,array(k) to said spherical harmonics or Ambisonics representation Anm(t) using a linear filter processing, resulting in adapted directional time domain coefficients dnm(t), wherein n denotes the Ambisonics order and index n runs from 0 to a finite order, and m denotes the degree and index m
runs from −n to n for each index n.
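
The classic time-variant Wiener gain is SNR/(SNR+1); a minimal sketch of multiplying it by the inverse array transfer function to obtain the adapted transfer function (this standard gain formula is an assumed stand-in for the patent's order-dependent design):

```python
import numpy as np

def adapted_transfer(snr, inv_array_tf):
    # Per wave number k: Wiener gain SNR/(SNR+1) attenuates bands where
    # the estimated SNR is poor, then multiplication by the inverse
    # array transfer function equalizes the rigid-sphere response,
    # yielding the adapted transfer function F_n,array(k).
    wiener = snr / (snr + 1.0)
    return wiener * inv_array_tf

# Two wave numbers: SNR of 1 (0 dB) and 9 (~9.5 dB).
f = adapted_transfer(np.array([1.0, 9.0]), np.array([2.0, 1.0]))
```

The low-SNR band is halved (gain 0.5 times equalization 2.0), while the high-SNR band passes nearly unchanged, which is how the design limits noise amplification at low wave numbers.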

US Pat. No. 9,501,817

IMAGE RANGE EXPANSION CONTROL METHODS AND APPARATUS

Dolby Laboratories Licens...

1. A method for image processing, the method comprising:
storing in a non-transitory memory a data stream received over a digital network;
obtaining, from the stored data stream, image data comprising a digital image to be displayed and metadata, wherein the metadata
from the data stream includes both:

(i) a multiplicative factor value for safe luminance dynamic range expansion for a scene of the image data; and
(ii) metadata indicative of a luminance dynamic range of a source display, the metadata including parameters for each of color
primaries, black level, and white level of the source display;

obtaining target display information to determine a luminance dynamic range of the target display;
computing, by one or more processors, a maximum available expansion of a luminance dynamic range for the image data as a ratio
of the luminance dynamic range of the target display and the luminance dynamic range of the source display;

comparing the computed maximum available expansion and the factor value obtained from the data stream; and
by the one or more processors, expanding the luminance dynamic range of the image data by the lesser of the factor value obtained
from the data stream and the computed maximum available expansion.
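
The decoder-side rule of this claim is a ratio followed by a min; a minimal sketch (the luminance ranges are expressed here as plain contrast ratios, an assumption for illustration):

```python
def expansion_factor(stream_factor, target_range, source_range):
    # The image is expanded by the lesser of (i) the multiplicative
    # factor carried in the metadata and (ii) the maximum available
    # expansion, the ratio of target to source luminance dynamic range.
    max_available = target_range / source_range
    return min(stream_factor, max_available)

# Source display spans 1000:1, target spans 4000:1, so at most 4x.
f_safe = expansion_factor(stream_factor=2.5, target_range=4000.0, source_range=1000.0)
f_clipped = expansion_factor(stream_factor=5.0, target_range=4000.0, source_range=1000.0)
```

The min guarantees the "safe" property: the metadata's requested expansion is honored only up to what the target display can actually reproduce.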

US Pat. No. 9,501,818

LOCAL MULTISCALE TONE-MAPPING OPERATOR

Dolby Laboratories Licens...

1. A method for local tone mapping, the method comprising:
receiving an input high dynamic range (HDR) image (204);

converting luminance values in the input HDR image into a logarithmic domain to generate a logarithmic input image (212);

applying a global tone-mapping operator (216) to the logarithmic input image to generate a logarithmic global tone-mapped image;

generating a high resolution (HR) ratio image (228) based on the luminance values of the global tone-mapped image and the input HDR image;

generating based on the HR ratio image at least two different scale ratio images, each being of a different spatial resolution
level (220);

merging the at least two different scale ratio images to generate a local multiscale ratio image that comprises a weighted
combination of the at least two different scale ratio images, wherein the local multiscale ratio image is obtained using recursive
processing; and

adding (222) the local multiscale ratio image to the logarithmic input image to generate pixel values of a luminance output tone-mapped
image in the logarithmic domain (232).

US Pat. No. 9,497,456

LAYER DECOMPOSITION IN HIERARCHICAL VDR CODING

Dolby Laboratories Licens...

1. An encoding method, comprising:
receiving an input visual dynamic range (VDR) image in a sequence of input images, said visual dynamic range (VDR) being wide
or high dynamic range, wherein the input VDR image comprises a first bit depth;

selecting a specific advanced quantization function from one or more available advanced quantization functions;
applying the specific advanced quantization function to the input VDR image to generate an input base layer image, wherein
the input base layer image comprises a second bit depth, which is lower than the first bit depth;

compressing image data derived from the input base layer image into a base layer (BL) video signal; and
compressing residual image data between the input VDR image and a prediction image generated from the input base layer
image into one or more enhancement layer (EL) video signals;

wherein the available advanced quantization functions comprise one or more of global quantization, linear quantization, linear
stretching, curve-based quantization, probability-density-function (PDF) optimized quantization, Lloyd-Max quantization, partition-based
quantization, perceptual quantization, and cross-color channel/vector quantization;

wherein the specific advanced quantization function is selected based on one or more factors including at least one of:
minimizing an amount of image data to be encoded into the one or more EL video signals relative to the input VDR image, and
one or more characteristics determined from the input VDR image; and
wherein the selecting the specific advanced quantization function from the one or more available advanced quantization functions
further comprises:

selecting two consecutive input VDR images in the sequence of input images;
applying a first adaptation function to compute a first set of two corresponding base layer (BL) images, the first adaptation
function being a frame-by-frame-based adaptation function of the selected advanced quantization function;

applying a second adaptation function to compute a second set of two corresponding BL images, the second adaptation function
being a scene-based adaptation function of the selected advanced quantization function;

computing a first set of histograms based on the first set of BL images, each histogram of the first set of histograms being
a function which counts a number of pixels falling into each one of plural distinct pixel values in a respective BL image
of the first set of BL images;

computing a second set of histograms based on the second set of BL images, each histogram of the second set of histograms
being a function which counts a number of pixels falling into each one of plural distinct pixel values in a respective BL
image of the second set of BL images;

computing a first mean-square difference between the first set of histograms;
computing a second mean-square difference between the second set of histograms;
comparing the first mean-square difference with the second mean-square difference; and
selecting the first adaptation function if the first mean-square difference is smaller than the second mean-square difference.
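
The final selection step compares histogram stability across two consecutive BL images for each adaptation mode. A sketch (bin count, value range, and the toy images are illustrative assumptions):

```python
import numpy as np

def hist(img, bins=8):
    # Histogram counting the number of pixels falling into each of
    # several distinct pixel-value bins of a BL image.
    h, _ = np.histogram(img, bins=bins, range=(0.0, 256.0))
    return h.astype(float)

def pick_adaptation(frame_pair, scene_pair):
    # Mean-square difference between each pair's histograms; the
    # frame-by-frame adaptation is selected only if its consecutive
    # BL images differ less than the scene-based adaptation's.
    d_frame = np.mean((hist(frame_pair[0]) - hist(frame_pair[1])) ** 2)
    d_scene = np.mean((hist(scene_pair[0]) - hist(scene_pair[1])) ** 2)
    return "frame" if d_frame < d_scene else "scene"

a = np.full((4, 4), 10.0)
choice = pick_adaptation((a, a), (a, a + 200.0))
```

Intuitively, smaller histogram drift between consecutive BL images means the quantization is temporally stable, which reduces flicker in the base layer.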

US Pat. No. 9,467,690

COMPLEXITY-ADAPTIVE SCALABLE DECODING AND STREAMING FOR MULTI-LAYERED VIDEO SYSTEMS

Dolby Laboratories Licens...

1. An adaptive decoding multi-layer video system comprising:
a base layer decoder;
one or more enhancement layer decoders;
a reference processing unit connected with the base layer decoder and the one or more enhancement layer decoders, the reference
processing unit being configured to store reference frames; and

a decoding adaptor connected with the base layer decoder, the reference processing unit and the one or more enhancement layer
decoders, the decoding adaptor receiving feedback from the base layer decoder, the reference processing unit and the one or
more enhancement layer decoders, the decoding adaptor controlling operation of the base layer decoder, the reference processing
unit and the one or more enhancement layer decoders, the decoding adaptor being configured to:

read a frame under consideration;
set a decoding mode for the frame under consideration based on a decoding time of a previously decoded frame;
calculate a decoding time for the frame under consideration;
determine whether the decoding time is higher or lower than a first threshold;
if the decoding time is higher than the first threshold:
decrease a complexity of the set decoding mode;
determine whether the decoding time is higher or lower than a second threshold, the second threshold being lower than the
first threshold; and

if the decoding time is higher than the second threshold, further decrease the complexity of the set decoding mode; and
decode the frame under consideration according to the set decoding mode;
update the decoding time of the previously decoded frame after each decoding, wherein the decoding time comprises an average
decoding time or a nonaverage decoding time; and

repeatedly read, set, and decode if a further frame is available.
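
The adaptor's two-threshold rule can be sketched as below; the integer "complexity steps" abstraction is an assumption, as the patent leaves the concrete decoding-mode changes (e.g. skipping the enhancement layer) to the implementation:

```python
def adapt_complexity(decode_time, t_first, t_second, complexity):
    # Sketch of the claimed thresholding, in the claim's order:
    # exceeding the first threshold drops complexity one step; the
    # second, lower threshold then triggers a further drop.
    if decode_time > t_first:
        complexity -= 1
        if decode_time > t_second:  # t_second < t_first
            complexity -= 1
    return max(complexity, 0)

slow = adapt_complexity(decode_time=12.0, t_first=8.0, t_second=5.0, complexity=3)
fast = adapt_complexity(decode_time=4.0, t_first=8.0, t_second=5.0, complexity=3)
```

Frames that decoded slowly force the next frame into a cheaper mode (two steps here, since exceeding the first threshold implies exceeding the lower second one); frames that decoded quickly leave the mode unchanged.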

US Pat. No. 9,380,311

METHOD AND SYSTEM FOR GENERATING A TRANSFORM SIZE SYNTAX ELEMENT FOR VIDEO DECODING

Dolby Laboratories Licens...

1. An electronic device comprising:
circuitry configured to
determine that a macroblock is to be intra-predicted;
select a macroblock type having a size based on the determining;
select a transform size having a same size as the size of the selected macroblock type; and
generate a transform syntax element based on the transform size to indicate an inverse transform size to a video decoder for
use with the intra-predicted macroblock,

wherein the circuitry is configured to select an N×N transform size when the macroblock type is an N×N macroblock type and
select an M×M transform size when the macroblock type is an M×M macroblock type, wherein N and M are integer values and M
is greater than N.

US Pat. No. 9,503,759

ADAPTIVE FILTERING BASED UPON BOUNDARY STRENGTH

Dolby Laboratories Licens...

1. An image decoding method for selectively filtering a boundary between two adjacent blocks in a reconstructed image, comprising:
a motion compensation prediction step for conducting motion compensation prediction for each of blocks to be decoded by using
the reconstructed image,

an inverse transformation step for conducting inverse transformation for the data of the blocks to be decoded,
a reconstruction step for reconstructing an image based on the motion compensation predicted blocks and the inverse transformed
blocks, and

a determination step for determining a filtering strength and whether or not to conduct filtering, with respect to each of
the boundaries, wherein the determining step determines:

(1) filtering is conducted when at least one of the two adjacent blocks is intra-coded, and
(2) filtering is not conducted when both of the two adjacent blocks are not intra-coded, a nonzero transformation coefficient
is not coded in both of the two adjacent blocks, the two adjacent blocks are predicted by a same reference frame, and an absolute
value of a difference between motion vectors of the two adjacent blocks is smaller than a specified threshold value.
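
The two-branch decision of this claim (shared with 9,407,913 below) maps directly to a boolean predicate:

```python
def filter_boundary(a_intra, b_intra, a_has_coeff, b_has_coeff,
                    same_ref, mv_diff, threshold):
    # Deblocking decision sketched from the claim: filter when either
    # block is intra-coded; skip only when both blocks are inter-coded,
    # neither codes a nonzero transform coefficient, both predict from
    # the same reference frame, and the motion-vector difference is
    # below the specified threshold. All other cases filter.
    if a_intra or b_intra:
        return True
    if (not a_has_coeff and not b_has_coeff and same_ref
            and mv_diff < threshold):
        return False
    return True

intra_case = filter_boundary(True, False, False, False, True, 0, 4)
smooth_inter = filter_boundary(False, False, False, False, True, 1, 4)
```

The skip branch identifies boundaries where both blocks were copied coherently from the same reference, so no blocking artifact is expected and filtering would only blur real detail.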

US Pat. No. 9,491,299

TELECONFERENCING USING MONOPHONIC AUDIO MIXED WITH POSITIONAL METADATA

Dolby Laboratories Licens...

1. A method for preparing a pulse code modulated, hereinafter “PCM”, monophonic audio signal for transmission to at least
one node of a teleconferencing system, wherein the PCM monophonic audio signal is indicative of speech, in a frequency range,
by a currently dominant participant in a teleconference, said method including the steps of:
(a) generating a monophonic mixed audio signal, including by adding a tone to the PCM monophonic audio signal, wherein the
tone has a frequency in the frequency range and is indicative of an apparent source position of the currently dominant participant
in the teleconference; and

(b) encoding the mixed audio signal to generate a monophonic encoded audio signal.

US Pat. No. 9,467,704

ADAPTIVE RATIO IMAGES IN HDR IMAGE REPRESENTATION

Dolby Laboratories Licens...

1. A method to code a high dynamic range (HDR) image with a processor, the method comprising:
receiving an input image with a first dynamic range;
generating a tone-mapped image based on the input image, wherein the tone-mapped image has a dynamic range lower than the
first dynamic range;

generating a ratio image (YR), the ratio image comprising luminance pixel values of the input image divided pixel by pixel by corresponding luminance
pixel values in the tone-mapped image;

applying an invertible function (F) to the ratio image to generate a modified ratio image (F(YR));

generating a look-up table to represent the inverse of the invertible function (F⁻¹), wherein applying the look-up table (F⁻¹) to the modified ratio image generates an approximation of the ratio image; and

generating a coded HDR image based on the tone-mapped image and the modified ratio image, wherein the invertible function
is selected so that when applying its inverse to the modified ratio image, the distance between the ratio image and the approximation
of the ratio image is minimized according to a predetermined measure.
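
A sketch of the ratio-image pipeline, with F = log1p as an assumed example of an invertible function (the patent selects F to minimize the reconstruction distance; no specific F is implied here):

```python
import numpy as np

def code_ratio(hdr_y, tm_y, f=np.log1p, f_inv=np.expm1, lut_size=256):
    # Ratio image: HDR luminance divided pixel by pixel by the
    # tone-mapped luminance; an invertible function F compresses it,
    # and a look-up table sampling F^-1 recovers an approximation.
    ratio = hdr_y / tm_y
    modified = f(ratio)
    grid = np.linspace(modified.min(), modified.max(), lut_size)
    lut = f_inv(grid)                       # LUT representing F^-1
    approx = np.interp(modified, grid, lut) # decoder-side reconstruction
    return modified, approx

hdr = np.array([4.0, 8.0])
tm = np.array([2.0, 2.0])
modified, approx = code_ratio(hdr, tm)
```

The coded HDR image then carries the tone-mapped image plus `modified`; the decoder's LUT of F⁻¹ turns `modified` back into an approximation of the ratio image.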

US Pat. No. 9,445,083

SYSTEM FOR DELIVERING STEREOSCOPIC IMAGES

Dolby Laboratories Licens...

1. A system for delivering stereoscopic images comprising:
output means to output at least a first left-eye image and at least a first right-eye image,
means to direct the first left-eye image at a desired first left-eye angle and to direct the first right-eye image at a desired
first right-eye angle,

the direction means being movable relative to the output means to adjust the first left-eye angle at which the first left-eye
image is directed and to adjust the first right-eye angle at which the first right-eye image is directed to suit a viewer,

wherein the direction means comprises a flexible film and both the first left-eye angle and the first right-eye angle are
adjusted by deforming the flexible film.

US Pat. No. 9,407,913

ADAPTIVE FILTERING BASED UPON BOUNDARY STRENGTH

Dolby Laboratories Licens...

1. An image decoding method comprising:
motion compensation predicting a block to be decoded by using a previously reconstructed image as a reference image;
inverse quantizing a block of transformed and quantized coefficients;
inverse transforming the block of inverse quantized coefficients;
reconstructing an image using the motion compensation predicted block and the inverse transformed block; and
deblock filtering the reconstructed image;
wherein the deblock filtering comprises determining whether or not to conduct filtering a boundary between two adjacent blocks
in the reconstructed image, where:

(1) filtering is conducted when at least one of the two adjacent blocks is intra-coded, and
(2) filtering is not conducted when both of the two adjacent blocks are not intra-coded, a non-zero transformation coefficient
is not coded in both of the two adjacent blocks, the two adjacent blocks are predicted by a same reference frame, and an absolute
value of a difference between motion vectors of the two adjacent blocks is smaller than a specified threshold value.

US Pat. No. 9,324,335

MULTISTAGE IIR FILTER AND PARALLELIZED FILTERING OF DATA WITH SAME

Dolby Laboratories Licens...

1. An audio encoder configured to generate encoded audio data in response to input audio data, said encoder including at least
one multistage filter coupled and configured to filter the audio data, wherein the multistage filter includes:
a buffer memory;
at least two biquad filter stages, including a first biquad filter stage and a subsequent biquad filter stage; and
a controller, coupled to the biquad filter stages and configured to assert a single stream of instructions to both the first
biquad filter stage and the subsequent biquad filter stage, wherein said first biquad filter stage and said subsequent biquad
filter stage operate independently and in parallel in response to the stream of instructions,

wherein the first biquad filter stage is coupled to the memory and configured to perform biquadratic filtering on a block
of N input samples in response to the stream of instructions to generate intermediate values, and to assert the intermediate
values to the memory, wherein the intermediate values include a filtered version of each of at least a subset of the input
samples, and

wherein the subsequent biquad filter stage is coupled to the memory and configured to perform biquadratic filtering on buffered
values retrieved from the memory in response to the stream of instructions to generate a block of output values, wherein the
output values include an output value corresponding to each of the input samples in the block of N input samples, and the
buffered values include at least some of the intermediate values generated in the first biquad filter stage in response to
the block of N input samples;

wherein the multistage filter is configured to perform multistage filtering of the block of N input samples in a single processing
loop with iteration over a sample index but without iteration over a biquadratic filter index.
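
The single-loop structure of this claim can be sketched for the two-stage case; the direct form II transposed biquad and the pass-through example coefficients are illustrative assumptions:

```python
def two_stage_biquad(x, c1, c2):
    # Hedged sketch of the claim's single processing loop: both biquad
    # stages update in every iteration over the sample index, with
    # stage 2 consuming stage 1's output from the same pass; there is
    # no outer loop over a filter-stage index.
    (b1, a1), (b2, a2) = c1, c2                   # (b0,b1,b2), (a1,a2)
    z1, z2 = [0.0, 0.0], [0.0, 0.0]               # per-stage state
    out = []
    for s in x:
        y1 = b1[0] * s + z1[0]                    # stage 1
        z1[0] = b1[1] * s - a1[0] * y1 + z1[1]
        z1[1] = b1[2] * s - a1[1] * y1
        y2 = b2[0] * y1 + z2[0]                   # stage 2, same pass
        z2[0] = b2[1] * y1 - a2[0] * y2 + z2[1]
        z2[1] = b2[2] * y1 - a2[1] * y2
        out.append(y2)
    return out

identity = ((1.0, 0.0, 0.0), (0.0, 0.0))          # pass-through biquad
y = two_stage_biquad([1.0, 2.0, 3.0], identity, identity)
```

In hardware terms, the single instruction stream drives both stages each cycle, which is what lets the stages run independently and in parallel.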

US Pat. No. 9,661,190

LOW LATENCY AND LOW COMPLEXITY PHASE SHIFT NETWORK

Dolby Laboratories Licens...

1. A method, comprising:
determining a target phase response for an all-pass recursive filter, the all-pass recursive filter being representable with
a transfer function having N zeros and N poles, the N zeros having a one to one correspondence relationship with the N poles;

specifying an initial value of N;
(a) identifying, based at least in part on the target phase response, a set of optimized values for a set of filter coefficients
for the all-pass recursive filter; and

(b) determining whether the all-pass recursive filter with the set of optimized values for the set of filter coefficients
satisfies one or more criteria;

wherein the above steps of determining, specifying, (a) identifying, and (b) determining are performed with one or more computing
devices;

in response to determining that the all-pass recursive filter with the set of optimized values for the set of filter coefficients
does not satisfy the one or more criteria, incrementing N and repeating steps (a) and (b);

in response to determining that the all-pass recursive filter with the set of optimized values for the set of filter coefficients
satisfies the one or more criteria, causing the all-pass recursive filter to be implemented in one or more semiconductor integrated
circuits in a media processor, the media processor using the all-pass recursive filter to process first media samples in a
first signal path of two or more signal paths, while the media processor is processing second media samples without the all-pass
recursive filter in a second different signal path of the two or more signal paths.
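
The increment-N design loop of this claim can be sketched with the optimizer and acceptance test abstracted away (both callbacks, and the order bound, are hypothetical placeholders for the patent's coefficient optimization and criteria):

```python
def design_allpass(fit_coeffs, meets_criteria, n_initial=1, n_max=64):
    # Sketch of the claimed loop: for the current order N, step (a)
    # optimizes a set of filter coefficients toward the target phase
    # response, and step (b) checks the criteria; on failure, N is
    # incremented and steps (a) and (b) repeat.
    n = n_initial
    while n <= n_max:
        coeffs = fit_coeffs(n)        # step (a): optimize for order N
        if meets_criteria(coeffs):    # step (b): check the criteria
            return n, coeffs
        n += 1
    raise ValueError("no order up to n_max met the criteria")

# Toy stand-ins: the "optimizer" returns N zeros, and the "criteria"
# accept any order of at least 3.
order, coeffs = design_allpass(lambda n: [0.0] * n, lambda c: len(c) >= 3)
```

Starting from a small N and growing only on failure yields the lowest-order (hence lowest-latency, lowest-complexity) filter that still meets the phase-response criteria.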