US Pat. No. 10,771,907

TECHNIQUES FOR ANALYZING CONNECTIVITY WITHIN AN AUDIO TRANSDUCER ARRAY

Harman International Indu...

1. A non-transitory computer-readable medium storing instructions that, when executed by a processing unit, cause the processing unit to analyze connectivity within a sound system by performing the steps of:
applying a baseline signal to a plurality of amplifiers coupled to a transducer array comprising a plurality of transducers, wherein each transducer in the transducer array is coupled to a different optical emitter that produces a light signal in response to the baseline signal;
capturing a baseline image that indicates a different location associated with each transducer in the transducer array based on respective light signals produced by each optical emitter coupled to each transducer in the transducer array;
generating a three dimensional model of locations of the plurality of transducers from the baseline image;
applying a first signal to a first amplifier in the plurality of amplifiers;
capturing a first image that indicates a first transducer in the transducer array;
determining that the first amplifier drives the first transducer;
comparing the baseline image to the first image to determine a location of the first transducer within the three dimensional model of locations of the plurality of transducers; and
generating connectivity data that indicates that the first amplifier is coupled to the first transducer.
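The claim above amounts to an image-based mapping procedure: light every emitter once for a baseline, then drive amplifiers one at a time and see which emitter lights up. Below is a minimal illustrative sketch of that mapping step, assuming grayscale camera frames arrive as 2D numpy arrays and that driving the amplifiers and capturing frames happen elsewhere; the function names, thresholds, and blob detection are hypothetical choices, not details taken from the patent.

```python
# Sketch of mapping amplifiers to transducers from captured images (assumed
# grayscale numpy arrays); thresholds and helpers are illustrative only.
import numpy as np
from scipy import ndimage

def emitter_locations(baseline, threshold=0.5):
    """(row, col) centroids of all lit optical emitters in the baseline image."""
    mask = baseline > threshold * baseline.max()
    labels, n = ndimage.label(mask)
    return np.array(ndimage.center_of_mass(baseline, labels, range(1, n + 1)))

def brightest_spot(image, threshold=0.5):
    """Centroid of the single lit emitter when one amplifier is driven."""
    ys, xs = np.nonzero(image > threshold * image.max())
    w = image[ys, xs]
    return np.array([np.average(ys, weights=w), np.average(xs, weights=w)])

def map_connectivity(baseline, per_amplifier_images):
    """Return {amplifier index: transducer index} by nearest baseline centroid."""
    centroids = emitter_locations(baseline)
    return {amp: int(np.argmin(np.linalg.norm(centroids - brightest_spot(img), axis=1)))
            for amp, img in enumerate(per_amplifier_images)}
```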

US Pat. No. 10,771,906

HEARING ASSISTANCE DEVICE WITH IMPROVED MICROPHONE PROTECTION

Starkey Laboratories, Inc...

1. A hearing assistance device configured to be powered by a battery and to be worn by a wearer having an ear canal, the hearing assistance device comprising:
a microphone;
a battery door configured to hold the battery, the battery door including a microphone opening and an acoustic recess connected to the microphone opening;
a battery contact configured to be electrically connected to the battery by closing the battery door, the battery contact including a microphone port; and
an acoustic path formed by closing the battery door, the acoustic path allowing a sound to enter the acoustic recess through the microphone opening and to reach the microphone from the acoustic recess through the microphone port.

US Pat. No. 10,771,905

HEARING DEVICE COMPRISING A MICROPHONE ADAPTED TO BE LOCATED AT OR IN THE EAR CANAL OF A USER

1. A hearing device adapted for being arranged at least partly on a user's head or at least partly implanted in a user's head, the hearing device comprising:
an input unit for providing a multitude of electric input signals representing sound in an environment of the user, the input unit comprising
at least one first input transducer for picking up said sound and providing respective at least one first electric input signals, the at least one first input transducer being located at a first location away from an ear canal of the user;
a second input transducer for picking up said sound and providing a second electric input signal, the second input transducer being located at or in an ear canal of the user;
an output unit comprising an output transducer for converting a processed electric signal representing said sound to a stimulus perceivable by said user as sound, and
a near-field beamformer applied to said at least one first and said second electric input signals and implementing a feedback suppression system for suppressing feedback from said output unit to said at least one first input transducer, and comprising an adaptation unit for modifying the second electric input signal in approximation of an acoustic transfer function, or an impulse response, from the second input transducer to the at least one first input transducer and providing a modified second electric input signal representative of an estimate of said feedback, wherein the adaptation unit is configured to delay the second electric input signal corresponding to a delay of an acoustic propagation path of sound from the second to the at least one first input transducer.
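The adaptation unit in the claim essentially delays and shapes the ear-canal microphone signal so it approximates what arrives at the outer microphone, yielding a feedback estimate that can be removed. A minimal sketch follows, assuming a pure delay plus a scalar gain stands in for the acoustic transfer function; the distance, gain, and speed of sound are illustrative values, not figures from the patent.

```python
# Delay-and-scale feedback estimate (assumed stand-in for the acoustic
# transfer function between the two microphones).
import numpy as np

def feedback_estimate(ear_canal_sig, fs, distance_m=0.05, gain=0.3, c=343.0):
    delay = int(round(distance_m / c * fs))          # acoustic propagation delay
    modified = np.zeros_like(ear_canal_sig)
    modified[delay:] = gain * ear_canal_sig[:len(ear_canal_sig) - delay]
    return modified                                  # estimate of the feedback

def suppress_feedback(first_mic_sig, ear_canal_sig, fs):
    """Subtract the modified ear-canal signal from the away-from-canal signal."""
    return first_mic_sig - feedback_estimate(ear_canal_sig, fs)
```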

US Pat. No. 10,771,904

DIRECTIONAL MEMS MICROPHONE WITH CORRECTION CIRCUITRY

Shure Acquisition Holding...

1. A microphone assembly, comprising:
a transducer assembly including a first enclosure defining a first acoustic volume and a Micro-Electrical-Mechanical-System (“MEMS”) microphone transducer disposed within the first enclosure such that the first acoustic volume surrounds a rear of the MEMS microphone transducer;
a second enclosure disposed adjacent to the first enclosure and defining a second acoustic volume in acoustic communication with the first acoustic volume, the second enclosure including an acoustic resistance, wherein the first and second acoustic volumes, in cooperation with the acoustic resistance, create an acoustic delay for producing a directional polar pattern; and
circuitry electrically coupled to the transducer assembly and comprising a shelving filter configured to correct a portion of a frequency response of the MEMS microphone transducer, so as to flatten the frequency response across all frequency values within a predetermined bandwidth.

US Pat. No. 10,771,903

ELECTROSTATIC GRAPHENE SPEAKER

THE REGENTS OF THE UNIVER...

1. A device comprising:
a graphene membrane having a diameter of about 3 millimeters to 11 millimeters;
a first electrode proximate a first side of the graphene membrane, the first electrode being electrically conductive; and
a second electrode proximate a second side of the graphene membrane, the second electrode being electrically conductive, the graphene membrane being suspended between the first electrode and the second electrode,
wherein the device is a microphone or a loudspeaker.

US Pat. No. 10,771,902

DISPLAY APPARATUS AND COMPUTING APPARATUS INCLUDING THE SAME

LG Display Co., Ltd., Se...

1. A display apparatus, comprising:
a display module having a display panel configured to display an image;
a sound generation module disposed at the display module and configured to vibrate the display module to generate sound;
a first cover configured to cover a first periphery portion of the display module; and
a plurality of ultrasonic generation modules disposed between the first periphery portion of the display module and the first cover,
wherein the first cover includes a plurality of first holes overlapping with the plurality of ultrasonic generation modules.

US Pat. No. 10,771,901

LOUDSPEAKER DRIVER SURROUND

GP Acoustics (UK) Limited...

1. A loudspeaker driver surround comprising a generally annular element of resilient material and having a central axis along which in use a diaphragm is driven, a first circumferential edge for fitment to an enclosure and a second circumferential edge for fitment to the diaphragm and/or a voice coil, with a roll surface extending between the edges which projects in the direction of the axis, wherein the roll surface has a shape formed by a plurality of axial corrugations extending generally radially with respect to the annular element between the first and second edges thereof, the corrugations being shaped and configured such that the roll surface is non-axisymmetric about the axis, and the arrangement being such that cross-sections of the roll surface which extend radially with respect to the annular element between the first and second edges thereof have a substantially constant length at all circumferential positions around the annular element and so that the shape of the said cross-section varies continuously between circumferential positions around the annular element, the corrugations giving the projecting roll surface an order of rotational symmetry of at least 30.

US Pat. No. 10,771,900

SPEAKER DIAPHRAGM STRUCTURE

FU JEN CATHOLIC UNIVERSIT...

1. A speaker diaphragm structure installed within a sound generator device which comprises a frame, a speaker diaphragm structure installed within the frame, and a suspension edge whose inner perimeter is connected to the speaker diaphragm structure and whose outer perimeter is connected to the frame; wherein the speaker diaphragm structure includes:
a diaphragm body; and
a composite material layer, in which the composite material layer is used for bonding onto the surface of the diaphragm body or attaching within the diaphragm body, and the composite material layer is composed of one or more types of tetrapyrrole compounds as well as one or more types of metal ions; additionally, the composite material layer has a thickness smaller than the thickness of the diaphragm body.

US Pat. No. 10,771,899

DISPLAY APPARATUS

LG Display Co., Ltd., Se...

1. A display apparatus, comprising:
a display panel configured to display an image by emitting light;
a rear structure configured to support the display panel;
a vibration generator configured to vibrate the display panel; and
a supporting member in the vibration generator, the supporting member being configured to maintain a distance between the display panel and the vibration generator,
wherein the supporting member comprises a material having lower elasticity than the display panel.

US Pat. No. 10,771,898

LOCATING WIRELESS DEVICES

Apple Inc., Cupertino, C...

1. A method performed by an electronic device, comprising:
playing, or initiating the playing of, a sound through a loudspeaker of an accessory device located in an environment, wherein the accessory device is communicatively coupled to the electronic device through a communication link, wherein the sound is played at a specified frequency or frequencies that utilizes a resonant frequency response of the loudspeaker of the accessory device, and wherein the sound is played through the loudspeaker of the accessory device at increasing levels of loudness until a maximum loudness is reached;
receiving, by two or more microphones of the electronic device, a recording of the environment, the recording including the sound played through the loudspeaker of the accessory device and ambient noise;
filtering, by one or more filters of the electronic device, the recording, the one or more filters configured to pass the sound played through the loudspeaker of the accessory device and to reduce masking of the sound by the ambient noise;
determining, by the electronic device, a pressure level of the passed sound; and
associating, by the electronic device, the pressure level of the passed sound with an orientation of the electronic device in a reference coordinate system, the orientation of the electronic device determined from sensor data provided by one or more inertial sensors of the electronic device.
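The filtering and level-measurement steps of the claim reduce to band-passing the recording around the played tone and reading off its level. A rough sketch under stated assumptions: the recording is a mono numpy array, the loudspeaker resonance is near 1 kHz, and the orientation comes from the inertial sensors elsewhere; none of these values come from the patent.

```python
# Band-pass the recording around the played tone and report a relative level
# in dB, then tag it with the device orientation (assumed parameters).
import numpy as np
from scipy.signal import butter, sosfilt

def passed_sound_level(recording, fs, f_resonance=1000.0, bandwidth=100.0):
    low = (f_resonance - bandwidth / 2) / (fs / 2)
    high = (f_resonance + bandwidth / 2) / (fs / 2)
    sos = butter(4, [low, high], btype="bandpass", output="sos")
    passed = sosfilt(sos, recording)
    rms = np.sqrt(np.mean(passed ** 2))
    return 20 * np.log10(max(rms, 1e-12))            # relative pressure level

def tag_with_orientation(level_db, orientation):
    """Associate the measured level with the orientation it was captured at."""
    return {"orientation": orientation, "level_db": level_db}
```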

US Pat. No. 10,771,897

PORTABLE SOUND SYSTEM

1. A system for enhancing sound provided by at least a pair of speaker drivers relative to a listener, comprising:
a structure, defining:
a first portion of an oblong enclosure that forms an approximated double ellipse profile, comprising a material having a sound reflective surface, and positioned between a first area proximate a first speaker driver and a second area proximate the listener, wherein the first portion of the oblong enclosure forms a first part of the approximated double ellipse profile; and
a second portion of the oblong enclosure that forms the approximated double ellipse profile, comprising a material having a sound reflective surface, and positioned between a third area proximate a second speaker driver and a fourth area proximate the listener, wherein the second portion of the oblong enclosure forms a second part of the approximated double ellipse profile,
wherein the first portion and the second portion together envelop the listener with the approximated double ellipse profile and are shaped such that sound emitted from the first speaker driver and the second speaker driver is reflected and focused toward the listener.

US Pat. No. 10,771,896

CROSSTALK CANCELLATION FOR SPEAKER-BASED SPATIAL RENDERING

Hewlett-Packard Developme...

1. An apparatus comprising:
a processor; and
a non-transitory computer readable medium storing machine readable instructions that when executed by the processor cause the processor to:
perceptually smooth head-related transfer functions (HRTFs) corresponding to ipsilateral and contralateral transfer paths of sound emitted from first and second speakers to corresponding first and second destinations;
insert an inter-aural time difference in the perceptually smoothed HRTFs corresponding to the contralateral transfer paths; and
generate a crosstalk canceller by inverting the perceptually smoothed HRTFs corresponding to the ipsilateral transfer paths and the perceptually smoothed HRTFs corresponding to the contralateral transfer paths including the inserted inter-aural time difference.
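As a point of reference, a generic frequency-domain crosstalk canceller built along the lines of the claim smooths the HRTF magnitudes, re-imposes an inter-aural time difference on the contralateral paths as linear phase, and inverts the 2x2 path matrix per frequency bin. The sketch below uses a simple moving average in place of perceptual smoothing and an assumed ITD and regularization constant; it is not the patented implementation.

```python
# Generic regularized 2x2 crosstalk-canceller sketch; HRTFs are one-sided
# spectra of length n_fft//2 + 1, smoothing and ITD values are assumptions.
import numpy as np

def smooth_mag(H, win=9):
    """Crude spectral smoothing of a transfer function's magnitude."""
    return np.convolve(np.abs(H), np.ones(win) / win, mode="same")

def crosstalk_canceller(H_ipsi_L, H_contra_L, H_ipsi_R, H_contra_R,
                        n_fft, fs, itd_s=0.0003, beta=1e-3):
    k = np.arange(n_fft // 2 + 1)
    itd_phase = np.exp(-2j * np.pi * k * fs * itd_s / n_fft)   # inserted ITD
    C = np.empty((len(k), 2, 2), dtype=complex)
    C[:, 0, 0] = smooth_mag(H_ipsi_L)                 # left speaker -> left ear
    C[:, 1, 1] = smooth_mag(H_ipsi_R)                 # right speaker -> right ear
    C[:, 0, 1] = smooth_mag(H_contra_R) * itd_phase   # right speaker -> left ear
    C[:, 1, 0] = smooth_mag(H_contra_L) * itd_phase   # left speaker -> right ear
    eye = beta * np.eye(2)                            # regularized inversion per bin
    return np.array([np.linalg.inv(Ck.conj().T @ Ck + eye) @ Ck.conj().T for Ck in C])
```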

US Pat. No. 10,771,895

AUDIO SIGNAL PROCESSING DEVICE

MITSUBISHI ELECTRIC CORPO...

1. An audio signal processing device comprising processing circuitry to:
perform high-pass filtering to convert an input audio signal into a first audio signal and to output the first audio signal,
estimate displacement amplitude of a speaker diaphragm when the input audio signal is inputted,
perform saturation processing on the displacement amplitude that is estimated or on a signal obtained by correcting the displacement amplitude,
generate a second audio signal by using displacement amplitude obtained after the saturation processing, and
synthesize the first audio signal and the second audio signal.
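The claim describes a two-path structure: a high-pass path that passes through, and a low-frequency path whose estimated diaphragm displacement is saturated before the paths are recombined. The sketch below is illustrative only, assuming a simple low-pass filter stands in for the displacement model and tanh() for the saturation; the cutoff and excursion limit are made-up parameters.

```python
# Split-path sketch: high-pass path plus a saturated low-frequency path
# (assumed displacement model and saturation curve).
import numpy as np
from scipy.signal import butter, sosfilt

def process(audio, fs, cutoff_hz=200.0, x_max=1.0):
    sos_hp = butter(2, cutoff_hz / (fs / 2), btype="highpass", output="sos")
    first = sosfilt(sos_hp, audio)                    # first audio signal
    sos_lp = butter(2, cutoff_hz / (fs / 2), btype="lowpass", output="sos")
    displacement = sosfilt(sos_lp, audio)             # crude displacement estimate
    second = x_max * np.tanh(displacement / x_max)    # saturation processing
    return first + second                             # synthesize the two signals
```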

US Pat. No. 10,771,894

METHOD AND APPARATUS FOR AUDIO CAPTURE USING BEAMFORMING

Koninklijke Philips N.V.,...

1. An apparatus for capturing audio, the apparatus comprising:
a microphone array;
a first beamformer, the beamformer coupled to the microphone array, wherein the beamformer is arranged to generate a first beamformed audio output;
a plurality of constrained beamformers, the plurality of constrained beamformers coupled to the microphone array, wherein each of the plurality of constrained beamformers is arranged to generate a constrained beamformed audio output;
a first adapter, wherein the first adapter is arranged to adapt beamform parameters of the first beamformer;
a second adapter, wherein the second adapter is arranged to adapt constrained beamform parameters for the plurality of constrained beamformers;
a difference processor circuit, wherein the difference processor circuit is arranged to determine a difference measure for at least one of the plurality of constrained beamformers,
wherein the difference measure is indicative of a difference between beams formed by the first beamformer and the at least one of the plurality of constrained beamformers;
wherein the second adapter is arranged to adapt constrained beamform parameters with a constraint such that constrained beamform parameters are adapted only for constrained beamformers of the plurality of constrained beamformers for which a difference measure has been determined that meets a similarity criterion,
wherein the difference processor circuit is arranged to determine the difference measure for a first constrained beamformer as a difference between a first set of parameters and the constrained set of parameters for the first constrained beamformer.
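The key constraint in the claim is that a constrained beamformer is only allowed to adapt when its beam is sufficiently similar to the beam of the free-running first beamformer. A minimal sketch of that gating follows, assuming beamformer parameters are complex weight vectors and using a cosine-style distance and threshold that are purely illustrative.

```python
# Gate adaptation of constrained beamformers on a difference measure between
# their parameters and the first beamformer's parameters (assumed metric).
import numpy as np

def difference_measure(w_first, w_constrained):
    num = np.abs(np.vdot(w_first, w_constrained))
    den = np.linalg.norm(w_first) * np.linalg.norm(w_constrained) + 1e-12
    return 1.0 - num / den                 # 0 when the beams coincide

def adapt_constrained(w_first, constrained_sets, update_fn, similarity_threshold=0.2):
    out = []
    for w in constrained_sets:
        if difference_measure(w_first, w) <= similarity_threshold:
            out.append(update_fn(w))       # similarity criterion met: adapt
        else:
            out.append(w)                  # otherwise leave parameters untouched
    return out
```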

US Pat. No. 10,771,893

SOUND PRODUCING APPARATUS

xMEMS Labs, Inc., Santa ...

1. A sound producing apparatus (10), comprising:
a driving circuit (12), configured to generate a driving signal according to an input audio signal; and
a sound producing device (14);
wherein the sound producing device is driven by the driving signal, such that the sound producing device produces a plurality of air pulses at an air pulse rate, and the air pulse rate is higher than a maximum human audible frequency;
wherein the plurality of air pulses produces a non-zero offset in terms of sound pressure level, and the non-zero offset is a deviation from a zero sound pressure level;
wherein the driving signal, driving the sound producing device to produce the plurality of air pulses, is unipolar with respect to a first voltage.

US Pat. No. 10,771,892

UNDERWATER SUBWOOFER SYSTEM

TELEDYNE INSTRUMENTS, INC...

1. A submersible sound system comprising:
a housing;
a housing end piece in mechanical communication with a posterior end of the housing;
an elastic membrane in mechanical communication with an anterior end of the housing;
an end cap in mechanical communication with the elastic membrane;
a resonator end wall in mechanical communication with the anterior end of the housing;
a resonator throat disposed within the resonator end wall; and
a subwoofer speaker system disposed within the housing, the subwoofer speaker system comprising:
a magnet assembly disposed within the posterior end of the housing;
a frame in mechanical communication with the magnet assembly;
a voice coil;
a diaphragm in mechanical communication with the frame and configured to be driven by the voice coil;
a subwoofer speaker support in mechanical communication with the frame and an interior portion of the housing; and
a tuning pipe disposed within the subwoofer speaker support;
wherein the housing, the housing end piece, and a posterior surface of the subwoofer speaker support together form a posterior enclosure,
wherein an anterior surface of the resonator end wall, the anterior end of the housing, the elastic membrane, and the end cap together define a sealed cylindrical bubble sound source,
wherein the anterior surface of the diaphragm, an anterior surface of the subwoofer speaker support, an anterior portion of the housing, the resonator end wall, and the resonator throat together define a Helmholtz resonator,
wherein the resonator throat is configured to permit fluidic communication between the Helmholtz resonator and the cylindrical bubble sound source, and
wherein the tuning pipe extends between the posterior enclosure and the Helmholtz resonator and is configured to permit fluidic communication between the posterior enclosure and the Helmholtz resonator.

US Pat. No. 10,771,891

METHOD FOR MANUFACTURING AIR PULSE GENERATING ELEMENT

xMEMS Labs, Inc., Santa ...

1. A method for manufacturing an air pulse generating element, comprising:
providing a thin film layer, wherein the thin film layer comprises a membrane;
forming a plurality of actuators on the thin film layer;
forming a first chamber between the thin film layer and a first plate, wherein forming the first chamber comprises bonding the first plate on a first surface of the thin film layer through a first bonding agent;
patterning the thin film layer to form a plurality of valves, wherein the membrane and the valves are formed of the thin film layer;
forming a second chamber between the thin film layer and a second plate, wherein forming the second chamber comprises bonding the second plate on a second surface of the thin film layer opposite to the first surface through a second bonding agent; and
forming a plurality of channels in the first plate and the second plate.

US Pat. No. 10,771,890

ANNULAR SUPPORT STRUCTURE

Apple Inc., Cupertino, C...

16. A speaker device, comprising:
an axisymmetric device housing comprising an upper housing component and a lower housing component coupled to the upper housing component;
a support structure engaged with threading disposed along an interior facing surface of the lower housing component, the support structure comprising:
a first annular member, and
a second annular member coupled to the first annular member;
a subwoofer coupled to the support structure and filling a central opening defined by the support structure;
a flexible PCB extending through an opening defined by a portion of the support structure disposed outboard of the subwoofer; and
a fastener extending through a fastener opening defined by the upper housing component and engaging the support structure.

US Pat. No. 10,771,889

ACOUSTIC FILTERING

Vesper Technologies Inc.,...

1. A packaged device comprises:
a transducer;
a package substrate having an acoustic port, with the transducer supported over a surface of the package substrate and over or adjacent to the acoustic port; and
a venting mechanism affixed to the package substrate, the venting mechanism proximate to and partially surrounding the acoustic port of the package substrate, the venting mechanism comprised of a material patterned to provide first and second adjacent sidewalls along a surface of the package substrate adjoined at a first end of the venting mechanism and that are spaced by a gap that provides a vent for venting air or sound pressure from an enclosure volume of the device.

US Pat. No. 10,771,887

ANISOTROPIC BACKGROUND AUDIO SIGNAL CONTROL

CISCO TECHNOLOGY, INC., ...

1. An apparatus comprising:
a first microphone;
a second microphone; and
a processor coupled to receive signals derived from outputs of the first microphone and the second microphone, wherein the processor is configured to:
obtain, from the first microphone, a first audio signal including a user audio signal and an anisotropic background audio signal;
obtain, from the second microphone, a second audio signal including the user audio signal and the anisotropic background audio signal;
extract, from the first audio signal and the second audio signal, using a first adaptive filter, a reference audio signal including the anisotropic background audio signal;
based on the reference audio signal, cancel, using a second adaptive filter, the anisotropic background audio signal from a third audio signal derived from the first and/or second audio signals to produce an output audio signal; and
provide the output audio signal to a receiver device.
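Read as signal flow, the claim chains two adaptive filters: the first derives a reference dominated by the anisotropic background, and the second cancels that reference from the primary path. The sketch below is a generic two-stage NLMS arrangement under stated assumptions (equal-length mono inputs, illustrative filter length and step size); it is not the patented filter design.

```python
# Two-stage adaptive cancellation sketch using normalized LMS (assumed
# parameters; inputs are equal-length numpy arrays).
import numpy as np

def nlms(x, d, taps=64, mu=0.1, eps=1e-8):
    """Adapt w so that w*x tracks d; return the error signal d - w*x."""
    w = np.zeros(taps)
    e = np.zeros(len(d))
    for n in range(taps, len(d)):
        xn = x[n - taps:n][::-1]
        e[n] = d[n] - w @ xn
        w += mu * e[n] * xn / (xn @ xn + eps)
    return e

def cancel_background(mic1, mic2):
    reference = nlms(mic1, mic2)   # stage 1: background-dominated reference
    return nlms(reference, mic1)   # stage 2: cancel reference from primary mic
```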

US Pat. No. 10,771,886

HEADPHONE STRUCTURE FOR EXTENDING AND ENHANCING RESONANCE

EVGA CORPORATION, New Ta...

1. A headphone structure for extending and enhancing resonance, comprising:
a main body, the main body being formed with an accommodating portion at one side and at least one shield interlocking portion and at least one cover interlocking portion on both sides respectively, a speaker being disposed in the accommodating portion, the main body being penetrated with at least one master sound guiding hole and at least one side sound guiding hole and being composed of a front fixing member and a rear fixing member, wherein the shield interlocking portion is formed on one side edge of the front fixing member, the front fixing member is formed with a plurality of convex portions at positions on another side opposite to the shield interlocking portion, and the front fixing member is penetrated with the side sound guiding hole;
a cover, the cover being disposed on a rear side of the main body, a rear cavity space being formed between the cover and the main body, and the rear cavity space communicating with the side sound guiding hole; and
a shield, the shield being disposed on a front side of the main body, the shield having an outer ring portion and an inner ring portion connected to each other, a front cavity space being formed by the inner ring portion of the shield, and a rear cavity extending space being formed between the outer ring portion and the inner ring portion for communicating with the side sound guiding hole and the rear cavity space.

US Pat. No. 10,771,885

HEADPHONE DEVICE

JVC KENWOOD CORPORATION, ...

1. A headphone device comprising:
a head pad having a curved shape with a first radius of curvature;
a band extending from an edge of the head pad into a curved shape with a second radius of curvature different from the first radius of curvature, and supporting a housing, via a hanger, housing a speaker unit; and
a sleeve having a curved shape with the first radius of curvature and slidable along the head pad to cover the band so as to change the radius of curvature at a part of the band covered with the sleeve to approximate to the first radius of curvature.

US Pat. No. 10,771,884

ELECTRONIC DEVICES WITH COHERENT SELF-MIXING PROXIMITY SENSORS

Apple Inc., Cupertino, C...

1. An earbud, comprising:
a housing;
a speaker in the housing;
a self-mixing proximity sensor in the housing, wherein the self-mixing proximity sensor includes a laser with a laser cavity configured to emit output light that illuminates a target and wherein a portion of the output light that has illuminated the target reenters the laser cavity and causes self-mixing fluctuations in a power of the output light; and
control circuitry in the housing that is configured to gather proximity measurements with the self-mixing proximity sensor.

US Pat. No. 10,771,883

HEARING ASSISTANCE DEVICE THAT USES ONE OR MORE SENSORS TO AUTONOMOUSLY CHANGE A POWER MODE OF THE DEVICE

Eargon, Inc., Mountain V...

1. An apparatus, comprising:
a device for use with a hearing assistance device with one or more accelerometers and a power control module to receive input data indicating a change in acceleration of the device over time from the one or more accelerometers in order to make a determination to autonomously change a power mode for the hearing assistance device based on at least whether the power control module senses movement of the hearing assistance device as indicated by the accelerometers, where the power control module is configured to derive the input data indicating the change in acceleration of the hearing assistance device over time from the one or more accelerometers by using an algorithm that takes an average of a mathematical differential of a vector corresponding to gravity over a set amount of samplings, relative to a coordinate system reflective of the hearing assistance device.
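The algorithm named in the claim is concrete enough to state directly: differentiate successive gravity vectors (in the device's own coordinate frame) and average that differential over a window of samples. A short sketch follows, with a window and threshold that are assumed values:

```python
# Average of the sample-to-sample differential of the gravity vector, used to
# decide whether the device is moving (threshold is an assumed value).
import numpy as np

def average_gravity_differential(gravity_samples):
    """gravity_samples: (N, 3) gravity vectors in device coordinates."""
    diffs = np.diff(gravity_samples, axis=0)           # mathematical differential
    return np.mean(np.linalg.norm(diffs, axis=1))      # average over the samplings

def choose_power_mode(gravity_samples, movement_threshold=0.02):
    moving = average_gravity_differential(gravity_samples) > movement_threshold
    return "active" if moving else "low_power"
```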

US Pat. No. 10,771,882

USER INTERFACE FOR AN EARBUD DEVICE

SONY CORPORATION, Tokyo ...

1. A system comprising:
one or more processors; and
logic encoded in one or more non-transitory computer-readable storage media for execution by the one or more processors and when executed operable to perform operations comprising:
detecting contact of a finger of a user on a touch user interface of an earbud device;
determining a contact pattern based on movement of the finger on the touch user interface, wherein the contact pattern is associated with one of increasing volume or decreasing volume if the contact pattern is a circular swipe motion on the touch user interface of the earbud device, wherein the contact pattern is associated with engaging a phone call if the contact pattern is an upward vertical swipe motion on the touch user interface of the earbud device, wherein the contact pattern is associated with terminating an active phone call if the contact pattern is a downward vertical swipe motion on the touch user interface of the earbud device, wherein the upward vertical swipe motion is detected when the finger of the user contacts a bottom of the touch user interface and swipes vertically upward to a top of the touch user interface, and wherein the downward vertical swipe motion is detected when the finger of the user contacts the top of the touch user interface and swipes vertically downward to the bottom of the touch user interface;
mapping the contact pattern to a predetermined command; and
executing the predetermined command based on the mapping.
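The mapping step is essentially a lookup from the recognized contact pattern to a predetermined command. A trivial sketch, with pattern names and handlers that are placeholders rather than anything specified by the patent:

```python
# Pattern-to-command lookup (names and handlers are illustrative only).
GESTURE_COMMANDS = {
    "circular_swipe": "adjust_volume",   # increase or decrease volume
    "swipe_up": "engage_call",           # bottom of the touch surface to the top
    "swipe_down": "terminate_call",      # top of the touch surface to the bottom
}

def execute(contact_pattern, handlers):
    """Map a detected contact pattern to its predetermined command and run it."""
    command = GESTURE_COMMANDS.get(contact_pattern)
    if command is not None:
        handlers[command]()

# e.g. execute("swipe_up", {"engage_call": lambda: print("answering call")})
```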

US Pat. No. 10,771,881

EARPIECE WITH AUDIO 3D MENU

BRAGI GmbH, Munich (DE)

1. An earpiece comprising:
an earpiece housing;
an intelligent control system disposed within the earpiece housing;
a speaker operatively connected to the intelligent control system;
a microphone operatively connected to the intelligent control system;
at least one sensor operatively connected to the intelligent control system for providing sensor data, wherein the at least one sensor includes an inertial sensor and wherein the sensor data includes inertial sensor data;
wherein the intelligent control system is configured to determine, using the inertial sensor data, if the user movement detected with the at least one sensor is indicative that the user intends to awaken an audio menu interface comprising an audio menu, the audio menu interface providing a hierarchy of menu selections having a plurality of levels with a plurality of menu selections present at each of the plurality of levels;
wherein the intelligent control system of the earpiece is configured to interface with a user of the earpiece by determining at least one of attention or intention of the user using the sensor data without receiving manual input at the earpiece and without receiving voice input from the user;
wherein the earpiece is configured to present the audio menu when the audio menu interface is awakened and use the attention or intention of the user to select a first item from the audio menu by sensing a first head movement of the user and confirm selection of the first item by sensing a second head movement of the user;
wherein the earpiece is further configured to present a second plurality of menu selections in response to a confirmation of the selection of the first item;
wherein each of the plurality of menu selections is positioned in a different location in 3D space relative to the user using a psychoacoustic model.

US Pat. No. 10,771,879

HEADPHONE EARTIPS WITH INTERNAL SUPPORT COMPONENTS FOR OUTER EARTIP BODIES

Apple Inc., Cupertino, C...

1. An eartip, comprising:
an inner eartip body defining an inner eartip space and having an inner eartip front end and an inner eartip back end opposite from the inner eartip front end, the inner eartip body comprising:
an inner eartip exterior surface extending between the inner eartip front end and the inner eartip back end; and
an inner eartip interior surface opposite from the inner eartip exterior surface and extending between the inner eartip front end and the inner eartip back end;
an outer eartip body extending from the inner eartip front end at an interface and operative to be at least partially positioned within an ear canal; and
an internal support subsystem comprising a coil spring within the inner eartip body between the inner eartip interior and exterior surfaces extending around the inner eartip space providing cross-sectional rigidity to the inner eartip body about a respective portion of the inner eartip space.

US Pat. No. 10,771,878

MINIATURE FORM FACTOR BLUETOOTH DEVICE

Acouva, Inc., San Franci...

1. A Bluetooth in-ear utility device with miniature form factor, comprising:
a housing comprising an oval shaped trunk configured to reside in an ear canal of a user's ear within the first bend of the ear canal at a distance less than 16 millimeters from the entrance of the ear canal,
wherein the housing is configured to reside in a concha bowl distal portion of the user's ear,
wherein the oval shaped trunk is shaped to allow the oval shaped trunk to enter the ear canal while reducing interference with an ear canal wall;
a speaker (receiver) port, wherein the oval shaped trunk is shaped to allow the oval shaped trunk to directly enter the ear canal while preventing the speaker port from being blocked by the ear canal wall; and
a bone conduction microphone configured to detect resident frequencies to facilitate user voice recognition.

US Pat. No. 10,771,877

DUAL EARPIECES FOR SAME EAR

BRAGI GmbH, Munich (DE)

1. A system comprising:
a set of wireless earpieces comprising a first wireless earpiece and a second wireless earpiece, wherein each of the first wireless earpiece and the second wireless earpiece is anatomically conformed for wearing in the same ear such that both of the first wireless earpiece and the second wireless earpiece are sized and shaped to operatively fit in a left ear and not a right ear of a user or both the first wireless earpiece and the second wireless earpiece are sized and shaped to operatively fit in the right ear and not the left ear of the user, wherein each of the first wireless earpiece and the second wireless earpiece further comprises:
an earpiece housing;
a speaker disposed within the earpiece housing;
a memory device disposed within the earpiece housing;
a transceiver disposed within each earpiece housing; and
a processor disposed within each earpiece housing and operatively connected to the speaker and the transceiver;
wherein the first wireless earpiece is configured to communicate with the second wireless earpiece to synchronize data after the first wireless earpiece is removed from the ear of the user; and
wherein the second wireless earpiece is configured to communicate with the first wireless earpiece to synchronize data after the second wireless earpiece is removed from the ear of the user.

US Pat. No. 10,771,876

HEADPHONES WITH ACOUSTICALLY SPLIT CUSHIONS

Apple Inc., Cupertino, C...

1. A headphone, comprising:
a first earcup assembly including a first acoustically split cushion coupled to a first earcup, the first acoustically split cushion including a first portion comprising porous foam that is acoustically open and a second portion comprising closed foam that is acoustically sealed from the first portion;
a second earcup assembly; and
a headband extending between the first and second earcup assemblies, the headband including first and second opposing ends attached to the first and second earcup assemblies, respectively.

US Pat. No. 10,771,875

GRADIENT MICRO-ELECTRO-MECHANICAL SYSTEMS (MEMS) MICROPHONE

Harman International Indu...

17. A micro-electro-mechanical systems (MEMS) microphone assembly comprising:
a first enclosure;
a first micro-electro-mechanical systems (MEMS) transducer positioned within the first enclosure;
a second enclosure;
a second MEMS transducer positioned within the second enclosure; and
a plurality of substrate layers including a first substrate layer and a second substrate layer to support the first MEMS transducer and the second MEMS transducer,
wherein the plurality of substrate layers define a first transmission mechanism to enable the first MEMS transducer to receive an audio input signal and a second transmission mechanism to enable the second MEMS transducer to receive the audio input signal,
wherein the first transmission mechanism and the second transmission mechanism are positioned below the MEMS transducer;
wherein the plurality of substrate layers define a first sound aperture and a second sound aperture that are separated from one another by a delay distance, and
wherein the delay distance is longer than an overall length of the first enclosure and the second enclosure.

US Pat. No. 10,771,874

SPEAKER SYSTEM SUCH AS A SOUND BAR ASSEMBLY HAVING IMPROVED SOUND QUALITY

JVIS-USA, LLC, Sterling ...

1. A speaker system comprising:
an injection molded front panel having inner and outer surfaces and an outer boundary mating portion formed on the perimeter of the front panel, the front panel having a first sound opening at a first end and a second sound opening at a second end, wherein the front panel is formed as a unitary molded part from a thermoplastic;
an injection molded back panel having inner and outer surfaces and first and second ends and an outer boundary mating portion formed on the perimeter of the back panel, mateable in a sealed relation with the outer boundary mating portion of the front panel to form an outer boundary, wherein mating portions of the front and back panels are joined together to form an outer boundary, wherein the first and second ends of the front and back panels, when joined, each house a speaker unit having a first frequency range, the first and second ends being interconnected by an elongated section that separates the sound openings, wherein the back panel is formed as a unitary molded part from a thermoplastic;
wherein the front panel comprises first and second depressions at its first and second ends, respectively, each of the depressions housing a speaker unit having a second frequency range different from the first frequency range; and
wherein the first and second depressions are defined by bottom and side walls which separate the speaker units having a second frequency range from the speaker units having a first frequency range.

US Pat. No. 10,771,871

DATA CENTER ARCHITECTURE UTILIZING OPTICAL SWITCHES

Juniper Networks, Inc., ...

1. A method for routing data in a switch network, the method comprising:
receiving, by the switch network, data packets from a plurality of node devices, the switch network comprising a level of electronic switches and a level of optical switches that are located between the level of electronic switches and the plurality of node devices, the optical switches being configured to change connections between the electronic switches and the plurality of node devices;
changing connections between one or more of the electronic switches and the plurality of node devices using one or more of the optical switches; and
receiving additional data by the one or more of the electronic switches, the additional data received by the one or more of the electronic switches using the connections changed by the one or more of the optical switches.

US Pat. No. 10,771,870

TECHNOLOGIES FOR DYNAMIC REMOTE RESOURCE ALLOCATION

Intel Corporation, Santa...

1. An orchestrator server to dynamically allocate resources among a set of managed nodes, the orchestrator server comprising:
one or more processors;
one or more memory devices having stored therein a plurality of instructions that, when executed by the one or more processors, cause the orchestrator server to:
receive telemetry data from the managed nodes, wherein the telemetry data is indicative of resource utilization and workload performance by the managed nodes as the workloads are executed;
generate a resource allocation map indicative of resources that have been allocated among the managed nodes;
determine, as a function of the telemetry data and the resource allocation map, a dynamic adjustment to allocation of resources to at least one of the managed nodes to improve performance of at least one of the workloads executed on the at least one of the managed nodes; and
apply the adjustment to the allocation of resources among the managed nodes as the workloads are executed.

US Pat. No. 10,771,869

RADIO AND ADVANCED METERING DEVICE

1. A multi-device module comprising:
a host device interface within the multi-device module, the host device interface configured to connect the multi-device module to a host meter device, wherein the host device interface includes:
a communications interface configured to send communications to and receive communications from the host meter device;
a power interface configured to receive power from the host meter device; and
an antenna interface configured to connect a radio on the multi-device module with an antenna on the host meter device, the radio configured to transmit radio frequency signals to a wireless network via the antenna interface and to receive radio frequency signals from the wireless network via the antenna interface; and
a computing device within the multi-device module and communicatively coupled to the communications interface and to the radio, wherein:
the host device interface is disposed on a first side of a printed circuit board, the first side comprising a plurality of contacts that connect to the host device interface, and
the computing device and the radio are disposed on a second side of the printed circuit board, wherein the second side comprises a shield that covers the computing device and the radio.

US Pat. No. 10,771,868

OCCUPANCY PATTERN DETECTION, ESTIMATION AND PREDICTION

Google LLC, Mountain Vie...

1. A system for detecting occupancy of an enclosure comprising:
a hardware sensing system adapted to monitor utility information for the enclosure, wherein the utility information comprises a measure of electrical power used within the enclosure;
a hardware processing system programmed to:
analyze the utility information monitored by the sensing system to filter a main power line and thereby detect the use of one or more electronic devices;
determine whether the use of the one or more electronic devices detected from filtering the main power line indicates a likelihood of current occupancy of the enclosure by one or more humans;
estimate a probability of occupancy of the enclosure by one or more humans based at least in part on:
the likelihood of current occupancy indicated by the use of the one or more electronic devices detected from filtering the main power line; and
a historical probability distribution of occupancy that is fitted using previously detected departure and arrival times; and
cause a home system to operate based on the estimate of the probability.
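The estimate combines two ingredients: the likelihood implied by appliance activity detected on the main power line, and a historical occupancy prior fitted from past departure and arrival times. One simple way to fuse them is a Bayesian update; the sketch below is an assumed illustration of that fusion, not the method disclosed in the patent.

```python
# Fuse a device-activity likelihood with a historical occupancy prior
# (simple Bayesian update; numbers are illustrative).
def occupancy_probability(device_activity_likelihood, historical_prior):
    p_given_occupied = device_activity_likelihood
    p_given_empty = 1.0 - device_activity_likelihood
    num = p_given_occupied * historical_prior
    den = num + p_given_empty * (1.0 - historical_prior)
    return num / den if den else historical_prior

# Strong appliance activity (0.9) at an hour that is historically 30% occupied
# yields roughly a 0.79 occupancy estimate.
print(occupancy_probability(0.9, 0.3))
```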

US Pat. No. 10,771,866

METHODS, SYSTEMS, AND MEDIA FOR SYNCHRONIZING AUDIO AND VIDEO CONTENT ON MULTIPLE MEDIA DEVICES

Google LLC, Mountain Vie...

1. A method for synchronizing audio and video content, the method comprising:
receiving, at a media device, an indication of a media content item to be presented using the media device, wherein the media device includes an audio component for presenting audio content associated with the media content item and a video component for presenting video content associated with the media content item;
determining that the media device is associated with a group of media devices for presenting the media content item, wherein the group of media devices includes the media device and at least one audio device that presents the audio content associated with the media content item;
generating, by the media device, an audio timestamp that controls the presentation of the audio content on the group of media devices;
generating, by the media device, a video timestamp that controls the presentation of the video content on the media device based on the generated audio timestamp in response to setting the audio component of the media device as a master device and the at least one audio device in the group of media devices as a follower device; and
causing, at the media device, the video content associated with the media content item to be presented using the generated video timestamp and causing the audio content associated with the media content item to be simultaneously presented by the at least one audio device in the group of media devices by transmitting the audio timestamp from the media device to the at least one audio device in the group of media devices.
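The timing relationship in the claim is master/follower: the audio component's timestamp governs the group, and the device's video timestamp is derived from it. A minimal sketch, assuming timestamps are simple dictionaries and any audio/video offset is configured elsewhere:

```python
# Audio-master / video-follower timestamp sketch (field names are assumed).
import time

def make_audio_timestamp(playback_position_s):
    """Audio timestamp that controls presentation across the device group."""
    return {"wall_clock": time.monotonic(), "position_s": playback_position_s}

def make_video_timestamp(audio_ts, av_offset_s=0.0):
    """Video timestamp slaved to the audio master timestamp."""
    return {"wall_clock": audio_ts["wall_clock"],
            "position_s": audio_ts["position_s"] + av_offset_s}
```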

US Pat. No. 10,771,865

TECHNIQUES FOR ADVANCING PLAYBACK OF INTERACTIVE MEDIA TITLES IN RESPONSE TO USER SELECTIONS

NETFLIX, INC., Los Gatos...

1. A computer-implemented method, comprising:
playing back at least a portion of an interstitial segment included in a media title, wherein the interstitial segment indicates a set of options for a user to select, and each option corresponds to a different media segment included in the media title;
receiving, at a first point in time, a user selection of a first option included in the set of options, wherein the first option corresponds to a first media segment included in the media title;
determining a first portion of the interstitial segment that already has been committed for playback as of the first point in time;
determining a first playback position within the interstitial segment at which to begin playback of the first media segment based on the first portion of the interstitial segment; and
automatically advancing playback of the media title past a remaining portion of the interstitial segment that occurs subsequent to the first playback position to a second playback position within the first media segment.
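The seek logic implied by the claim: when the user picks an option mid-interstitial, work out how much of the interstitial has already been committed to the playback pipeline, treat the end of that committed portion as the jump-off point, and skip the rest of the interstitial. A sketch with seconds-based times and an assumed "buffer ahead" figure standing in for the committed portion:

```python
# Compute the jump-off and landing positions for an option selected during an
# interstitial (times in seconds; buffer_ahead_s is an assumed stand-in for
# "already committed for playback").
def advance_to_selection(selection_time_s, buffer_ahead_s, interstitial_end_s,
                         selected_segment_start_s):
    committed_until = min(selection_time_s + buffer_ahead_s, interstitial_end_s)
    first_playback_position = committed_until             # where the jump happens
    second_playback_position = selected_segment_start_s   # start of chosen segment
    return first_playback_position, second_playback_position
```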

US Pat. No. 10,771,864

METHODS AND APPARATUS TO MONITOR MEDIA

THE NIELSEN COMPANY (US),...

1. An apparatus to monitor media, the apparatus comprising:
a media identifier to:
determine a first media identifier and a first timestamp based on media monitoring information obtained from a content publisher, the media monitoring information obtained from the content publisher in response to an access of media by a media device, the first media identifier and the first timestamp from the content publisher, the first media identifier to identify the media accessed by the media device;
map the first media identifier and the first timestamp to a second media identifier and a second timestamp in a look-up table, the second media identifier and the second timestamp generated by an AME media analyzer before the media device accessed the media; and
credit access of the media to the media device based on the mapping; and
a report generator to generate a media crediting report based on the crediting.
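The crediting step is a table lookup: the (publisher media identifier, timestamp) pair reported with the access is matched against (AME media identifier, timestamp) entries generated earlier by the AME media analyzer. A sketch with an assumed table shape and matching tolerance:

```python
# Map a publisher-reported identifier and timestamp to the AME identifier via
# a look-up table (table shape and tolerance are assumptions).
def credit_access(first_id, first_ts, lookup_table, tolerance_s=5.0):
    """lookup_table: {publisher_id: [(publisher_ts, ame_id, ame_ts), ...]}"""
    for pub_ts, ame_id, ame_ts in lookup_table.get(first_id, []):
        if abs(first_ts - pub_ts) <= tolerance_s:
            return {"credited_media": ame_id, "ame_timestamp": ame_ts}
    return None    # no credit if nothing in the table matches
```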

US Pat. No. 10,771,863

AUTOMATED MEDIA PUBLISHING

Avid Technology, Inc., B...

1. A method of generating a rendition of a media composition, the method comprising:
receiving the media composition, wherein:
the media composition includes a compositional object that references a source media asset;
the compositional object is associated with an editorial rendition rule specified by an editor of the media composition using a media composition application; and
the editorial rendition rule specifies how the compositional object is to be rendered for each value of a plurality of values of an essence encoding parameter;
receiving a rendition profile that specifies a given value of a media essence encoding parameter for the rendition of the media composition; and
generating from the source media asset the rendition of the compositional object of the media composition in accordance with the editorial rendition rule as it applies to the compositional object for the given value of the media essence encoding parameter.

US Pat. No. 10,771,862

SYSTEMS, METHODS, AND APPARATUS TO IDENTIFY LINEAR AND NON-LINEAR MEDIA PRESENTATIONS

THE NIELSEN COMPANY (US),...

1. A system to automatically determine whether a media presentation is a linear media presentation or a non-linear media presentation, the system comprising:
a media monitoring device to:
electronically monitor media presentation of first media; and
generate at least one of an audio fingerprint or a watermark code associated with the media presentation of the first media, the audio fingerprint based on an audio characteristic of the first media electronically measured at discrete times by the media monitoring device;
a log generator to:
access the at least one of the audio fingerprint or the watermark code generated by the media monitoring device;
identify first media identifiers of the first media from the at least one of the audio fingerprint or the watermark code by matching the at least one of the audio fingerprint or the watermark code to a reference audio fingerprint or a reference watermark code in a reference database;
generate a media presentation log for the media presentation of the first media, the media presentation log including first media identifiers of the first media and first times at which the first media was presented; and
generate a reference log including second media identifiers of second media and second times at which the second media was presented as a linear media presentation;
a log comparator to compare the media presentation log to the reference log to determine a duration associated with matches between ones of the first media identifiers and ones of the second media identifiers, the matches between the ones of the first media identifiers and the ones of the second media identifiers being in a same order in the media presentation log and the reference log, the duration based on at least one of the first times or the second times; and
a presentation classifier to classify the media presentation of the first media as a linear media presentation when the duration satisfies a threshold.
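The comparison step walks both logs in order, accumulates the duration of identifiers that match in the same order in the presentation log and the reference log, and classifies the presentation as linear when that duration meets a threshold. A sketch with an assumed log-entry shape and threshold:

```python
# In-order log comparison and duration threshold (entry shape and threshold
# are assumptions): entries are (media_id, start_time_s, duration_s).
def classify_presentation(presentation_log, reference_log, threshold_s=600):
    matched = 0.0
    j = 0
    for media_id, _start, duration in presentation_log:
        k = j
        while k < len(reference_log) and reference_log[k][0] != media_id:
            k += 1                          # only matches in the same order count
        if k < len(reference_log):
            matched += duration
            j = k + 1
    return "linear" if matched >= threshold_s else "non-linear"
```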

US Pat. No. 10,771,861

DIGITAL CHANNEL INTEGRATION SYSTEM

CBS Interactive Inc., Sa...

1. A method for automatically providing metadata related to a currently playing segment of a television program, the method comprising:
receiving, at a digital channel integration system, production content metadata of the television program from a production system, the production content metadata including rundown information specifying how the television program is time-encoded;
parsing, by a processor of the digital channel integration system, the production content metadata to obtain segment data for a plurality of segments of the television program, wherein the segment data for each segment includes a segment identifier, at least one time code, and a story slug indicating an identifier for a story associated with each of the plurality of segments;
grouping, based on the time codes and the story slugs, adjacent segments having matching story slugs into one or more stories;
receiving, at the digital channel integration system, during streaming of a video stream corresponding to the television program, real-time status information from an automation system that includes an identifier of a playing segment of the television program;
associating, based on the real-time status information provided by the automation system and the segment data obtained from the production content metadata, a story with the playing segment of the television program;
determining that the playing segment of the television program corresponds to a beginning of a new story having a different story slug than a previously played segment immediately before the playing segment of the television program;
responsive to determining that the playing segment of the television program corresponds to the beginning of the new story, transmitting information to cause an encoding system to insert a cue point marker for the story in the video stream;
storing in association with the cue point marker, story metadata corresponding to the story, the story metadata derived from the rundown information; and
responsive to receiving a request from a client device currently streaming the video stream for the story metadata associated with the cue point marker, providing the story metadata to the client device to enable the client device to present information about the story.
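The grouping step folds adjacent segments whose story slugs match into a single story. A sketch, assuming the parsed rundown data arrives as per-segment dictionaries in air order (the key names are illustrative):

```python
# Group adjacent segments with matching story slugs into stories
# (segment dictionary keys are assumed).
def group_segments_into_stories(segments):
    """segments: list of {'segment_id', 'time_code', 'story_slug'} in air order."""
    stories = []
    for seg in segments:
        if stories and stories[-1]["story_slug"] == seg["story_slug"]:
            stories[-1]["segments"].append(seg["segment_id"])
        else:
            stories.append({"story_slug": seg["story_slug"],
                            "start_time_code": seg["time_code"],
                            "segments": [seg["segment_id"]]})
    return stories
```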

US Pat. No. 10,771,859

PROVISIONING COMMERCIAL-FREE MEDIA CONTENT

DISH Technologies L.L.C.,...

1. A device comprising:
one or more processing devices; and
memory communicatively coupled with and readable by the one or more processing devices, the one or more processing devices configured to perform actions comprising:
defining a priority with respect to a plurality of different data sources and performing a search for break-free media content in order to identify results in accordance with the defined priority, the performing the search comprising querying the plurality of different data sources;
consequent to the search, causing output of options of media content to cause display of the options of media content with a display, the options including indicators as to which items of the media content correspond to break-free media content;
processing a selection received via a user interface of one of the options that corresponds to an item of break-free media content;
responsive to the selection received via the user interface of the one of the options, generating at least a portion of the item of break-free media content at least partially by stripping out one or more advertisements from corresponding media content, and causing output of at least the portion of the item of break-free media content to the display;
processing a subsequent selection of an information button received while the item of break-free media content is being displayed; and
responsive to the subsequent selection, causing output of an information interface to cause the information interface to be displayed with the display while at least part of the item of break-free media content is being displayed, the information interface to provide one or more items related to the item of break-free media content that is currently being displayed, the one or more items comprising:
a first user-selectable option to request additional content associated with the item of break-free media content that is currently being displayed;
a second user-selectable option that, when selected, initiates a posting to a user's associated social media account for an online social network, wherein the posting includes information identifying the item of break-free media content that is currently being displayed; and/or
a presentation of targeted content related to the item of break-free media content that is currently being displayed.

US Pat. No. 10,771,858

CREATING AND FULFILLING DYNAMIC ADVERTISEMENT REPLACEMENT INVENTORY

The Nielsen Company (US),...

1. A system comprising:
a processing device; and
a non-transitory computer-readable storage medium in communication with the processing device, the non-transitory computer-readable storage medium storing instructions that when executed on the processing device cause the processing device to perform operations comprising:
monitoring, using automatic content recognition (ACR), a video stream as the video stream is delivered to a television (TV);
during the monitoring of the video stream, identifying, in at least one media segment of the video stream, a broadcasted TV ad before the TV is to receive and display the broadcasted TV ad for an advertisement (ad) spot, wherein the broadcasted TV ad is associated with a first party;
determining, in real time, that the broadcasted TV ad is on target for a target audience for the ad spot;
in response to determining that the broadcasted TV ad is on target, determining, in real time, that the broadcasted TV ad is underperforming based on actual addressable TV impressions of the broadcasted TV ad being less than a desired number of addressable TV impressions for the broadcasted TV ad;
in response to determining that the broadcasted TV ad is underperforming:
increasing a sell cost-per-mille (CPM) price of addressable TV impressions for the ad spot based on a percentage by which the broadcast ad is underperforming;
providing criteria to an ad replacer client for generating a request to replace the broadcasted TV ad with a replacement ad for dynamic advertisement replacement (DAR) for the ad spot; and
providing, to a second party, the request for the replacement ad for the DAR for the ad spot;
receiving the replacement ad from the second party for the DAR based on digital advertising campaign objectives of the second party, the digital advertising campaign objectives comprising an available budget, a target audience, and a desired number of addressable TV impressions;
determining a number of predicted DAR impressions for the replacement ad, the predicted DAR impressions based on historical advertising data, a broadcast TV network, and a time of day;
providing data for determining that the replacement ad corresponding to the ad spot satisfies the digital advertising campaign objectives; and
responsive to determining that the replacement ad satisfies the digital advertising campaign objectives, communicating the replacement ad to replace the broadcasted TV ad for the ad spot in the video stream delivered to the TV,
wherein determining that the replacement ad satisfies the digital advertising campaign objectives occurs as the video stream is delivered to the TV and further comprises:
receiving the CPM for the replacement ad; and
determining that the CPM for the replacement ad satisfies the available budget of the digital advertising campaign objectives.
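Two of the determinations in the claim are simple to state numerically: the broadcast ad is underperforming when actual addressable impressions fall short of the desired number, and the sell CPM for the spot is raised by the shortfall percentage. The sketch below is an assumed illustration of that arithmetic only:

```python
# Underperformance test and CPM adjustment (the pricing rule shown is an
# assumed reading of "increase by the percentage by which the ad is
# underperforming").
def adjust_cpm(actual_impressions, desired_impressions, current_cpm):
    if actual_impressions >= desired_impressions:
        return current_cpm, False                        # on pace, no change
    shortfall = (desired_impressions - actual_impressions) / desired_impressions
    return current_cpm * (1.0 + shortfall), True         # raised sell CPM
```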

US Pat. No. 10,771,857

VIDEO STREAM AD REPLACEMENT

GOLD LINE TELEMANAGEMENT ...

1. A method comprising:
retrieving a plurality of lists of advertisements from an external advertisement server, where each list in the plurality of lists of advertisements:
is associated with a corresponding user device among a plurality of user devices; and
contains a plurality of entries, each entry among the plurality of entries referencing a replacement advertisement for the corresponding user device;
wherein the retrieving is based, at least in part, on a characteristic of a user associated with the corresponding user device;
downloading, from the external advertisement server and for each user device among the plurality of user devices, a plurality of replacement advertisements among the replacement advertisements referenced by entries contained in the list associated with the each user device;
storing, at an advertisement buffering server, the plurality of replacement advertisements downloaded from the external advertisement server;
receiving notification that an existing advertisement has been detected in a video stream, the video stream further containing program content, and the video stream representing a linear channel of content;
identifying specific user devices among the plurality of user devices, where the specific user devices are playing back the linear channel as provided by a broadcaster over a packet-switched network;
selecting, for each user device among the specific user devices, a particular replacement advertisement from among the plurality of replacement advertisements downloaded for the each user device from the external advertisement server, the selecting based, at least in part, on a length of the existing advertisement;
notifying the broadcaster of the particular replacement advertisement to be played at the specific user devices instead of the existing advertisement;
instructing the broadcaster to unicast, to each of the specific user devices, over the packet-switched network, each particular replacement advertisement, accessed from the advertisement buffering server, instead of the existing advertisement;
receiving, from the broadcaster, notification that the particular replacement advertisement has started playing at each user device among the specific user devices; and
responsive to the receiving notification that the particular replacement advertisement has started playing, tracking, by calling a tracking URL associated with the particular replacement advertisement, playback of the particular replacement advertisement at each user device among the specific user devices.
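
One step in the claim above selects, per user device, a buffered replacement advertisement based at least in part on the length of the detected existing advertisement. The sketch below shows one way that selection could look; the data structure, the "longest ad that still fits" preference, and the tracking URL field are assumptions for illustration.

```python
# Hypothetical sketch: choose a buffered replacement ad whose duration fits the slot.
from dataclasses import dataclass
from typing import Optional

@dataclass
class ReplacementAd:
    ad_id: str
    duration_s: int
    tracking_url: str

def select_replacement(ads: list[ReplacementAd],
                       existing_ad_length_s: int) -> Optional[ReplacementAd]:
    """Prefer the longest replacement that still fits the existing ad's length."""
    fitting = [ad for ad in ads if ad.duration_s <= existing_ad_length_s]
    if not fitting:
        return None
    return max(fitting, key=lambda ad: ad.duration_s)

ads = [ReplacementAd("a1", 15, "https://example.invalid/track/a1"),
       ReplacementAd("a2", 30, "https://example.invalid/track/a2")]
print(select_replacement(ads, 30).ad_id)   # a2
```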

US Pat. No. 10,771,856

SYSTEM AND METHOD FOR STORING ADVERTISING DATA

11. A device comprising:a processing system including a processor and a memory of an end user device; and
a memory storing executable instructions that, when executed by the processing system, facilitate performance of operations comprising:
storing end user reference data specific to the end user device;
receiving a video data stream via a video provider system, the video data stream comprising a plurality of advertising data items and advertising reference data associated therewith, each of the plurality of advertising data items having an advertising data type of a plurality of advertising data types, the plurality of advertising data types including video, audio, text and image data types, the video provider system comprising an advertising data system in communication with the end user device and a video content server delivering the video data stream, wherein the video content server, the advertising data system and the end user device respectively are distinct components of a hierarchical network, wherein the advertising reference data is inserted into the video data stream by the advertising data system;
comparing the end user reference data with the advertising reference data to identify a set of advertising data items in the video data stream;
extracting the set of advertising data items from the video data stream, forming an extracted set of advertising data items;
storing the set of advertising data items in the memory;
generating an advertising indicator corresponding to each of the extracted set of advertising data items, thereby providing a set of advertising indicators;
extracting excerpt data from respective advertising data items of the set of advertising data items, wherein the extracting the excerpt data comprises extracting the excerpt data from the advertising reference data of the video data stream, wherein the excerpt data forms respective excerpts of respective advertising data items;
determining an end user tendency to respond to one of the plurality of advertising data types;
sorting the set of advertising indicators to form a sorted set of advertising indicators, the sorting performed according to criteria comprising at least in part the end user tendency to respond to one of the plurality of advertising data types;
receiving a review request input to view the set of advertising data items in the memory;
displaying, at a display of the end user device, the sorted set of advertising indicators in an organized display of the sorted set of advertising indicators with associated respective excerpts of respective advertising data items, wherein the displaying is responsive to the review request input;
receiving a signal indicating a user selection of one of the set of advertising indicators, forming a selected advertising indicator;
presenting via the display a plurality of options for presenting the advertising data item associated with the selected advertising indicator and accordingly comprising a selected advertising data item; and
presenting the selected advertising data item,
wherein the end user reference data comprises at least one of demographic reference data and regional reference data,
wherein the end user tendency to respond to one of the plurality of advertising data types is determined by the processing system, based on a recorded end user response to each of the plurality of advertising data types, wherein the recorded end user response to each of the plurality of advertising data types is stored at the end user device.
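
The claim above sorts the set of advertising indicators according to the end user's tendency to respond to a given advertising data type, derived from recorded responses stored at the device. A minimal sketch of that sorting step follows, assuming a simple per-type response count stands in for the recorded tendency; the field names are hypothetical.

```python
# Minimal sketch: sort indicators so the most-responded-to ad type comes first.
from dataclasses import dataclass

@dataclass
class AdIndicator:
    ad_id: str
    ad_type: str      # "video" | "audio" | "text" | "image"
    excerpt: str

# Recorded end user responses per advertising data type (stored at the device).
recorded_responses = {"video": 12, "audio": 3, "text": 7, "image": 1}

def sort_indicators(indicators: list[AdIndicator]) -> list[AdIndicator]:
    return sorted(indicators,
                  key=lambda i: recorded_responses.get(i.ad_type, 0),
                  reverse=True)

items = [AdIndicator("t1", "text", "20% off..."),
         AdIndicator("v1", "video", "New model...")]
print([i.ad_id for i in sort_indicators(items)])   # ['v1', 't1']
```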

US Pat. No. 10,771,855

DEEP CHARACTERIZATION OF CONTENT PLAYBACK SYSTEMS

Amazon Technologies, Inc....

1. A computing system, comprising:one or more processors and memory configured to:
receive from a client device a request to play back media content;
receive a first parameter value and a second parameter value from a first component of a plurality of interconnected components of a content playback system, the first parameter value representing a characteristic of the first component, and the second parameter value representing a characteristic of a second component of the content playback system, wherein the characteristic of the first component and the characteristic of the second component correspond to different capabilities of the first component and the second component for a particular content playback parameter, and wherein the first component and the second component are configured, as part of the content playback system, to facilitate playback of the media content requested by the client device, the first component is part of the client device, and the second component is discrete from the client device;
determine the capability of the second component corresponding to the particular content playback parameter by using the second parameter value as a database index;
generate manifest data representing content playback versions of media content at a plurality of quality levels, the content playback versions being selected to ensure compatibility with the different capabilities of the first component and the second component and representing a subset of less than all of a superset of content playback versions; and
transmit the manifest data to the first component of the content playback system.
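
The claim above generates manifest data that lists only the content playback versions compatible with the capabilities of both the client-side component and the discrete downstream component. The sketch below illustrates that filtering idea using maximum resolution as the shared playback parameter; the structures and capability values are assumptions, not the claimed system.

```python
# Illustrative only: filter a superset of playback versions down to those
# compatible with the most constrained component.
from dataclasses import dataclass

@dataclass
class PlaybackVersion:
    bitrate_kbps: int
    resolution: tuple[int, int]

first_component_max = (3840, 2160)    # e.g., the client device's decoder
second_component_max = (1920, 1080)   # e.g., a discrete display, looked up by parameter value

def compatible_versions(superset: list[PlaybackVersion]) -> list[PlaybackVersion]:
    max_w = min(first_component_max[0], second_component_max[0])
    max_h = min(first_component_max[1], second_component_max[1])
    return [v for v in superset
            if v.resolution[0] <= max_w and v.resolution[1] <= max_h]

superset = [PlaybackVersion(20000, (3840, 2160)),
            PlaybackVersion(8000, (1920, 1080)),
            PlaybackVersion(3000, (1280, 720))]
manifest = compatible_versions(superset)          # excludes the 2160p version
print([v.resolution for v in manifest])
```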

US Pat. No. 10,771,854

VIDEO STREAMING APPARATUS AND METHOD IN ELECTRONIC DEVICE

Samsung Electronics Co., ...

1. A video streaming method in an electronic device supporting multiple communication paths at least based on a multi-radio access technology, the video streaming method comprising:sending a request message to a server through each of the multiple communication paths and receiving a video segment from the server through each of the multiple communication paths in response to the request message;
estimating a transmission quality in each of the multiple communication paths based on the received video segment received through each of the multiple communication paths;
determining one communication mode designating at least one communication path for supporting video streaming from among multiple communication modes, based on the transmission quality, estimated for each of the multiple communication paths, and a preset reference transmission quality,
wherein the multiple communication modes comprise a single transmission mode for supporting video streaming through one of the multiple communication paths, a main/backup transmission mode for supporting video streaming by using a first communication path as a main communication path and a second communication path as a backup communication path, and a merged transmission mode for supporting video streaming through two or more communication paths among the multiple communication paths; and
performing prefetching with respect to the video segment by using the backup communication path in case of serving video streaming by using the main communication path in the main/backup transmission mode.
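
The claim above chooses among a single mode, a main/backup mode, and a merged mode by comparing the estimated per-path transmission quality against a preset reference quality. A rough sketch of that decision follows; the thresholds, path names, and tie-breaking rule are hypothetical.

```python
# Rough sketch: pick a transmission mode from per-path quality estimates.
def choose_mode(path_qualities: dict[str, float],
                reference_quality: float) -> tuple[str, list[str]]:
    good = [p for p, q in path_qualities.items() if q >= reference_quality]
    if len(good) >= 2:
        return "merged", good                       # stream over two or more paths
    if len(good) == 1:
        backups = [p for p in path_qualities if p not in good]
        if backups:
            return "main/backup", [good[0], backups[0]]   # prefetch segments on the backup
        return "single", good
    # No path meets the reference quality: fall back to the best single path.
    best = max(path_qualities, key=path_qualities.get)
    return "single", [best]

print(choose_mode({"lte": 0.9, "wifi": 0.4}, reference_quality=0.6))
# ('main/backup', ['lte', 'wifi'])
```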

US Pat. No. 10,771,853

SYSTEM AND METHOD FOR CAPTION MODIFICATION

ARRIS ENTERPRISES LLC, S...

1. A device for use with a viewing device operable to display a video, said device comprising:an image receiver operable to receive image data;
a closed caption receiver operable to receive closed caption data;
an image decoder operable to decode the image data into image display data;
a closed caption decoder operable to decode the closed caption data into closed caption display data;
a video combining unit which generates a first content package based on a first image display data and the closed caption display data;
an output port operable to output the first content package to the viewing device to display a first video image over a first period of time;
an instruction receiver operable to receive a closed caption modification instruction and to output an image modification signal based on the closed caption modification instruction; and
a closed caption modifying unit which generates modified closed caption display data,
wherein said video combining unit is further operable, based on the image modification signal, to generate a second content package based on the first image display data and the modified closed caption display data,
said output port is further operable to output the second content package to the viewing device to display a second video image over a second period of time, and
the second content package is generated by locating the first image display data corresponding to the first period of time which is previous to the second period of time, and the second video image includes the first image display data and the modified closed caption display data, and
wherein the device further comprises a memory operable to store the image display data and the closed caption display data,
said instruction receiver is further operable to receive a trick play closed caption modification instruction and to output a trick play signal based on the trick play closed caption modification instruction,
said video combining unit is further operable, based on the trick play signal, to generate a trick play content package based on the image display data and the modified closed caption display data, and
said output port is further operable to output the trick play content package to the viewing device to display a trick play video image for rewinding to the first set of image display data corresponding to the first period of time.

US Pat. No. 10,771,852

ON-DEMAND LIVE MEDIA CONTENT STREAMING

1. A method comprising:receiving, by a processing system having a processor, a first request for live content from first equipment of a first user, wherein the first request specifies a first location of the live content;
in response to the receiving the first request, sending, by the processing system to equipment of a group of users within a pre-determined distance of the first location, a second request for the live content;
receiving, by the processing system, a first reply from second equipment of a second user, wherein the first reply includes an identification of a capability of the second equipment;
in response to the receiving the first reply, presenting, by the processing system, a second reply to the first equipment of the first user, wherein the second reply includes the identification of the capability of the second equipment;
receiving, by the processing system from the first equipment of the first user, a selection;
in response to the receiving the selection, sending, by the processing system to the second equipment of the second user, a third request for the live content; and
receiving, at the first equipment of the first user, the live content, wherein the live content originates from the second equipment of the second user;
charging the first user a first fee associated with the live content; and
paying a second fee to the second user wherein the second fee is specified by the second user and the second reply includes the first fee.

US Pat. No. 10,771,851

INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, PROGRAM AND INFORMATION PROCESSING SYSTEM

Sony Corporation, (JP)

1. An apparatus comprising:circuitry configured to:
receive a request to reproduce content data;
acquire, from an external device, information related to the content data, in which the information includes a reproducing point of the content data reproduced on the external device; and
control reproduction of the content data to initiate reproduction of the content data based on the reproducing point,
wherein the reproducing point of the content data indicates a temporal position within the content data at which the content data is currently being reproduced on the external device, at a time when the reproduction of the content data is initiated at the apparatus, and wherein the reproduction of the content data is initiated at another temporal position in the content data that is later than the temporal position within the content data.

US Pat. No. 10,771,850

METHOD AND APPARATUS FOR OBTAINING RECORDED MEDIA CONTENT

1. A method comprising:sending, by an end user device to a digital video recorder via a first communication channel, a first request to receive a requested media content item, the requested media content item having been previously recorded by the digital video recorder, wherein the digital video recorder is distinct from the end user device;
receiving, by the end user device responsive to the sending of the first request, a metadata file, wherein the metadata file comprises authentication information and a network address of a media content server where the requested media content item is located, wherein the media content server is distinct from each of the digital video recorder and the end user device;
sending to the media content server by the end user device via a second communication channel distinct from the first communication channel and not including the digital video recorder, responsive to the receiving of the metadata file, a second request to receive the requested media content item, wherein the second request is sent to the network address, contained in the metadata file, of the media content server where the requested media content item is located, wherein the second request comprises the authentication information contained in the metadata file, and wherein the second request comprises an identification of the requested media content item; and
receiving by the end user device via the second communication channel, responsive to the authentication information and the identification of the requested media content item that are sent in the second request, the requested media content item from the media content server,
wherein the authentication information comprises credential information and digital rights management information, the credential information including unique identification information of the digital video recorder and unique identification information of a subscriber to a service that delivers media content to the digital video recorder, and
wherein, in accordance with the authentication information, the end user device receives the requested media content item directly from the media content server.
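
The claim above describes a two-channel flow: the end user device asks the DVR for a metadata file, then uses the network address and authentication information in that file to fetch the recording directly from a media content server. The sketch below mirrors that shape; the URLs, JSON field names, and use of the `requests` library are assumptions for illustration only.

```python
# Hedged sketch of the two-channel retrieval flow described in the claim.
import requests

def fetch_recorded_item(dvr_url: str, item_id: str) -> bytes:
    # First channel: ask the DVR for the metadata file describing the recording.
    meta = requests.get(f"{dvr_url}/recordings/{item_id}/metadata", timeout=10).json()
    server_address = meta["content_server"]   # network address of the media content server
    auth = meta["authentication"]             # credential + DRM information

    # Second channel: request the item directly from the content server.
    resp = requests.get(
        f"{server_address}/media",
        params={"item": item_id},
        headers={"Authorization": auth["token"]},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.content
```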

US Pat. No. 10,771,849

MULTIMEDIA SYSTEM FOR MOBILE CLIENT PLATFORMS

1. A method comprising:receiving audio and video segments encoded in a digital encoding format and with an encoding rate; wherein, said audio and video segments are associated with object parameters and supplied host path identification to form multimedia objects;
requesting by a multimedia player, transmission of said multimedia objects; wherein said multimedia objects are located using http and received by said multimedia player from servers using a wireless connection;
playing back received multimedia objects by the multimedia player, wherein the multimedia player is configured to play multimedia objects in a sequence such that fluidity, video quality and audio quality are maintained by selecting a plurality of said multimedia objects that reflect available network bandwidth, autonomously adjusting said selection and playback according to the multimedia object parameters and supplied host path identification, and by
utilizing optimized decoding processes to maintain quality playback.

US Pat. No. 10,771,848

ACTIONABLE CONTENTS OF INTEREST

Alphonso Inc., Mountain ...

1. An automated method of creating a media consumer-personalized database table of selectable actions related to media content of interest to a media consumer that is being displayed on a media device, and using the database table of selectable actions to initiate an action, wherein a media content database maintains (i) an identifier of media content that is broadcast or streamed to a plurality of media devices, and (ii) metadata associated with the media content, the method comprising:(a) recording, by the media device, an indication of media content of interest to the media consumer via a pre-designated input received by the media device from a user input device when the media content of interest is being displayed;
(b) creating a fingerprint, by a fingerprint processor, for a segment of each indicated media content of interest, and electronically communicating the fingerprint to a remote server;
(c) performing automated content recognition (ACR) in the remote server on the fingerprints, the remote server thereby identifying the media content for each of the media consumer's indicated media content of interest by its respective identifier;
(d) populating, by the remote server that is in electronic communication with the media content database, the database table of selectable actions with at least the following items for each indicated media content of interest:
(i) the metadata associated with the media content of interest, and
(ii) selectable actions that can be taken using the metadata related to the media content of interest, wherein the selectable actions are based on the metadata related to the media content of interest, thereby allowing different selectable actions to be taken for different media content of interest, and
wherein the database table of selectable actions is populated with the metadata and associated selectable actions for a plurality of different media content of interest;
(e) displaying in a single display screen, on the media device, the populated database table of selectable actions for the plurality of different media content of interest, wherein the single display screen shows the plurality of different media content of interest, and their respective selectable actions; and
(f) initiating, by the media device, at least one of the selectable actions upon receipt by the media consumer or another entity of a request to take one of the selectable actions,
wherein at least some of the selectable actions require the media device to electronically communicate with an entity external to the media device to perform the selectable actions, and
wherein steps (e) and (f) occur at a later point in time than step (a).
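
Step (d) of the claim above populates a per-consumer table in which each recognized content item carries its metadata and a set of selectable actions derived from that metadata. The sketch below shows one way such a table could be assembled; the action rules and field names are hypothetical.

```python
# Illustrative sketch: build a table of selectable actions from recognized content metadata.
def actions_for(metadata: dict) -> list[str]:
    actions = []
    if metadata.get("kind") == "ad" and "product_url" in metadata:
        actions.append(f"open {metadata['product_url']}")
    if metadata.get("kind") == "program":
        actions.append("record series")
        actions.append("show cast and crew")
    return actions

def populate_table(recognized_items: list[dict]) -> list[dict]:
    """One row per indicated content of interest: metadata plus its selectable actions."""
    return [{"content_id": m["content_id"], "metadata": m, "actions": actions_for(m)}
            for m in recognized_items]

rows = populate_table([
    {"content_id": "c1", "kind": "ad", "product_url": "https://example.invalid/shoes"},
    {"content_id": "c2", "kind": "program", "title": "Nature Hour"},
])
print(rows[1]["actions"])   # ['record series', 'show cast and crew']
```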

US Pat. No. 10,771,847

SETUP PROCEDURES FOR AN ELECTRONIC DEVICE

Apple Inc., Cupertino, C...

1. A method comprising:at an electronic device in communication with a display and one or more input devices:
determining a primary content provider for the electronic device that allows for content associated with the primary content provider to be accessible on the electronic device;
after determining the primary content provider for the electronic device, displaying, on the display, one or more representations of one or more suggested applications to install on the electronic device based on the determined primary content provider, including a first application associated with the primary content provider;
while displaying the one or more representations of the one or more suggested applications, receiving, via the one or more input devices, a single input corresponding to a request to install the one or more suggested applications on the electronic device;
in response to receiving the single input:
installing the one or more suggested applications on the electronic device; and
in accordance with a determination that one or more criteria are met:
authorizing the electronic device with the primary content provider;
granting access to the authorization of the electronic device with the primary content provider to the one or more suggested applications, wherein granting access to the authorization of the electronic device with the primary content provider to the one or more suggested applications provides the electronic device with access to media content via the one or more suggested applications; and
enabling sharing of content viewing information associated with the one or more suggested applications with a unified media browsing application installed on the electronic device; and
after receiving the single input:
receiving, via the one or more suggested applications at the electronic device, the media content using the authorization of the electronic device with the primary content provider; and
displaying, in the unified media browsing application, one or more representations of media content based on the content viewing information associated with the one or more suggested applications.

US Pat. No. 10,771,846

ELECTRONIC APPARATUS FOR PLAYING SUBSTITUTIONAL ADVERTISEMENT AND METHOD FOR CONTROLLING METHOD THEREOF

SAMSUNG ELECTRONICS CO., ...

1. A display apparatus comprising:a display;
at least one memory storing instructions; and
at least one processor configured to execute the instructions to cause the following to be performed by the display apparatus:
receiving an advertisement image corresponding to a channel from a broadcasting transmission apparatus, and
while the advertisement image is being displayed on the display,
transmitting, to an advertising server, a request for a substitutional advertisement image for the channel,
receiving, from the advertising server, the substitutional advertisement image in response to the transmitted request,
determining whether an operable scaler for displaying the received substitutional advertisement image is present in the display apparatus, and
when the operable scaler is determined to be present,
substituting the received substitutional advertisement image for the advertisement image, and
displaying the substituted substitutional advertisement image on the display.

US Pat. No. 10,771,845

INFORMATION PROCESSING APPARATUS AND METHOD FOR ESTIMATING ATTRIBUTE OF A USER BASED ON A VOICE INPUT

SONY CORPORATION, Tokyo ...

1. An information processing apparatus, comprising:circuitry configured to:
obtain information of a voice input of at least one user;
estimate an attribute of the at least one user based on the information of the voice input;
set a size and a font of at least one character on a layout image based on the attribute of the at least one user;
control output of the layout image for a user input based on the set size and the set font of the at least one character;
obtain the user input for the layout image; and
control output of the user input to a thread associated with the estimated attribute of the at least one user.

US Pat. No. 10,771,844

METHODS AND APPARATUS TO ADJUST CONTENT PRESENTED TO AN INDIVIDUAL

The Nielsen Company (US),...

1. A system comprising:a first sensor to measure a first response of an individual to first content during a first time frame and a second response of the individual to the first content during a second time frame, the first sensor including a pupil dilation sensor;
a second sensor to measure a third response of the individual to the first content during the first time frame and a fourth response of the individual to the first content during the second time frame; and
a processor to:
generate a cognitive load index based on data from the pupil dilation sensor, the cognitive load index representative of how much of an information processing capacity of the individual is being used;
determine a first mental classification of the individual based on (1) a first comparison of the first response to a first threshold and (2) a second comparison of the third response to a second threshold;
determine a second mental classification of the individual based on (1) a third comparison of the second response to a third threshold and (2) a fourth comparison of the fourth response to a fourth threshold;
determine a mental state of the individual based on a degree of similarity between the first mental classification and the second mental classification; and
at least one of modify the first content to include second content or replace the first content with the second content based on the mental state.
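
The claim above derives two per-time-frame mental classifications by comparing two sensor responses against thresholds, then determines a mental state from the degree of similarity between the classifications. The following sketch illustrates that structure; the thresholds, class labels, and the "same or uncertain" similarity rule are assumptions.

```python
# Minimal sketch of the two-sensor, two-time-frame classification described above.
def classify(response_a: float, threshold_a: float,
             response_b: float, threshold_b: float) -> str:
    """Combine two thresholded responses into a coarse mental classification."""
    if response_a > threshold_a and response_b > threshold_b:
        return "engaged"
    if response_a > threshold_a or response_b > threshold_b:
        return "neutral"
    return "disengaged"

def mental_state(first: str, second: str) -> str:
    # Degree of similarity between the two time frames decides the final state.
    return first if first == second else "uncertain"

frame1 = classify(0.8, 0.5, 0.7, 0.5)    # e.g., cognitive-load index vs. second sensor
frame2 = classify(0.6, 0.5, 0.2, 0.5)
state = mental_state(frame1, frame2)      # 'uncertain' -> content may be modified or replaced
print(frame1, frame2, state)
```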

US Pat. No. 10,771,843

MEDIA DISTRIBUTION WITH SAMPLE VARIANTS FOR NORMALIZED ENCRYPTION

Telefonaktiebolaget LM Er...

1. A method for distributing media content with end-to-end encryption, the method comprising:processing a media content asset for distribution with end-to-end encryption, the processing comprising: (i) encrypting a main track of the media content asset using a first encryption scheme, the main track comprising a plurality of samples and each sample of the main track encrypted using the first encryption scheme, and (ii) encrypting a sample variant track of the media content asset using a second encryption scheme, the sample variant track comprising a plurality of variants respectively corresponding to the plurality of samples of the main track and each of the variants of the sample variant track encrypted using the second encryption scheme, wherein the second encryption scheme is different from the first encryption scheme used for encrypting the main track of the media content asset; and
transmitting the encrypted main track containing the plurality of samples and the encrypted sample variant track containing the corresponding plurality of variants in a distribution container format to an edge media router (EMR) device configured to repackage the media content asset into a delivery container format without reencrypting the media content asset, the delivery container format comprising a format compatible for processing by at least one of a premises gateway node, a set-top-box (STB), and a user equipment (UE) device.

US Pat. No. 10,771,842

SUPPLEMENTAL CONTENT INSERTION USING DIFFERENTIAL MEDIA PRESENTATION DESCRIPTIONS FOR VIDEO STREAMING

HULU, LLC, Santa Monica,...

15. A method comprising:receiving, by a computing device, a first instance of a media presentation description for a first set of segments of a stream of a media presentation including first status information, wherein the first status information identifies a break from the stream of the media presentation for insertion of supplemental content;
sending, by the computing device, a first request with the first status information;
receiving, by the computing device, a second instance of the media presentation description for at least a portion of the supplemental content, wherein the second instance of the media presentation description includes second status information that reverts back to the stream of the media presentation after insertion of the supplemental content;
sending, by the computing device, a second request with the second status information; and
receiving, by the computing device, a third instance of the media presentation description for a second set of segments of the stream of the media presentation.

US Pat. No. 10,771,841

DISPOSITION OF VIDEO ALERTS AND INTEGRATION OF A MOBILE DEVICE INTO A LOCAL SERVICE DOMAIN

Comcast Cable Communicatio...

1. An apparatus comprising:one or more processors; and
memory storing instructions that, when executed by the one or more processors, cause the apparatus to:
receive data indicating a content type and a user;
receive, via an external network, an indication that content associated with the content type is available to a first user device in communication with the apparatus, wherein the first user device is:
in communication with the apparatus via a network local to a premises, and in communication with the apparatus via a first communication protocol;
determine that a second user device, associated with the user, has entered a wireless range of the apparatus, wherein the second user device is in communication with the apparatus via a second communication protocol; and
after the determining that the second user device has entered the wireless range of the apparatus, send, by the apparatus and to the second user device, a notification for output for the user that the content is available at the first user device.

US Pat. No. 10,771,840

SINK DEVICE

PANASONIC INTELLECTUAL PR...

1. A sink device bi-directionally communicably connected to a source device, the sink device comprising:a first memory storing a plurality of pieces of audio format information including information representing an audio format that the sink device can process; and
a controller for selecting, based on a first selection or a second selection, from among the plurality of pieces of audio format information stored in the first memory, one piece of audio format information corresponding to receiver format information representing an audio format that a receiver device connected to the sink device can process, and for outputting the selected audio format information to the source device, wherein:
the controller determines whether or not an audio output destination set beforehand is the receiver device,
when the controller has determined that the audio output destination is not the receiver device, the controller performs the second selection, and
when the controller has determined that the audio output destination is the receiver device, the controller determines whether or not an audio output setting is a predetermined setting,
when the controller has determined that the audio output setting is the predetermined setting, the controller performs either the first selection or the second selection, and
when the controller has determined that the audio output setting is not the predetermined setting, the controller performs the second selection.
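
The controller logic in the claim above branches on whether the configured audio output destination is the receiver device and whether the output setting matches a predetermined setting, then performs either the first or the second selection of a stored audio format. The sketch below is one possible reading, treating the first selection as "a stored format the receiver can also process" and the second as the sink's own preferred format; both interpretations are assumptions.

```python
# Hypothetical sketch of the controller's audio format selection branches.
def select_audio_format(sink_formats: list[str],
                        receiver_formats: list[str],
                        output_destination_is_receiver: bool,
                        output_setting_is_predetermined: bool) -> str:
    def first_selection() -> str:
        # A stored format that the receiver device can also process.
        for fmt in sink_formats:
            if fmt in receiver_formats:
                return fmt
        return sink_formats[0]

    def second_selection() -> str:
        return sink_formats[0]          # e.g., the sink's own preferred format

    if not output_destination_is_receiver:
        return second_selection()
    if output_setting_is_predetermined:
        return first_selection()        # either selection is permitted; prefer the receiver match
    return second_selection()

print(select_audio_format(["PCM", "AC3", "DTS"], ["AC3"], True, True))   # AC3
```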

US Pat. No. 10,771,839

CONTROL METHOD AND DISPLAY APPARATUS PROVIDING VARIOUS TYPES OF CONTENT

SAMSUNG ELECTRONICS CO., ...

1. A display apparatus comprising:a display;
a tuner;
a first interface configured to connect to an external device;
a second interface configured to connect to a network; and
a controller configured to:
control the display to display a content of a broadcast channel received via the tuner,
control the display to display a content received via the first interface,
control the display to display a screen of an application received via the second interface,
control the display to display a content bar including a first category user interface (UI) item in connection with the tuner for indicating a last displayed channel, which is a most recently displayed channel of the tuner prior to the display of the content bar, a second category UI item in connection with the first interface for indicating a last reproduced content, which is a most recently reproduced content received via the first interface prior to the display of the content bar, and a third category UI item in connection with the second interface for indicating a last used application, which is a most recently used application via the second interface prior to the display of the content bar,
in response to a selection of the first category UI item, control the tuner to tune to the last displayed channel and display received broadcast content on the display,
in response to a selection of the second category UI item, control the first interface to connect to the external device and display the last reproduced content stored in the external device on the display, and
in response to a selection of the third category UI item, control the second interface to connect to the network and display a screen of the last used application on the display.

US Pat. No. 10,771,838

SUPPLEMENTAL SERVICES INTERFACE

Comcast Cable Communicati...

1. A system comprising:at least a portion of a network, wherein the network is communicatively coupled between at least one computing device and another device;
at least one data storage device communicatively coupled with the at least one computing device and storing first data associating a plurality of items of scheduled video content with a first plurality of services and with a second plurality of services; and
the at least one computing device, wherein the at least one computing device comprises:
at least one processor; and
memory storing instructions that, when executed by the at least one processor, cause the at least one computing device to:
send, via the network, an indication of at least one of the plurality of items of scheduled video content and at least one of the first plurality of services associated with the at least one of the plurality of items of scheduled video content;
receive, from the another device, a user selection of a first service of the first plurality of services;
determine, based on the first data, whether a first item of scheduled video content, which is associated with the first service, is also associated with any of the second plurality of services; and
send, via the network and based on determining that the first item of scheduled video content is associated with a second service of the second plurality of services, an indication of the second service.

US Pat. No. 10,771,837

SYSTEMS AND METHODS FOR PRESENTING BACKGROUND GRAPHICS FOR MEDIA ASSET IDENTIFIERS IDENTIFIED IN A USER DEFINED DATA STRUCTURE

Rovi Guides, Inc., San J...

1. A method for selecting graphics to display as backgrounds of media asset identifiers, the method comprising:receiving a plurality of media asset identifiers;
comparing metadata of each of the plurality of received media asset identifiers with metadata of a first media asset identifier stored in a data structure;
determining, based on the comparing, that a second media asset identifier of the plurality of received media asset identifiers corresponds to the first media asset identifier stored in the data structure;
in response to the determining, determining display dimensions of the second media asset identifier;
accessing a database storing a plurality of graphics associated with the second media asset identifier;
retrieving, from a field in the database corresponding to a respective graphic of the plurality of graphics associated with the second media asset identifier, the dimensions of the respective graphic;
comparing the dimensions of the respective graphic to the display dimensions of the second media asset identifier;
determining whether the dimensions of the respective graphic match the display dimensions of the second media asset identifier;
in response to determining that the dimensions of the respective graphic match the display dimensions of the second media asset identifier, retrieving, from the database storing the plurality of graphics associated with the second media asset identifier, the graphic corresponding to the second media asset identifier;
generating for display the retrieved graphic as a background graphic of the second media asset identifier;
determining that the dimensions of each respective graphic of the plurality of graphics associated with the second media asset identifier stored in the database do not match the display dimensions of the second media asset identifier;
in response to determining that the dimensions of each respective graphic of the plurality of graphics associated with the second media asset identifier stored in the database do not match the display dimensions of the second media asset identifier, accessing a server;
retrieving metadata associated with a server graphic;
comparing the metadata associated with the server graphic with the metadata of the second media asset identifier;
in response to determining the metadata associated with the server graphic matches the metadata of the second media asset identifier, retrieving the server graphic; and
generating for display the retrieved server graphic as the background graphic.
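
The claim above matches the dimensions of locally stored graphics against the display dimensions of the media asset identifier and, when no local graphic matches, falls back to a server graphic selected by metadata comparison. The sketch below captures that decision; the record layout and the server fallback callable are illustrative assumptions.

```python
# Sketch only: dimension-matched local graphic with a server-side fallback.
from typing import Optional

def pick_background(local_graphics: list[dict],
                    display_dims: tuple[int, int],
                    fetch_server_graphic) -> Optional[dict]:
    for graphic in local_graphics:
        if (graphic["width"], graphic["height"]) == display_dims:
            return graphic                       # exact dimension match found locally
    # No local graphic matches: fall back to a server graphic matched on metadata.
    return fetch_server_graphic(display_dims)

local = [{"width": 320, "height": 180, "uri": "g1.png"}]
result = pick_background(local, (640, 360),
                         lambda dims: {"width": dims[0], "height": dims[1], "uri": "server.png"})
print(result["uri"])   # server.png
```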

US Pat. No. 10,771,836

DISPLAY APPARATUS AND CONTROL METHOD THEREOF

SAMSUNG ELECTRONICS CO., ...

1. A display system comprising:an input device;
a communicator configured to receive a wireless signal from the input device; and
a processor configured to output an image signal configured to:
display a first set of user interface (UI) items in a first group of a plurality of groups of UI items over an image displayed in a display area of a display device based on a wireless signal corresponding to a first input being received from the input device via the communicator;
graphically distinguish a first UI item from among the first set of UI items, to represent selection of the first UI item, while the first set of UI items is displayed in the display area;
graphically distinguish a second UI item from among the first set of UI items, to represent selection of the second UI item, based on a wireless signal corresponding to a second input being received from the input device via the communicator;
display a second set of UI items in a second group of the plurality of groups in the display area by sliding the second set of UI items into the display area based on a wireless signal corresponding to a third input being received from the input device via the communicator while the first set of UI items is displayed in the display area; and
display a third set of UI items in the first group that is different than the first set of UI items based on a wireless signal corresponding to a fourth input being received from the input device via the communicator while the first set of UI items is displayed in the display area.

US Pat. No. 10,771,835

CONTROLLING CONFIGURABLE LIGHTS USING COLOR METADATA OF A MEDIA STREAM

Amazon Technologies, Inc....

1. A method comprising:sending, by one or more computer processors coupled to at least one memory, a request for media content for presentation at a user device, the media content comprising video content;
receiving, from a media content server, a media stream comprising encrypted in-band data and unencrypted out-of-band data, the encrypted in-band data comprising video content data, and the unencrypted out-of-band data comprising color metadata;
determining, based at least in part on the color metadata, first lighting color parameters for a plurality of configurable lights associated with an environment of the user device, the first lighting color parameters corresponding to a first portion of the video content;
causing, based at least in part on the video content data, a presentation of the first portion of the video content at the user device;
sending, based at least in part on the first lighting color parameters, a first command for the plurality of configurable lights to output a first plurality of colors during the presentation of the first portion of the video content at the user device;
determining, based at least in part on the color metadata, second lighting color parameters for the plurality of configurable lights, the second lighting color parameters corresponding to a second portion of the video content;
causing, based at least in part on the video content data, a presentation of the second portion of the video content at the user device after the presentation of the first portion of the video content at the user device; and
sending, based at least in part on the second lighting color parameters, a second command for the plurality of configurable lights to output a second plurality of colors during the presentation of the second portion of the video content at the user device, the second plurality of colors being different from the first plurality of colors.
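
The claim above derives per-portion lighting color parameters from unencrypted out-of-band color metadata and sends a command to the configurable lights for each portion of the video. The sketch below shows one possible mapping from metadata to light commands; the metadata layout, palette handling, and command structure are hypothetical.

```python
# Illustrative sketch: per-portion light commands derived from color metadata.
from dataclasses import dataclass

@dataclass
class LightCommand:
    portion: int
    colors: list[str]          # one hex color per configurable light

def commands_from_metadata(color_metadata: list[dict], light_count: int) -> list[LightCommand]:
    commands = []
    for entry in color_metadata:
        # Repeat or truncate the palette so every light gets a color.
        palette = entry["palette"]
        colors = [palette[i % len(palette)] for i in range(light_count)]
        commands.append(LightCommand(portion=entry["portion"], colors=colors))
    return commands

metadata = [{"portion": 1, "palette": ["#1030a0", "#0a0a40"]},
            {"portion": 2, "palette": ["#f07000"]}]
for cmd in commands_from_metadata(metadata, light_count=3):
    print(cmd)   # issued as the matching portion of the video is presented
```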

US Pat. No. 10,771,834

PERSONALIZED CONTENT

Oath, Inc., Dulles, VA (...

8. An apparatus, comprising:a storage device that stores a set of instructions; and
at least one processor coupled to the storage device, the set of instructions configuring the at least one processor to:
provide a channel over a network to distribute multimedia content to one or more terminal devices;
determine the state of resources and allocate a frequency or PID pair to the one or more terminal devices based on the state of resources;
transmit, from a media switch to the terminal device, reusable access information associated with the channel;
transmit, from the media switch to a network device, the multimedia content;
detect at least one of a user command or a pause command from the terminal device;
track which data units have been received; and
reply, based on scheduling of the multimedia content, to the detected user command.

US Pat. No. 10,771,833

SYSTEM AND METHOD FOR IMPROVING STREAMING VIDEO VIA BETTER BUFFER MANAGEMENT

The Trustees of Princeton...

1. A method for improving video streaming performance of a video in a system having a client machine and remote machine, the method being performed by the client machine and comprising:determining a first number based on one or more parameters, at least one of the parameters being related to current network conditions;
determining a second number corresponding to a number of video segments of the video, as calculated by a total size of the video segments, that is greater than or equal in size to a third number determined based on at least a bandwidth-delay product of the network to the remote machine, the third number being no less than two;
requesting from the remote machine the second number of video segments in a pipelined fashion, wherein a subsequent request for a video segment of the video is made before a response to a prior request is at least partially received, provided that no less than the second number of video segments are outstanding at any one time, and wherein another subsequent request is made if fewer than the second number of video segments are outstanding; and
stopping subsequent pipelined requests if a predetermined size of the video has been requested that is greater than or equal to the first number.
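
The claim above keeps a pipeline of segment requests sized from the network's bandwidth-delay product and stops issuing requests once a budget derived from current network conditions has been reached. The sketch below is a loose illustration under the assumption that segment sizes are known in advance; the numbers, names, and stopping rule are not the patented algorithm.

```python
# Rough sketch: BDP-sized request pipeline with a network-condition budget.
def segments_to_keep_outstanding(segment_sizes: list[int],
                                 bandwidth_bps: float, rtt_s: float) -> int:
    bdp_bytes = bandwidth_bps * rtt_s / 8            # bandwidth-delay product
    total, count = 0, 0
    for size in segment_sizes:
        count += 1
        total += size
        if total >= bdp_bytes:
            break
    return max(count, 2)                             # the claim's floor of two

def pipelined_plan(segment_sizes: list[int], bandwidth_bps: float, rtt_s: float,
                   request_budget_bytes: int) -> list[int]:
    window = segments_to_keep_outstanding(segment_sizes, bandwidth_bps, rtt_s)
    plan, requested = [], 0
    for index, size in enumerate(segment_sizes):
        if requested >= request_budget_bytes and len(plan) >= window:
            break                                    # stop issuing further pipelined requests
        plan.append(index)
        requested += size
    return plan

print(pipelined_plan([500_000] * 10, bandwidth_bps=8_000_000, rtt_s=0.1,
                     request_budget_bytes=2_000_000))   # [0, 1, 2, 3]
```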

US Pat. No. 10,771,832

INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, PROGRAM, APPLICATION INFORMATION TABLE SUPPLYING APPARATUS, AND APPLICATION INFORMATION TABLE SUPPLYING METHOD

Saturn Licensing LLC, Ne...

1. An information processing apparatus, comprising:a memory configured to store an application information table; and
circuitry configured to
receive and process broadcast content,
acquire the application information table for an application which is capable of being processed together with the broadcast content and is to be acquired from a server, the application information table storing first information, and the first information including a distribution number of an access timing of the application to be acquired from the server and including a time range in which access to the server is distributed, and
calculate, based on the time range and the distribution number included in the first information, a random timing at which a request for acquiring the application is transmitted.
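
The claim above spreads application fetches over a signalled time range using a distribution number, so that receivers do not all hit the server at once. A minimal sketch of that randomized timing follows: with distribution number N and range T, each receiver picks one of N slots at random and a random offset within it. The slot scheme is an assumption.

```python
# Minimal sketch: randomized acquisition timing within the signalled time range.
import random

def acquisition_delay_s(time_range_s: float, distribution_number: int) -> float:
    """Random timing at which the request for the application is transmitted."""
    slot = random.randrange(distribution_number)     # 0 .. N-1
    slot_width = time_range_s / distribution_number
    return slot * slot_width + random.uniform(0, slot_width)

print(acquisition_delay_s(time_range_s=600.0, distribution_number=20))
```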

US Pat. No. 10,771,831

SYSTEM AND METHOD FOR PREEMPTIVE ADVERTISEMENT CACHING TO OPTIMIZE NETWORK TRAFFIC

1. A non-transitory, machine-readable storage medium, comprising executable instructions that, when executed by a processing system including a processor, facilitate performance of operations, the operations comprising:predicting an interest of a viewer in a media program that will be available for consumption over a content delivery network, wherein the predicting is based on viewer data;
determining a first advertisement to be shown to the viewer during a broadcast of the media program, the first advertisement being selected from a plurality of advertisements from available advertising campaigns associated with the content delivery network for the media program;
transmitting the first advertisement to a local device of the viewer in advance of the broadcast of the media program, wherein the transmitting occurs during a period of low utilization of the content delivery network, wherein the local device stores the first advertisement in a cache;
cuing a playout server at the local device to display the first advertisement at a selected portion of the broadcast of the media program, wherein, responsive to the cuing, the playout server, at the local device, retrieves the first advertisement from the cache, inserts the first advertisement into the selected portion of the broadcast of the media program, and displays the selected portion of the broadcast of the media program including the first advertisement;
determining that the playout server is not available at the local device;
responsive to determining that the playout server is not available, transmitting a second advertisement to the local device during the broadcast of the media program; and
cuing the local device to display the second advertisement, wherein the local device displays the second advertisement when cued.

US Pat. No. 10,771,830

SYSTEMS AND METHODS FOR MANAGING CONTENT DISTRIBUTION TO AN IN-TRANSIT MEDIA SYSTEM

Viasat, Inc., Carlsbad, ...

1. A method for managing media content distribution rights to a media client located on a transport craft, the method comprising:receiving, at a rights location manager, a media content distribution region definition for a media content item from a content server in response to a request for the media content item from the media client located on the transport craft, wherein the media content distribution region definition indicates one or more media content distribution regions for the media content item;
comparing a current location of the transport craft to the one or more media content distribution regions to determine a current region of the one or more media content distribution regions that includes the current location of the transport craft; and
sending, from the rights location manager, data indicating the current region to the content server for determining whether distribution of the media content item to the media client located on the transport craft is authorized, wherein the data is based on the comparison.
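
The comparison step in the claim above determines which distribution region contains the transport craft's current location. The sketch below models regions as simple bounding boxes for clarity; a deployed system would use proper geographic shapes, and all names here are illustrative.

```python
# Hedged sketch of the region comparison performed by the rights location manager.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Region:
    name: str
    lat_min: float
    lat_max: float
    lon_min: float
    lon_max: float

def current_region(regions: list[Region], lat: float, lon: float) -> Optional[str]:
    for r in regions:
        if r.lat_min <= lat <= r.lat_max and r.lon_min <= lon <= r.lon_max:
            return r.name          # sent to the content server for the distribution decision
    return None

regions = [Region("us_domestic", 24.0, 49.5, -125.0, -66.0)]
print(current_region(regions, lat=39.7, lon=-104.9))   # us_domestic
```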

US Pat. No. 10,771,829

METHODS AND APPARATUS TO PERFORM IDENTITY MATCHING ACROSS AUDIENCE MEASUREMENT SYSTEMS

The Nielsen Company (US),...

1. An apparatus to perform identity matching, the apparatus comprising:memory including machine readable instructions; and
processor circuitry to execute the instructions to:
build a data structure based on normalized audience measurement events corresponding to a first audience measurement system, the data structure including a plurality of search spaces;
identify a search space in the data structure based on a query event corresponding to a second audience measurement system;
determine the query event and a first audience measurement event in the search space are a candidate match when a distance between the query event and the first audience measurement event satisfies a threshold; and
link a first user identifier associated with the first audience measurement system to a second user identifier associated with the second audience measurement system based on a plurality of candidate matches.
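
The claim above declares a candidate match when the distance between a query event and an audience measurement event within a search space satisfies a threshold. The sketch below illustrates that step with a toy two-dimensional event representation and Euclidean distance; the normalization, features, and metric are assumptions.

```python
# Sketch only: distance-thresholded candidate matching within a search space.
from dataclasses import dataclass
import math

@dataclass
class Event:
    user_id: str
    timestamp: float         # normalized seconds
    content_id: int          # normalized content identifier

def distance(a: Event, b: Event) -> float:
    return math.hypot(a.timestamp - b.timestamp, float(a.content_id - b.content_id))

def candidate_matches(search_space: list[Event], query: Event, threshold: float) -> list[Event]:
    return [e for e in search_space if distance(e, query) <= threshold]

space = [Event("panel_123", 100.0, 42), Event("panel_456", 500.0, 42)]
query = Event("census_789", 102.0, 42)
print([e.user_id for e in candidate_matches(space, query, threshold=5.0)])   # ['panel_123']
```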

US Pat. No. 10,771,828

CONTENT CONSENSUS MANAGEMENT

Free Stream Media Corp., ...

1. A computer-implemented method, comprising:receiving, from a plurality of user devices, a set of fingerprint data, instances of the fingerprint data each including a representation of content received by a respective user device, each instance of fingerprint data being generated at least in part by analyzing the representation of the content received by the respective user device over a period of time;
comparing the instances of fingerprint data against data representative of various instances of content to determine the instances of content associated with the instances of fingerprint data;
aggregating content data for each instance of content determined to be received from a respective content source, and by at least one of the user devices, over the period of time;
determining that a consensus criterion, corresponding to a statistically significant quantity of user devices from the plurality of user devices receiving the instances of content, is satisfied with respect to an instance of content for the period of time and the respective content source; and
storing identity information for the instance of content along with an identifier for the respective content source, indicating that the instance of content was provided by the respective content source over the period of time.

US Pat. No. 10,771,827

MONITORING AND ACTIVITY REPORTING OF ENHANCED MEDIA CONTENT

Comcast Cable Communicati...

1. A method comprising:causing, by one or more computing devices, establishment of connections with a plurality of devices in a content delivery network, wherein each of the plurality of devices is associated with a different distribution portion of the content delivery network;
receiving, from the plurality of devices and via the content delivery network, first event data indicating start times of an enhancement event associated with enhanced media content delivered via the content delivery network;
receiving, from the plurality of devices and via the content delivery network, second event data indicating end times of the enhancement event;
determining, by the one or more computing devices, based on a comparison of each of the start times to a scheduled start time associated with the enhancement event, and based on a comparison of each of the end times to a scheduled end time associated with the enhancement event, one or more discrepancies in delivery of the enhanced media content via the content delivery network; and
generating, by the one or more computing devices, a report indicating the one or more discrepancies and indicating at least one device, of the plurality of devices, as a cause of the one or more discrepancies.

US Pat. No. 10,771,826

APPARATUS AND METHOD FOR CONFIGURING A CONTROL MESSAGE IN A BROADCAST SYSTEM

Samsung Electronics Co., ...

1. An apparatus for providing multimedia content in a system, the apparatus comprising:at least one processor configured to identify a packet according to a moving picture experts group (MPEG) media transport (MMT) protocol, the packet including a packet header and a packet payload, wherein the packet payload comprises a control message for a package of the multimedia content; and
a transmitter configured to transmit the packet to a receiving entity,
wherein the control message comprises:
a message type field including information on a type of the control message,
a length field including information on a length of the control message,
an extensive information field including identifier information indicating an identifier of a first table in a payload field of the control message and version information indicating a version of the first table, and
the payload field including the first table,
wherein the first table comprises, for providing the receiving entity with information on where to obtain a second table, first location information providing a first location of the second table, the identifier information indicating the identifier of the first table and the version information indicating the version of the first table, and
wherein the second table comprises a list of assets related to the package, for the receiving entity to consume the package based on the second table.

US Pat. No. 10,771,825

MEDIA DISTRIBUTION AND MANAGEMENT PLATFORM

uStudio, Inc., Austin, T...

1. At least one non-transitory machine readable medium comprising instructions that when executed on a computing system, having at least one processor, cause the computing system to perform a method comprising:receiving video into the at least one non-transitory machine readable medium;
displaying a graphical user interface (GUI) that provides a user with options to distribute the video to first, second, and third distribution channels;
after receiving the video, receiving user input supplied via first and second graphical elements of the graphical user interface (GUI) and via a hardware based input/output (I/O) channel included in the system, wherein the first graphical element corresponds to a first distribution channel and the second graphical element corresponds to a second distribution channel;
in response to the user input, determining the video, via the at least one processor, is to be distributed to the first and second distribution channels but not to a third distribution channel;
in response to the user input, determining, via the at least one processor, first and second characteristics for the video, the first and second characteristics including at least one of file type, container type, video duration, video resolution, frame rate, video compression bit rate, or audio compression bit rate;
in response to determining the video is to be distributed to the first distribution channel, determining which first channel information is necessary for the first distribution channel and which second channel information is necessary for the second distribution channel;
after determining which first channel information is necessary for the first distribution channel and which second channel information is necessary for the second distribution channel, requesting the first and second channel information from the user;
in response to the user input, (a) transcoding the video, via the at least one processor, into transcoded first video having a first format corresponding to the first distribution channel; (b) transcoding the video, via the at least one processor, into transcoded second video having a second format corresponding to the second distribution channel, the first format unequal to the second format; and (c) not transcoding the video into transcoded third video having a third format corresponding to the third distribution channel;
in response to the user input, packaging first metadata and the transcoded first video, via the at least one processor, into a first container and second metadata and the transcoded second video into a second container; and
in response to user input, publishing the first container via the first distribution channel and the second container via the second distribution channel;
wherein the first channel information is unequal to the second channel information;
wherein the first, second, and third distribution channels are on-line video distribution platforms configured to provide video to a plurality of client computing nodes.

US Pat. No. 10,771,824

SYSTEM FOR MANAGING VIDEO PLAYBACK USING A SERVER GENERATED MANIFEST/PLAYLIST

GOOGLE LLC, Mountain Vie...

1. A system for managing video playback comprising:a manifest server configured to communicate with a plurality of video players and a content delivery network (CDN), the manifest server to:
(a) receive a request from a first video player of the plurality of video players to play a video stream, wherein the video stream is associated with a first manifest file created by an encoder for the video stream and distributed to the CDN along with the video stream;
(b) upon receiving the request, communicate with the CDN to request, from the CDN, the first manifest file previously stored at the CDN, the first manifest file containing information to allow any of the plurality of video players to play the video stream;
(c) receive the first manifest file from the CDN;
(d) modify, using a session identifier that identifies the connection between the manifest server and the first video player, the first manifest file previously stored at and received from the CDN to produce a second manifest file unique to the first video player having the request,
wherein to modify the first manifest file to produce the second manifest file unique to the first video player, the manifest server is further to:
determine at least one rule associated with the connection between the manifest server and the first video player,
determine a view history of a user of the video player, and
modify a playlist to specify a combination of segments of the video stream and segments of alternative content in a modified video stream, the combination created based on the rule and the view history of the user,
(e) transmit the second manifest file to the first video player, the second manifest file including (i) information identifying at least one of the video player or the user of the video player, (ii) the at least one rule added to the second manifest file, and (iii) the modified playlist to allow the first video player to play the combination of segments of the video stream and the segments of alternative content in the modified video stream according to the rule added to the second manifest file and the view history of the user of the video player; and
(f) in response to a second request of the user to seek forward or backward in the modified video stream, create an updated second manifest file based on (i) the rule associated with the connection between the manifest server and the first video player and (ii) recent view history of the user.
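
As a rough illustration of the per-session manifest rewriting recited above, the Python sketch below interleaves alternative-content segments into a simplified playlist and tags each entry with a session identifier. The playlist representation, the ad_interval rule, and the view-history handling are illustrative assumptions, not the claimed manifest syntax or method.

    # Hypothetical sketch of per-session manifest rewriting over a simplified
    # HLS-style playlist (a list of segment URIs); not the patented syntax.
    import uuid

    def build_session_manifest(base_segments, alt_segments, view_history,
                               ad_interval=4):
        """Interleave alternative-content segments into the base playlist.

        base_segments : list[str] -- segment URIs from the CDN manifest
        alt_segments  : list[str] -- alternative content (e.g. ads)
        view_history  : set[str]  -- alt segments the user has already seen
        ad_interval   : int       -- rule: one alt segment every N segments
        """
        session_id = uuid.uuid4().hex        # identifies this player session
        unseen = [s for s in alt_segments if s not in view_history]
        out, alt_i = [], 0
        for i, seg in enumerate(base_segments, start=1):
            out.append(f"{seg}?session={session_id}")
            if i % ad_interval == 0 and alt_i < len(unseen):
                out.append(f"{unseen[alt_i]}?session={session_id}")
                alt_i += 1
        return session_id, out

    # Example: a six-segment stream with one unseen ad inserted after the
    # fourth content segment.
    sid, playlist = build_session_manifest(
        [f"seg{i}.ts" for i in range(6)], ["ad1.ts", "ad2.ts"], {"ad2.ts"})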

US Pat. No. 10,771,823

PRESENTATION OF COMPOSITE STREAMS TO USERS

Facebook, Inc., Menlo Pa...

1. A method comprising:receiving, from a first source representing a first user of a social networking system, a live broadcast of a first media stream;
receiving, from a second source representing a second user of the social networking system, a second media stream;
generating a composite stream by compositing the second media stream with the first media stream;
accessing historical data of the first user and the second user that describes views of previous streams hosted by the first user and views of previous streams hosted by the second user;
selecting a set of encoders, wherein the encoders in the selected set are selected based on the historical data;
encoding the composite stream into content encodings using the selected set of encoders, each content encoding comprising information describing presentation orientation of the first media stream with respect to presentation orientation of the second media stream; and
broadcasting, to one or more client devices, one of the content encodings.

US Pat. No. 10,771,822

CODING OF A SPATIAL SAMPLING OF A TWO-DIMENSIONAL INFORMATION SIGNAL USING SUB-DIVISION

GE VIDEO COMPRESSION, LLC...

1. A decoder for decoding a data stream corresponding to a video comprising a plurality of arrays of information samples, comprising:an extractor configured for extracting, from the data stream, information related to first subdivision information, second subdivision information, and a maximum hierarchy level associated with each of the plurality of arrays of information samples, wherein the first subdivision information is associated with regions of the array of information samples in a spatial domain and includes a first maximum region size and multi-tree subdivision information, the second subdivision information is associated with regions of the array of information samples in a spectral domain and includes a second maximum region size and subordinate multi-tree subdivision information, and the maximum hierarchy level is associated with regions of the array of information samples in the spectral domain;
a sub-divider configured for dividing each of the arrays of information samples in accordance with the first and second maximum region sizes, the multi-tree subdivision information and the subordinate multi-tree subdivision information, and the maximum hierarchy level for the array of information samples, by
partitioning the array of information samples into a first set of regions in the spatial domain based on the first maximum region size,
sub-partitioning at least a subset of the first set of regions into a first set of sub-regions in the spatial domain based on the multi-tree subdivision information associated therewith,
responsive to a determination that a size of at least one of the first set of sub-regions exceeds the second maximum region size, partitioning the at least one of the first set of sub-regions into a second set of regions in the spectral domain, and
responsive to a determination that at least one of the second set of regions of the second maximum size is to be sub-divided, sub-partitioning the at least one of the second set of regions into a second set of sub-regions in the spectral domain based on the subordinate multi-tree subdivision information and the maximum hierarchy level associated therewith; and
a reconstructor configured for reconstructing each of the arrays of information samples based on the first and second sets of sub-regions generated by the sub-divider.
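
To make the two-stage splitting easier to picture, the Python sketch below tiles a sample array into maximum-size regions and then quad-splits each region according to a caller-supplied decision up to a maximum hierarchy level. The should_split callable stands in for the claim's multi-tree subdivision information; the spectral-domain second stage is not modelled here.

    # Hypothetical sketch: tile the array into maximum-size regions, then
    # recursively quad-split each region up to a maximum hierarchy level.
    def quad_subdivide(x, y, size, level, max_level, should_split):
        """Return leaf regions (x, y, size) of one maximum-size region."""
        if level < max_level and should_split(x, y, size, level):
            half = size // 2
            leaves = []
            for dy in (0, half):
                for dx in (0, half):
                    leaves += quad_subdivide(x + dx, y + dy, half,
                                             level + 1, max_level, should_split)
            return leaves
        return [(x, y, size)]

    def subdivide_array(width, height, max_region, max_level, should_split):
        leaves = []
        for y0 in range(0, height, max_region):
            for x0 in range(0, width, max_region):
                leaves += quad_subdivide(x0, y0, max_region, 0,
                                         max_level, should_split)
        return leaves

    # Example: a 128x64 array with 64x64 maximum regions, each split once
    # into four 32x32 leaves.
    blocks = subdivide_array(128, 64, 64, 2, lambda x, y, s, lvl: lvl == 0)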

US Pat. No. 10,771,821

OVERCOMING LOST IP PACKETS IN STREAMING VIDEO IN IP NETWORKS

Electronic Arts Inc., Re...

1. A computer-implemented method performed by a computerized device, comprising:receiving by a computing device associated with a subscriber an initial frame constituting a part of video transmission, from an encoder;
decoding the initial frame;
determining that a first packet within the initial frame is missing or corrupted, the missing first packet corrupting the initial frame and subsequent frames which are encoded based on differences relative to the initial frame or to consequent frames sent by the encoder;
notifying an encoder about the missing first packet;
receiving, during a recovery period, from the encoder at least one first frame in which at least one first part is encoded independently of a preceding frame;
decoding the at least one first frame;
sending to the encoder additional information indicating that a second packet is missing in the first frame;
receiving from the encoder at least one second frame in which at least one second part different from the at least one first part is encoded independently of a preceding frame; and
decoding the at least one second frame,
wherein the at least one first part and the at least one second part are determined independently of which packet within the initial frame is missing or corrupted, and sized to be small enough such that the video frame rate does not decrease and the maximal bitrate is not violated, and
wherein if newly missing packets are detected during the recovery period, the computing device stops receiving frames and the recovery period is restarted.

US Pat. No. 10,771,820

IMAGE ENCODING METHOD AND APPARATUS USING ARTIFACT REDUCTION FILTER, AND IMAGE DECODING METHOD AND APPARATUS USING ARTIFACT REDUCTION FILTER

SAMSUNG ELECTRONICS CO., ...

1. An image encoding method comprising:generating a first picture reconstructed by using a residual picture and a predicted picture;
generating a second picture by applying a first artifact reduction filter to the first picture;
determining a picture having a smaller bit-rate distortion cost from among the first and second pictures by comparing a first bit-rate distortion cost of the first picture with a second bit-rate distortion cost of the second picture;
generating a third picture by applying an in-loop filter to the determined picture;
generating a fourth picture by applying a second artifact reduction filter to the third picture;
evaluating subjective quality and objective quality of each of the third and fourth pictures so as to determine a first distortion of the third picture and a second distortion of the fourth picture;
determining a picture having smaller distortion from among the third and fourth pictures by comparing the first distortion with the second distortion; and
generating, via a processor, a bitstream comprising information about whether the second artifact reduction filter is applied,
wherein the evaluating the subjective quality comprises evaluating the subjective quality of each of the third and fourth pictures based on a size of a display and a distance between the display and a viewer, and
wherein the evaluating the subjective quality comprises determining sharpness of each of the third and fourth pictures, based on sharpness of each of blocks in the third and fourth pictures.
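
The choice between the filtered and unfiltered pictures comes down to a rate-distortion cost comparison. A minimal Python sketch of that comparison, assuming the usual Lagrangian cost J = D + λ·R with stubbed distortion and rate values:

    # Hypothetical sketch of picking the picture with the smaller
    # rate-distortion cost; distortion and rate measurement are stubbed out.
    def rd_cost(distortion, rate_bits, lam):
        return distortion + lam * rate_bits

    def pick_lower_cost(pic_a, pic_b, lam):
        """Each picture is a dict carrying its measured 'distortion' and 'rate'."""
        cost_a = rd_cost(pic_a["distortion"], pic_a["rate"], lam)
        cost_b = rd_cost(pic_b["distortion"], pic_b["rate"], lam)
        return pic_a if cost_a <= cost_b else pic_b

    chosen = pick_lower_cost({"distortion": 120.0, "rate": 900},
                             {"distortion": 100.0, "rate": 1100}, lam=0.05)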

US Pat. No. 10,771,819

SAMPLE ADAPTIVE OFFSET FILTERING

Canon Kabushiki Kaisha, ...

1. A method of performing sample adaptive offset filtering on one or more images of a sequence of images, comprising:associating a fixed set of four offset values with at least one direction of edge-type filtering, and
for any image area for which it is determined that edge-type filtering is to be used and for which it is determined that a direction of the edge-type filtering is said at least one direction, using the fixed set of four offset values associated with said at least one direction to perform the sample adaptive offset filtering on the image area concerned,
wherein a first offset value +O1 is used when C<Cn1 and C<Cn2, a second offset value +O2 is used when (C<Cn1 and C==Cn2) or (C<Cn2 and C==Cn1), a third offset value −O3 is used when (C>Cn1 and C==Cn2) or (C>Cn2 and C==Cn1), and a fourth offset value −O4 is used when C>Cn1 and C>Cn2, where C is the value of a target pixel and Cn1 and Cn2 are respective values of neighbouring pixels of the target pixel in an edge-filtering direction, and the or each said fixed set has the following properties: O1≥O2, and O4≥O3.
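
For readers less familiar with edge-type sample adaptive offset, the Python sketch below classifies a target pixel against its two neighbours along the filtering direction and applies one of four fixed offsets, mirroring the comparisons recited in the claim. The offset values and the sign convention (+O1, +O2, −O3, −O4) are illustrative assumptions.

    # Hypothetical sketch of edge-offset classification with a fixed set of
    # four offsets per direction; the offset values are illustrative.
    def sao_edge_filter(c, cn1, cn2, offsets):
        """offsets = (O1, O2, O3, O4), applied as +O1, +O2, -O3, -O4."""
        o1, o2, o3, o4 = offsets
        if c < cn1 and c < cn2:                       # local minimum
            return c + o1
        if (c < cn1 and c == cn2) or (c < cn2 and c == cn1):
            return c + o2
        if (c > cn1 and c == cn2) or (c > cn2 and c == cn1):
            return c - o3
        if c > cn1 and c > cn2:                       # local maximum
            return c - o4
        return c                                      # flat area: no offset

    # Example along a horizontal edge-filtering direction.
    row = [10, 10, 7, 10, 10]
    filtered = [sao_edge_filter(row[i], row[i - 1], row[i + 1], (3, 2, 2, 3))
                for i in range(1, len(row) - 1)]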

US Pat. No. 10,771,818

METHOD AND APPARATUS FOR DETERMINING THE SEVERITY OF CORRUPTION IN A PICTURE

ATI TECHNOLOGIES, ULC, M...

1. An apparatus comprising:a decoder circuit configured to:
receive a first packet including encoded pixels representative of a picture in a multimedia stream;
receive a first approximate signature generated based on approximate values of pixels in a copy of the picture, wherein the copy of the picture is reconstructed by decoding the encoded pixels prior to transmission of the encoded pixels to the decoder circuit;
decode the encoded pixels;
generate a second approximate signature based on the decoded pixels; and
transmit a first signal that is generated based on a comparison of the first approximate signature and the second approximate signature.
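
A minimal Python sketch of the signature comparison, assuming purely for illustration that the "approximate value" of a pixel is its high-order bits and that the signature is a tuple of per-block sums; the real signature construction is not specified here.

    # Hypothetical sketch: approximate values are MSB-only pixel values and
    # the signature is a tuple of per-block sums.
    def approximate_signature(pixels, block=4, drop_bits=2):
        """pixels: flat list of 8-bit values; returns per-block sums of the
        approximate (MSB-only) pixel values."""
        approx = [p >> drop_bits for p in pixels]
        return tuple(sum(approx[i:i + block])
                     for i in range(0, len(approx), block))

    def corruption_severity(sig_ref, sig_dec):
        """Fraction of blocks whose approximate signatures disagree."""
        mismatched = sum(a != b for a, b in zip(sig_ref, sig_dec))
        return mismatched / max(len(sig_ref), 1)

    ref = approximate_signature([16, 18, 17, 15, 200, 201, 199, 198])
    dec = approximate_signature([16, 18, 17, 15, 120, 121, 119, 118])  # corrupted
    severity = corruption_severity(ref, dec)   # 0.5: one of two blocks differs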

US Pat. No. 10,771,817

METHOD FOR DECODING A DIGITAL IMAGE, CODING METHOD, DEVICES, AND ASSOCIATED COMPUTER PROGRAMS

1. A method comprising the following acts performed by a decoding device:receiving from a communication network a bit stream comprising coded data representative of at least one digital image; and
decoding the at least one digital image, said image being divided into a plurality of blocks processed in a defined order, said decoding comprising the following steps, implemented for a current block:
decoding coefficients of the current block from the coded data read in the bit stream;
transforming the current block into a transformed decoded block, said current block comprising M row vectors and N column vectors, with M and N being non-zero integers, said transforming step implementing a first substep to produce an intermediate block that applies to the column vectors respectively row vectors of the current block, and a second substep to produce a block of pixels that applies to the row vectors respectively column vectors of the intermediate block, resulting from the first substep;
rebuilding the image from the transformed decoded block;
wherein:
at least one of said first and second transformation substeps comprises, for a row vector respectively a column vector, called an input vector of the current block:
forming, from the input vector, a first subvector of size K and at least one second subvector; transforming the first subvector into a first transformed subvector by applying a first partial subtransform of size K×K and transforming the second subvector into a second transformed subvector by applying a second subtransform of size (N−K)×(N−K); and
building a transformed input vector of the intermediate block for the first transformation substep, respectively of the transformed decoded block for the second transformation step, by inserting the first transformed subvector and the at least one second transformed subvector.

US Pat. No. 10,771,816

METHOD FOR DERIVING A MOTION VECTOR

Velos Media, LLC, Dallas...

6. An apparatus comprising:an encoder; and
a memory;
the encoder configured to:
generate a first list of motion vectors comprising motion vectors for neighboring blocks of a current block in a current frame of the video, wherein the neighboring blocks include an above block of the current block, a left block of the current block and an above left block of the current block in the current frame, and to store the first list of motion vectors in the memory;
select a motion vector for a block in a previous frame by:
deriving coordinates of a position of the block in the previous frame by performing a flooring function,
wherein the flooring function operates an arithmetic right shift operation based on a top-left position of the current block followed by an arithmetic left shift operation, and
wherein a magnitude of the arithmetic right shift operation is the same as a magnitude of the arithmetic left shift operation;
generate a second list comprising the motion vector for the block in the previous frame and store the second list in the memory;
generate a third list of candidate motion vectors based on the first list and the second list and store the third list in the memory;
select a motion vector from the third list as a motion vector predictor for the current block;
derive a motion vector difference between the motion vector predictor for the current block and a motion vector for the current block; and
generate a motion vector control parameter corresponding to the motion vector predictor for the current block,
wherein the motion vector control parameter and the motion vector difference are used for encoding a bitstream.
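
The flooring function in this claim amounts to an arithmetic right shift followed by a left shift of the same magnitude, which snaps the current block's top-left coordinates onto a coarser grid. A minimal Python sketch, with a shift of 4 (a 16-sample grid) assumed as an example:

    # Hypothetical sketch; the shift of 4 (a 16-sample grid) is an example.
    def floor_to_grid(coord, shift=4):
        # Arithmetic right shift then left shift of the same magnitude.
        return (coord >> shift) << shift

    def collocated_position(top_left_x, top_left_y, shift=4):
        return floor_to_grid(top_left_x, shift), floor_to_grid(top_left_y, shift)

    # A block whose top-left corner is at (37, 21) maps to (32, 16); every
    # block inside the same 16x16 cell maps to the same position.
    pos = collocated_position(37, 21)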

US Pat. No. 10,771,815

METHOD AND APPARATUS FOR PROCESSING VIDEO SIGNALS USING COEFFICIENT INDUCED PREDICTION

LG Electronics Inc., Seo...

1. A method of encoding a video signal based on a graph-based lifting transform (GBLT), comprising:detecting an edge from an intra residual signal, wherein a model for the intra residual signal is designed by using a Gaussian Markov Random Field (GMRF);
generating a graph based on the edge, wherein the graph comprises a node and a weight link;
obtaining a GBLT coefficient by performing the GBLT for the graph;
quantizing the GBLT coefficient; and
entropy-encoding the quantized GBLT coefficient,
wherein the GBLT comprises a split process, a prediction process, and an update process,
wherein the split process of the GBLT is performed to minimize a Maximum A Posteriori (MAP) estimate error within a prediction set, and
wherein the method further comprises:
obtaining a DCT coefficient by performing a DCT on the intra residual signal;
comparing a rate-distortion cost of the DCT coefficient with a rate-distortion cost of the GBLT coefficient;
determining, based on the rate-distortion cost of the GBLT coefficient being smaller than the rate-distortion cost of the DCT coefficient, a mode index corresponding to the GBLT; and
entropy-encoding the mode index.

US Pat. No. 10,771,814

HYBRID VIDEO CODING SUPPORTING INTERMEDIATE VIEW SYNTHESIS

GE VIDEO COMPRESSION, LLC...

1. A decoder for decoding multi-view data, comprising:an extractor configured to:
receive multi-view data comprising data related to a first-view video and a second-view video, and
obtain, from the multi-view data, a disparity vector and a prediction residual associated with a sub-region of a frame of the second-view video, wherein the disparity vector indicates a displacement of the sub-region of the frame of the second-view video with respect to a frame of the first-view video;
a predictive reconstructor configured to reconstruct the sub-region of the frame of the second-view video based on a reconstructed portion of the frame of the first-view video, the disparity vector, and the prediction residual, wherein the reconstructed portion of the frame of the first-view video is obtained based on a prediction mode indicated in the data stream; and
a view synthesizer configured to:
determine a scaled disparity vector based on the disparity vector and a scaling factor, and
reconstruct a portion of a frame of a third-view video using the reconstructed portion of the frame of the first view video and the scaled disparity vector.

US Pat. No. 10,771,813

REFERENCE FRAME ENCODING METHOD AND APPARATUS, AND REFERENCE FRAME DECODING METHOD AND APPARATUS

HUAWEI TECHNOLOGIES CO., ...

1. A reference frame decoding method, comprising:obtaining, by a video decoding device, a first reference frame comprising a first picture frame on which decoding reconstruction has been performed or a first interpolated picture frame obtained by pixel interpolation on the first picture frame;
parsing, by the video decoding device, a bitstream to obtain mapping parameters;
determining, by the video decoding device, to-be-determined coefficients of a preset mapping function according to the mapping parameters;
obtaining, by the video decoding device in the first reference frame according to the preset mapping function, a first pixel unit having a mapping relationship with a second pixel unit of a second reference frame;
assigning, by the video decoding device, a pixel value of the first pixel unit to the second pixel unit to construct the second reference frame;
obtaining, by the video decoding device, a reference frame list comprising a candidate reference frame of the first picture frame on which the decoding reconstruction has been performed;
obtaining, by the video decoding device, regional vertexes of a preset first region in the first picture frame on which the decoding reconstruction has been performed;
obtaining, by the video decoding device in the second reference frame according to the preset mapping function, scatters having the mapping relationship with the regional vertexes of the preset first region;
coupling, by the video decoding device according to a coupling relationship between the regional vertexes of the preset first region, the scatters in the second reference frame having the mapping relationship with the regional vertexes of the preset first region;
forming, by the video decoding device, a second region using a region encircled by the coupled scatters;
calculating, by the video decoding device, a ratio of an intersection of areas of the preset first region and the second region to a union of the areas of the preset first region and the second region, wherein the intersection comprises an overlapped location region between the preset first region and the second region, and wherein the union comprises the intersection and a non-overlapped location region between the preset first region and the second region in ranges of the preset first region and the second region;
adding, by the video decoding device, the second reference frame to the reference frame list when the ratio is less than a preset value; and
adding, by the video decoding device, the first picture frame on which the decoding reconstruction has been performed to the reference frame list when the ratio is greater than or equal to the preset value.
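
The ratio computed in the final steps is an intersection-over-union test between the preset region and its mapped counterpart. The Python sketch below shows that test for the simplified case of axis-aligned rectangles; the patent's regions are polygons formed by mapped scatter points, and the threshold value is an assumption.

    # Hypothetical sketch for axis-aligned rectangles (x0, y0, x1, y1);
    # the threshold stands in for the claim's preset value.
    def iou(rect_a, rect_b):
        ax0, ay0, ax1, ay1 = rect_a
        bx0, by0, bx1, by1 = rect_b
        iw = max(0, min(ax1, bx1) - max(ax0, bx0))
        ih = max(0, min(ay1, by1) - max(ay0, by0))
        inter = iw * ih
        union = (ax1 - ax0) * (ay1 - ay0) + (bx1 - bx0) * (by1 - by0) - inter
        return inter / union if union else 0.0

    def pick_reference(first_region, second_region, decoded_frame,
                       mapped_frame, threshold=0.8):
        # Low overlap: the mapping moved the region substantially, so the
        # mapped (second) reference frame is added; otherwise the plain
        # decoded picture frame is kept.
        return (mapped_frame
                if iou(first_region, second_region) < threshold
                else decoded_frame)

    r = iou((0, 0, 4, 4), (2, 0, 6, 4))   # 8 / 24, i.e. roughly 0.33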

US Pat. No. 10,771,812

ENHANCED INTRA-PREDICTION CODING USING PLANAR REPRESENTATIONS

NTT DOCOMO, INC., Tokyo ...

1. A video decoding method for predicting pixel values of target pixels in a target block, the method comprising computer executable steps executed by a processor of a video decoder to implement:deriving a prediction mode selected by an encoder from among a plurality of different intra-prediction modes including a DC mode, directional modes, and a planar mode;
(a) calculating a first prediction value of a respective target pixel using linear interpolation between a pixel value of a horizontal boundary pixel horizontally co-located with a respective target pixel, the horizontal boundary pixel being from among a plurality of horizontal boundary pixels on the upper side of the target block, and a pixel value of one vertical boundary pixel from among a plurality of vertical boundary pixels on the left side of the target block when the prediction mode is the planar mode, wherein the first prediction value consists only of a first value derived solely from the linear interpolation between the pixel value of the horizontal boundary pixel horizontally co-located with the target pixel and the pixel value of said one vertical boundary pixel;
(b) calculating a second prediction value of a respective target pixel using linear interpolation between a pixel value of a vertical boundary pixel vertically co-located with a respective target pixel, the vertical boundary pixel being from among a plurality of the vertical boundary pixels on a left side of the target block, and a pixel value of one horizontal boundary pixel from among a plurality of the horizontal boundary pixels when the prediction mode is the planar mode, wherein the second prediction value consists only of a second value derived solely from the linear interpolation between the pixel value of the vertical boundary pixel vertically co-located with the target pixel and the pixel value of said one horizontal boundary pixel; and
(c) averaging the first prediction value and the second prediction value of each target pixel to derive each prediction pixel value in a prediction block when the prediction mode is the planar mode, wherein the prediction pixel value consists only of an average of the first and second prediction values.
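
A minimal Python sketch of the planar-mode computation in steps (a) to (c): one interpolation between the top boundary pixel in the target's column and a single left boundary pixel, one between the left boundary pixel in the target's row and a single top boundary pixel, and an average of the two. The choice of the bottom-left and top-right neighbours as the single opposite-side pixels follows the usual planar construction and is an assumption here; integer rounding is omitted.

    # Hypothetical sketch; top and left hold n+1 reference pixels above and to
    # the left of the n x n block, with top[n] and left[n] used as the single
    # opposite-side pixels.
    def planar_predict(top, left, n):
        pred = [[0.0] * n for _ in range(n)]
        for y in range(n):
            for x in range(n):
                # (a) first prediction value: top[x] vs. one left boundary pixel
                p1 = ((n - 1 - y) * top[x] + (y + 1) * left[n]) / n
                # (b) second prediction value: left[y] vs. one top boundary pixel
                p2 = ((n - 1 - x) * left[y] + (x + 1) * top[n]) / n
                # (c) average of the two
                pred[y][x] = (p1 + p2) / 2
        return pred

    block = planar_predict(top=[100, 102, 104, 106, 108],
                           left=[100, 98, 96, 94, 92], n=4)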

US Pat. No. 10,771,811

OVERLAPPED MOTION COMPENSATION FOR VIDEO CODING

QUALCOMM Incorporated, S...

1. A method of decoding encoded video data, the method comprising:receiving, in an encoded video bitstream, a first coding unit of the encoded video data, the first coding unit being partitioned into a plurality of sub-blocks including a first sub-block;
receiving, in the encoded video bitstream, a second coding unit comprising one or more neighboring sub-blocks, each of the one or more neighboring sub-blocks neighboring the first sub-block;
parsing, from the encoded video bitstream, a syntax element;
determining that the syntax element is set to a value indicating that the first sub-block is encoded according to an overlapped block motion compensation mode;
determining motion information of one or more of the neighboring sub-blocks; and
based on the first sub-block being encoded according to the overlapped block motion compensation mode, decoding, using the overlapped block motion compensation mode, the first sub-block using the determined motion information of one or more of the neighboring sub-blocks.

US Pat. No. 10,771,810

MOVING PICTURE CODING DEVICE, MOVING PICTURE CODING METHOD AND MOVING PICTURE CODING PROGRAM, AND MOVING PICTURE DECODING DEVICE, MOVING PICTURE DECODING METHOD AND MOVING PICTURE DECODING PROGRAM

JVCKENWOOD CORPORATION, ...

1. A moving picture coding device adapted to code a coding block consisting of greater than or equal to one prediction block, comprising:a temporal merging motion information candidate generation unit configured to derive, when a predetermined threshold size is defined and the coding block is smaller than or equal to the predetermined threshold size, a temporal merging motion information candidate shared for all the prediction blocks in the coding block from a prediction block of a coded picture different from a picture having a prediction block subject to coding and derive, when the predetermined threshold size is not defined or the coding block is larger than the predetermined threshold size, a temporal merging motion information candidate not shared for all the prediction blocks in the coding block from a prediction block of a coded picture different from a picture having a prediction block subject to coding;
a merging motion information candidate generation unit configured to generate a plurality of merging motion information candidates including the temporal merging motion information candidate and zero or more spatial merging motion information candidates;
a supplementary motion information candidate generation unit configured to derive any number of supplementary motion information candidates that have predefined motion vectors and reference indexes;
a merging motion information selection unit configured to select one merging motion information candidate from the plurality of merging motion information candidates based on a candidate specifying index and to use the selected merging motion information candidate as motion information of the prediction block subject to coding; and
a coding unit configured to code the candidate specifying index and information for indicating whether or not to define the predetermined threshold size,
wherein the size of the coding block is one of a plurality of sizes of a minimum size to a maximum size, and the size of the coding block is variable in a picture.

US Pat. No. 10,771,809

PICTURE PREDICTION METHOD AND PICTURE PREDICTION APPARATUS

Huawei Technologies Co., ...

1. A picture prediction method, comprising:determining motion vectors of W control points in a current picture block, wherein the motion vectors of the W control points are determined based on a predictor with precision of 1/n of a pixel precision and a difference of motion vector with precision of 1/n of the pixel precision;
obtaining, by calculation, motion vectors of P pixel units of the current picture block by using a motion model and the motion vectors of the W control points, wherein, precision of the motion vector of each of the P pixel units is 1/N of the pixel precision, the motion vector of each of the P pixel units is used to determine a corresponding reference pixel unit in a reference picture of a corresponding pixel unit, W, n, and N are integers greater than 1, N is greater than n, and P is a positive integer; and
performing interpolation filtering on a pixel of the corresponding reference pixel unit by using an interpolation filter with a phase of Q, to obtain a predicted pixel value of each of the P pixel units, wherein Q is an integer greater than n.

US Pat. No. 10,771,808

VIDEO ENCODER AND DECODER FOR PREDICTIVE PARTITIONING

Huawei Technologies Co., ...

1. A video encoder, the video encoder being configured to:select at least one reference picture and a plurality of blocks in the at least one reference picture;
calculate, for each selected block, a projected location in a current picture based on a motion vector associated to the selected block in the reference picture;
determine each selected block, of which the projected location spatially overlaps with the block in the current picture, to be a reference block; and
generate for at least one reference block a partitioning predictor for the current block based on partitioning information associated to the at least one reference picture.

US Pat. No. 10,771,807

SYSTEM AND METHOD FOR COMPRESSING VIDEO USING DEEP LEARNING

Wipro Limited, Bangalore...

1. A method of compressing videos using deep learning, the method comprising:segmenting, by a video compressing device, each of a plurality of frames associated with a video into a plurality of super blocks based on an element present in each of the plurality of frames and a motion associated with the element, wherein the segmentation of the plurality of frames into the plurality of super blocks is of variable shape and size;
determining, by the video compressing device, a block size for partition of each of the plurality of super blocks into a plurality of sub blocks, based on a feature of each of the plurality of super blocks using a Convolutional Neural Network (CNN), wherein the feature comprises at least one of a size of the super block and motion-related information;
generating, by the video compression device, a prediction data for each of the plurality of sub blocks based on a motion vector predicted and learned by the CNN, wherein the CNN predicts the motion vector based on co-located frames;
determining, by the video compression device, a residual data for each of the plurality of sub blocks by subtracting the prediction data from an associated original data, wherein the associated original data is a bit stream of each of the plurality of sub blocks; and
generating, by the video compressing device, a transformed quantized residual data using each of a transformation algorithm and a quantization algorithm based on a plurality of parameters associated with the residual data, wherein the plurality of parameters comprises the compression rate and signal to noise ratio.

US Pat. No. 10,771,806

METHOD AND DEVICE FOR ENCODING A SEQUENCE OF IMAGES AND METHOD AND DEVICE FOR DECODING A SEQUENCE OF IMAGES

Canon Kabushiki Kaisha, ...

1. A method of determining motion information predictor candidates for a motion information predictor for encoding an image portion in an image to be encoded by inter prediction, the motion information predictor candidate being capable of including a motion vector associated with a first index corresponding to a first reference picture list and a motion vector associated with a second index corresponding to a second reference picture list, the method comprises for the image portion to be encoded:acquiring a temporal motion information predictor candidate from an image portion in an image different from the image to be encoded and one or more spatial motion information predictor candidates from one or more image portions in the image to be encoded;
performing predetermined processing on the one or more spatial motion information predictor candidates but not on any temporal motion information predictor candidates for the image portion to be encoded, wherein the predetermined processing comprises excluding a duplicate spatial motion information predictor candidate from the one or more spatial motion information predictor candidates, in the case where the one or more spatial motion information predictor candidates includes duplicate spatial motion information predictor candidates;
obtaining, based on the temporal motion information predictor candidate and one or more spatial motion information predictor candidates resulting from the predetermined processing, a set of motion information predictor candidates for the motion information predictor; and
generating one or more additional motion information predictor candidates by combining motion vectors from first and second motion information predictor candidates included in the set of motion information predictor candidates, the motion vector from the first motion information predictor candidate being associated with the first index and the motion vector from the second motion information predictor candidate being associated with the second index; and
including the generated one or more additional motion information predictor candidates in the set of motion information predictor candidates.

US Pat. No. 10,771,805

APPARATUS, A METHOD AND A COMPUTER PROGRAM FOR VIDEO CODING AND DECODING

NOKIA TECHNOLOGIES OY, E...

1. A method for decoding a scalable bitstream comprising a base layer and at least one enhancement layer, the method comprisingreceiving at least a part of the bitstream and one or more indications in a video parameter set associated with one or more coded video sequences within the bitstream, the video parameter set including parameters that apply to the one or more coded video sequences, and the one or more coded video sequences comprising a sequence of consecutive access units in decoding order from an instantaneous decoding refresh access unit to a next instantaneous decoding refresh access unit, the indications relating to use of picture types and/or network abstraction layer (NAL) unit types of NAL units on said base and one or more enhancement layers for inter-layer prediction;
determining, by a processor on the basis of computational capacity for decoding and/or receiver capabilities and/or network throughput, or being configured to use, one or more target output layers for decoding;
decoding, by the processor, from the one or more indications information relating to use of picture types and/or NAL unit types of the NAL units on said base and one or more enhancement layers for inter-layer prediction of the at least one enhancement layer from the base layer, wherein the information defines whether the use of inter-layer prediction is restricted to be performed from intra-coded pictures from the base layer;
determining, by the processor and based on said information, combinations of picture types and/or NAL unit types and layer identifier values that are not to be decoded to decode the one or more target output layers and/or combinations of picture types and/or NAL unit types and layer identifier values that are to be decoded to decode the one or more target output layers; and
decoding, by the processor, the NAL units of the bitstream indicated by the combinations of picture types and/or NAL unit types and layer identifier values to be decoded.

US Pat. No. 10,771,804

MOVING PICTURE CODING METHOD, MOVING PICTURE DECODING METHOD, MOVING PICTURE CODING APPARATUS, MOVING PICTURE DECODING APPARATUS, AND MOVING PICTURE CODING AND DECODING APPARATUS

SUN PATENT TRUST, New Yo...

1. A moving picture decoding method of decoding a current block which is included in a current picture, the moving picture decoding method comprising:judging whether or not reference pictures in a first reference picture list and a second reference picture list are located on the same side of the current picture;
generating a first motion vector predictor and a second motion vector predictor with reference to a first reference motion vector and a second reference motion vector of a reference block in a reference picture included in the reference pictures; and
predicting the current block using the first motion vector predictor and the second motion vector predictor; wherein
(i) when any reference pictures in the first reference picture list and the second reference picture list are located on the same side of the current picture in display order,
the first motion vector predictor is generated using the first reference motion vector, and
the second motion vector predictor is generated using the second reference motion vector, and
(ii) when a reference picture in the first reference picture list and the second reference picture list are not located on the same side of the current picture in display order, and the first reference motion vector and the second reference motion vector refer to different directions,
both of the first motion vector predictor and the second motion vector predictor are generated using only one of the first reference motion vector and the second reference motion vector.

US Pat. No. 10,771,803

CONSTRAINED MOTION FIELD ESTIMATION FOR HARDWARE EFFICIENCY

GOOGLE LLC, Mountain Vie...

1. A method for decoding an encoded block of an encoded frame, the encoded frame is processed in processing units, wherein a processing unit comprises blocks of the encoded frame including the encoded block, the method comprising:selecting motion vectors corresponding to blocks of an extended collocated processing unit in a first reference frame;
identifying a reference block of the first reference frame, such that the encoded block is a valid projection, using a motion vector of the reference block that refers to a third reference frame that is different from the encoded frame, onto the encoded frame, wherein the valid projection of the reference block onto the encoded frame using the motion vector of the reference block intersects the encoded frame at the encoded block, and wherein the valid projection comprises a scaling of the motion vector of the reference block using respective display orders of the encoded frame, the first reference frame, and the third reference frame;
determining a temporal motion vector candidate for the encoded block in a second reference frame using a motion vector of the identified reference block and respective display orders of the encoded frame, the first reference frame, the second reference frame, and the third reference frame;
adding the temporal motion vector candidate to a motion vector candidate list;
selecting a motion vector from the motion vector candidate list;
generating a prediction block for the encoded block using the selected motion vector; and
decoding the encoded block using the prediction block.
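
The temporal candidate is obtained by rescaling the reference block's motion vector according to display-order distances. A minimal Python sketch of that scaling, with clamping and rounding rules omitted:

    # Hypothetical sketch; rounding and clamping rules are omitted.
    def scale_mv(mv, num, den):
        # Integer scaling of a motion vector by num/den (den assumed non-zero).
        return (mv[0] * num // den, mv[1] * num // den)

    def temporal_mv_candidate(mv_ref, cur_order, ref1_order, ref2_order,
                              ref3_order):
        """mv_ref: vector of the block in the first reference frame
        (display order ref1_order) pointing to the third reference frame
        (ref3_order); the candidate is rescaled to span the distance from
        the current frame (cur_order) to the second reference frame
        (ref2_order)."""
        d_proj = ref1_order - ref3_order
        d_cand = cur_order - ref2_order
        return scale_mv(mv_ref, d_cand, d_proj)

    cand = temporal_mv_candidate((8, -4), cur_order=6, ref1_order=4,
                                 ref2_order=5, ref3_order=0)   # -> (2, -1)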

US Pat. No. 10,771,802

METHOD FOR COLOR MAPPING A VIDEO SIGNAL BASED ON COLOR MAPPING DATA AND METHOD OF ENCODING A VIDEO SIGNAL AND COLOR MAPPING DATA AND CORRESPONDING DEVICES

InterDigital VC Holdings,...

1. A method, comprising:encoding a video signal represented in a first color volume;
encoding color mapping data; and
encoding an indicator, wherein said indicator indicates whether (1) a color mapping based on said color mapping data is to be applied directly on said video signal represented in said first color volume or (2) said color mapping based on said color mapping data is to be applied on said video signal after a prior color mapping of said video signal from said first color volume into a second color volume different from said first color volume, and wherein said indicator further indicates said second color volume.

US Pat. No. 10,771,801

REGION OF INTEREST (ROI) REQUEST AND INQUIRY IN A VIDEO CHAIN

TEXAS INSTRUMENTS INCORPO...

1. A method comprising:transmitting, by a first device to a second device, a video stream in a video chain;
receiving, by the first device from the second device, a region of interest (ROI) request, wherein the ROI request comprises a first ROI type indicator that indicates a type of a first ROI, a second ROI type indicator that indicates a type of a second ROI, a first ROI priority field that indicates a first priority of the first ROI, and a second ROI priority field that indicates a second priority of the second ROI, wherein the first ROI priority field indicates a higher priority than the second ROI priority field, wherein the type of the first ROI is different than the type of the second ROI, and wherein the first ROI request is based on statistical information collected about a set of third devices;
modulating, by the first device, a quantization step size of the first ROI in the video stream based on the first ROI type indicator and the first ROI priority field; and
modulating, by the first device, a quantization step size of the second ROI in the video stream based on the second ROI type indicator and the second ROI priority field.

US Pat. No. 10,771,800

SAMPLE ARRAY CODING FOR LOW-DELAY

GE VIDEO COMPRESSION, LLC...

1. A decoder for reconstructing a sample array of a video from an entropy-encoded data stream, the decoder comprising:an entropy decoder configured to entropy decode a plurality of entropy slices in the entropy-encoded data stream to reconstruct the sample array, each of the plurality of entropy slices corresponding to one row of a plurality of rows of the sample array and each of the plurality of rows having a same number of blocks therein,
wherein, for a current entropy slice of the plurality of entropy slices, the entropy decoder is configured to:
initialize, during a starting phase of entropy decoding, a first probability estimate for the current entropy slice before decoding a first block of a current row corresponding to the current entropy slice based on a second probability estimate obtained after completing entropy decoding of a second block of a previous row corresponding to a previous entropy slice of the plurality of entropy slices,
wherein the first block is the left-most block of the current row and the second block is spatially adjoining and to the right of the left-most block of the previous row, and the current and previous rows are consecutive rows of the sample array, and
perform, during a continuation phase of the entropy decoding, entropy decoding using the first probability estimate along an entropy coding path leading from left to right across the current row, the entropy decoding is performed by adapting the first probability estimate along the entropy coding path using only a previously-decoded part of the current entropy slice.

US Pat. No. 10,771,799

METHOD AND APPARATUS FOR VIDEO CODING

Tencent America LLC, Pal...

1. A method for video decoding in a decoder, comprising:decoding prediction information of a current block in a current coding tree unit (CTU) from a coded video bitstream, the prediction information being indicative of an intra block copy mode, a size of the current CTU being smaller than a maximum size of a reference sample memory for storing reconstructed samples;
determining a block vector that points to a reference block in a same picture as the current block, the reference block having reconstructed samples buffered in the reference sample memory; and
reconstructing at least a sample of the current block based on the reconstructed samples of the reference block that are retrieved from the reference sample memory.

US Pat. No. 10,771,798

MULTI-STREAM IMAGE PROCESSING APPARATUS AND METHOD OF THE SAME

1. A multi-stream image processing method, comprising:generating a plurality of image streams by a former stage circuit according to a same image source, wherein the image streams comprise a main image stream and at least one sub image stream, and a resolution of the main image stream is higher than a resolution of the sub image stream, and the main image stream comprises a plurality of main image frames and the sub image stream comprises a plurality of sub image frames;
processing the main image stream and the sub image stream for a plurality of image frame processing time periods, each comprising a first sub period and a second sub period following the first sub period, to generate a processed main image stream and at least one processed sub image stream, wherein for an N-th image frame processing time period:
storing an N-th sub image frame of the sub image stream by the former stage circuit in at least one current sub image storage block of a memory module during the N-th image frame processing time period;
storing an N-th main image frame of the main image stream by the former stage circuit in a main image storage block of the memory module during the N-th image frame processing time period;
reading and processing an (N−1)-th sub image of the sub image stream stored in a previous sub image storage block of the memory module by a latter stage circuit during the first sub period of the N-th image frame processing time period, to generate an (N−1)-th processed sub image frame in the processed sub image stream; and
reading and processing the N-th main image frame stored in the main image storage block by the latter stage circuit during the second sub period of the N-th image frame processing time period, to generate an N-th processed main image frame in the processed main image stream.

US Pat. No. 10,771,797

ENHANCING A CHROMA-SUBSAMPLED VIDEO STREAM

LogMeIn, Inc., Boston, M...

1. A method performed by a computing device of sending digital video in high-quality compressed form over a network, the method comprising:performing chroma subsampling on an original video stream to yield an altered video stream whose chroma resolution is lower than its luma resolution;
applying video compression to the altered video stream to yield a compressed base layer (BL) stream;
creating an enhancement layer (EL) stream based on differences between the original video stream and a decompressed version of the BL stream, the EL stream including additional chroma information, which, when combined with the BL stream, encodes a video stream that has higher chroma fidelity to that of the original video stream than does the BL stream alone, wherein creating the EL stream includes creating a set of mappings that transforms chroma values of particular pixels of the decompressed version of the BL stream into enhanced chroma values that have higher chroma fidelity to that of the original video stream than does the BL stream alone; and
sending both the BL stream and the EL stream to a receiver device across the network to enable the receiver device to generate a version of the original video stream by combining the BL stream with the additional chroma information of the EL stream.
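
A minimal Python sketch of the base-layer / enhancement-layer split for chroma, assuming nearest-neighbour resampling and an uncompressed difference signal purely for illustration; the claim's mapping-based EL and the compression of both layers are not modelled.

    # Hypothetical sketch; chroma planes are plain 2-D lists, resampling is
    # nearest-neighbour and the EL is an uncompressed difference signal.
    def subsample_2x(chroma):
        """Keep every other sample horizontally and vertically."""
        return [row[::2] for row in chroma[::2]]

    def upsample_2x(chroma_sub, h, w):
        """Nearest-neighbour upsample back to h x w (decoder-side approximation)."""
        return [[chroma_sub[y // 2][x // 2] for x in range(w)] for y in range(h)]

    def enhancement_layer(original, h, w):
        base = subsample_2x(original)
        approx = upsample_2x(base, h, w)
        el = [[original[y][x] - approx[y][x] for x in range(w)] for y in range(h)]
        return base, el        # the receiver adds `el` to the upsampled base

    orig_u = [[100, 101, 102, 103],
              [104, 105, 106, 107],
              [108, 109, 110, 111],
              [112, 113, 114, 115]]
    bl, el = enhancement_layer(orig_u, 4, 4)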

US Pat. No. 10,771,796

ENCODING AND DECODING BASED ON BLENDING OF SEQUENCES OF SAMPLES ALONG TIME

V-Nova International Limi...

1. A method comprising:receiving first encoded image data derived from multiple original images in a sequence, the first encoded image data specifying settings for a baseline for reproducing the sequence, wherein the multiple original images are residual images representing adjustments to respective preliminary images to generate corresponding reconstructed images, the preliminary images derived from an upsampling operation performed on respective representations of the preliminary images at a lower level of quality;
receiving second encoded image data specifying further adjustments; and
combining the baseline and the second encoded image data to reconstruct an image in the sequence.

US Pat. No. 10,771,795

SWITCHABLE CHROMA SAMPLING FOR WIRELESS DISPLAY

Intel Corporation, Santa...

1. A video transmitter apparatus comprising:a subsampler to generate a primary bitstream based on a video signal, wherein the primary bitstream is to be encoded with subsampled chroma information;
a state monitor to detect a static condition with respect to the video signal; and
an upsample emulator communicatively coupled to the state monitor and the subsampler, the upsample emulator to generate, in response to the static condition, a plurality of auxiliary bitstreams based on the video signal, wherein each of the plurality of auxiliary bitstreams is to be encoded with full resolution chroma information, wherein the upsample emulator includes
a first redirector to both (1) generate, in response to the static condition, a first pointer to first chroma information associated with a first auxiliary bitstream and (2) process the first pointer in an encoder as luma information, and a second redirector to both (1) generate, in response to the static condition, a second pointer to second chroma information associated with a second auxiliary bitstream and (2) process the second pointer in the encoder as the luma information, and/or
a first monochrome controller to both (1) generate, in response to the static condition, a first monochrome video based on the first chroma information associated with the video signal and (2) process the first monochrome video in the encoder, and a second monochrome controller to both (1) generate, in response to the static condition, a second monochrome video based on the second chroma information associated with the video signal and (2) process the second monochrome video in the encoder.

US Pat. No. 10,771,794

ADAPTIVE PARTITION CODING

GE VIDEO COMPRESSION, LLC...

1. A decoder for reconstructing a block of a depth map associated with a texture picture from a data stream, configured to:determine a texture threshold based on a mean of sample values of a reference block of a texture picture;
determine a contour partition of the reference block of the texture picture based on the texture threshold;
predict a contour partition of a block in a depth map based on the contour partition of the reference block such that the block of the depth map is segmented into two portions, wherein the block in the depth map corresponds to the reference block of the texture picture; and
reconstruct the block of the depth map by decoding the two portions associated with the contour partition.
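
A minimal Python sketch of the contour-partition idea: threshold the co-located texture block at its mean, reuse the resulting two-way segmentation for the depth block, and fill each portion with a constant. Passing the two constants in directly is a simplification; the patent derives them from decoded data.

    # Hypothetical sketch; the two fill values are passed in directly instead
    # of being derived from decoded data.
    def mean(values):
        return sum(values) / len(values)

    def contour_partition(texture_block):
        """texture_block: 2-D list; returns a same-shaped 0/1 partition mask."""
        thr = mean([p for row in texture_block for p in row])
        return [[1 if p >= thr else 0 for p in row] for row in texture_block]

    def reconstruct_depth(texture_block, value_p0, value_p1):
        mask = contour_partition(texture_block)
        return [[value_p1 if m else value_p0 for m in row] for row in mask]

    depth = reconstruct_depth([[20, 22, 180, 185],
                               [21, 25, 178, 182],
                               [19, 24, 175, 181],
                               [18, 23, 176, 183]],
                              value_p0=40, value_p1=200)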

US Pat. No. 10,771,793

EFFECTIVE PREDICTION USING PARTITION CODING

GE VIDEO COMPRESSION, LLC...

1. A decoder for reconstructing a depth map of a video signal using encoded information from a data stream, the decoder configured to:derive a bi-partition of a block of the depth map into first and second portions;
associate each of neighboring samples of the depth map with a respective one of the first and second portions, the neighboring samples adjoining the block of the depth map;
predict the block of the depth map by determining a first predicted value for the first portion based on values of a first set of the neighboring samples, or determining a second predicted value for the second portion based on values of a second set of the neighboring samples;
determine, from the data stream, one or more refinement values for the first or second predicted value, wherein the one or more refinement values include an absolute value of a first or second refinement value; and
refine the prediction of the block by applying the first refinement value to the first predicted value for the first portion or applying the second refinement value to the second predicted value for the second portion.

US Pat. No. 10,771,792

ENCODING DATA ARRAYS

Arm Limited, Cambridge (...

1. An apparatus for encoding an array of data elements, or a stream of such arrays, the apparatus comprising an encoder comprising:an encoding circuit operable to, when encoding an array of data elements, or a stream of such arrays, encode the array(s) of data elements as a plurality of independent segments, wherein each independent segment can be decoded independently;
an output circuit operable to output an encoded data stream including the plurality of independent segments; and
a header generating circuit operable to generate a header for output with an encoded data stream, the header containing information indicative of the location of each of the plurality of independent segments within the encoded data stream,
wherein the encoding circuit is configured to, when encoding a data array, or stream of data arrays:
allocate an output buffer for the encoded data stream;
pass a plurality of data element sets associated with an independent segment in parallel to a plurality of processing cores;
pass the encoded data from each processing core to an internal buffer;
when all of the encoded data for an independent segment is present in the internal buffer, stitch the encoded data for that independent segment together in order; and
write out the stitched independent segment to the output buffer.

US Pat. No. 10,771,791

VIEW-INDEPENDENT DECODING FOR OMNIDIRECTIONAL VIDEO

MEDIATEK INC., Hsinchu (...

1. A method comprising:receiving a bitstream comprising an encoded sequence of omnidirectional images, each omnidirectional image having a plurality of views, each view having a set of view-specific data for decoding the view;
receiving a selection of a view from the plurality of views of one of the omnidirectional images in the sequence;
for the one of the omnidirectional images in the sequence, decoding the set of view-specific data of the selected view of the one of the omnidirectional images and also decoding a part of at least a first other view of the plurality of views of the one of the omnidirectional images that is adjacent to the selected view without decoding at least a second other view of the plurality of views of the one of the omnidirectional images that is not adjacent to the selected view; and
providing the selected view of the omnidirectional image based on the decoded set of view-specific data for display.

US Pat. No. 10,771,790

VIDEO DECODING DEVICE AND METHOD USING INVERSE QUANTIZATION

NEC CORPORATION, Minato-...

1. A video decoding device for decoding image blocks based on inverse quantization of input compressed video data to execute a process of generating image data as a set of the image blocks, comprising a quantization step size decoding unit which decodes a quantization step size that controls a granularity of the inverse quantization,
wherein the quantization step size decoding unit calculates the quantization step size that controls the granularity of the inverse quantization by, based on an image prediction parameter, selectively using a mean value of at least a quantization step size assigned to a leftwardly adjacent neighboring image block already decoded and a quantization step size assigned to an upwardly adjacent neighboring image block already decoded or a quantization step size assigned to an image block decoded immediately before.
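
A minimal Python sketch of the selective prediction described above: depending on a prediction parameter, either average the step sizes of the left and above neighbours or reuse the step size of the block decoded immediately before. The flag name and the decoded delta are assumptions for illustration.

    # Hypothetical sketch; the flag name and the decoded delta are assumed.
    def predict_qstep(use_neighbor_mean, left_q, above_q, prev_q):
        if use_neighbor_mean and left_q is not None and above_q is not None:
            return (left_q + above_q + 1) // 2   # mean of left/above neighbours
        return prev_q                            # block decoded immediately before

    def decode_qstep(use_neighbor_mean, left_q, above_q, prev_q, delta):
        return predict_qstep(use_neighbor_mean, left_q, above_q, prev_q) + delta

    q = decode_qstep(True, left_q=22, above_q=26, prev_q=24, delta=-1)   # -> 23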

US Pat. No. 10,771,789

COMPLEXITY ADAPTIVE RATE CONTROL

Google LLC, Mountain Vie...

1. A method comprising:accessing a media item comprising a first chunk and a second chunk;
determining, by a processing device, an average bit rate of the first chunk based on a storage size of the first chunk and a playback time of the first chunk;
determining, by the processing device executing a complexity measurement module, a first media complexity measure for the first chunk and a second media complexity measure for the second chunk, wherein the first media complexity measure is determined using a mathematical formula that is based on the average bit rate of the first chunk and is determined without performing an analysis of image pixel values of one or more frames of the first chunk;
selecting, by the processing device, a single pass encoder and a multiple pass encoder from a plurality of encoders, wherein the single pass encoder is selected based on the first media complexity measure of the first chunk and the multiple pass encoder is selected based on the second media complexity measure of the second chunk; and
encoding the first chunk of the media item using the single pass encoder and encoding the second chunk of the media item using the multiple pass encoder.
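
A minimal Python sketch of the chunk-level decision: derive an average bit rate from storage size and playback time, turn it into a complexity measure without analysing pixel data, and route low-complexity chunks to a single-pass encoder and high-complexity chunks to a multi-pass encoder. The complexity formula and threshold are illustrative assumptions.

    # Hypothetical sketch; the complexity formula and threshold are assumed.
    def average_bitrate(size_bytes, duration_s):
        return size_bytes * 8 / duration_s       # bits per second

    def complexity_measure(size_bytes, duration_s, resolution_pixels):
        # Bits per pixel as a stand-in for content complexity; no pixel data
        # of the chunk is analysed.
        return average_bitrate(size_bytes, duration_s) / resolution_pixels

    def choose_encoder(chunk, threshold=0.5):
        c = complexity_measure(chunk["size"], chunk["duration"], chunk["pixels"])
        return "multi_pass" if c > threshold else "single_pass"

    enc = choose_encoder({"size": 2_500_000, "duration": 10.0,
                          "pixels": 1920 * 1080})   # roughly 0.96 -> multi_pass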

US Pat. No. 10,771,788

INFORMATION PROCESSING DEVICE AND INFORMATION PROCESSING METHOD

SONY CORPORATION, Tokyo ...

1. A non-transitory computer-readable storage medium storing executable instructions, which when executed by processing circuitry cause the processing circuitry to execute a method, the method comprising:receiving image data from an image sensor;
extracting a feature amount of the image data;
matching the feature amount of the image data with one or more reference feature amounts in a database based on a calculated distance between the extracted feature amount and the one or more reference feature amounts; and
in response to a matching result,
predicting a result of a process for the image data, and
determining the process to be executed based on the predicted result.

US Pat. No. 10,771,787

DYNAMIC DATA COMPRESSION SYSTEMS AND METHODS FOR USE WITH VEHICLE DATA

TOYOTA MOTOR NORTH AMERIC...

1. A dynamic data compression system, comprising:a group of sensors arranged on-board a vehicle and operable to detect and capture driving event data, the group of sensors comprising a target sensor;
a controller coupled to the group of sensors and operable to receive one or more data streams indicative of the driving event data from the group of sensors; and
a communication interface coupled to the group of sensors and the controller for data transmission;
wherein the controller is further operable to:
analyze the one or more data streams;
determine a vehicle operation condition based on the one or more data streams, the vehicle operation condition comprising a speed of the vehicle, a location of the vehicle, a motion of the vehicle, or a combination thereof; and
determine whether or not to compress a data stream from the target sensor based on the vehicle operation condition;
wherein a data capture rate of the target sensor is based on the vehicle operation condition.

US Pat. No. 10,771,786

METHOD AND SYSTEM OF VIDEO CODING USING AN IMAGE DATA CORRECTION MASK

Intel Corporation, Santa...

1. A computer-implemented method of video coding comprising:receiving color pixel data of at least one image of a video sequence;
identifying, on the at least one image, at least one region that has only two uniform colors including a first area of one uniform color adjacent a second area of a uniform color;
forming a mask comprising mask image data at least comprising a map indicating to which of the first and second areas at least some pixels of the image belong; and
placing the mask in a bitstream to transmit the mask to a receiver to have the color of at least some of the pixels of the first area or second area or both on a decompressed version of the image at the receiver modified by using the map to determine the correct color for the pixels to display the decompressed image with the pixels with corrected colors; and
wherein the mask is arranged to change the color values of pixels of the decompressed version of the image without first determining whether or not the color values of the pixels of the decompressed version of the image meet a criteria to avoid the change.

US Pat. No. 10,771,785

IMAGE ENCODING APPARATUS, METHOD OF CONTROLLING THE SAME, AND STORAGE MEDIUM

CANON KABUSHIKI KAISHA, ...

1. An image encoding apparatus for encoding a video, the apparatus comprising:at least one memory storing instructions; and
at least one processor that implements the instructions to execute a plurality of tasks, including:
a separation task that separates a plurality of planes respectively configured in a single component for a frame of an input video;
a wavelet transform task that performs a wavelet transformation of a plane of interest in the plurality of planes obtained by the separation task;
an extraction task that, from each sub-band obtained by the wavelet transform task, extracts in order blocks representing the same region of an image;
a quantizing task that, using a quantization parameter, quantizes wavelet transformation coefficients for each of the blocks; and
an encoding task that encodes the wavelet transformation coefficients after the quantization by the quantizing task, wherein
the quantizing task:
determines, for each of the blocks, a correction parameter for correcting the quantization parameter based on a direct current value and an alternating current value of the block; and
quantizes the wavelet transformation coefficients in accordance with the quantization parameter resulting from correction by the determined correction parameter.
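
The correction parameter is derived per block from a direct-current (DC) value and an alternating-current (AC) value. The sketch below shows one plausible reading, treating the block mean as the DC value and the mean absolute deviation as the AC value; the correction rule itself is an assumption for illustration, not Canon's own.

```python
# One plausible reading of the DC/AC correction, sketched for illustration:
# the block mean as the DC value, the mean absolute deviation as the AC value,
# and a simple correction rule; none of these details are taken from the patent.
import numpy as np

def correction_parameter(block: np.ndarray) -> float:
    dc = float(np.mean(block))                # direct current value of the block
    ac = float(np.mean(np.abs(block - dc)))   # alternating current (activity) value
    # Quantize visually busy blocks more coarsely, flat blocks more finely.
    return 1.0 + min(ac / (abs(dc) + 1.0), 1.0)

def quantize(block: np.ndarray, qp: float) -> np.ndarray:
    corrected_qp = qp * correction_parameter(block)
    return np.round(block / corrected_qp).astype(np.int32)

if __name__ == "__main__":
    coeffs = np.random.randn(8, 8) * 50       # stand-in wavelet transformation coefficients
    print(quantize(coeffs, qp=4.0))
```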

US Pat. No. 10,771,784

SYSTEM FOR CODING HIGH DYNAMIC RANGE AND WIDE COLOR GAMUT SEQUENCES

ARRIS Enterprises LLC, S...

1. A digital video decoding system for decoding a bit stream in a format that does not accommodate a digital video data set including high dynamic range (HDR) and wide color gamut (WCG) video data, to reconstruct an output digital video data set including at least one of HDR and WCG video data, the digital video decoding system comprising:a decoder for decoding the bit stream to recover a digital video data set from the bit stream;
an intermediate color conversion process configured to:
extract intermediate color conversion metadata from the bit stream, the intermediate color conversion metadata identifying an input color space of the digital video data set extracted from the bit stream and an intermediate color space to which the digital video data set is to be converted;
convert the inverse transformed portion of the digital video data set from the input color space to the intermediate color space to produce intermediate color converted digital video data; and
an inverse compression transfer function process configured to extract compression metadata from the bit stream and to apply an inverse compression transfer function to the intermediate color converted video data to generate the digital video data set to be applied to the output color conversion process;
predictively decode the bit stream using one or more reference frames;
apply a color conversion process to the output digital video data set, the color conversion process being an inverse of the output color conversion process to generate a color converted video data set;
generate a perceptual transfer function process from the metadata identifying the inverse perceptual transfer function to be applied to the received video data;
apply the perceptual transfer function to the color converted video data set to generate a transformed video data set;
generate a perceptual normalization function using the metadata identifying the inverse perceptual normalization function;
apply the perceptual normalization function to the transformed video data set to generate a perceptually normalized video data set;
generate a further reference frame for use in predictively decoding the bit stream from the perceptually normalized video data set.

US Pat. No. 10,771,783

TRANSFORMS FOR LARGE VIDEO AND IMAGE BLOCKS

GOOGLE LLC, Mountain Vie...

1. A method for encoding a current block of a video frame, the method comprising:generating a prediction residual block for the current block, the prediction residual block including pixel values, the prediction residual block having a first size;
at a transform stage of an encoder used for the encoding of the current block:
transforming the pixel values of the prediction residual block to produce transform coefficients;
determining that the transform coefficients exceed a threshold cardinality;
responsive to determining that the transform coefficients exceed the threshold cardinality, discarding a number of the transform coefficients such that a remaining number of the transform coefficients does not exceed the threshold cardinality; and
generating a transform block using the remaining number of the transform coefficients, the transform block having a second size smaller than the first size;
quantizing the remaining number of the transform coefficients within the transform block to produce quantized transform coefficients; and
encoding the quantized transform coefficients to a bitstream.
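
The key step is capping the number of retained transform coefficients at a threshold cardinality so that the resulting transform block is smaller than the prediction residual block. A hedged sketch follows; the stand-in transform and the keep-the-low-frequency-corner truncation are assumptions.

```python
# Hedged sketch: a 64x64 residual whose transform is truncated so the retained
# coefficient count never exceeds a fixed cardinality; the stand-in transform
# and the keep-the-low-frequency-corner rule are assumptions.
import numpy as np

THRESHOLD_CARDINALITY = 32 * 32

def transform_and_truncate(residual: np.ndarray) -> np.ndarray:
    coeffs = np.fft.fft2(residual).real        # stand-in for the encoder's transform
    if coeffs.size > THRESHOLD_CARDINALITY:
        side = int(THRESHOLD_CARDINALITY ** 0.5)
        coeffs = coeffs[:side, :side]          # discard coefficients beyond the threshold
    return coeffs                              # transform block smaller than the residual block

if __name__ == "__main__":
    residual = np.random.randn(64, 64)         # prediction residual block (first size)
    tb = transform_and_truncate(residual)
    print(residual.shape, "->", tb.shape)      # (64, 64) -> (32, 32), the second size
```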

US Pat. No. 10,771,782

METHOD AND APPARATUS FOR VIDEO CODING

Tencent America LLC, Pal...

1. A method for video decoding in a video decoder, the method comprising:decoding a coded video sequence to obtain an intra prediction mode;
determining an intra prediction angle that corresponds to the intra prediction mode based on a predetermined plurality of intra prediction modes and a corresponding predetermined plurality of intra prediction angles; and
reconstructing at least one sample of a block using the intra prediction angle that is determined to correspond to the indicated intra prediction mode, wherein
the plurality of intra prediction modes includes at least one of a first plurality of wide angle prediction modes and a second plurality of wide angle prediction modes,
the first plurality of wide angle prediction modes is beyond a bottom left direction diagonal mode, and
the second plurality of wide angle prediction modes is beyond a top right direction diagonal mode.
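
The decoder maps the decoded intra prediction mode to an angle through a predetermined table that also covers wide-angle modes beyond the two diagonal directions. The toy lookup below illustrates the mechanism only; the mode numbers and angle values are placeholders, not the predetermined tables of any actual standard.

```python
# Toy lookup only: the mode numbers and angle values below are placeholders,
# not the predetermined tables of any actual standard.
PREDETERMINED_MODE_TO_ANGLE = {
    2: 45.0,        # bottom-left direction diagonal mode (placeholder angle)
    34: 0.0,        # a conventional angular mode (placeholder angle)
    66: -45.0,      # top-right direction diagonal mode (placeholder angle)
    -1: 50.0, -2: 55.0,    # wide angle modes beyond the bottom-left diagonal
    67: -50.0, 68: -55.0,  # wide angle modes beyond the top-right diagonal
}

def intra_prediction_angle(mode: int) -> float:
    """Return the predetermined angle for a decoded intra prediction mode."""
    try:
        return PREDETERMINED_MODE_TO_ANGLE[mode]
    except KeyError:
        raise ValueError(f"mode {mode} has no predetermined angle") from None

if __name__ == "__main__":
    print(intra_prediction_angle(-2), intra_prediction_angle(68))
```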

US Pat. No. 10,771,781

METHOD AND APPARATUS FOR DERIVING INTRA PREDICTION MODE

Electronics and Telecommu...

1. A decoding method, comprising:deriving an intra prediction mode of a target block; and
performing intra prediction for the target block that uses the derived intra prediction mode, wherein the intra prediction mode of the target block is derived using a Most Probable Mode (MPM),
an MPM list for the MPM is configured for the target block, and
the MPM list is used for intra prediction for each of a plurality of sub-blocks generated by dividing the target block.

US Pat. No. 10,771,780

METHOD AND APPARATUS FOR SIGNALING AND CONSTRUCTION OF VIDEO CODING REFERENCE PICTURE LISTS

VID SCALE, INC., Wilming...

1. A video decoder apparatus, comprising:a processor configured to generate a temporary ordered list of reference pictures from a decoded picture buffer (DPB), in which the temporary ordered list is ordered with any reference pictures for a current picture that is currently being decoded that are in the DPB and that are temporally before the current picture listed in order by temporal distance from the current picture, followed by any reference pictures for the current picture that are in the DPB and that are temporally later than the current picture listed in order by temporal distance from the current picture, followed by any long term reference pictures for the current picture that are in the DPB; and
the processor further configured to generate a reference picture list by selecting reference pictures from the temporary ordered list of reference pictures, wherein, when the reference picture list is to be a modified list, generating the reference picture list by, at least, for each entry in the reference picture list, reading an index into the temporary ordered list of reference pictures and selecting, for the entry in the reference picture list, a reference picture from the temporary ordered list of reference pictures that is identified by the index.
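
The temporary list orders short-term pictures before the current picture by temporal distance, then short-term pictures after it, then long-term pictures, and a modified list is built by reading an index into that temporary list per entry. A minimal sketch follows under simple assumptions: each decoded-picture-buffer entry carries only a picture order count (POC) and a long-term flag, and the field names are illustrative.

```python
# Minimal sketch under simple assumptions: each decoded-picture-buffer entry
# carries a picture order count (POC) and a long-term flag; names are illustrative.
from dataclasses import dataclass

@dataclass
class RefPic:
    poc: int
    long_term: bool = False

def temporary_ordered_list(dpb, current_poc):
    before = sorted((p for p in dpb if not p.long_term and p.poc < current_poc),
                    key=lambda p: current_poc - p.poc)   # nearest past picture first
    after = sorted((p for p in dpb if not p.long_term and p.poc > current_poc),
                   key=lambda p: p.poc - current_poc)    # nearest future picture first
    long_term = [p for p in dpb if p.long_term]
    return before + after + long_term

def reference_picture_list(temp_list, modification_indices):
    # A modified list is built by reading one index into the temporary list per entry.
    return [temp_list[i] for i in modification_indices]

if __name__ == "__main__":
    dpb = [RefPic(8), RefPic(4), RefPic(12), RefPic(0, long_term=True)]
    tmp = temporary_ordered_list(dpb, current_poc=6)
    print([p.poc for p in tmp])                                   # [4, 8, 12, 0]
    print([p.poc for p in reference_picture_list(tmp, [1, 0])])   # [8, 4]
```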

US Pat. No. 10,771,779

METHOD AND APPARATUS FOR ENCODING VIDEO USING VARIABLE PARTITIONS FOR PREDICTIVE ENCODING, AND METHOD AND APPARATUS FOR DECODING VIDEO USING VARIABLE PARTITIONS FOR PREDICTIVE ENCODING

SAMSUNG ELECTRONICS CO., ...

1. An apparatus for decoding a video, the apparatus comprising:a receiver configured to receive a bitstream including information about a size of a maximum coding unit, and split information; and
a decoder configured to split a picture into a plurality of maximum coding units using the information about the size of the maximum coding unit, hierarchically split the maximum coding unit into one or more coding units based on the split information, determine one or more prediction units in a coding unit among the one or more coding units using partition type information,
wherein the partition type information is determined based on a size of the coding unit and indicates one of a symmetric type and an asymmetric type partitioning of the one or more prediction units,
wherein the decoder determines one or more transform units in the coding unit, performs prediction on a prediction unit among the one or more prediction units in the coding unit and performs inverse-transformation on the one or more transform units in the coding unit, and generates a reconstructed coding unit based on the prediction and the inverse-transformation, and
wherein the partition type information indicates the one of the symmetric type and the asymmetric type when a size of the coding unit is larger than a predetermined size.

US Pat. No. 10,771,778

METHOD AND DEVICE FOR MPM LIST GENERATION FOR MULTI-LINE INTRA PREDICTION

TENCENT AMERICA LLC, Pal...

1. A video decoding method performed by at least one processor to control multi-line intra prediction using a non-zero reference line, the method comprising:determining whether an intra prediction mode of a first neighboring block of a current block is an angular mode;
determining whether an intra prediction mode of a second neighboring block of the current block is an angular mode; and
generating a Most Probable Mode (MPM) list that consists of six candidate modes for intra prediction of the current block, wherein
each of the six candidate modes is an angular mode, and
the MPM list is generated such as to include the intra prediction mode of the first neighboring block in a case where the intra prediction mode of the first neighboring block is determined to be an angular mode, and to include the intra prediction mode of the second neighboring block in a case where the intra prediction mode of the second neighboring block is determined to be an angular mode.
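
The decoder builds a six-entry MPM list containing only angular modes, pulling in a neighbouring block's mode whenever that mode is angular. One hedged way to fill such a list is sketched below; the angular mode range, the padding rule, and the default modes are assumptions.

```python
# Hedged sketch of one way to fill a six-entry, all-angular MPM list; the
# angular range, padding rule, and default modes are assumptions.
ANGULAR_MIN, ANGULAR_MAX = 2, 66     # planar (0) and DC (1) are excluded

def is_angular(mode: int) -> bool:
    return ANGULAR_MIN <= mode <= ANGULAR_MAX

def build_mpm_list(mode_a: int, mode_b: int) -> list:
    """Six candidate modes for the current block, every one of them angular."""
    mpm = []
    for m in (mode_a, mode_b):       # neighbouring blocks' modes, if angular
        if is_angular(m) and m not in mpm:
            mpm.append(m)
    for base in list(mpm):           # pad with neighbours of the included modes
        for delta in (-1, 1, -2, 2):
            cand = ANGULAR_MIN + (base - ANGULAR_MIN + delta) % (ANGULAR_MAX - ANGULAR_MIN + 1)
            if cand not in mpm:
                mpm.append(cand)
    for default in (50, 18, 2, 34, 66, 26):   # fixed angular defaults
        if default not in mpm:
            mpm.append(default)
    return mpm[:6]

if __name__ == "__main__":
    print(build_mpm_list(mode_a=50, mode_b=1))   # second neighbour is non-angular
```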

US Pat. No. 10,771,777

ICON-BASED HOME CERTIFICATION, IN-HOME LEAKAGE TESTING, AND ANTENNA MATCHING PAD

VIAVI SOLUTIONS, INC., S...

1. A signal generator for determining leakage at a subscriber's premises in a cable television (CATV) network, the signal generator comprising:a frequency source comprising at least one oscillator, the frequency source operable to generate (i) a first test signal having a first frequency in an aeronautical band and a first signal level, and (ii) a second test signal having a second frequency in a Long Term Evolution (LTE) band and a second signal level, wherein the first and second signal levels are each 40 dB to 70 dB above a third signal level of a cable signal supplied by the CATV network;
shielding at the signal generator to prevent radiation over the air of the first and second test signals from the signal generator; and
a connector configured to be secured to a network port at the subscriber's premises to electrically couple the frequency source to cable wiring at the subscriber's premises, in order to supply the first and second test signals through the connector and the network port to the cable wiring at the subscriber's premises, wherein the connector is configured to be secured to the network port in order to supply the first and second test signals through the connector and the network port to the cable wiring at the subscriber's premises for maintaining a high power offset between the first and second test signals and the cable signal supplied by the CATV network, wherein the third signal level of the cable signal supplied by the CATV network is in a range of −5 dBmV to 0 dBmV.

US Pat. No. 10,771,776

APPARATUS AND METHOD FOR GENERATING A CAMERA MODEL FOR AN IMAGING SYSTEM

SONY CORPORATION, Tokyo ...

1. An apparatus comprising:circuitry configured to
obtain a calibration image of a target;
derive a sparse image based on the calibration image, wherein the sparse image includes image points;
derive ray support points based on the image points by performing an image to target mapping of the image points based on a polynomial function, wherein the ray support points are indicative of light rays reflected by the target and incident on an image sensor; and
generate a camera model based on the derived ray support points;
derive an error metric of the derived ray support points, wherein the camera model is generated further based on the error metric;
calculate ray-target intersections;
compare the ray-target intersections with the ray support points to derive the error metric of the derived ray support points; and
transform the error metric from target coordinates into image coordinates.

US Pat. No. 10,771,775

IMAGING DEVICE, IMAGING SYSTEM, MOVING BODY, AND CONTROL METHOD

CANON KABUSHIKI KAISHA, ...

1. An imaging device comprising:a pixel area of a semiconductor substrate;
a plurality of pixels arranged in the pixel area including:
a light-receiving pixel arranged to receive incident light and output a pixel signal based on the incident light, the light-receiving pixel being arranged in the pixel area; and
a reference pixel arranged to output a pixel signal for configuring a failure detection signal, the reference pixel being arranged in the pixel area; and
an output control unit connected to the reference pixel and configured to input a control signal to the reference pixel to control a level of the pixel signal to be output by the reference pixel; and
a processing unit arranged to determine whether or not the failure detection signal is correct based on abnormality information indicating the abnormality of the reference pixel.

US Pat. No. 10,771,774

DISPLAY APPARATUS AND METHOD OF PRODUCING IMAGES HAVING SPATIALLY-VARIABLE ANGULAR RESOLUTIONS

Varjo Technologies Oy, H...

1. A display apparatus for producing an image having a spatially-variable angular resolution on an image plane, the display apparatus comprising:an image renderer per eye;
at least one optical element arranged on an optical path between the image renderer and the image plane, the at least one optical element comprising at least a first optical portion and a second optical portion having different optical properties with respect to magnification; and
a processor coupled to the image renderer, wherein the processor or an image source communicably coupled to the processor is configured to generate a warped image based upon the optical properties of the first optical portion and the second optical portion,
wherein the processor is configured to render the warped image via the image renderer, wherein projections of a first portion and a second portion of the warped image are to be differently magnified by the first optical portion and the second optical portion of the at least one optical element, respectively, to produce the image on the image plane in a manner that the produced image appears de-warped to a user, and
wherein, when generating the warped image, the processor or the image source is configured to adjust an intensity of the first portion and the second portion of the warped image in a manner that, upon being differently magnified, the projections of the first portion and the second portion of the warped image produce the image on the image plane that appears to have a uniform brightness across the image.

US Pat. No. 10,771,773

HEAD-MOUNTED DISPLAY DEVICES AND ADAPTIVE MASKING METHODS THEREOF

HTC Corporation, Taoyuan...

1. A head-mounted display device, comprising:a display system, configured to selectively display a first content to be visually recognized as being superimposed on a scenery of a surrounding environment, or not display the first content;
a light modulator, positioned between the display system and the surrounding environment, comprising an array of pixels, wherein a portion of the pixels is configured to modulate light to present a display of a second content, while the rest of the pixels are configured to be substantially transparent to light;
an ambient light sensor, configured to detect a lighting condition of the surrounding environment; and
a controller, configured to adjust, according to the lighting condition, at least one of a first power level applied to allow the display system to display the first content, and a second power level applied to allow the light modulator to modulate light,
wherein the adjustment of at least one of the first power level and the second power level comprises: increasing the second power level and decreasing the first power level when the lighting condition indicates that a brightness is greater than a predetermined threshold, and increasing the first power level and decreasing the second power level when the lighting condition indicates that the brightness is lower than the predetermined threshold.
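
The controller trades power between the display system and the light modulator around a brightness threshold. A minimal sketch of that rule follows; the threshold value, step size, and controller interface are assumptions made for illustration.

```python
# Sketch of the threshold rule; the threshold value, power steps, and the
# controller interface are assumptions made for illustration.
BRIGHTNESS_THRESHOLD = 500.0   # hypothetical lux value
STEP = 0.1

def adjust_power(first_power: float, second_power: float, ambient_lux: float):
    """first_power drives the display system, second_power the light modulator."""
    if ambient_lux > BRIGHTNESS_THRESHOLD:
        # Bright surroundings: lean on the modulator, dim the display.
        second_power = min(1.0, second_power + STEP)
        first_power = max(0.0, first_power - STEP)
    else:
        # Dim surroundings: boost the display, relax the modulator.
        first_power = min(1.0, first_power + STEP)
        second_power = max(0.0, second_power - STEP)
    return first_power, second_power

if __name__ == "__main__":
    print(adjust_power(0.5, 0.5, ambient_lux=800.0))   # (0.4, 0.6)
```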

US Pat. No. 10,771,772

IMAGE DISPLAY DEVICE

HITACHI-LG DATA STORAGE, ...

1. An image display device that displays a first image and a second image in parallel, comprising:a light source unit;
a first panel unit which is illuminated by light emitted from the light source unit, and generates and displays the first image;
a second panel unit which is illuminated by light emitted from the light source unit and generates and displays the second image;
a timing control unit that controls operation timings of the light source unit, the first panel unit, and the second panel unit,
wherein a period of one frame of the first image displayed by the first panel unit includes a standby period in which the first panel unit is not illuminated by the light from the light source unit for image generation preparation and a lighting period in which the first panel unit is illuminated by the light from the light source unit,
wherein a period of one frame of the second image displayed by the second panel unit includes a standby period in which the second panel unit is not illuminated by the light from the light source unit for image generation preparation and a lighting period in which the second panel unit is illuminated by the light from the light source unit, and
wherein the timing control unit performs control such that the period of one frame of the first image displayed by the first panel unit and the period of one frame of the second image displayed by the second panel unit overlap and frame start times of the respective periods are shifted from each other by a predetermined delay time Td,
wherein the timing control unit sets the delay time Td so that a sum Tsum of overlap lighting periods in which a lighting period in which the light source unit illuminates the first panel unit and a lighting period in which the light source unit illuminates the second panel unit overlap in the period of one frame of the first image displayed by the first panel unit is a minimum.

US Pat. No. 10,771,771

METHOD OF DETERMINING CALIBRATION PARAMETER FOR THREE-DIMENSIONAL (3D) DISPLAY DEVICE AND 3D DISPLAY DEVICE USING THE METHOD

SAMSUNG ELECTRONICS CO., ...

37. A device comprising:memory storing computer-executable instructions; and
one or more processors configured to execute the computer-executable instructions such that the one or more processors are configured to,
determine a calibration parameter for a three-dimensional (3D) display device based on,
a first image having a first pattern and a second image having a second pattern, the second image being a version of the first image that is displayed with a panel and an optical layer of the 3D display device,
a parameter corresponding to a period of the first pattern,
a parameter corresponding to a gradient of the second pattern, and
a parameter corresponding to a period of the second pattern, and
calibrate subpixels of the panel based on the calibration parameter to indicate a propagation direction of light from each subpixel,
wherein the parameter corresponding to the period of the first pattern is an interval between lines included in the first pattern, and
wherein the one or more processors are further configured to calculate a rotation angle θ between the optical layer and the panel and a pitch p of the optical layer based on relations between the rotation angle θ and the pitch p, the relations comprising:
a first relation between the rotation angle θ and the pitch p being based on the parameter corresponding to the period of the first pattern and the parameter corresponding to the gradient of the second pattern; and
a second relation between the rotation angle θ and the pitch p being based on the parameter corresponding to the period of the second pattern.

US Pat. No. 10,771,770

3D DISPLAY DEVICE AND A DRIVING METHOD THEREOF

BOE TECHNOLOGY GROUP CO.,...

1. A 3D display device, comprising: a liquid crystal display panel for monochrome display, and an electroluminescence display panel for color display disposed under the liquid crystal display panel; wherein,the electroluminescence display panel comprises a plurality of regions arranged in a matrix, the plurality of regions form columns of bright regions and columns of dark regions, which are arranged alternately; in a 3D display portrait mode, each column of bright regions comprises bright regions with a same emitting color, and adjacent columns of bright regions have different emitting colors; in a 3D display landscape mode, each column of bright regions comprises bright regions, which are adjacent in a column direction, with different emitting colors, and the emitting colors of the bright regions, which are provided in a same row, of each column of bright regions are the same; and
the liquid crystal display panel comprises a plurality of first sub-pixels arranged in a matrix; each bright region of the electroluminescence display panel corresponds to at least two first sub-pixels adjacent in a row direction of the liquid crystal display panel; in the 3D display portrait mode and the 3D display landscape mode, in each of the first sub-pixels corresponding to a same bright region, grey scales displayed by the first sub-pixels, which are configured to provide grey scale information of images of different viewpoints, are different from each other;
in the 3D display portrait mode, each of the bright regions corresponds to N adjacent first sub-pixels, which are provided in a row; and
in the 3D display landscape mode, each of the bright regions corresponds to M*N first sub-pixels, which are provided in N rows, adjacent in the row direction or in the column direction, wherein, M and N are integers larger than 1; and
M and N are even numbers:
in the 3D display portrait mode, the grey scales respectively displayed by N/2 first sub-pixels on the left side and N/2 first sub-pixels on a right side of the first sub-pixels corresponding to a same bright region are different; and
in the 3D display landscape mode, the grey scales respectively displayed by M*N/2 first sub-pixels on the left side and M*N/2 first sub-pixels on the right side of the first sub-pixels corresponding to a same bright region are different.

US Pat. No. 10,771,769

DISTANCE MEASURING APPARATUS, DISTANCE MEASURING METHOD, AND IMAGING APPARATUS

CANON KABUSHIKI KAISHA, ...

1. A distance measuring apparatus comprising:at least one processor; and
a memory storing an instruction which, when the instruction is executed by the processor, causes the distance measuring apparatus to function as:
an imaging unit capable of acquiring a plurality of images having view points different from one another; and
a controlling unit configured to perform controlling to acquire the plurality of images with the imaging unit in a state in which a patterned light is projected on any region using a projecting unit disposed in a position optically conjugate to the imaging unit and measure a distance on the basis of the plurality of images acquired by the imaging unit,
wherein the controlling unit controls at least one of illuminance of a patterned light projected using a projecting unit, a wavelength of the patterned light and whether the patterned light is projected or not in each of regions based on the image acquired by the imaging unit.

US Pat. No. 10,771,768

SYSTEMS AND METHODS FOR IMPROVED DEPTH SENSING

QUALCOMM Incorporated, S...

1. An imaging device comprising:a plurality of transmitters including:
a first transmitter configured to transmit a first structured light pattern focused at a first distance and having a first depth of field; and
a second transmitter configured to transmit a second structured light pattern focused at a second distance and having a second depth of field, wherein the second depth of field is wider than the first depth of field, the second distance is different than the first distance, and the second structured light pattern is transmitted after transmitting the first structured light pattern;
a receiver configured to:
focus within the first depth of field at the first distance to capture a first image of a scene, the first image representing the first structured light pattern; and
focus within the second depth of field at the second distance to capture a second image of the scene, the second image representing the second structured light pattern; and
an electronic hardware processor configured to generate a depth map of the scene based on the first image and the second image.

US Pat. No. 10,771,767

PROJECTOR FOR ACTIVE STEREO DEPTH SENSORS

Intel Corporation, Santa...

1. A stereoscopic imaging device comprising:a projector to project a pattern toward a scene;
a lens moveably mounted in an optical path between the projector and the scene, wherein the lens comprises a wedge portion having a wedge angle and a planar portion or a second wedge portion, and wherein the lens being moveably mounted in the optical path comprises the wedge portion and the planar portion or the second wedge portion of the lens being moveable within the optical path; and
a controller to provide a signal to move the wedge portion and the planar portion or the second wedge portion within the optical path at a rate synchronized to an image capture rate of the scene to provide only the wedge portion of the lens within the optical path during a first image capture and only the planar portion or the second wedge portion of the lens within the optical path during a second image capture.

US Pat. No. 10,771,766

METHOD AND APPARATUS FOR ACTIVE STEREO VISION

MEDIATEK INC., Hsinchu (...

1. An apparatus, comprising:an electromagnetic (EM) wave emitter which, during operation, emits EM waves toward a scene;
a first sensor which, during operation, captures a first image of the scene in an infrared (IR) spectrum; and
a second sensor which, during operation, captures a second image of the scene in a light spectrum,
wherein the first image and second image, when processed, enable active stereo vision,
wherein the EM wave emitter comprises an IR projector which projects a structured IR light during operation, and
wherein each of the first sensor and the second sensor respectively comprises an IR camera, a red-green-blue (RGB) camera containing one or more pixels capable of receiving light in the IR spectrum, a monochrome camera containing one or more pixels capable of receiving light in the IR spectrum, a RGB camera with dual-band bandpass filtering to allow light in the visible spectrum and the IR spectrum to pass through, or a monochrome camera with dual-band bandpass filtering to allow light in the visible spectrum and the IR spectrum to pass through.

US Pat. No. 10,771,765

INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING SYSTEM, INFORMATION PROCESSING METHOD, AND STORAGE MEDIUM FOR EMBEDDING TIME STAMPED INFORMATION IN AN IMAGE

Canon Kabushiki Kaisha, ...

1. An information processing apparatus comprising:one or more processors; and
memory storing instructions that, when executed by the one or more processors, cause the information processing apparatus to perform operations including:
acquiring image data of a frame for each of a plurality of frames of video data captured in one captured video, wherein one frame of the plurality of frames of video data includes a plurality of pixels, each associated with input color difference signal data,
generating information about time as generated time information, and
replacing the input color difference signal data for more than one pixel of the plurality of pixels of the one frame with the generated time information, which are subsequently output as part of an output color difference signal data for the one frame of the one captured video.
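
The apparatus overwrites the colour-difference (chroma) data of more than one pixel of a frame with generated time information before output. A small sketch of that embedding follows, assuming a NumPy chroma plane and an ASCII-packed timestamp; the packing format and plane layout are assumptions.

```python
# Sketch under assumptions: the chroma (color difference) plane is a NumPy
# array and the generated time is packed byte-per-pixel as ASCII; the packing
# format and plane layout are illustrative only.
import time
import numpy as np

def embed_time_info(cb_plane: np.ndarray) -> np.ndarray:
    """Replace the leading chroma samples of one frame with timestamp bytes."""
    stamp = time.strftime("%Y%m%d%H%M%S").encode("ascii")   # generated time information
    out = cb_plane.copy()
    flat = out.reshape(-1)
    flat[:len(stamp)] = np.frombuffer(stamp, dtype=np.uint8)
    return out

def read_time_info(cb_plane: np.ndarray, length: int = 14) -> str:
    return cb_plane.reshape(-1)[:length].tobytes().decode("ascii")

if __name__ == "__main__":
    cb = np.full((4, 16), 128, dtype=np.uint8)   # one chroma plane of a frame
    print(read_time_info(embed_time_info(cb)))   # e.g. 20240101120000
```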

US Pat. No. 10,771,764

METHOD FOR TRANSMITTING 360-DEGREE VIDEO, METHOD FOR RECEIVING 360-DEGREE VIDEO, APPARATUS FOR TRANSMITTING 360-DEGREE VIDEO, AND APPARATUS FOR RECEIVING 360-DEGREE VIDEO

LG Electronics Inc., Seo...

1. A 360-degree video data processing method performed by a 360-degree video reception apparatus, the method comprising:receiving 360-degree video data including encoded pictures for a specific viewing position;
deriving metadata;
decoding the encoded pictures; and
rendering the decoded pictures based on the metadata,
wherein:
the metadata includes viewing space information, and
the viewing space information includes information indicating a shape type of a specific viewing space, and, when the shape type of the specific viewing space is an ellipsoid, the viewing space information includes information indicating a semi-axis length of an x axis of the specific viewing space, information indicating a semi-axis length of a y axis of the specific viewing space, and information indicating a semi-axis length of a z axis of the specific viewing space.
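
When the viewing space is signalled as an ellipsoid, its extent is given by semi-axis lengths along the x, y and z axes. The sketch below shows how a renderer might test whether a viewing position lies inside such a space; the metadata field names are assumptions.

```python
# Sketch: testing whether a viewing position lies inside an ellipsoidal viewing
# space described by its semi-axis lengths; the field names are assumptions.
from dataclasses import dataclass

@dataclass
class ViewingSpace:
    shape_type: str   # e.g. "ellipsoid"
    semi_x: float     # semi-axis length of the x axis
    semi_y: float     # semi-axis length of the y axis
    semi_z: float     # semi-axis length of the z axis

def inside_viewing_space(space: ViewingSpace, x: float, y: float, z: float) -> bool:
    if space.shape_type != "ellipsoid":
        raise NotImplementedError("only the ellipsoid shape type is sketched here")
    return (x / space.semi_x) ** 2 + (y / space.semi_y) ** 2 + (z / space.semi_z) ** 2 <= 1.0

if __name__ == "__main__":
    vs = ViewingSpace("ellipsoid", semi_x=0.5, semi_y=0.3, semi_z=0.4)
    print(inside_viewing_space(vs, 0.1, 0.1, 0.1))   # True
```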

US Pat. No. 10,771,763

VOLUMETRIC VIDEO-BASED AUGMENTATION WITH USER-GENERATED CONTENT

1. A method comprising:obtaining, by a processing system including at least one processor, a source video, wherein the source video is a two-dimensional video;
selecting, by the processing system, a volumetric video associated with at least one feature of the source video from a library of volumetric videos;
identifying, by the processing system, a first object in the source video;
performing, by the processing system, a spatial alignment of the source video to the volumetric video, by detecting key points in both the source video and the volumetric video and calculating a plurality of vectors between the first object and the key points;
determining, by the processing system, a location of the first object within a space of the volumetric video, wherein the location of the first object is determined in accordance with the spatial alignment;
obtaining, by the processing system, a three-dimensional object model of the first object;
texture mapping, by the processing system, the first object to the three-dimensional object model of the first object to generate an enhanced three-dimensional object model of the first object; and
modifying, by the processing system, the volumetric video to include the enhanced three-dimensional object model of the first object in the location of the first object within the space of the volumetric video.

US Pat. No. 10,771,762

IMAGE PROCESSING APPARATUS, IMAGE PICKUP APPARATUS, IMAGE PROCESSING METHOD, AND NON-TRANSITORY COMPUTER-READABLE STORAGE MEDIUM THAT CORRECT A PARALLAX IMAGE BASED ON A CORRECTION VALUE CALCULATED USING A CAPTURED IMAGE

Canon Kabushiki Kaisha, ...

1. An image processing apparatus comprising:(A) a memory storing instructions;
(B) one or more processors that execute the instructions stored in the memory; and
(C) an image processing circuit that, based on the instructions executed by the one or more processors, is configured to function as:
(a) an acquisition unit configured to acquire (i) a parallax image generated based on a signal from one of a plurality of photoelectric converters that receive light beams passing through partial pupil regions of an imaging optical system different from each other, and (ii) a captured image generated by combining a plurality of signals from the plurality of photoelectric converters;
(b) a determination unit configured to determine whether the parallax image contains a defect;
(c) an image processing unit configured (i) to calculate a correction value by using pixel values of the captured image corresponding to the defect determined by the determination unit, and (ii) to correct one or more pixel values of the parallax image that include the defect with the corresponding correction value; and
(d) a storage unit configured to store the parallax image corrected by the image processing unit in a storage medium.

US Pat. No. 10,771,761

INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD AND STORING UNIT

Canon Kabushiki Kaisha, ...

1. An information processing apparatus comprising:one or more memories storing instructions; and
one or more processors executing the instructions:
to determine, among a plurality of captured images obtained by a plurality of cameras, a display target image related to a virtual viewpoint image, based on a position of a virtual viewpoint and a view direction from the virtual viewpoint, the virtual viewpoint image being generated based on the display target image and the position of the virtual viewpoint and the view direction from the virtual viewpoint; and
to cause a displaying unit to display the determined display target image in a displaying mode according to a degree of contribution of the determined display target image to generation of the virtual viewpoint image.

US Pat. No. 10,771,760

INFORMATION PROCESSING DEVICE, CONTROL METHOD OF INFORMATION PROCESSING DEVICE, AND STORAGE MEDIUM

Canon Kabushiki Kaisha, ...

1. An information processing apparatus comprising:one or more hardware processors; and
one or more memories which store instructions executable by the one or more hardware processors to cause the information processing apparatus to perform at least:
determining a scene regarding which a virtual viewpoint image is to be generated, the scene being included in an event captured by a plurality of imaging apparatuses that obtain captured images for generating the virtual viewpoint image; and
determining, based on the determined scene, a view direction and a position of a virtual viewpoint corresponding to the virtual viewpoint image to be generated.

US Pat. No. 10,771,759

METHOD AND APPARATUS FOR TRANSMITTING DATA IN NETWORK SYSTEM

Samsung Electronics Co., ...

1. A method for transmitting three-dimensional (3D) content, the method comprising:identifying, by a sending entity, a guide message including guide information on at least a portion of the 3D content to be displayed in a receiving entity; and
transmitting, to the receiving entity, the guide message,
wherein the guide information comprises flag information indicating whether to present only a guide region associated with the guide information in the receiving entity and type information on a type relating to the guide region.

US Pat. No. 10,771,758

IMMERSIVE VIEWING USING A PLANAR ARRAY OF CAMERAS

Intel Corporation, Santa...

1. A system for generating a virtual view from multi-view images comprising:a memory to store a plurality of planar images of a scene; and
a processor coupled to the memory, the processor to:
attain a first planar image representative of the scene based on the plurality of planar images;
determine, based on a viewer position relative to a display region, a first crop position of the first planar image and a second crop position of the first planar image;
crop the first planar image to a cropped planar image to fill the display region based on the first and second crop positions, wherein the first and second crop positions define an asymmetric frustum of the first planar image corresponding to a virtual window representing the display region, the first crop position is at a position away from a midpoint of the first planar image toward a first edge of the first planar image by a ratio of a product of a focal length corresponding to the first planar image and a lateral position of virtual viewer position from the first edge of the virtual window to a distance of the virtual viewer position from the virtual window, and the virtual viewer position corresponds to the viewer position; and
provide the cropped planar image for presentation to the viewer.
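
The crop positions define an asymmetric frustum: each position is offset from the image midpoint toward an edge by the focal length times the viewer's lateral distance from that edge of the virtual window, divided by the viewer's distance from the window. A worked numeric sketch follows; the symmetric treatment of the opposite edge and every numeric value are assumptions for illustration.

```python
# Worked sketch of the crop-position ratio; the symmetric treatment of the
# opposite edge and every numeric value are assumptions for illustration.
def crop_offset(focal_length: float, lateral_from_edge: float, viewer_distance: float) -> float:
    """Offset from the image midpoint toward an edge, per the claimed ratio."""
    return focal_length * lateral_from_edge / viewer_distance

def crop_positions(image_width: float, window_width: float,
                   focal_length: float, viewer_x: float, viewer_distance: float):
    mid = image_width / 2.0
    first = mid - crop_offset(focal_length, viewer_x, viewer_distance)                   # toward left edge
    second = mid + crop_offset(focal_length, window_width - viewer_x, viewer_distance)   # toward right edge
    return first, second

if __name__ == "__main__":
    # Viewer 0.4 m from the left edge of a 1.0 m wide virtual window, 0.8 m away from it.
    print(crop_positions(image_width=1920, window_width=1.0,
                         focal_length=1000.0, viewer_x=0.4, viewer_distance=0.8))
    # -> (460.0, 1710.0): an asymmetric frustum about the image midpoint at 960
```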

US Pat. No. 10,771,757

METHOD AND APPARATUS FOR STEREOSCOPIC FOCUS CONTROL OF STEREO CAMERA

MEDIATEK INC., Hsinchu (...

1. A control apparatus for controlling a camera, the control apparatus comprising:processing circuitry configured to:
receive sets of images generated by the camera with corresponding focal setting values, each set of images including a first image and a second image, and each focal setting value including a value for controlling focus of the camera when the corresponding image is taken;
establish a mapping relation that associates a disparity value with at least one focal setting value that are paired based on analyzing disparity values of the sets of images and the corresponding focal setting values; and
control the focus of the camera according to the established mapping relation.

US Pat. No. 10,771,756

SIGNAL TRANSMISSION DEVICE AND SIGNAL TRANSMISSION METHOD

SONY CORPORATION, Tokyo ...

1. A signal transmission device, comprising:a first processor configured to:
receive an image signal;
determine a quantization error based on a black-level error amount and a clamp correction amount of the image signal, wherein the black-level error amount is associated with a black level of the image signal, and the clamp correction amount is based on the black-level error amount; and
notify a stage subsequent to a bit precision constraint region of the quantization error generated in a quantization process on a signal to be transmitted via the bit precision constraint region, wherein the bit precision constraint region is a region where a bit precision constraint occurs; and
a second processor configured to:
receive the quantization error; and
execute, based on the received quantization error, an inverse quantization process on the signal transmitted via the bit precision constraint region.

US Pat. No. 10,771,755

IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND PROGRAM

Sony Corporation, Tokyo ...

1. An image processing apparatus comprising a signal correction unit that:accepts a subsampled image obtained by copying an adjacent pixel value to a pixel value thinning position, as an input image;
calculates a correction target signal global gain which is a ratio between a slope of a correction target signal and a slope of a luminance signal of a global region made up of a plurality of consecutive pixels including a pixel position of a correction target pixel of the input image;
calculates a luminance local slope which is a slope of the luminance signal of a local region smaller than the global region, the local region being a region within the global region, including the pixel position of the correction target pixel of the input image; and
calculates a corrected pixel value of the correction target pixel by multiplying the luminance local slope by the correction target signal global gain and applying a multiplication result.
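
The corrected pixel value is formed from the product of the local luminance slope and the global gain (the ratio of the correction-target slope to the luminance slope over a wider region). A 1-D sketch follows; the window sizes, the finite-difference notion of "slope", and adding the product to the copied pixel value are assumptions about one plausible reading of the claim.

```python
# 1-D sketch of the slope-ratio correction; the window sizes, the use of simple
# finite differences as the "slope", and adding the product to the copied pixel
# value are assumptions about one plausible reading of the claim.
import numpy as np

def slope(signal: np.ndarray) -> float:
    """Average first difference over a region, used here as the slope."""
    return float(np.mean(np.diff(signal))) if signal.size > 1 else 0.0

def corrected_value(target: np.ndarray, luma: np.ndarray, pos: int,
                    global_half: int = 8, local_half: int = 2) -> float:
    g0, g1 = max(0, pos - global_half), min(target.size, pos + global_half + 1)
    l0, l1 = max(0, pos - local_half), min(target.size, pos + local_half + 1)
    global_gain = slope(target[g0:g1]) / (slope(luma[g0:g1]) or 1e-9)   # target slope / luma slope
    luma_local_slope = slope(luma[l0:l1])
    return target[pos] + luma_local_slope * global_gain                 # apply the multiplication result

if __name__ == "__main__":
    luma = np.linspace(0.0, 1.0, 32)
    chroma = 0.5 * luma
    chroma[17] = chroma[16]   # subsampled input: adjacent pixel value copied to the thinning position
    print(round(corrected_value(chroma, luma, pos=17), 4))   # close to the true 0.5 * luma[17]
```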

US Pat. No. 10,771,754

IMAGE WHITE BALANCE CORRECTION METHOD AND ELECTRONIC DEVICE

SAMSUNG ELECTRONICS CO., ...

1. A method of correcting a white balance (WB) of an image, the method comprising:obtaining a first image captured by photographing a subject when a flash emits light, and a second image captured by photographing the subject when the flash emits no light;
obtaining a WB gain of the first image and a WB gain of the second image;
obtaining a color balance (CB) of flash light representing a strength of each color component of the flash by:
obtaining a first luminance value corresponding to the first image and a second luminance value corresponding to the second image based on an exposure control value of each of the first image and the second image, wherein the exposure control value includes an aperture value, a shutter velocity, or sensor sensitivity;
obtaining a first luminance difference corresponding to the first image and a second luminance difference corresponding to the second image based on color components of each of the first image and the second image;
obtaining a first representative luminance value corresponding to the first image, based on a sum of the first luminance value and the first luminance difference;
obtaining a second representative luminance value corresponding to the second image, based on a sum of the second luminance value and the second luminance difference;
obtaining an influence of the flash light on the first image based on the first representative luminance value and the second representative luminance value; and
obtaining the CB of the flash light, based on the influence of the flash light on the first image and the obtained WB gains of the first and second images; and
correcting a WB of the first image, based on the CB of the flash light.

US Pat. No. 10,771,753

CONTENT PRESENTATION METHOD, CONTENT PRESENTATION MODE PUSH METHOD, AND INTELLIGENT TERMINAL

HUAWEI TECHNOLOGIES CO., ...

1. A content presentation method, comprising:storing, by an intelligent terminal, a plurality of preset rule policies indicating an association between a plurality of user use scenarios and a plurality of presentation modes;
causing, by the intelligent terminal, content to be displayed to the user in a second presentation mode;
acquiring, by the intelligent terminal, context data of the intelligent terminal after causing the content to be displayed, wherein the context data is associated with at least one of an environment around the intelligent terminal or a status of the intelligent terminal;
identifying, by the intelligent terminal, a plurality of user use scenarios according to the context data by:
searching for scenario configuration information, wherein the scenario configuration information comprises a correspondence between a context data threshold range and the user use scenario;
determining whether the context data changes compared with context data acquired previously; and
identifying the plurality of user use scenarios according to the context data based on the scenario configuration information and based on whether the context data changes compared with the context data acquired previously, wherein each of the plurality of user use scenarios indicates a scene in which the user currently uses an intelligent device;
matching, by the intelligent terminal, each of the plurality of user use scenarios to a preset rule policy, wherein the preset rule policy indicates an association between a user use scenario and a presentation mode indicating a mode of presenting the content to the user;
storing, by the intelligent terminal, a plurality of preset rule policies that are ordered according to a priority associated with each of the plurality of preset rule policies;
determining, by the intelligent terminal, the priority for the preset rule policy corresponding to each of the plurality of user use scenarios;
selecting, by the intelligent terminal, the presentation mode corresponding to the preset rule policy having a highest priority after determining the priority for the preset rule policy corresponding to each of the plurality of user use scenarios, wherein the presentation mode is determined in response to receiving a selection of the presentation mode from the user; and
causing, by the intelligent terminal, the content to be displayed to the user in the presentation mode.

US Pat. No. 10,771,752

DISPLAY SYSTEM, CONTROL DEVICE, CONTROL METHOD FOR DISPLAY SYSTEM, AND COMPUTER PROGRAM

SEIKO EPSON CORPORATION, ...

1. A display system comprising:a projector; and
a control device coupled to the projector, wherein the projector includes:
a correcting section configured to apply distortion correction to input image data to generate corrected image data; and
a projecting section configured to project a corrected image based on the corrected image data, and
the control device includes:
a generating section configured to apply, based on correction data indicating content of the distortion correction acquired from the projector, processing including the distortion correction to reference image data indicating a reference image including a plurality of lattice points to generate preview image data indicating a preview image;
a display section configured to display the preview image based on the preview image data;
an accepting section configured to accept enlarging operation for the preview image, selecting operation for selecting a lattice point set as a correction target among a plurality of lattice points included in the preview image to be enlarged, and changing operation for changing a position of the lattice point to be selected; and
a transmitting section configured to generate, according to the changing operation, a changing command for changing the distortion correction in the correcting section and transmit the changing command to the projector.

US Pat. No. 10,771,751

PROJECTION IMAGE ADJUSTMENT SYSTEM AND PROJECTION IMAGE ADJUSTMENT METHOD

PANASONIC INTELLECTUAL PR...

1. A projection image adjustment system, comprising: a plurality of projectors including a first projector, a second projector and a third projector, each of the first, second and third projectors configured to project a first projection image, a second projection image and a third projection image, respectively, on a projection screen, the first projector, the second projector and the third projector being distinct from each other;
a first camera configured to capture a plurality of first projection images projected on the projection screen, the plurality of first projection images including at least the first projection image;
a second camera configured to capture a plurality of second projection images projected on the projection screen, the plurality of second projection images including at least the third projection image which is not included in the plurality of first projection images captured by the first camera;
a display unit configured to display an image captured by the first camera or the second camera; and
a controller configured to control the plurality of projectors, the first camera, and the second camera,
wherein the controller is further configured to:
accept a selection of one of the first camera and the second camera as a selected camera;
set a first charge-of-shoot region and a second charge-of-shoot region, the first camera being in charge of shooting the first charge-of-shoot region including the first projection image being projected by the first projector and the second projection image being projected by the second projector, the second camera being in charge of shooting the second charge-of-shoot region including the second projection image being projected by the second projector and the third projection image being projected by the third projector;
project a first format pattern by using the plurality of projectors that project the plurality of the first or the second projection images captured by the selected camera;
display on the display unit an image of the first charge-of-shoot region or an image of the second charge-of-shoot region, captured by the selected camera;
select a camera layout for the first camera and the second camera based on a layout configuration of the first projector, the second projector and the third projector, and set the first charge-of-shoot region and the second charge-of-shoot region based on the camera layout; and
compute geometric correction data and transmit the geometric correction data to the plurality of projectors.

US Pat. No. 10,771,750

PROJECTOR

SEIKO EPSON CORPORATION, ...

1. A projector to be installed in a ceiling of a building having a hanging ceiling structure, the projector comprising:a projection section that projects image light, the projection section including a light source, a light modulator that modulates light emitted from the light source to produce the image light, and a projection system that projects the image light from the light modulator;
a power supply section that supplies the projection section with electric power;
a first enclosure that accommodates the projection section; and
a second enclosure that accommodates the power supply section, wherein
the second enclosure is configured to be fixed to a support member that is supported by a building frame of the building, the second enclosure is not fixed to the ceiling, the second enclosure is directly fixed to the support member, and the first enclosure is indirectly fixed to the support member via the second enclosure,
a partitioning surface of the ceiling is configured to be supported by the support member and to partition a space into a first space that is an indoor space of the building and a second space that is a space behind the ceiling of the building,
the first enclosure and the second enclosure are configured to be separate from each other,
at least part of the first enclosure is disposed in the second space such that a weight of the first enclosure on the first space is reduced,
a projection port formed in the first enclosure is exposed through an opening provided in the partitioning surface,
the projection section projects the image light through the projection port into the first space, and
the first enclosure is fixed to the second enclosure.

US Pat. No. 10,771,749

ELECTRONIC APPARATUS, DISPLAY SYSTEM, AND CONTROL METHOD OF ELECTRONIC APPARATUS

SEIKO EPSON CORPORATION, ...

1. An electronic apparatus comprising:a communication unit that communicates with a display apparatus; and
a processor programmed to
cause the communication unit to communicate with the display apparatus to acquire information of the display apparatus;
set a setting value based on the information of the display apparatus;
detect an operation of a pointing device;
cause the communication unit to transmit operation data to the display apparatus based on the operation, the operation data being data for designating a display position of a pointer displayed by the display apparatus according to the operation; and
obtain the display position after a movement of the pointer corresponding to the operation based on the setting value.

US Pat. No. 10,771,748

SYSTEM AND METHOD FOR INTERACTIVE AERIAL IMAGING

METROPOLITAN LIFE INSURAN...

1. A system for providing aerial images or video, comprising:a server operable to present a user platform on a user mobile device to a user, wherein the server is configured to receive an image request via the user platform from the user mobile device when the user is at an event location, the image request including a current location of the user mobile device; and
an image system on an airship flying above the event location, the image system in communication with the server via a ground station, including:
a plurality of cameras mounted on an exterior of the airship at a predetermined angle from each other, the plurality of cameras utilized concurrently to generate images with a combined field of view;
a communications package capable of real-time or near real-time transmission with the ground station; and
a processor coupled to the plurality of cameras and the communications package, the processor configured to:
receive an instruction from the server to process the image request from the user, the instruction includes data indicating the current location of the user mobile device provided by the user mobile device;
instruct the plurality of cameras to capture images of the current location of the user mobile device provided in the image request, the captured images including a first image captured by a first camera and a second image captured by a second camera; and
instruct the communications package to transmit the images to the server which:
combines the first image captured by the first camera and the second image captured by the second camera into a stitched panoramic image with the combined field of view from an aerial vantage point over the event location,
processes the data included in the instruction that indicates the current location of the user mobile device,
identifies a point within the stitched panoramic image that represents the current location of the user mobile device based on the processed data; and
flags the identified point within the stitched panoramic image for display on the user mobile device so that the user views the current location of the user mobile device from the aerial vantage point.

US Pat. No. 10,771,747

IMAGING APPARATUS AND IMAGING SYSTEM

Canon Kabushiki Kaisha, ...

1. An imaging apparatus which is capable of using H.264 and H.265 as a coding method of coding a captured image and of communicating, via a network, with a first external apparatus configured to use a first command set that supports H.264 and does not support H.265 and a second external apparatus configured to use a second command set that supports H.264 and H.265, the imaging apparatus comprising:a request reception unit configured to receive a request command for requesting information indicating a coding method usable by the imaging apparatus from at least one of the first and the second external apparatuses;
a transmission unit configured to transmit a response indicating H.264 and not indicating H.265 in response to the request command, in a case where the request command received by the request reception unit is the command of the first command set;
an information reception unit configured to receive information specifying the coding method from among options including at least H.264 and H.265 via the network; and
a change unit configured to change, from H.264 to H.265, a configuration set on the imaging apparatus as a coding method usable by the imaging apparatus if information specifying H.265 as the encoding method is received by the information reception unit in a state where H.264 is set as the configuration.

US Pat. No. 10,771,746

SYSTEM AND METHOD FOR SYNCHRONIZING CAMERA FOOTAGE FROM A PLURALITY OF CAMERAS IN CANVASSING A SCENE

Probable Cause Solutions ...

1. A method for synchronizing camera footage from a plurality of cameras, the method comprising:providing a database associated with a plurality of cameras, accessible via a user device, wherein the database includes a correction associated with each of the plurality of cameras to at least one of a date and a time in metadata associated with footage recorded by the corresponding camera, to facilitate synchronization of the footage recorded by each of the plurality of cameras to an actual date and time;
applying, by a processing device, the correction stored in the database associated with at least one camera of the plurality of cameras to determine at least one of an adjusted date and an adjusted time in the metadata for the at least one of the plurality of cameras corresponding to footage recorded at a particular date and time; and
synchronizing, by the processing device, the footage of the at least one of the plurality of cameras to the particular date and time, based on the at least one of the adjusted date and the adjusted time determined from the correction to the metadata stored in the database for the at least one of the plurality of cameras.
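
The database stores, per camera, a correction to the date and time in the footage metadata, and synchronization amounts to applying that correction before comparing timestamps. A minimal sketch follows, modelling the database as a dictionary of signed per-camera offsets in seconds; the camera identifiers and the tolerance are illustrative.

```python
# Minimal sketch, modelling the database as a dictionary of signed per-camera
# offsets in seconds; camera identifiers and the tolerance are illustrative.
from datetime import datetime, timedelta

CORRECTION_DB = {"lobby_cam": -125, "garage_cam": 3642}   # seconds to add to recorded metadata time

def adjusted_time(camera_id: str, recorded: datetime) -> datetime:
    """Apply the stored correction so the footage lines up with the actual time."""
    return recorded + timedelta(seconds=CORRECTION_DB[camera_id])

def frames_at(camera_id: str, target: datetime, footage: dict, tolerance_s: int = 1) -> list:
    """Return frames whose corrected timestamps fall at the requested actual time."""
    return [f for t, f in footage.items()
            if abs((adjusted_time(camera_id, t) - target).total_seconds()) <= tolerance_s]

if __name__ == "__main__":
    target = datetime(2020, 3, 14, 15, 9, 0)
    lobby_footage = {datetime(2020, 3, 14, 15, 11, 5): "frame_0042.jpg"}
    print(frames_at("lobby_cam", target, lobby_footage))   # ['frame_0042.jpg']
```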

US Pat. No. 10,771,745

IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND STORAGE MEDIUM HAVING PROGRAM STORED THEREIN

NEC CORPORATION, Minato-...

1. An image processing apparatus comprising:at least one memory that stores a set of instructions; and
at least one processor configured to execute the set of instructions to:
determine a first region based on an occurrence position where a phenomenon is estimated to occur in an image;
set a first condition in which the first region is encoded, and a second condition in which a second region included in the image and being a region other than the first region is encoded in such a way that the first region enhances image quality as compared with the second region;
estimate the occurrence position, based on data acquired by a sensor; and
specify a direction of a sound source, based on sound data collected by two or more microphones, and determine a position in the image being indicated by the direction of the sound source, as the occurrence position.

US Pat. No. 10,771,744

PHOTOGRAPHY CONTROL METHOD, PHOTOGRAPHY CONTROL SYSTEM, AND PHOTOGRAPHY CONTROL SERVER

PANASONIC INTELLECTUAL PR...

1. A photography control method of a photography control system for extracting at least one set of photography data of a subject from a plurality of sets of photography data a) obtained between a time the subject entered and exited a pavilion with a photographing camera i) that is installed facing a photography spot ii) that starts photographing automatically in response to acquiring first subject entrance information on the subject with a first device at the pavilion entrance, and iii) that stops photographing in response to acquiring first subject exit information with a second device at the pavilion exit, and elapsing of a predetermined time, and b) by extracting the at least one set of photography data from the plurality of sets of photography data when the at least one set of photography data was taken during the time period between the acquiring of the first subject entrance information at the pavilion entrance and the acquiring of the first subject exit information at the pavilion exit, and when at least one set of photography data contains an optically readable code matching identification information identifying the subject acquired by the first and second devices at the pavilion entrance and pavilion exit, respectively, the method comprising:i) acquiring the first subject entrance information indicating the entrance time at which a subject has entered an area including the photography spot with the first device at the pavilion entrance;
ii) acquiring the first subject exit information indicating the exit time at which the subject has exited the area including the photography spot with the second device at the pavilion exit located at a location different from the first device;
iii) photographing, using the photographing camera, the photography spot to obtain the plurality of sets of photography data by
automatically starting photographing of the subject at the photography spot in response to detection by the first device of the first subject entrance information at the pavilion entrance, and
automatically stopping photographing in response to
detecting by the second device of the first subject exit information at the pavilion exit, and
elapsing of a predetermined time;
iv) saving the plurality of sets of photography data, taken by the photographing camera, in a first photography data storage unit in association with identification information of the subject acquired by the first device;
v) acquiring the identification information identifying the subject with the first and second devices;
vi) determining the time period between the time the first device detects entry of the subject into the area and the time the second device detects exiting of the subject from the area;
vii) extracting, from the plurality of sets of photography data saved in the first photography data storage unit, the at least one set of photography data taken during the determined time period between the time the first device detects entry of the subject into the pavilion and the time the second device detects exiting of the subject from the pavilion, corresponding to the identification information acquired in the acquiring of identification information by
extracting the at least one set of photography data from the plurality of sets of photography data in response to
taking the at least one set of photography data during the time period between the time the first device detects entry of the subject at the pavilion entrance and the time the second device detects exit of the subject at the pavilion exit when the photographing camera is automatically started and stopped in response to acquiring of the first subject entrance and exit information at the pavilion entrance and exit, respectively, and
reading the optically readable code with a third device different from the first and second devices, and matching the optically readable code with the identification information of the subject in the first photography data storage unit; and
viii) saving the extracted at least one set of photography data in a second photography data storage unit in a manner associated with the identification information of the subject.
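
As a rough sketch of the extraction step vii) above (not the patented system), the selection could combine a time-window test with a match between the optically readable code found in each shot and the subject's identification information. The field names and the decode_code callable (standing in for the third device's code reader) are assumptions.

```python
def extract_photography_data(photos, subject_id, entered_at, exited_at, decode_code):
    """Select shots taken while the subject was inside the pavilion whose
    optically readable code (e.g. a badge QR code) matches the subject ID.

    `photos` is assumed to be a list of dicts with "taken_at" and "image"
    fields; `decode_code` stands in for the third device's code reader."""
    selected = []
    for photo in photos:
        taken_in_window = entered_at <= photo["taken_at"] <= exited_at
        code = decode_code(photo["image"])
        if taken_in_window and code == subject_id:
            selected.append(photo)
    return selected
```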

US Pat. No. 10,771,743

APPARATUS, SYSTEM AND METHOD FOR A WEB-BASED INTERACTIVE VIDEO PLATFORM

Shoutpoint, Inc., Newpor...

1. A computer implemented method for broadcasting live videos to a plurality of attendees during a live event, the method comprising, under the control of one or more processors:establishing an online session associated with a live event, the online session configured to be broadcast to a plurality of attendees;
determining a first broadcaster and a second broadcaster of the live event;
identifying a scheduled time of the live event;
determining a countdown to the scheduled time;
generating, before the event, a first graphical user interface comprising a preview screen with a status of the live event;
accessing a current list of attendees of the online session;
determining identifying information of the attendees in the current list of attendees;
updating the first graphical user interface with the identifying information of the attendees in the current list of attendees;
transmitting the first graphical user interface to the attendees in the current list of attendees; and
in response to a live event initiation request:
generating a second graphical user interface comprising:
a first broadcaster graphical user interface element configured to stream, during the live event, a first live video associated with the first broadcaster; and
a second broadcaster graphical user interface element configured to stream, during the live event, a second live video associated with the second broadcaster;
transmitting the second graphical user interface to the plurality of attendees; and
broadcasting, in real-time, the first live video of the first broadcaster and the second live video of the second broadcaster via the second graphical user interface during the online event.
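
A small illustrative sketch of the pre-event preview screen described above: a countdown to the scheduled start plus the current attendee list. The event and attendee field names are assumptions, and timestamps are assumed to be timezone-aware.

```python
from datetime import datetime, timedelta, timezone

def build_preview_ui(event: dict, attendees: list[dict]) -> dict:
    """Pre-event preview screen: a countdown to the scheduled start time and
    the identifying information of the current attendees (fields assumed)."""
    now = datetime.now(timezone.utc)
    countdown = max(event["scheduled_at"] - now, timedelta(0))
    return {
        "status": "starting soon" if countdown.total_seconds() > 0 else "live",
        "countdown_s": int(countdown.total_seconds()),
        "attendees": [a["display_name"] for a in attendees],
    }
```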

US Pat. No. 10,771,742

DEVICES WITH ENHANCED AUDIO

APPLE INC., Cupertino, C...

1. A method for directing audio of a computing device, comprising:determining if there is a plurality of chat instances on a display of the computing device;
determining whether the plurality of chat instances are arranged in different locations on the display of the computing device;
determining a chat instance of the plurality of chat instances that correlates to an audio instance;
determining a location on the display of the computing device of the chat instance that correlates to the audio instance; and
modifying sound output from a plurality of audio output devices of the computing device to be directed from or appear to be directed from the determined location of the chat instance that correlates to the audio instance.
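
One way to make sound appear to come from the on-screen location of a chat instance is constant-power stereo panning driven by the window's horizontal position; the sketch below is only an illustration of that idea, and the window-rectangle fields are assumptions.

```python
import math

def pan_gains(window_center_x: float, display_width: float) -> tuple[float, float]:
    """Constant-power stereo panning: returns (left_gain, right_gain) so audio
    appears to originate from the given horizontal position on the display."""
    pan = min(max(window_center_x / display_width, 0.0), 1.0)  # 0 = left edge
    angle = pan * math.pi / 2
    return math.cos(angle), math.sin(angle)

def place_chat_audio(chat_windows: dict, active_chat_id: str, display_width: float):
    """Pick the chat window correlated with the incoming audio instance and
    derive output gains from its on-screen location (fields are assumed)."""
    rect = chat_windows[active_chat_id]          # e.g. {"x": 1200, "width": 400}
    center_x = rect["x"] + rect["width"] / 2
    return pan_gains(center_x, display_width)
```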

US Pat. No. 10,771,741

ADDING AN INDIVIDUAL TO A VIDEO CONFERENCE

International Business Ma...

1. A computer-implemented method for adding an individual to a video conference, comprising:capturing a first video stream of a user from a first camera of a user device and providing the first video to a device of at least one video conference participant other than the user;
actively scanning, while the user is participating in the video conference, an environment where the user is located using a second camera to capture a second video stream from the second camera;
analyzing the second video stream captured by the second camera to determine whether an individual appears in the second video stream;
in response to detecting the individual in the second video stream, determining an identity for the individual;
prompting the user to confirm adding the individual to the video conference; and
in response to receiving the confirmation from the user to add the individual to the video conference, adding the second video stream to the ongoing video conference, thereby adding the individual to the video conference.
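
A hedged sketch of the detection-and-confirmation loop in this claim. The detect_person, identify, prompt_user, and conference objects are stand-ins for the device's detection, recognition, UI, and conferencing services, not any particular library's API.

```python
def monitor_second_camera(frames, detect_person, identify, prompt_user, conference):
    """Scan the second camera's stream while the conference is running; when an
    individual is detected and the user confirms, attach the second stream.

    All callables and the `conference` object are illustrative assumptions."""
    for frame in frames:
        if not detect_person(frame):
            continue
        identity = identify(frame)                      # e.g. face recognition
        if prompt_user(f"Add {identity} to the video conference?"):
            conference.add_stream(source="second_camera", participant=identity)
            return identity
    return None
```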

US Pat. No. 10,771,740

ADDING AN INDIVIDUAL TO A VIDEO CONFERENCE

International Business Ma...

1. A computer system for adding an individual to a video conference, comprising:one or more processors, one or more computer-readable memories, one or more computer-readable tangible storage media, and program instructions stored on at least one of the one or more computer-readable tangible storage media for execution by at least one of the one or more processors via at least one of the one or more computer-readable memories, wherein the computer system is capable of performing a method comprising:
capturing a first video stream of a user from a first camera of a user device and providing the first video to a device of at least one video conference participant other than the user;
actively scanning, while the user is participating in the video conference, an environment where the user is located using a second camera to capture a second video stream from the second camera;
analyzing the second video stream captured by the second camera to determine whether an individual appears in the second video stream;
in response to detecting the individual in the second video stream, determining an identity for the individual;
prompting the user to confirm adding the individual to the video conference; and
in response to receiving the confirmation from the user to add the individual to the video conference, adding the second video stream to the ongoing video conference, thereby adding the individual to the video conference.

US Pat. No. 10,771,739

INFORMATION PROCESSING DEVICE AND INFORMATION PROCESSING METHOD

Sony Corporation, (JP)

1. An information processing device comprising:at least one processor configured to:
transmit, based on a start condition set by at least one mobile device, acceptance information to at least one display device via a server, wherein the acceptance information allows the at least one display device to acquire an image that is captured by an imaging unit of the at least one mobile device;
receive, from the at least one display device which has received the acceptance information, application for connection to the at least one mobile device,
control, based on the received application, the at least one mobile device to start distribution of the image to the display device, and
allow, based on permission level settings, one or more actions performed at the display device to control movement of a body part of a user at the information processing device through a drive unit at the information processing device.

US Pat. No. 10,771,738

METHODS AND SYSTEMS FOR MULTI-PANE VIDEO COMMUNICATIONS

POPIO IP HOLDINGS, LLC, ...

1. A method comprising:establishing a first connection between a mobile device and a support terminal;
conducting a video chat between the mobile device and the support terminal transmitted through the first connection;
providing the video chat on a display screen of the mobile device;
receiving a remote storage location of a display element pushed through a second connection by the support terminal while providing the video chat on the display screen of the mobile device;
providing, on the display screen of the mobile device, a selectable option to accept the display element in response to receiving the remote storage location of the display element and during the video chat;
in response to receiving a user interaction with the selectable option to accept the display element and during the video chat, retrieving the display element from the remote storage location; and
based on retrieving the display element from the remote storage location:
providing the video chat in a first pane displayed on the display screen of the mobile device; and
providing the display element in a second pane displayed on the display screen of the mobile device concurrently while providing the video chat in the first pane.
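
A minimal sketch of the receive-accept-retrieve flow on the mobile device, assuming the pushed message carries a URL for the display element and that a hypothetical `ui` object manages the two panes; only urllib from the standard library is used for the fetch.

```python
import urllib.request

def handle_pushed_display_element(remote_url: str, user_accepts, ui):
    """During the video chat, offer the selectable accept option; on acceptance,
    retrieve the element from its remote storage location and show it in a
    second pane next to the chat (the `ui` methods are assumptions)."""
    ui.show_accept_option(remote_url)
    if not user_accepts():
        return None
    with urllib.request.urlopen(remote_url) as resp:   # fetch from remote storage
        display_element = resp.read()
    ui.show_video_chat(pane="first")
    ui.show_element(display_element, pane="second")    # concurrent with the chat
    return display_element
```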

US Pat. No. 10,771,737

TRANSITIONING A TELEPHONE NETWORK PHONE CALL TO A VIDEO CALL

FACEBOOK, INC., Menlo Pa...

1. A method, comprising:determining, by an electronic communication system, that a first user using a first client device is on a phone call over a telephone network with a second user using a second client device;
determining, by the electronic communication system, a first user identifier associated with the first user and a second user identifier associated with the second user;
identifying, by the electronic communication system, the first client device and the second client device;
generating, during the phone call, a null video call connection between the first user device and the second user device; and
providing, to the first client device after establishing the null video call connection, an option to switch the phone call to a video call.
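
As an illustration of the "null video call connection" idea (a signalling session established with no media so the upgrade is instant), the sketch below uses hypothetical directory and signalling services; none of the object names come from the patent.

```python
def prepare_video_upgrade(call, directory, signaling):
    """While a phone call is active, map both phone numbers to user identifiers
    and client devices, pre-establish a media-less video session between them,
    and offer the first device an option to switch (services are assumptions)."""
    user_a = directory.user_for_number(call.caller_number)
    user_b = directory.user_for_number(call.callee_number)
    device_a = directory.primary_device(user_a)
    device_b = directory.primary_device(user_b)

    session = signaling.open_session(device_a, device_b, media=None)  # no video yet
    device_a.show_option("Switch to video call?",
                         on_accept=lambda: session.enable_media("video"))
    return session
```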

US Pat. No. 10,771,736

COMPOSITING AND TRANSMITTING CONTEXTUAL INFORMATION DURING AN AUDIO OR VIDEO CALL

MICROSOFT TECHNOLOGY LICE...

1. A method for augmenting person-to-person communication, the method comprising:obtaining user settings for a receiving device;
initiating a call between a sending device and the receiving device, the call comprising a video stream;
determining, by the receiving device, contextual information to be sent with the video stream from the sending device during the call based upon the obtained user settings, the contextual information comprising an additional video stream;
receiving, by the receiving device, the contextual information separate from the video stream during the call; and
displaying, by the receiving device, the received contextual information separate from the video stream during the call.
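
A brief sketch of how the receiving device might use its settings to choose which contextual streams should accompany the main video; the settings key and stream fields are assumptions.

```python
def select_context_streams(user_settings: dict, available_streams: list[dict]) -> list[dict]:
    """From the receiving device's settings, pick the additional streams
    (e.g. a second camera or a screen share) to request alongside the call."""
    wanted = set(user_settings.get("context_sources", []))
    return [s for s in available_streams if s["name"] in wanted]
```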

US Pat. No. 10,771,735

DATA CABLE, ELECTRONIC SYSTEM AND METHOD FOR TRANSMITTING MIPI SIGNAL

MEDIATEK SINGAPORE PTE. L...

1. An electronic system, comprising:a first electronic device configured to generate at least one pair of MIPI (Mobile Industry Processor Interface) differential signals;
a HDMI data cable and a second electronic device connected to the first electronic device via the HDMI data cable;
wherein the HDMI data cable comprises at least one signal transmission path for transmitting the at least one pair of MIPI differential signals between the first electronic device and the second electronic device without converting the MIPI differential signals to other signal format, each signal transmission path transmits one pair of MIPI differential signals and performs impedance matching and shielded grounding processing,
wherein the shielded grounding processing comprises transmitting one pair of MIPI differential signals and a ground signal in one signal transmission path and providing a shielding layer for shielding the one signal transmission path in the HDMI data cable.

US Pat. No. 10,771,734

SYSTEM AND METHOD FOR SUPPORTING SELECTIVE BACKTRACKING DATA RECORDING

SZ DJI TECHNOLOGY CO., LT...

1. A data processing method, comprising:receiving data associated with a time sequence from one or more data sources from one or more capturing devices;
storing the received data in a memory;
removing a portion of the data stored in the memory after receiving a synchronization signal; and
forwarding the removed data to a storage, in response to a control signal for a time period in the time sequence, wherein the time period includes a past portion that equals a maximum image frame number associated with the memory divided by an image speed and a current portion that is a difference between the time period and the past portion.
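
The past/current split in this claim lends itself to a ring-buffer illustration: the past portion is at most MAX_FRAMES / FRAME_RATE seconds of already-buffered data, and the current portion is whatever remains of the requested time period. The constants and the storage object below are assumptions.

```python
from collections import deque

MAX_FRAMES   = 300                         # maximum image frame number in memory (assumed)
FRAME_RATE   = 30.0                        # image speed in frames per second (assumed)
PAST_SECONDS = MAX_FRAMES / FRAME_RATE     # past portion of the time period

class BacktrackingRecorder:
    """Keep the most recent frames in memory; on a control signal, flush that
    backlog to storage and report how much of the period remains to record."""

    def __init__(self, storage):
        self.buffer = deque(maxlen=MAX_FRAMES)   # oldest frames drop automatically
        self.storage = storage

    def on_frame(self, frame):
        self.buffer.append(frame)                # data from the capturing device

    def on_control_signal(self, time_period_s: float) -> float:
        past = list(self.buffer)                 # past portion already in memory
        self.buffer.clear()                      # removed from memory ...
        self.storage.write(past)                 # ... and forwarded to storage
        # Current portion: the difference between the time period and the past portion.
        return max(0.0, time_period_s - PAST_SECONDS)
```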

US Pat. No. 10,771,733

METHOD AND APPARATUS FOR PROCESSING VIDEO PLAYING

Hangzhou Hikvision Digita...

1. A method for processing video playing, comprising:obtaining a split screen mode switch instruction;
when the number of split screens corresponding to a target split screen mode in the split screen mode switch instruction is smaller than the number of split screens corresponding to a current split screen mode, determining that a first video(s) being played that is to be hidden exists;
if the first video(s) being played that is to be hidden exists, generating a hiding instruction for each of the first video(s); and
when the first video(s) is being recorded, acquiring and storing a bit stream of the first video(s) based on a current bit stream type of the first video(s), and stopping the decoding and rendering of the first video(s).
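
A small sketch of the split-screen switch logic: when the target mode has fewer panes than the current one, the surplus videos are hidden, their bit streams keep being stored if they are recording, and decoding/rendering stops. The video object's methods are assumptions.

```python
def switch_split_screen(current_videos: list, target_count: int) -> list:
    """Hide videos that lose their pane when switching to a smaller split-screen
    mode, preserving recording while stopping decode and render (methods assumed)."""
    if target_count >= len(current_videos):
        return []                                  # nothing needs to be hidden
    to_hide = current_videos[target_count:]        # first videos to be hidden
    for video in to_hide:
        video.hide()                               # hiding instruction
        if video.is_recording:
            video.store_bitstream(video.current_bitstream_type)
        video.stop_decoding()
        video.stop_rendering()
    return to_hide
```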

US Pat. No. 10,771,732

SYSTEM, IMAGING APPARATUS, INFORMATION PROCESSING APPARATUS, AND RECORDING MEDIUM

CANON KABUSHIKI KAISHA, ...

1. A system comprising an imaging apparatus and an information processing apparatus,wherein the imaging apparatus includes:
a first detection unit configured to detect a state which is regarded as a notification target among a plurality of states that the imaging apparatus can take;
a first control unit configured to perform control so that a mark corresponding to the state detected by the first detection unit is displayed on a first display unit; and
a transmission unit configured to transmit identification information corresponding to the state, which is regarded as the notification target, in response to the detection of the state by the first detection unit, and
the information processing apparatus includes:
a reception unit configured to receive the identification information transmitted by the transmission unit; and
a first notification unit configured to perform notification of first notification information on the state corresponding to the identification information received by the reception unit by a method different from displaying.

US Pat. No. 10,771,731

DISPLAY DEVICE

FUNAI ELECTRIC CO., LTD.,...

1. A display device comprising:a display panel;
a light source disposed on a rear side relative to the display panel;
an optical member disposed on the rear side relative to the display panel;
a rear chassis that houses the light source, the rear chassis including an outer peripheral portion with a flat component that extends outward relative to a center of the display device and a bent part that extends perpendicular to the flat component from an outer edge of the flat component, the bent part including a screw hole; and
a frame member fastened to the bent part of the rear chassis with a screw that is threaded into the screw hole on the bent part of the rear chassis through the frame member, the screw having a longitudinal center axis that extends in a direction that is perpendicular to a normal direction of a display surface of the display panel, with the normal direction being perpendicular to the display surface of the display panel.

US Pat. No. 10,771,730

DISPLAY APPARATUS

LG Display Co., Ltd., Se...

1. A display apparatus, comprising:a display panel comprising:
a display area configured to display an image; and
a non-display area;
at least one first sound generator overlapping the display area; and
at least one second sound generator overlapping the non-display area,
wherein each of the at least one first sound generator and the at least one second sound generator is configured to vibrate the display panel to generate sound toward a front of the display panel.

US Pat. No. 10,771,729

DUAL-SCREEN ELECTRONIC DEVICES

Hatar Tanin, LLC, Novato...

15. A portable electronic device comprising:a first housing;
a second housing coupled to the first housing;
a processor disposed in the first housing;
a first touch-sensitive surface mounted to the first housing and in electronic communication with the processor;
a first power supply in the first housing and configured to power the processor and the first touch-sensitive surface;
a second touch-sensitive surface disposed in the second housing, in electronic communication with the processor, and powered by the first power supply;
a second power supply disposed in the second housing and configured to power the first touch-sensitive surface and the second touch-sensitive surface;
and a first memory disposed in the first housing and in electronic communication with the processor, the first memory storing non-transitory instructions which, when executed by the processor, cause the portable electronic device to:
display a first user interface on the first touch-sensitive surface;
display a second user interface on the second touch-sensitive surface;
and change a display on the second touch-sensitive surface in response to detecting an input on the first touch-sensitive surface.

US Pat. No. 10,771,728

ELECTRONIC DEVICE, CONTROL METHOD, AND RECORDING MEDIUM FOR DISPLAYING IMAGES BASED ON DETERMINED STATE

KYOCERA CORPORATION, Kyo...

1. An electronic device, comprising:a display;
a camera;
a plurality of sensors; and
a controller configured to
determine a state of the electronic device on the basis of a detection result of a first sensor among the plurality of sensors,
cause the display to display, when the determined state is a first state, a first overlay image in which first sensor information based on a detection result of a second sensor among the plurality of sensors is overlaid on an image captured by the camera, and
cause the display to display, when the determined state is a second state, a second overlay image in which second sensor information based on the detection result of the second sensor among the plurality of sensors is overlaid on the image captured by the camera,
wherein
the second sensor information differs from the first sensor information,
the first sensor information and the second sensor information include numerical information and text information based on the detection result of the second sensor,
the text information differs between the first sensor information and the second sensor information, and
the controller is configured to
determine whether the electronic device is underwater on the basis of the detection result of the first sensor,
cause the display to display the first overlay image when the electronic device is determined not to be underwater, and
cause the display to display the second overlay image when the electronic device is determined to be underwater.
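
A hedged sketch of the controller logic: a first sensor decides whether the device is underwater, and the overlay built from a second sensor switches its text while keeping a numerical value. The sensor keys, the pressure threshold, and the returned overlay structure are all assumptions.

```python
def build_overlay(sensors: dict, camera_frame) -> dict:
    """Choose which sensor overlay to compose onto the camera image based on
    whether the device is determined to be underwater (sensors are assumed)."""
    underwater = sensors["pressure"].read() > 1.10   # first sensor, bar (assumed threshold)
    reading = sensors["depth_altitude"].read()       # second sensor, metres

    if underwater:
        label, value = "Depth", f"{reading:.1f} m"   # second overlay image
    else:
        label, value = "Altitude", f"{reading:.0f} m"  # first overlay image
    return {"frame": camera_frame, "label": label, "value": value}
```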

US Pat. No. 10,771,727

MONITORING SYSTEM WITH HEADS-UP DISPLAY

1. A monitoring system comprising:a heads-up display device;
a receiver;
a cable connecting the receiver to the heads-up display device; and
a transmitter configured to wirelessly transmit a video signal to the receiver,
wherein the receiver is configured to receive the video signal and to provide video information to the heads-up display device via the cable,
wherein the receiver includes a battery, and is configured to supply power to the heads-up display device via the cable,
wherein the receiver is configured to detach from the heads-up display device,
wherein the heads-up display device does not include a battery, and
wherein the heads-up display device does not include a CPU or an operating system.