US Pat. No. 11,070,931

LOUDSPEAKER ASSEMBLY AND CONTROL

HARMAN INTERNATIONAL INDU...


1. A speaker assembly comprising: a plurality of transducers disposed on a speaker body and configured to produce an audio beam that is steerable based on at least one audio beam parameter, wherein the plurality of transducers comprises: two woofer arrays having woofer transducers for producing a bass-range output; and
at least one cylindrical array of transducers having a higher-frequency acoustic output greater than that of the woofer transducers, the array of transducers disposed between the two woofer arrays and having a circumference less than the circumference of the speaker body at the woofer arrays, wherein the plurality of transducers define the audio beam having concentrated acoustic energy including the bass-range and higher-frequency audio outputs, wherein the at least one audio beam parameter varies to steer the audio beam; and

a light assembly arranged on the speaker body configured to produce a light output that varies as the audio beam is steered.

US Pat. No. 11,070,930

GENERATING PERSONALIZED END USER ROOM-RELATED TRANSFER FUNCTION (RRTF)

Sony Corporation, Tokyo ...


1. A system comprising: at least one computer medium that is not a transitory signal and that comprises instructions executable by at least one processor to:
receive indication of a desired origin of sound played on a sound bar;
identify a virtual location from whence the sound is emulated to originate; and
based on the desired origin and the virtual location, alter a first room related transfer function (RRTF) to establish a personalized RRTF.

US Pat. No. 11,070,929

LOW-COST HEARING AID PLATFORMS AND METHODS OF USE

GEORGIA TECH RESEARCH COR...


1. A device comprising: an electret microphone;
an amplifier;
a capacitor;
a printable circuit board (PCB), wherein the electret microphone, the amplifier, and the capacitor are disposed upon the PCB;
an audio output in electrical communication with the amplifier;
a power source in electrical communication with the PCB; and
non-processor gain control;
wherein the non-processor gain control is configured to control gain of the device free of signal processing from a signal processor; and
wherein a total harmonic distortion of the device, when the microphone is subjected to a 70 dB sound input, is less than 1% at each of 500 Hz, 800 Hz, and 1500 Hz.

US Pat. No. 11,070,928

HEARING DEVICE AND A METHOD FOR MANUFACTURING THEREOF


1. A hearing device to be worn in the ear of a user, said hearing device comprising a housing for accommodating an electronic component, wherein said housing is formed with an indentation, said indentation providing a space at an outer side of the housing, wherein said hearing device further comprises an antenna inserted into the indentation, wherein said housing is made of titanium as an electromagnetic wave shielding material and has a thickness in the range from 0.2 mm to 0.4 mm.

US Pat. No. 11,070,927

DAMPING IN CONTACT HEARING SYSTEMS

Earlens Corporation, Men...


1. A tympanic lens, comprising: a chassis;
a perimeter platform connected to the chassis;
a microactuator connected to the chassis through two bias springs positioned at a proximal end of the microactuator;
a damper separate from and attached to at least one of the bias springs;
an umbo platform attached to a distal end of the microactuator; and
a source of electrical signals mounted on said chassis and electrically connected to the microactuator through at least one wire,
wherein the damper comprises a viscoelastic material in contact with at least one of the bias springs, and
wherein the viscoelastic material is configured to become stiffer or more viscous as a vibration frequency of the tympanic lens increases.

US Pat. No. 11,070,926

HEARING DEVICE FOR RECEIVING LOCATION INFORMATION FROM WIRELESS NETWORK


1. A hearing aid comprising: a microphone for converting an audio input signal into a microphone output signal;
a processing unit configured to provide a processed output signal based on the microphone output signal for compensating a hearing loss of a user;
a receiver connected to the processing unit, the receiver configured to convert the processed output signal into an audio output signal; and
a wireless radio receiver unit connected to the processing unit, the wireless radio receiver unit configured to receive information from a wireless network;
wherein the processing unit is configured to determine a location of the wireless network based on the information received from the wireless network;
wherein the processing unit is configured to determine whether the determined location matches with a predetermined location stored in a memory of the hearing aid;
wherein the processing unit is configured to determine a sound processing profile based on the determined location of the wireless network;
wherein the processing unit is configured to provide the processed output signal based on the determined sound processing profile; and
wherein the hearing aid is configured to receive the information from the wireless network without using an electronic device of the user, and wherein the processing unit is configured to determine the location of the wireless network based on the information that is received by the hearing aid without using the electronic device of the user.
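The location-matching logic this claim recites can be sketched as a lookup from a received network identifier to a stored location and profile. The identifiers, profile names, and dictionary structure below are illustrative assumptions, not taken from the patent:

```python
# Sketch: the hearing aid's stored mapping of known wireless network
# identifiers to predetermined locations and sound processing profiles.
# All names here are hypothetical.
KNOWN_LOCATIONS = {
    "cafe-ap-01": ("cafe", "noise_suppression"),
    "home-ap": ("home", "quiet"),
}
DEFAULT_PROFILE = "general"

def select_profile(network_id: str) -> str:
    """Determine the network's location, check it against the stored
    predetermined locations, and pick a sound processing profile."""
    entry = KNOWN_LOCATIONS.get(network_id)
    if entry is None:          # no match: fall back to a default profile
        return DEFAULT_PROFILE
    location, profile = entry
    return profile

print(select_profile("cafe-ap-01"))  # -> noise_suppression
print(select_profile("unknown-ap"))  # -> general
```

The point of the claim is that this lookup runs on the hearing aid itself, using only information received directly from the network, without routing through the user's phone or other electronic device.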

US Pat. No. 11,070,924

METHOD AND APPARATUS FOR HEARING IMPROVEMENT BASED ON COCHLEAR MODEL

GOLDENEAR COMPANY, INC., ...


1. An apparatus for hearing measurement based on a cochlear model, comprising: a processor; and
a memory connected to the processor,
wherein the memory stores program instructions executable by the processor to output an interface in which n buttons corresponding to n frequency bands into which an audible frequency band is divided at 1/k octave resolution are arranged in a cochlear model,
output acoustic signals corresponding to a predetermined hearing threshold in each of the n frequency bands,
receive a user's input for whether each of the n frequency bands is inaudible at the predetermined hearing threshold, and
output acoustic stimulation signals corresponding to the inaudible frequency band input by the user in predetermined sizes.

US Pat. No. 11,070,923

METHOD FOR DIRECTIONAL SIGNAL PROCESSING FOR A HEARING AID AND HEARING SYSTEM

Sivantos Pte. Ltd., Singa...


1. A method for directional signal processing for a hearing aid, which comprises the steps of: generating, via a first input transducer of the hearing aid, a first input signal from a sound signal in an environment;
generating, via a second input transducer of the hearing aid, a second input signal from the sound signal in the environment;
generating a first calibration directional signal having a relative attenuation in a direction of a first useful signal source in the environment on a basis of the first input signal and on a basis of the second input signal;
generating a second calibration directional signal having a relative attenuation in a direction of a second useful signal source in the environment on a basis of the first input signal and on a basis of the second input signal;
determining a relative gain parameter on a basis of the first calibration directional signal and the second calibration directional signal;
generating a first processing directional signal and a second processing directional signal on a basis of both the first input signal and the second input signal;
generating a source-sensitive directional signal on a basis of the first processing directional signal, the second processing directional signal and the relative gain parameter; and
generating an output signal of the hearing aid on a basis of the source-sensitive directional signal.
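The steps above can be sketched numerically. The patent does not specify the beamformer design, so the first-order delay-and-subtract beams and the RMS-ratio gain parameter below are illustrative assumptions:

```python
import numpy as np

def directional(x1, x2, delay):
    """First-order differential beam from two microphone signals:
    attenuates a source whose inter-microphone delay (in samples,
    circular for this sketch) matches `delay`."""
    return x1 - np.roll(x2, delay)

def relative_gain(c1, c2):
    """Relative gain parameter from the two calibration directional
    signals, here simply the ratio of their RMS levels."""
    rms = lambda s: np.sqrt(np.mean(s ** 2))
    return rms(c1) / rms(c2)

rng = np.random.default_rng(0)
src = rng.standard_normal(1024)
x1 = src + 0.01 * rng.standard_normal(1024)              # first input signal
x2 = np.roll(src, 1) + 0.01 * rng.standard_normal(1024)  # second input: delayed copy

c1 = directional(x1, x2, -1)  # calibration beam with a null toward the source
c2 = directional(x1, x2, +1)  # calibration beam nulled toward another direction
g = relative_gain(c1, c2)     # relative gain parameter

# Source-sensitive directional signal: the two processing directional
# signals combined, weighted by the relative gain parameter.
y = directional(x1, x2, -1) + g * directional(x1, x2, +1)
```

Because `c1` is steered to attenuate the dominant source, its level (and hence `g`) stays small, so the combined signal `y` stays dominated by the beam pointed away from that source.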

US Pat. No. 11,070,922

METHOD OF OPERATING A HEARING AID SYSTEM AND A HEARING AID SYSTEM


1. A method of operating a hearing aid system used by a hearing aid user, comprising the steps of: providing an input signal representing an acoustical signal from an input transducer of the hearing aid system;
providing the input signal to an auditory nerve compressor;
selecting a minimum output level for the auditory nerve compressor, wherein the minimum output level represents a hearing threshold level;
selecting a maximum output level for the auditory nerve compressor:
from a range between 30 and 50 dB SL if an auditory neurodegeneration has been identified in said hearing aid user for both medium-spontaneous rate and low-spontaneous rate auditory nerve fibers; or
from a range between 50 and 80 dB SL if an auditory neurodegeneration has been identified in said hearing aid user only for low-spontaneous rate auditory nerve fibers,
defining a minimum input signal level and a maximum input signal level;
operating the auditory nerve compressor according to a compression characteristic wherein the minimum input signal level is mapped onto the minimum output level of the auditory nerve compressor, and wherein the maximum input signal level is mapped onto the maximum output level of the auditory nerve compressor; and
using an output signal derived from the auditory nerve compressor output signal to drive an electrical-acoustical output transducer of the hearing aid system.
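The compression characteristic in the last two steps amounts to a straight-line map in dB from the input range onto the selected output range. The input range and the 65 dB SL ceiling below are illustrative values, with the ceiling chosen from the 50 to 80 dB SL range recited for the low-spontaneous-rate-only case:

```python
def compressor_output(level_in, min_in=20.0, max_in=100.0,
                      min_out=10.0, max_out=65.0):
    """Auditory nerve compressor characteristic (sketch): a linear map,
    in dB, from [min_in, max_in] onto [min_out, max_out], so the minimum
    input signal level maps onto the minimum output level (the hearing
    threshold) and the maximum input level onto the maximum output level."""
    level_in = min(max(level_in, min_in), max_in)  # clamp to the input range
    frac = (level_in - min_in) / (max_in - min_in)
    return min_out + frac * (max_out - min_out)

print(compressor_output(20.0))   # minimum input -> 10.0 (hearing threshold)
print(compressor_output(100.0))  # maximum input -> 65.0 (selected ceiling)
```

Lowering `max_out` toward the 30 to 50 dB SL range models the stronger compression the claim prescribes when both medium- and low-spontaneous-rate fibers are affected.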

US Pat. No. 11,070,921

RECEIVER WITH INTEGRATED MEMBRANE MOVEMENT DETECTION

Sonion Nederland B.V., H...


1. A receiver for a hearable, said receiver comprising: a moveable membrane,
a motor configured to drive the moveable membrane to generate sound, the motor being rigidly connected to the moveable membrane, and
an arrangement, which is separate and distinct from the motor, configured to detect movements of the moveable membrane,
wherein the arrangement includes a pair of electrodes positioned on opposite sides of the moveable membrane, wherein each of the electrodes forms a capacitor with the moveable membrane.

US Pat. No. 11,070,920

DUAL FUNCTION TRANSDUCER

Apple Inc., Cupertino, C...


1. A dual function transducer assembly comprising: a magnet motor assembly comprising a first magnet plate and a second magnet plate arranged in parallel to one another along a first axis;
a sound output assembly coupled to the magnet motor assembly, the sound output assembly comprising a piston and a voice coil, and wherein the voice coil is arranged to cause a vibration of the piston in a direction parallel to the first axis; and
a shaker assembly coupled to the magnet motor assembly, the shaker assembly comprising a first shaker coil and a second shaker coil arranged to cause a vibration of the magnet motor assembly in a direction parallel to a second axis that is perpendicular to the first axis.

US Pat. No. 11,070,919

ACTIVE LOUDSPEAKER AND CABLE ASSEMBLY

Bose Corporation, Framin...


1. A cable assembly configured to electrically connect an active loudspeaker to a source device that comprises a source of electrical power and audio signals, comprising: a primary sheath;
a group of power conductors within the primary sheath and configured to carry electrical power;
a group of audio signal conductors within the primary sheath and configured to carry audio signals;
a secondary sheath within the primary sheath, wherein the group of power conductors is within the secondary sheath;
a tertiary sheath within the primary sheath and outside of the secondary sheath, wherein the group of audio signal conductors is within the tertiary sheath;
a cable-mount connector male plug comprising a first plurality of power pins that terminate a first end of each power conductor and a second plurality of audio signal pins that terminate a first end of each audio signal conductor, wherein the cable-mount connector plug is configured to be coupled to a panel mount connector female socket of the source device; and
a cable-mount connector female socket comprising a first plurality of power receptacles that terminate a second end of each power conductor and a second plurality of audio signal receptacles that terminate a second end of each audio signal conductor, wherein the cable-mount connector socket is configured to be coupled to a panel mount connector male plug of the active loudspeaker.

US Pat. No. 11,070,918

SOUND BAR WITH IMPROVED SOUND DISTRIBUTION

SSV WORKS, INC., Oxnard,...


1. A sound bar device, comprising: a sound bar body, said sound bar body comprising a sound bar body length and a sound bar body width, said sound bar body further comprising: a front face intended to emit sound;
a vertical spatial deviation point on said front face dividing said front face into left and right portions;
a horizontal spatial deviation point dividing said front face into top and bottom portions; and
wherein said vertical spatial deviation point and said horizontal spatial deviation point define four discrete curved subsurfaces on said front face, each of said subsurfaces facing in a different general direction; and

at least four speaker drivers at least partially in said sound bar body with at least one of said speaker drivers in each of said four subsurfaces on said front face, said at least four speaker drivers comprising a first speaker driver in a first of said subsurfaces configured to emit sound in a first direction, a second speaker driver in a second of said subsurfaces configured to emit sound in a second direction, a third speaker driver in a third of said subsurfaces configured to emit sound in a third direction, and a fourth speaker driver in a fourth of said subsurfaces configured to emit sound in a fourth direction, said at least four speaker drivers on two or more curved surfaces of said sound bar such that said first, second, third, and fourth speaker drivers face different directions, wherein said left and right portions of said front face curve in a direction away from said vertical spatial deviation point, and said top and bottom portions of said front face curve in a direction away from said horizontal spatial deviation point such that said front face comprises a continuous surface.

US Pat. No. 11,070,917

UN-TETHERED WIRELESS AUDIO SYSTEM

Apple Inc., Cupertino, C...


1. A first wireless audio output device, comprising: a transceiver configured to communicate with a second audio output device via a first wireless link and configured to communicate with an audio source via a second wireless link, wherein the transceiver receives, from the audio source via the second wireless link, a first audio packet; and
a controller configured to determine whether the second audio output device received the first audio packet directly from the audio source,
wherein, when it is determined the second audio output device did not receive the first audio packet, the transceiver is configured to transmit, to the second audio output device via the first wireless link, the first audio packet.

US Pat. No. 11,070,916

SYSTEMS AND METHODS FOR DISTINGUISHING AUDIO USING POSITIONAL INFORMATION

INCONTACT, INC., Sandy, ...


1. A method of providing audio output, the method comprising: accepting a plurality of audio streams which are part of an interaction between communicating parties;
based on the audio streams, providing a plurality of audio outputs, each located at a different location in three-dimensional space; and
providing in a left-right channel audio stream an audio prompt corresponding to a first of a plurality of displayed computer applications; wherein the audio prompt is perceived when heard by a user to be at a location in three-dimensional space distinct from the location of another audio prompt corresponding to a second of the plurality of displayed computer applications; and
wherein the audio prompt is perceived when heard by the user to be at a location in three-dimensional space which correlates with a position of the first of the plurality of displayed computer applications on a display of the user.


US Pat. No. 11,070,915

VEHICLE AND AUDIO PROCESSING METHOD FOR THE SAME

Hyundai Motor Company, S...


1. An audio processing method for a vehicle, comprising: receiving, by a controller, a broadcast signal and obtaining modulation information of the broadcast signal;
determining, by the controller, volume information of the broadcast signal based on the modulation information of the broadcast signal; and
adjusting, by the controller, while the broadcast signal is being received, a volume of a power amplifier based on the determined volume information to adjust an output sound level of the broadcast signal to correspond to a preset output sound level of a broadcast receiving apparatus of the vehicle.
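The adjustment step can be sketched as deriving a power-amplifier gain so the broadcast's output sound level lands on the apparatus's preset level. The mapping from modulation type to a typical programme level below is an illustrative assumption, not taken from the patent:

```python
# Hypothetical typical programme levels (dBFS) by modulation type.
TYPICAL_LEVEL_DB = {"FM": -14.0, "AM": -20.0, "DAB": -23.0}

def amp_gain_db(modulation: str, preset_level_db: float) -> float:
    """Gain (dB) to apply at the power amplifier so the expected
    broadcast level matches the preset output sound level."""
    return preset_level_db - TYPICAL_LEVEL_DB[modulation]

print(amp_gain_db("AM", -14.0))   # AM boosted by 6 dB to reach the preset
print(amp_gain_db("DAB", -14.0))  # DAB boosted by 9 dB
```

The effect is that switching between, say, FM and AM stations keeps the perceived volume at the preset level without the user touching the volume control.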

US Pat. No. 11,070,914

CONTROLLER AND CONTROL METHOD

SONY CORPORATION, Tokyo ...


1. A controller, comprising: an analysis unit configured to: analyze an audio signal input via a microphone; and
select an audio signal characteristic control method from a plurality of control methods based on a result of the analysis; and

a characteristic control unit configured to: execute the selected audio signal characteristic control method for an audio processing system, wherein the audio processing system receives the audio signal via the microphone and outputs the audio signal via a speaker, and
the analysis unit is further configured to switch the selected audio signal characteristic control method for the audio processing system based on a level of the audio signal that becomes equal to or less than a noise determination threshold for a specific time period.



US Pat. No. 11,070,913

MILLIMETER WAVE SENSOR USED TO OPTIMIZE PERFORMANCE OF A BEAMFORMING MICROPHONE ARRAY

Crestron Electronics, Inc...


1. A beamforming microphone array comprising: a plurality of microphones each of which is adapted to receive an acoustic audio signal and convert the same to a microphone (mic) audio signal;
a wave sensor system adapted to determine locations of one or more people within a predetermined area about the beamforming microphone array and output the same as user location data signal; and
an adaptive beamforming circuit adapted to receive the user location data signal and the plurality of mic audio signals and perform adaptive beamforming on the plurality of mic audio signals that takes into account the received user location data signal to adapt a plurality of beam signals, one for each of the microphones, to acquire sound from one or more specific locations in the predetermined area; and
a plurality of acoustic echo cancellation devices, one for each of the beam signal outputs from the adaptive beamforming circuit, wherein each of the plurality of acoustic echo cancellation devices is adapted to receive a respective beam signal from the adaptive beamforming circuit and perform acoustic echo cancellation on the received respective beam signal and output the echo-corrected beam signal, and wherein the predetermined area is a conference room, and
if the user location data signal indicates that there are more people than beams that can be formed, then the adaptive beamforming circuit is further adapted to modify one or more of the fixed beam positions to cover two or more people in the conference room such that each person is covered by at least one fixed beam, and further wherein,
the adaptive beamforming circuit is adapted to adjust a beam width and shape to cover two or more people in the conference room.
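The "more people than beams" rule can be sketched as grouping the detected people into as many clusters as there are beams, then setting each beam's position to the group centre and widening it to span the group. Angles in degrees; the greedy contiguous grouping and the minimum beam width below are illustrative choices, not from the patent:

```python
import math

def plan_beams(people_angles, num_beams, min_width=20.0):
    """Return a (centre, width) pair per beam so that every person is
    covered by at least one beam, widening beams when the people
    outnumber the beams that can be formed."""
    angles = sorted(people_angles)
    # Split the sorted people into at most num_beams contiguous groups.
    size = math.ceil(len(angles) / num_beams)
    groups = [angles[i:i + size] for i in range(0, len(angles), size)]
    # Each beam: centre at the group midpoint, width spanning the group.
    return [((g[0] + g[-1]) / 2.0, max(min_width, g[-1] - g[0]))
            for g in groups]

# Five people, three beams: at least one beam must cover two people.
print(plan_beams([-60, -50, 0, 40, 70], 3))
```

In the example, the first beam is centred at -55 degrees to cover the pair at -60 and -50, and the middle beam is widened to 40 degrees to span the people at 0 and 40.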


US Pat. No. 11,070,912

AUDIO SYSTEM FOR DYNAMIC DETERMINATION OF PERSONALIZED ACOUSTIC TRANSFER FUNCTIONS

Facebook Technologies, LL...


1. An audio system comprising: a microphone array that includes a plurality of acoustic sensors that are configured to detect sounds within a local area surrounding the microphone array, and at least some of the plurality of acoustic sensors are coupled to a near-eye display (NED);
a controller configured to: estimate a direction of arrival (DoA) of a first detected sound of the detected sounds relative to a position of the NED within the local area, the estimate based on the detected sounds from the plurality of acoustic sensors;
generate one or more transfer functions based at least in part on the DoA estimation, the one or more transfer functions comprising a head-related transfer function (HRTF) for a user of the audio system;
update one of the one or more transfer functions based on position information received from an external system, the position information describing a position of the microphone array in the local area; and
synthesize audio content based on the updated transfer function; and

a speaker assembly configured to present the synthesized audio content to the user.

US Pat. No. 11,070,911

PERSONAL SOUND ZONE SYSTEM

Karma Automotive LLC, Ir...


1. A vehicle with an audio system comprising: a first sound zone;
a second sound zone;
a first plurality of microphones located in the first sound zone;
a second plurality of microphones located in the second sound zone;
wherein the first plurality of microphones is configured to receive a noise audio signal and produce a noise electronic signal corresponding to the noise audio signal;
a first audio controller corresponding with the first sound zone;
a second audio controller corresponding with the second sound zone;
wherein the first audio controller is configured to: receive a media electronic signal from a first audio input and the noise electronic signal;
convert the noise electronic signal into an anti-noise electronic signal; and
produce an output electronic signal by combining the media electronic signal and the anti-noise electronic signal; and

a first plurality of speakers located in the first sound zone;
a second plurality of speakers located in the second sound zone;
wherein the first plurality of speakers is configured to receive the output electronic signal and emit an output audio signal corresponding to the output electronic signal;
wherein the first and second sound zones each have a dedicated sound environment that is separate from the dedicated sound environment of the other sound zone, such that each of the first and second sound zones cancels audio produced by the speakers of the other sound zone;
a first audio output communicating with the first plurality of speakers;
a second audio output communicating with the second plurality of speakers;
a first control system of the first sound zone communicating with the first audio controller and a first audio output of the first sound zone;
a second control system of the second sound zone communicating with the second audio controller and a second audio output of the second sound zone;
wherein the first sound zone is configured to send at least a portion of the media electronic signal to the second sound zone via commands sent from the first control system to the first audio output;
wherein the portion of the media electronic signal sent by the first sound zone is received by a second audio input of the second sound zone;
wherein the first audio output does not include the noise electronic signal or the anti-noise electronic signal; and
wherein the second audio input is configured to receive a second media electronic signal in addition to the first media electronic signal received.

US Pat. No. 11,070,910

PROCESSING DEVICE AND A PROCESSING METHOD FOR VOICE COMMUNICATION

SONY CORPORATION, Tokyo ...


1. An apparatus for voice communication, comprising: a microphone configured to collect a voice signal;
a noise gate configured to execute a noise gate processing operation on the collected voice signal to output an output signal, wherein the output signal is at a specific signal level based on a level of the collected voice signal that is higher than a designated level, and
a signal level of the output signal is attenuated based on the level of the collected voice signal that is one of equal to or lower than the designated level;
a compressor configured to:
execute a compressor processing operation on the output signal; and
obtain a processed signal based on the execution of the compressor processing operation on the output signal; and

a transmitter configured to transmit the processed signal for the voice communication.

US Pat. No. 11,070,909

SPEAKER APPARATUS, METHOD FOR PROCESSING INPUT SIGNALS THEREOF, AND AUDIO SYSTEM

Harman International Indu...


15. An audio system, comprising: a speaker apparatus, comprising a first plurality of speakers arranged at an interval in a row, and a second plurality of speakers symmetrically disposed at two sides of the row of the first plurality of speakers with openings at the two sides facing outwardly, where acoustic energy radiation generated by the first plurality of speakers is greater in a first zone than in a second zone in a first frequency range, acoustic energy radiation generated by the second plurality of speakers is greater in a third zone than in a fourth zone in a second frequency range, and the first frequency range overlaps with the second frequency range; and
a processor, configured to process input signals of the speaker apparatus, comprising: a first obtainment circuitry, configured to obtain digital signals based on the input signals;
a first filter, configured to filter the digital signals to obtain first digital signals in the first frequency range;
a second filter, configured to filter the digital signals to obtain second digital signals in the second frequency range; and
a digital signal processing circuitry, configured to process the first digital signals using a beamforming method based on Digital Signal Processing (DSP), to make the acoustic energy radiation generated by the first plurality of speakers greater in the first zone than in the second zone;

wherein the processed first digital signals are adapted to be input to the first plurality of speakers, and the second digital signals are adapted to be input to the second plurality of speakers.

US Pat. No. 11,070,908

SOUND GENERATOR FOR MOBILE ENTITIES

PIONEER CORPORATION, Tok...


1. A sound generating device for a mobile object, comprising: a speaker unit configured to emit a sound towards a box-like space formed by the mobile object; and
an enclosure accommodating the speaker unit, wherein
the speaker unit is provided with
a resonance element configured to generate a resonance sound of a frequency different from a lowest resonance frequency of the speaker unit, the resonance element being an elastic member connected to a diaphragm of the speaker unit, the elastic member having an outer periphery connected to the diaphragm and an inner periphery connected to a weight, and
at least a part of the enclosure also serves as a body of the mobile object.

US Pat. No. 11,070,907

SIGNAL MATCHING METHOD AND DEVICE


1. A method for matching sensors, comprising: generating a first sensor signal from a first sensor;
generating a second sensor signal from a second sensor;
separating the first sensor signal into a magnitude component and a phase component;
separating the second sensor signal into a magnitude component and a phase component;
determining if the magnitude component of at least one of the first or second sensor signals is above a self-noise threshold;
determining if a phase difference of the phase components of the first and second sensor signals is within a specified tolerance of a predetermined phase difference threshold; and
if the magnitude component of at least one of the first or second sensor signals is above the self-noise threshold, and the phase difference is within the specified tolerance of the predetermined phase difference threshold, matching the magnitude of the first sensor signal to that of the second sensor signal by multiplying the magnitude of the first sensor signal by a magnitude correction value that is a function of a ratio of the magnitude components of the first and second sensor signals.
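The matching step can be sketched for a single frequency bin of two complex sensor signals. The threshold values and the choice of `m2 / m1` as the correction function are illustrative; the claim only requires the correction to be a function of the magnitude ratio:

```python
import numpy as np

def match_magnitudes(s1, s2, self_noise=1e-3, phase_tol=0.2,
                     target_phase_diff=0.0):
    """Match the magnitude of the first sensor signal to the second via
    a correction factor derived from the ratio of their magnitudes, but
    only when the level and phase-difference conditions of the claim hold."""
    m1, p1 = np.abs(s1), np.angle(s1)   # magnitude and phase components
    m2, p2 = np.abs(s2), np.angle(s2)
    above_noise = (m1 > self_noise) or (m2 > self_noise)
    phase_ok = abs((p1 - p2) - target_phase_diff) <= phase_tol
    if above_noise and phase_ok:
        correction = m2 / m1            # function of the magnitude ratio
        s1 = s1 * correction            # magnitude matched, phase untouched
    return s1

s1 = 0.8 * np.exp(1j * 0.50)   # first sensor, one frequency bin
s2 = 1.0 * np.exp(1j * 0.45)   # second sensor, slightly different phase
matched = match_magnitudes(s1, s2)
```

Gating the correction on the self-noise and phase checks keeps the matcher from "learning" a correction from bins that contain only sensor noise or uncorrelated sound.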

US Pat. No. 11,070,905

SELF-COOLING HEADSET

Hewlett-Packard Developme...


1. A self-cooling headset comprising: an ear cup to form an ear enclosure when placed over a user's ear;
an exit port located toward a top side of the ear cup and having a first check valve with a first cracking pressure installed therein to open and release a volume of air from the ear enclosure through the exit port when a positive pressure within the ear enclosure overcomes the first cracking pressure; and
an entry port located toward a bottom side of the ear cup and having a second check valve with a second cracking pressure different from the first cracking pressure installed therein to open and admit a volume of air into the ear enclosure through the entry port when a partial vacuum within the ear enclosure causes an external pressure to overcome the second cracking pressure, the locations of the exit port and entry port to facilitate removal of warm air from the ear enclosure by natural convection.

US Pat. No. 11,070,904

FORCE-ACTIVATED EARPHONE

Apple Inc., Cupertino, C...


15. An earphone, comprising: a housing, comprising: a speaker; and
a stem extending from the speaker and defining: a touch input surface; and
a force input surface opposite the touch input surface;


a flexible circuit disposed in the housing and comprising: a first circuitry section;
a second circuitry section; and
a third circuitry section;

wherein the flexible circuit flexes to allow the second circuitry section to move: toward the third circuitry section when a force is applied to the force input surface; and
away from the third circuitry section when the force is no longer applied; and
a controller, disposed in the housing, that is operative to determine a touch to the touch input surface using a first change in a first mutual capacitance detected using the first circuitry section and a non-binary amount of the force using a second change in a second mutual capacitance detected using the second circuitry section and the third circuitry section.


US Pat. No. 11,070,902

CONTACT HEARING SYSTEM WITH WEARABLE COMMUNICATION APPARATUS

Earlens Corporation, Men...


1. A wearable communication apparatus, comprising: a support structure configured to be wearable by a user, wherein the support structure comprises a frame to position one or more displays in front of one or both eyes of the user to view one or more images on the one or more displays and wherein the audio signal corresponds to the one or more images;
a contact transducer assembly configured to produce vibrations of an eardrum, wherein the contact transducer assembly is configured to produce wide bandwidth vibrations and reside on a lateral surface of the eardrum; and
circuitry at least partially positioned on the support structure and configured to drive the contact transducer assembly with an audio signal from an output transducer coupled to the circuitry;
wherein the output transducer comprises a magnetic field generator configured for positioning in an ear canal of the user, the magnetic field generator generating an electromagnetic signal wherein the electromagnetic signal provides both power and signal to the contact transducer;
wherein the output transducer is shaped and configured to provide a widely vented ear canal such that ambient sound passes to the eardrum of the user;
wherein the circuitry is adapted to produce a display audio signal corresponding to the one or more images;
wherein the circuitry generates the display audio signal and transmits the display audio signal to the contact transducer assembly by means of the electromagnetic signal in order to provide the user with the display audio signal; and
wherein the wearable communication apparatus further comprises one or more microphones to receive ambient sound;
wherein the circuitry receives the ambient sound, amplifies the ambient sound, and transmits amplified ambient sound to the contact transducer assembly by means of the electromagnetic signal in order to provide the user with amplified ambient sound;
wherein the one or more microphones comprises:
a first microphone located on a first side of the support structure to place the first microphone on a first side of the user;
a second microphone located on the support structure to place the second microphone on a second side of the user;
a third microphone located on the first side of the support structure to place the third microphone on the first side of the user; and
a fourth microphone located on the support structure to place the fourth microphone on the second side of the user;
wherein the first microphone is located away from a first ear canal opening and the third microphone is located near the first ear canal opening to detect high frequency spatial localization cues on the first side of the user;
wherein the second microphone is located away from a second ear canal opening and the fourth microphone is located near the second ear canal opening to detect high frequency spatial localization cues on the second side of the user; and
wherein the first, second, third and fourth microphones comprise a directional microphone, the directional microphone being oriented toward a field of view of the wearable communication apparatus.

US Pat. No. 11,070,901

THIN-TYPE PHONE RECEIVER

SUZHOU YICHUAN ELECTRONIC...


1. A thin-type phone receiver, comprising:a housing, having a mounting cavity;
a vibration membrane assembly, comprising:
a frame sealedly fixed on the housing wherein a mounting area is formed by a part of a side wall of the frame and an inner wall of the housing,
a diaphragm at least partially made of soft magnetic material and fixed on the frame with one end of the diaphragm hanging in an inner space of the frame, and
a sealing membrane;
a coil, sealedly fixed in the mounting area and sealedly sleeved on the frame,
wherein the vibration membrane assembly separates the mounting cavity into a first cavity and a second cavity that are arranged side by side and separate from each other;
the diaphragm comprises a first portion fixed on the frame and extending into the inner hole of the coil, and a second portion formed on the first portion and hanging outside the coil,
wherein the second portion has a width in a radial direction of the coil larger than that of the first portion, and
wherein a first gap is formed between the diaphragm and the frame, and the sealing membrane sealedly covers an entirety of the first gap, such that the first portion of the diaphragm extending into the inner hole of the coil can generate vibration.

US Pat. No. 11,070,900

MICROPHONE

Audio-Technica Corporatio...


1. A microphone comprising:a microphone unit;
an impedance converter that converts output impedance of the microphone unit;
a light source that notifies an operation state of the microphone unit;
a conversion substrate on which the impedance converter is mounted;
a light source substrate on which the light source is mounted; and
a connection substrate to which a signal line that transmits a signal from the impedance converter and a power line that transmits power to the light source are connected, wherein
the conversion substrate, the light source substrate, and the connection substrate are three-dimensionally connected to one another to constitute one substrate unit.

US Pat. No. 11,070,899

ELECTRONIC DEVICE INCLUDING SOUND BROADCASTING ELEMENT

Innolux Corporation, Mia...


1. An electronic device comprising:a display panel;
a protection layer disposed on the display panel; and
a sound broadcasting element disposed adjacent to the protection layer and contacting the protection layer,
wherein the sound broadcasting element comprises a piezoelectric component and a connection component, the connection component is connected between the piezoelectric component and the protection layer, the connection component is located on a side of the display panel, a distance between the piezoelectric component and the display panel is in a range of 0.1 mm to 5 cm, and a back surface of the display panel is located between the piezoelectric component and the protection layer.

US Pat. No. 11,070,898

MUTUALLY SECURE OPTICAL DATA NETWORK AND METHOD


1. A digital data network communication method comprises:accepting a plurality of data streams into a passive optical network (PON) interface router, interconnected with at least one secondary PON interface router serving a plurality of user devices through a plurality of optical network units (ONUs);
configuring said PON interface router to virtually separate the information intended for at least one of a plurality of private user devices, wherein said at least one of a plurality of private user devices is connected to at least one of said plurality of ONUs;
wherein said configuring comprises:generating at least one independently unique virtual routing table by using Virtual Routing and Forwarding (VRF);
virtually separating at least one private data stream intended for said at least one private user device from said plurality of data streams using said at least one independently unique virtual routing table to create at least one virtually separated private data stream;
uniquely labelling data packages contained in said at least one virtually separated private data stream using Multi-Protocol Label Switching (MPLS), whereby said data packages are further identified as MPLS labelled data packages;

sending said plurality of data streams, including said at least one virtually separated private data stream comprising said MPLS labelled data packages, to said at least one secondary PON interface router;
forwarding said plurality of data streams including said at least one virtually separated private data stream from said at least one secondary PON interface router to a PON optical line terminal (OLT) without altering the virtual separation of said at least one virtually separated private data stream and without altering said MPLS labelled data packages;
aggregating within said OLT said plurality of data streams and said at least one virtually separated private data stream into a common data feed;
distributing said common data feed to said plurality of ONUs;
wherein said distributing comprises:replicating said common data feed using at least one optical splitter connected to said plurality of ONUs;
delivering said common data feed to said plurality of ONUs;

extracting within said at least one of said plurality of ONUs, said at least one virtually separated private data stream including said MPLS labelled data packages from said common data feed;
sending said at least one virtually separated private data stream including said MPLS labelled data packages from said at least one of said plurality of ONUs to said at least one of a plurality of private user devices.
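The claimed data flow can be sketched as follows. This is an illustrative stand-in only: the function names, label values, and packet representation are assumptions, not from the patent; a real PON uses VRF routing tables and MPLS headers rather than Python tuples.

```python
def label_stream(packets, label):
    """Attach an MPLS-style label to each packet of a private stream."""
    return [(label, p) for p in packets]

def aggregate(*streams):
    """OLT stand-in: merge all labelled streams into one common feed."""
    return [pkt for stream in streams for pkt in stream]

def onu_extract(feed, label):
    """ONU stand-in: the replicated feed reaches every ONU; each keeps
    only the packets carrying its own label."""
    return [p for (lbl, p) in feed if lbl == label]

# The private stream stays separated by its label even after aggregation.
private = label_stream(["pay-a", "pay-b"], label=101)
public = label_stream(["news"], label=0)
feed = aggregate(private, public)
got = onu_extract(feed, 101)
```

The key property the claim relies on is that aggregation and optical splitting never alter the labels, so extraction at the ONU remains unambiguous.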

US Pat. No. 11,070,897

COMMUNICATION SYSTEM EMPLOYING OPTICAL FRAME TEMPLATES

Nubis Communications, Inc...


1. An apparatus comprising:a first optical interface connectable to receive a sequence of optical frame templates, each of the optical frame templates comprising a respective frame header and a respective frame body, the frame body comprising a respective optical pulse train;
an optical splitter connected to the first optical interface;
an optical modulator connected to a first output of the optical splitter and configured to load data into the respective frame bodies to convert the sequence of optical frame templates into a corresponding sequence of loaded optical frames; and
an optical receiver connected to a second output of the optical splitter and configured to extract control information from the respective frame headers.

US Pat. No. 11,070,896

DYNAMIC OPTICAL SWITCHING IN A TELECOMMUNICATIONS NETWORK

Level 3 Communications, L...


1. A method for providing a data network, the method comprising:connecting, via a plurality of optical fiber connections, a plurality of participant sites to an optical switching device, each of the plurality of participant sites connected to the optical switching device with a respective optical fiber connection of the plurality of optical fiber connections, wherein a total length of each of the respective optical fiber connections defines a network service area with an outer boundary defined by a maximum total length of optical fiber with a transmission line loss measurement less than an acceptable transmission line loss value based on a type of the optical fiber of the respective optical fiber connection.
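The boundary rule in the claim ties service-area reach to fiber attenuation. A back-of-the-envelope sketch, using typical (assumed, not patent-specified) per-kilometer loss figures:

```python
# Typical attenuation values; illustrative assumptions, not from the patent.
LOSS_DB_PER_KM = {"singlemode_1550nm": 0.2, "multimode_850nm": 3.0}

def max_length_km(fiber_type, acceptable_loss_db):
    """Maximum fiber run (km) keeping total line loss under the
    acceptable transmission line loss value for that fiber type."""
    return acceptable_loss_db / LOSS_DB_PER_KM[fiber_type]

# A 20 dB budget over standard singlemode fiber at 1550 nm.
reach = max_length_km("singlemode_1550nm", acceptable_loss_db=20.0)
```

This is why the claim makes the outer boundary depend on the type of fiber: the same loss budget yields a far larger radius over singlemode than over multimode fiber.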

US Pat. No. 11,070,895

SYSTEM AND METHOD FOR MONITORING GAS EMISSION OF PERISHABLE PRODUCTS

Walmart Apollo, LLC, Ben...


1. A system for automatically monitoring gas emissions of perishable goods in a retail sales environment comprising:a display fixture configured to store and display for sale a group of perishable items;
one or more gas emission sensors associated with the display fixture and configured to measure gas emissions from the group of perishable items;
a control circuit coupled to the one or more gas emission sensors and configured to:receive a gas emission measurement;
compare the gas emission measurement with stored gas emission data associated with a category of the group of perishable items; and
make a determination corresponding to the group of perishable items based on the comparison; and
determine whether to apply a discount to the group of perishable items based on the determination.
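The control circuit's compare-and-decide step can be sketched as below. The category table, threshold values, and function names are made-up illustrative values, not data from the patent:

```python
# Hypothetical ethylene limits (ppm) per item category; assumed values.
THRESHOLDS_PPM = {"banana": 2.5, "avocado": 1.8}

def evaluate(category, measurement_ppm, thresholds=THRESHOLDS_PPM):
    """Compare a gas-emission measurement with stored data for the
    item category and decide whether to apply a discount."""
    limit = thresholds[category]
    ripe = measurement_ppm >= limit
    return {"ripe": ripe, "apply_discount": ripe}

decision = evaluate("banana", 3.1)
```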


US Pat. No. 11,070,894

METHODS, SYSTEMS, AND MEDIA FOR PRESENTING INTERACTIVE ELEMENTS WITHIN VIDEO CONTENT

Google LLC, Mountain Vie...


1. A method for presenting interactive elements within video content, the method comprising:causing, at a first time point, an initial view of immersive video content to be presented on a user device, wherein the immersive video content includes at least the initial view having a first horizontal field of view at a first angular direction, and a first interactive element to be presented on the user device, the first interactive element at a first angular position outside of the first horizontal field of view at the first time point;
receiving, from the user device at a second time point, a first input including a selection on the user device of the initial view of the immersive video content and a gesture, received from the user device, in a direction towards the first angular position;
in response to receiving the first input, causing a viewpoint of the immersive video content to change to a first view having a second horizontal field of view at a second angular direction different from the first angular direction;
determining that the first angular position is within the second horizontal field of view of the first view of the immersive video content;
in response to determining that the first angular position is within the second horizontal field of view of the first view, identifying a content creator associated with the first interactive element;
determining a length of time in which the first angular position is within the viewpoint of the second horizontal field of view before a user input causes a change in viewpoint from the first view to a second view, wherein the first angular position is not within the second view; and
assigning attribution information that indicates the presentation of the first interactive element to the content creator associated with the first interactive element, the attribution information determined based on the determined length of time.
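The determination that an angular position falls inside a horizontal field of view, on which the attribution step above hinges, can be sketched as follows. The function name and the wrap-around handling are assumptions for illustration, not from the patent:

```python
def in_field_of_view(angle, center, fov_width):
    """True if `angle` lies within a horizontal field of view of width
    `fov_width` centered on `center` (degrees, wrap-around safe)."""
    diff = (angle - center + 180) % 360 - 180
    return abs(diff) <= fov_width / 2

# Element at 100 deg, initial view centered at 0 deg with a 90 deg FOV:
visible_initially = in_field_of_view(100, 0, 90)
# After a gesture turns the viewpoint toward 90 deg:
visible_after = in_field_of_view(100, 90, 90)
```

The modular arithmetic keeps the check correct across the 180/-180 degree seam, which matters in 360-degree immersive content.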

US Pat. No. 11,070,893

METHOD AND APPARATUS FOR ENCODING MEDIA DATA COMPRISING GENERATED CONTENT

Canon Kabushiki Kaisha, ...


1. A method for encapsulating encoded media data composed of a plurality of samples, the method comprising by a server device:obtaining encoded media data;
encapsulating the encoded media data in a set of at least one first track;
generating a second track;
wherein:
the second track describes samples comprising a set of transformation rules and parameters adapted to be applied on samples of at least one first track;
the second track comprises references to at least one of the first tracks; and
the second track comprises in a metadata part, a description of the set of transformation operators.

US Pat. No. 11,070,892

METHODS AND APPARATUS TO PRESENT SUPPLEMENTAL MEDIA ON A SECOND SCREEN

THE NIELSEN COMPANY (US),...


1. A digital media device comprising:a secondary device registrar to register a computing device with the digital media device in response to establishing a connection with the computing device;
a secondary media initiator to transmit a first notification to the computing device via a communication channel, the first notification including a location at which supplemental media is accessible by the computing device, the supplemental media associated with a first type of media identifiable via at least one of a signature or a code; and
a skip enabler to:in response to a lapse in the connection between the computing device and the digital media device for at least a threshold amount of time, provide, via a presentation device, a warning to indicate that media skipping functionality on the digital media device with respect to the first type of media is to be disabled if the connection with the computing device is not reestablished; and

in response to a determination that the connection with the computing device has not been reestablished, disable the media skipping functionality on the digital media device with respect to the first type of media.

US Pat. No. 11,070,891

OPTIMIZATION OF SUBTITLES FOR VIDEO CONTENT

Amazon Technologies, Inc....


1. A computer-implemented method comprising:under control of a computing system comprising one or more computing devices configured to execute specific instructions,analyzing a first version of video content using a visual interest model of visual attention of viewers;
determining a point of visual interest based on analyzing the first version of the video content using the visual interest model;
determining a first distance between the point of visual interest and a first candidate location for display of subtitles of the video content;
determining a second distance between the point of visual interest and a second candidate location for display of the subtitles;
determining a first location score based at least partly on applying a first weight, associated with the first candidate location, to the first distance, wherein the first weight is one of a plurality of weights that bias determination of a display location for the subtitles toward a central display location;
determining a second location score based at least partly on applying a second weight, associated with the second candidate location, to the second distance, wherein the second weight is one of the plurality of weights;
determining, based at least partly on an analysis of the first location score with respect to the second location score, that the subtitles are to be displayed at the first candidate location; and
generating a second version of the video content comprising subtitle data specifying that the subtitles are to be displayed at the first candidate location.
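The weighted-distance scoring in the claim can be sketched as a minimal example. Coordinates, weight values, and function names here are illustrative assumptions; a lower score wins, with weights biasing the choice toward a central display location:

```python
import math

def location_score(point_of_interest, candidate, weight):
    """Weighted distance from a candidate subtitle location to the
    point of visual interest (lower is better)."""
    dx = candidate[0] - point_of_interest[0]
    dy = candidate[1] - point_of_interest[1]
    return weight * math.hypot(dx, dy)

def choose_location(point_of_interest, candidates, weights):
    """Pick the candidate location with the lowest weighted score."""
    scores = [location_score(point_of_interest, c, w)
              for c, w in zip(candidates, weights)]
    return min(range(len(candidates)), key=scores.__getitem__)

# Two candidates in normalized screen coordinates; the central one
# (bottom-center) carries a smaller weight, biasing selection toward it.
poi = (0.5, 0.4)
candidates = [(0.5, 0.9), (0.1, 0.9)]   # bottom-center, bottom-left
weights = [0.8, 1.2]
idx = choose_location(poi, candidates, weights)
```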


US Pat. No. 11,070,890

USER CUSTOMIZATION OF USER INTERFACES FOR INTERACTIVE TELEVISION

Comcast Cable Communicati...


1. A method comprising:causing, by a computing device, output of a user interface;
receiving, by the computing device, a first indication, via the user interface, of a selection of a first layout customization to a first layout for a first interactive content user interface associated with a first content application;
causing, by the computing device, output of the first interactive content user interface, wherein the first interactive content user interface is based on the first layout and the first layout customization;
receiving, by the computing device, a second indication of a selection of a second content application;
based on the selection of the second content application, receiving, by the computing device via a network, an application data package comprising:a layout file indicating a second layout for a second interactive content user interface associated with the second content application, and
content, separate from the layout file, configured to populate the second interactive content user interface;

determining, by the computing device, the second interactive content user interface based on the first layout customization, the layout file indicating the second layout for the second interactive content user interface, and the content configured to populate the second interactive content user interface; and
causing, by the computing device, output of the second interactive content user interface.

US Pat. No. 11,070,889

CHANNEL BAR USER INTERFACE

Apple Inc., Cupertino, C...


1. A method comprising:at an electronic device:displaying, on a display, a first media item;
while displaying the first media item, receiving a trigger to display information about additional media items;
in response to receiving the trigger to display the information about the additional media items, displaying, overlaid on at least a portion of the first media item, a plurality of representations of additional media items, wherein:the additional media items are available for viewing by the electronic device from a plurality of media sources and are selected based on one or more selection criteria, including past activity of a user of the electronic device; and
a first representation of the plurality of representations of additional media items corresponding to a first additional media item includes a first media portion of content associated with the first additional media item, wherein a second representation of the plurality of representations of additional media items corresponding to a first collection of episodic media has a current focus;

while displaying the plurality of representations of additional media items overlaid on the at least the portion of the first media item and while displaying the first media item, receiving, via one or more input devices, a first user input for causing the first representation corresponding to the first additional media item to have the current focus;
in response to receiving the first user input, playing a second media portion of the content associated with the first additional media item, different from the first media portion of the content associated with the first additional media item, while maintaining display of the first media item;
while displaying the plurality of representations of additional media items overlaid on the at least the portion of the first media item, receiving, via the one or more input devices, a sequence of inputs including an input selecting a respective representation of the plurality of representations; and
in response to receiving the sequence of inputs selecting the respective representation:in accordance with a determination that the respective representation is the first representation, displaying, on the display, the first additional media item; and
in accordance with a determination that the respective representation is the second representation, displaying, on the display, a landing page associated with the first collection of episodic media.



US Pat. No. 11,070,888

CONTENT STRUCTURE AWARE MULTIMEDIA STREAMING SERVICE FOR MOVIES, TV SHOWS AND MULTIMEDIA CONTENTS

WeMovie Technologies, Sa...


1. A computer-implemented method for processing a multimedia content, comprising:receiving one or more media files and metadata information of the multimedia content, wherein each of the one or more media files comprises raw video or audio data captured at a production stage for producing the multimedia content, and wherein the metadata information indicates production stage information of the multimedia content, the metadata information determined during or after the production stage for producing the multimedia content;
determining a hierarchical structure of the multimedia content based on the production stage information of the multimedia content, wherein the hierarchical structure indicates that the multimedia content comprises multiple scenes, each of the multiple scenes comprising multiple shots produced with corresponding devices and cast, wherein the production stage information includes a time-domain start position and a duration for each of the multiple scenes for the determining of the hierarchical structure;
generating, for individual scenes in the multimedia content, multiple edited media files for different viewers or viewer groups using the raw video or audio data captured at the production stage, wherein the multiple edited media files are stored separately from the raw video or audio data in the hierarchical structure of the multimedia content;
identifying, for individual scenes in the hierarchical structure of the multimedia content, characteristics associated with the individual scenes based on the production stage information;
generating multiple copies of the multimedia content at different compression levels, wherein the different compression levels are adaptively adjusted for the individual scenes based on the characteristics associated with the individual scenes; and
dividing each of the multiple copies of the multimedia content into segments based on the hierarchical structure, wherein a length of a segment is adaptively adjusted based on the characteristics associated with the individual scenes.

US Pat. No. 11,070,887

VIDEO CONTENT DEEP DIVING

Verizon Media Inc., New ...


1. A method for displaying content associated with a video element of a video, comprising:displaying a video through a video player interface of a computing device, the video comprising one or more video elements tagged with content tags;
receiving a user interaction with a video element of the video;
identifying a content tag, corresponding to a first topic, used to tag the video element;
retrieving a set of content associated with the first topic;
determining a user context of a user of the computing device, wherein the user context is indicative of an intention to perform a first action in association with the first topic associated with the content tag;
determining that first content amongst the set of content has a first level of relevance to performing the first action of the user context in association with the first topic;
determining that second content amongst the set of content has a second level of relevance to performing the first action of the user context in association with the first topic;
responsive to determining that the second level of relevance of the second content to performing the first action is lower than the first level of relevance of the first content to performing the first action:selecting, based upon the user context, the first content with the first level of relevance to performing the first action from the set of content but not the second content with the second level of relevance to performing the first action from the set of content, wherein the same first action is associated with two or more different levels of relevance in association with the same user of the computing device; and

displaying the first content but not the second content, within a content interface at least partially overlying the video player interface, through the computing device, wherein the displaying the first content comprises displaying the first content overlying the video.

US Pat. No. 11,070,886

METHOD AND APPARATUS FOR LOOPING A VIDEO FILE

HONG KONG LIVEME CORPORAT...


1. A method for looping a video file, applicable to an electronic device capable of playing the video file and comprising:playing video frames of the video file in positive order after a playing instruction for the video file is obtained, wherein each of the video frames of the video file contains a video frame identifier;
obtaining a second-to-last video frame of the video file based on the video frame identifier contained in each of the video frames, and playing the video frames of the video file in reverse order starting from the second-to-last video frame until a first preset video frame after the last video frame of the video file is played;
returning to the step of playing the video frames of the video file in positive order until a stop playing instruction is obtained to stop playing the video file.
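The forward-then-reverse loop described in the claim can be sketched as follows. Frame identifiers are modeled as integers, the function name is hypothetical, and the sketch assumes the preset frame at which reverse playback stops is the first frame:

```python
def loop_order(frame_ids, cycles=1):
    """Yield the playback order: all frames in positive order, then in
    reverse order starting from the second-to-last frame, per cycle."""
    order = []
    for _ in range(cycles):
        order.extend(frame_ids)                 # positive order
        order.extend(reversed(frame_ids[:-1]))  # reverse from second-to-last
    return order

# Frames 0..3 play as 0,1,2,3 then 2,1,0 before the loop repeats.
seq = loop_order([0, 1, 2, 3])
```

Starting the reverse pass at the second-to-last frame avoids playing the final frame twice at the turn-around point, which is what makes the loop appear seamless.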

US Pat. No. 11,070,885

METHOD AND APPARATUS FOR CONTROLLING PLAYBACK OF MEDIA USING A SINGLE CONTROL

INTERDIGITAL MADISON PATE...


1. A method to control the playback of media comprising:receiving a first signal in response to user activation of a control;
designating a time position in a playback of a first media in response to the receipt of the first signal; and
activating an operation depending on the designated time position in the playback of the first media, wherein the operation to be performed is based on the designated time position in the playback of the first media and wherein user input determines what operation is to be performed based on the time position in the playback of the first media.

US Pat. No. 11,070,884

CONTROLLING NETWORKED MEDIA CAPTURE DEVICES

Comcast Cable Communicati...


1. A method comprising:receiving a media playback instruction configured to alter playback of media by a media playback device, wherein the media playback instruction comprises one or more of a play command, a rewind command, or a fast forward command; and
sending, based on information associating the media playback instruction with a media capture instruction, the media capture instruction to a media capture device that comprises a camera, wherein the media capture instruction is configured to control the camera.

US Pat. No. 11,070,883

SYSTEM AND METHOD FOR PROVIDING A LIST OF VIDEO-ON-DEMAND PROGRAMS

Rovi Guides, Inc., San J...


1. A method for providing a list of video-on-demand programs to a viewer, the method comprising:causing a display of a media asset at user equipment;
determining whether playback of the media asset has reached a pre-defined amount;
in response to determining that the playback of the media asset has reached the pre-defined amount:determining a plurality of video-on-demand programs that are available;

determining, from the plurality of video-on-demand programs, a first video-on-demand program that is related to the media asset;
determining, from the plurality of video-on-demand programs, a second video-on-demand program different from the first video-on-demand program, based on an indicated level of interest associated with each video-on-demand program of the plurality of video-on-demand programs; and
automatically, independently of receiving a viewer request, causing a simultaneous display, while the media asset is being played, of a first identifier corresponding to the determined first video-on-demand program, wherein the first identifier includes a plurality of markings that indicate user interest and user viewing history, and a second identifier corresponding to the determined second video-on-demand program different from the determined first video-on-demand program, wherein the second identifier includes a plurality of markings that indicate user interest and user viewing history, and the first identifier and the second identifier are sorted according to a ranking of the plurality of markings.

US Pat. No. 11,070,882

GLOBAL SPEECH USER INTERFACE

Promptu Systems Corporati...


1. A global speech user interface (GSUI) device, comprising:a processor device configured for performing speech recognition to transcribe spoken commands for use in navigating among one or more applications hosted on a communications system;
said processor device configured for displaying a set of visual cues to dynamically guide a user in issuing spoken commands to navigate among the one or more applications, said visual cues comprising any of:a set of immediate speech feedback overlays, each of which provides non-textual feedback information;
a set of feedback overlays, each of which provides information about a problem that said communications system is experiencing;
a help menu overlay that shows help services available, each of said services being accessible by spoken command; and
a main menu overlay that shows a list of services available to the user, each of said services being accessible by spoken command; and

said processor device further configured for, responsive to said communications system encountering a problem recognizing the spoken commands, providing a list including a plurality of possible recognition matches,wherein the problem recognizing the spoken commands satisfies a particular recognition feedback condition of a plurality of recognition feedback conditions that correspond to amounts of information about an utterance, and
wherein the list including the plurality of possible recognition matches is provided in response to satisfying the particular recognition feedback condition.


US Pat. No. 11,070,881

SYSTEMS AND METHODS FOR EVALUATING MODELS THAT GENERATE RECOMMENDATIONS

Verizon Patent and Licens...


1. A method, comprising:receiving, by a device, content data, a first model, and a second model,the content data including a first identifier of a first content item and a first set of metadata associated with the first content item, and
the first model being trained on different types of metadata than the second model;

processing, by the device and within a first testing mode of the device, the first set of metadata associated with the first content item, to generate first recommendations from the first model and second recommendations from the second model;
providing, by the device, the first identifier of the first content item and a combination of the first recommendations and the second recommendations to client devices;
receiving, by the device and from the client devices, user-generated target recommendations based on the combination of the first recommendations and the second recommendations;
processing, by the device and within the first testing mode of the device, the user-generated target recommendations, the first recommendations, and the second recommendations, to determine a first performance score of the first model, a second performance score of the second model, and provide feedback for updating the first model and the second model; and
causing, by the device, the first model and/or the second model to be updated based on the feedback, the first performance score of the first model, and/or the second performance score of the second model.

US Pat. No. 11,070,880

CUSTOMIZED RECOMMENDATIONS OF MULTIMEDIA CONTENT STREAMS

The DIRECTV Group, Inc., ...


1. A method, comprising:facilitating, by a system comprising a memory and a processor, an output, at a user equipment, of a recommendation of a first multimedia content stream that comprises a content item based on a comparison between a first weight assigned to the first multimedia content stream and a second weight assigned to a second multimedia content stream that comprises the content item, wherein the first weight and the second weight are assigned based on respective determined interests at the user equipment for the first multimedia content stream and the second multimedia content stream; and
facilitating, by the system, respective transmissions of the first multimedia content stream and the second multimedia content stream to the user equipment in an order of transmission based on a defined transmission order.

US Pat. No. 11,070,879

MEDIA CONTENT RECOMMENDATION THROUGH CHATBOTS


1. A method for recommending media content through intelligent automated chatting, comprising:receiving a message in a conversation;
identifying a new topic based on the message and context of the conversation;
identifying a media content from a set of media contents based on the new topic;
generating a short video clip for the media content using a neural network model including a convolutional neural network (CNN) part and a recurrent neural network (RNN) part, and wherein the generating the short video clip further comprises:dividing the media content into a plurality of clips;
mapping the plurality of clips into a plurality of vectors through the CNN part;
selecting a part of vectors representing clips that should be retained through the RNN part; and
generating the short video clip based on the part of vectors; and

providing a recommendation of the media content in the conversation.
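The CNN-then-RNN clip-selection pipeline in the claim above can be sketched in miniature. This is an illustrative stub, not the patented model: the "CNN part" is stood in for by a fixed random projection of per-clip features, and the "RNN part" by a sequential novelty score against a running state; every function name, dimension, and threshold here is an assumption.

```python
import numpy as np

def embed_clips(clips, dim=8, seed=0):
    # Stub for the CNN part: map each clip (frames x features) to a vector.
    rng = np.random.default_rng(seed)
    proj = rng.standard_normal((clips.shape[-1], dim))
    return np.array([clip.mean(axis=0) @ proj for clip in clips])

def select_clips(vectors, keep_ratio=0.5):
    # Stub for the RNN part: a sequential pass scores each clip vector
    # against a running state and keeps the highest-scoring clips.
    state = np.zeros(vectors.shape[1])
    scores = []
    for v in vectors:
        scores.append(float(np.linalg.norm(v - state)))  # novelty vs. running state
        state = 0.9 * state + 0.1 * v
    k = max(1, int(len(vectors) * keep_ratio))
    return sorted(np.argsort(scores)[-k:])               # indices of retained clips

# Divide the "media content" into clips, embed, select, assemble the short clip.
frames = np.arange(1000.0).reshape(10, 10, 10)  # 10 clips x 10 frames x 10 features
vectors = embed_clips(frames)
kept = select_clips(vectors, keep_ratio=0.3)
short_clip = frames[kept]
```

The claimed method then inserts the resulting short clip into the chat conversation as the recommendation.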

US Pat. No. 11,070,878

METHOD AND APPARATUS FOR AUTHORIZING RECEPTION OF MEDIA PROGRAMS ON A SECONDARY RECEIVER BASED UPON RECEPTION OF THE MEDIA PROGRAM BY A PRIMARY RECEIVER

Fox Latin American Channe...


1. A method of authorizing reception of one or more media programs, the method comprising:
initiating a transmission of a first media program of the one or more media programs from a provider to a primary receiver authorized to receive the first media program, the first media program having media content;
storing a timestamped version of the media content of the transmitted first media program;
receiving media data from a secondary receiver, wherein the media data is generated by the secondary receiver from the media content that is being obtained by the secondary receiver from the primary receiver;
comparing the media data received from the secondary receiver with the media content being transmitted by the provider to the primary receiver to determine a match between the media data received from the secondary receiver and the media content being transmitted by the provider to the primary receiver; and
authorizing reception of the one or more media programs from the provider by the secondary receiver when the comparing determines the match;
wherein the media content comprises audio content, image content or video content, and the media data comprises audio data, image data or video data generated from the audio content, the image content or the video content, respectively;
wherein comparing the media data received from the secondary receiver with the media content being transmitted by the provider to the primary receiver comprises comparing the stored timestamped version of the media content of the transmitted first media program with a media data timestamp of the media data.
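The match test in the claim above pairs a content comparison with a timestamp comparison. A minimal sketch, under assumed simplifications (SHA-256 digests stand in for whatever media data the secondary receiver generates, and the timestamp tolerance is an invented parameter):

```python
import hashlib

def fingerprint(content: bytes, timestamp: float) -> tuple:
    # Provider side: store a timestamped version of the transmitted content.
    return (timestamp, hashlib.sha256(content).hexdigest())

def matches(stored, media_data: bytes, media_timestamp: float,
            tolerance: float = 2.0) -> bool:
    # Both the content fingerprint and the timestamps must agree for the
    # secondary receiver to be authorized.
    digest = hashlib.sha256(media_data).hexdigest()
    return any(h == digest and abs(t - media_timestamp) <= tolerance
               for t, h in stored)

# Record the content as it is transmitted to the primary receiver.
stored = [fingerprint(b"frame-%d" % i, float(i)) for i in range(10)]

# The secondary receiver reports data it obtained from the primary receiver.
authorized = matches(stored, b"frame-4", 4.5)   # right content, close timestamp
rejected = matches(stored, b"frame-4", 30.0)    # right content, stale timestamp
```

The timestamp check is what ties authorization to *live* redistribution from the primary receiver rather than to a stale copy of the content.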

US Pat. No. 11,070,877

SYSTEMS AND METHODS FOR CONFLICT DETECTION BASED ON USER PREFERENCES

Rovi Guides, Inc., San J...


1. A method of indicating that a conflict exists between two devices each attempting to interact with a same media guidance application associated with a group of users, the method comprising:determining whether at least two devices are attempting to browse, within a predetermined time period, a plurality of media assets represented on a guide page of the same media guidance application associated with the group of users; and
in response to determining that at least two devices are attempting to browse, within the predetermined time period, the plurality of media assets represented on the guide page of the same media guidance application associated with the group of users, determining whether the conflict exists between the at least two devices by:
receiving, from a first device among the at least two devices, a first request to generate for display a first media asset selected from among the plurality of media assets represented on the guide page of the same media guidance application associated with the group of users; and
receiving, from a second device among the at least two devices, a second request to generate for display a second media asset selected from among the plurality of media assets represented on the guide page of the same media guidance application associated with the group of users; and

generating for display, on the first device, a notification indicating the second request to generate for display the second media asset, wherein the notification comprises a prompt to accept or reject the second request from the second device.

US Pat. No. 11,070,876

SECURITY MONITORING WITH ATTACK DETECTION IN AN AUDIO/VIDEO PROCESSING DEVICE

Avago Technologies Intern...


1. A security-monitoring system for a device, the system comprising:
a data-collection module configured to collect one or more security parameters associated with a data stream input to the device;
a security-monitoring module configured to monitor a plurality of activities associated with the data stream within the device; and
an attack-detection module configured to detect one or more attacks on the device based on the one or more security parameters and the monitored plurality of activities, wherein the attack-detection module detects attacks by using one or more trusted processors.

US Pat. No. 11,070,875

METHODS AND APPARATUS TO DETERMINE A NUMBER OF PEOPLE IN AN AREA

THE NIELSEN COMPANY (US),...


1. A base station comprising:
a mobile system controller to:
change a configuration setting of the base station to cause one or more mobile devices registered with the base station to re-register with the base station, the base station to not provide network access to a first one of the one or more mobile devices registered with the base station;
determine whether a threshold amount of time has elapsed without communication from the first one of the one or more mobile devices that was previously registered with the base station; and
in response to determining the threshold amount of time has elapsed, change the configuration setting of the base station again to cause the first one of the one or more mobile devices that was previously registered with the base station to re-register with the base station;

a registration manager to respond to registration requests received from the one or more mobile devices after the configuration setting is changed by storing device identification information for the one or more mobile devices associated with the registration requests; and
a counter controller to determine a count of mobile devices at a location of the base station based on the stored device identification information, the counter controller to determine the count in response to a request.
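The claim above counts people by forcing every mobile device in range to re-register and tallying the resulting registrations. A toy sketch of that flow (the class and method names are illustrative, and real registration would of course involve the cellular protocol rather than a method call):

```python
class BaseStation:
    """Sketch of the claimed counting flow: change a configuration setting to
    force re-registration, record who registers, then count on request."""

    def __init__(self):
        self.config_version = 0
        self.registered = {}

    def change_config(self):
        # Changing the setting invalidates registrations, so devices re-register.
        self.config_version += 1
        self.registered = {}

    def register(self, device_id):
        # Registration manager: store identification info from each request.
        self.registered[device_id] = self.config_version

    def count(self):
        # Counter controller: count of devices at the location.
        return len(self.registered)

bs = BaseStation()
bs.change_config()
for dev in ("imsi-1", "imsi-2", "imsi-2", "imsi-3"):  # duplicate requests collapse
    bs.register(dev)
```

Keying the store by device identifier is what makes repeated registration requests from the same handset count once.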

US Pat. No. 11,070,874

METHODS AND APPARATUS TO VERIFY PRESENTATION OF MEDIA CONTENT

The Nielsen Company (US),...


1. An apparatus for use with a set-top box (STB) and a media presentation device, comprising:
means for interfacing an input to receive a first audio signal associated with a program selected by a user via the STB;
means for transducing to receive a free-field radiating second audio signal output by at least one of the media presentation device or an audio system associated with the media presentation device;
means for filtering having adaptive coefficients to delay and to attenuate the first audio signal to form an estimate of the second audio signal;
means for comparing to compare the second audio signal to the estimate of the second audio signal to form an output;
means for protecting privacy to facilitate burst mode operation of the means for comparing by periodically disabling the means for transducing and the means for comparing; and
means for interfacing an output to provide a value indicative of whether the program selected by the user via the STB is presented at the media presentation device based on the output.
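The "means for filtering having adaptive coefficients" in the claim above is the classic adaptive-filter verification idea: adapt delays and attenuations so the STB audio predicts the microphone audio, then judge the residual. A minimal LMS sketch with numpy; the tap count, step size, and thresholds are illustrative choices, not values from the patent:

```python
import numpy as np

def lms_residual(reference, mic, taps=4, mu=0.05):
    # Adapt FIR coefficients so a delayed/attenuated copy of the reference
    # (STB) signal predicts the microphone signal; a small residual suggests
    # the selected program is what the room is actually playing.
    w = np.zeros(taps)
    err = 0.0
    for n in range(taps, len(mic)):
        x = reference[n - taps:n][::-1]   # most recent samples first
        e = mic[n] - w @ x                # prediction error
        w += mu * e * x                   # LMS coefficient update
        err = 0.99 * err + 0.01 * e * e   # smoothed residual power
    return err

rng = np.random.default_rng(1)
stb_audio = rng.standard_normal(4000)
room_same = 0.5 * np.concatenate([np.zeros(2), stb_audio[:-2]])  # delayed, attenuated
room_other = rng.standard_normal(4000)                           # a different program

matched = lms_residual(stb_audio, room_same)
unmatched = lms_residual(stb_audio, room_other)
```

The claimed "burst mode" privacy protection would periodically disable the microphone and this comparison, so only short verification bursts are ever captured.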

US Pat. No. 11,070,873

LOCALLY GENERATED SPOT BEAM REPLACEMENT

DISH NETWORK L.L.C., Eng...


1. A method, comprising:
receiving, via a satellite antenna at a user's premises, a spot beam signal and at least one other orbital signal;
receiving, via an over-the-air antenna at the user's premises, at least one over-the-air television signal;
converting the at least one over-the-air television signal into a replacement spot beam signal;
combining the replacement spot beam signal with the at least one other orbital signal; and
outputting the combined signal to a content receiver for display to a user of video from the combined signal.

US Pat. No. 11,070,872

RECEIVING DEVICE, TRANSMITTING DEVICE, AND DATA PROCESSING METHOD

Saturn Licensing LLC, Ne...


1. A receiving device, comprising:
processing circuitry configured to
set broadcast reception data, received by the receiving device from a broadcast, as a media source object corresponding to a processing object of media reproduction by applying an application programming interface (API) of a web application,
control the web application to execute a process for obtaining broadcast content segment information of broadcast content in the broadcast reception data that is stored in a buffer associated with the media source object, and
control the web application to execute, based on the broadcast content segment information of a segment of the broadcast content to be replaced being obtained, a process of replacement or appendance of the broadcast content in the broadcast reception data stored in the buffer with network content in application reception data received via a network, wherein

the web application is related to a program or a channel.

US Pat. No. 11,070,871

METHODS AND APPARATUS TO DETECT COMMERCIAL ADVERTISEMENTS ASSOCIATED WITH MEDIA PRESENTATIONS

The Nielsen Company (US),...


1. A computer system, comprising:
memory; and
a processor to execute computer readable instructions to:
detect a change in box-formatting between a first video frame and a subsequent video frame by detecting edges between a primary display area and screen filler areas of the box-formatting to detect an appearance or disappearance of the screen filler areas, the screen filler areas including motion images in at least two of a left portion, a right portion, a top portion or a bottom portion of the box-formatting of the first video frame or the subsequent video frame; and
indicate a transition between the first video frame and the subsequent video frame as a commercial transition based on the detected change in box-formatting.
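The box-formatting change the claim above detects can be illustrated with a simplified letterbox test. Note a real implementation must handle the claim's "motion images" in the filler areas; this sketch assumes plain uniform bars, and the variance threshold is an invented parameter:

```python
import numpy as np

def filler_mask(frame, var_threshold=1.0):
    # Screen-filler bars are near-uniform, so their pixel variance is far
    # below that of the active (primary display) area.
    return (frame.var(axis=1) < var_threshold,   # rows: top/bottom bars
            frame.var(axis=0) < var_threshold)   # columns: left/right bars

def box_format_changed(frame_a, frame_b, var_threshold=1.0):
    # Flag a commercial transition when bars appear or disappear between
    # two frames (a change in box-formatting).
    rows_a, cols_a = filler_mask(frame_a, var_threshold)
    rows_b, cols_b = filler_mask(frame_b, var_threshold)
    return bool(rows_a.sum() != rows_b.sum() or cols_a.sum() != cols_b.sum())

rng = np.random.default_rng(0)
full = rng.uniform(0, 255, (60, 80))   # full-screen program frame
boxed = full.copy()
boxed[:8, :] = 16.0                    # uniform top bar appears
boxed[-8:, :] = 16.0                   # uniform bottom bar appears
```

A transition from `full` to `boxed` (or back) would be indicated as a candidate commercial transition.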


US Pat. No. 11,070,870

METHOD FOR DISPLAYING IMAGE AND MOBILE ROBOT IMPLEMENTING SAME

LG ELECTRONICS INC., Seo...


1. A robot comprising:
a transceiver configured to communicate with a plurality of display devices in a space including a plurality of sub-spaces and an external server; and
a controller configured to select a first display device among the plurality of display devices based on at least one of a location of the robot in the space, a situation of a user of the robot, a type of an image, or state information on the plurality of display devices,
wherein the transceiver is further configured to transmit the image to the first display device, so that the transmitted image is executed on the first display device,
wherein the image is a first image for a video call that is transmitted by the external server, and the situation of the user is a situation in which the user and the robot are not in a same sub-space of the plurality of sub-spaces, and
wherein the controller is further configured to select, among the plurality of display devices, a display device located in a sub-space of the plurality of sub-spaces, in which the user is located, as the first display device.

US Pat. No. 11,070,869

METHOD FOR CONTROLLING INTERNET OF THINGS DEVICES WITH DIGITAL TV RECEIVERS USING TRANSMISSION FROM A BROADCASTER IN A TRANSPORT STREAM FLOW


1. A method for controlling Internet of Things (IoT) devices with digital TV (DTV) receivers using transmission from a broadcaster in a transport stream flow comprising:
receiving at the DTV receiver a broadcasting DTV signal from a broadcaster through a MPEG-2 transport of stream flow, wherein the DTV signal includes data added by the broadcaster and synchronized with a desirable video scene together with other data present at the Transport Stream flow;
enabling a pairing mode on the IoT device by the user;
enabling a pairing mode on the TV receiver by the user;
the TV receiver starts searching for IoT devices to be paired and added to the list of IoT devices compatible with the TV receiver until a given timeout;
checking if some IoT device was found,
in a negative case, returning to the enabling pairing mode on the IoT device and processing is started again,
in a positive case, the IoT device is paired with the TV receiver and the paired device is added to a pairing list stored at TV device memory to be used by a communication application;
after concluding the pairing for one IoT device, checking if there is another IoT device to be paired,
in a negative case, the pairing is finished,
in a positive case, the pairing is repeated until all IoT devices are added to the pairing list;
filtering IoT commands and timestamp information from the data in the DTV signal;
synchronizing IoT commands input at the DTV receiver with the video flow through timestamps;
broadcasting IoT commands as received from the data in the DTV signal based on digital TV technology standard to paired IoT devices connected to a home network; and
extracting the IoT commands from the signal received through a broadcast technique on the tuned DTV channel (TS) executing,
wherein the IoT commands are configured to control at least one of a robotic seat, a wearable device, and an air conditioner.

US Pat. No. 11,070,868

SYSTEM AND METHOD FOR CAPTURING AUDIO OR VIDEO DATA

UNITED SERVICES AUTOMOBIL...


1. A video recording apparatus comprising:
a video recorder configured to record video data;
a memory configured to store the video data;
a biometric sensor configured to measure a biometric characteristic of a user; and
a processing circuitry in communication with the memory, the video recorder, and the biometric sensor, wherein the processing circuitry is configured to:
control storage of the video data into a video segment on the memory;
detect a predetermined biometric measurement measured by the biometric sensor as a first trigger event within the video data during storage of the video data into the video segment; and
implement a first trigger event protocol in response to the detection of the predetermined biometric measurement as the first trigger event to:
control storage, on the memory, of the video data into a first video portion, wherein the first video portion includes the predetermined biometric measurement as the first trigger event; and
flag the first video portion for preservation.



US Pat. No. 11,070,867

METHOD, DEVICE, AND COMPUTING APPARATUS FOR ACQUIRING BROADCASTING CONTENT

Alibaba Group Holding Lim...


1. A method for acquiring and storing broadcasting multimedia content performed by computer programs stored in a memory and executable by a processor, the method comprising:
while broadcasting multimedia content is being broadcasted on a webpage, analyzing, by the processor, a broadcasting mechanism of a multimedia resource corresponding to the broadcasting multimedia content, wherein analyzing the broadcasting mechanism of the multimedia resource includes:
recognizing, by the processor, a type of a multimedia player on the webpage and an output mode for broadcasting the multimedia resource;
acquiring, by the processor, based at least in part on the recognized multimedia player type and the recognized output mode, the broadcasting multimedia content at a gradually decreasing frame rate;
storing, by the processor, during the broadcasting of the broadcasting multimedia content on the webpage, the acquired broadcasting multimedia content in the memory; and
when acquiring the broadcasting multimedia content and when the webpage is a non-current webpage, timely triggering, by the processor, the multimedia player to keep broadcasting the multimedia content.


US Pat. No. 11,070,866

SYNCHRONIZING MEDIA CONTENT TAG DATA

TiVo Solutions Inc., San...


1. A method, comprising:
receiving, at a media device, a plurality of time-based metadata associated with a reference version of a media content item, the plurality of time-based metadata created by a service provider while the media content item was originally broadcast by a content provider, the plurality of time-based metadata including a first hash value data sequence of in-band data in the media content item indicating a first time-based position in the media content item and a second hash value data sequence of in-band data in the media content item indicating a second time-based position in the media content item;
storing, on a storage device at the media device, a second version of the media content item;
in response to a first user input, causing the stored media content item to be displayed;
while causing the stored media content item to be displayed, calculating hash value sequences of in-band data in the stored media content item;
determining that a first one of the calculated hash value sequences, which is associated with a first time-based location in the stored media content item, corresponds to the first hash value data sequence; and
in response to a second user input:
determining that the displaying of the stored media content item has reached the first time-based location;
determining that a second one of the calculated hash value sequences, which is associated with a second time-based location in the stored media content item, corresponds to the second hash value data sequence; and
preventing display of a portion of the stored media content item beginning at the first time-based location in the stored media content item associated with the first calculated hash value sequence and ending at the second time-based location in the stored media content item associated with the second calculated hash value sequence.
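The hash-sequence synchronization above lets reference metadata positions be relocated in a recording even when the recorded copy is time-shifted relative to the original broadcast. A compact sketch (MD5 over fixed windows stands in for whatever in-band data hashing the service provider uses; the window size is an assumption):

```python
import hashlib

def hash_sequence(stream: bytes, window: int = 4):
    # Hash consecutive windows of in-band data, keyed by time-based position.
    return {i: hashlib.md5(stream[i:i + window]).hexdigest()
            for i in range(len(stream) - window + 1)}

def skip_range(start_hash, end_hash, recording: bytes, window: int = 4):
    # Locate the reference hashes in a (possibly shifted) recording and
    # return the local start/end positions of the portion to suppress.
    local = hash_sequence(recording, window)
    start = next(i for i, h in local.items() if h == start_hash)
    end = next(i for i, h in local.items() if h == end_hash)
    return start, end

reference = b"AAAACOMMERCIALZZZZ"
meta = hash_sequence(reference)
start_hash, end_hash = meta[4], meta[14]   # bracket the "COMMERCIAL" span

recording = b"xx" + reference              # same content, shifted by 2 bytes
bounds = skip_range(start_hash, end_hash, recording)
```

Because the match is on hash values rather than absolute times, the two-byte shift in `recording` is absorbed automatically.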


US Pat. No. 11,070,865

MULTI SENSORY INPUT TO IMPROVE HANDS-FREE ACTIONS OF AN ELECTRONIC DEVICE

Google LLC, Mountain Vie...


1. A method for media playback, the method comprising:
determining, using a computing device having one or more sensors, whether context information associated with the computing device indicates that the computing device is currently presenting a media content item on the computing device based on sensor data from the one or more sensors and a current operating state of the computing device;
determining, using the computing device, whether location information associated with the computing device indicates that the computing device is proximal to a media playback device that is capable of creating a casting session;
detecting, using the computing device, a plurality of user-initiated interactions with the computing device, wherein at least one of the plurality of user-initiated interactions is a physical interaction with the computing device;
determining whether the physical interaction with the computing device from the plurality of detected user-initiated interactions is indicative of a cast action;
in response to (1) determining that context information indicates that the computing device is currently presenting the media content item on the computing device, (2) determining that the location information indicates that the computing device is proximal to the media playback device capable of creating the casting session, and (3) determining that the physical interaction with the computing device is indicative of the cast action, creating the casting session between the computing device and the media playback device that includes causing the media content item to be casted to the media playback device;
detecting a gesture by a viewer of the media content item on the media playback device;
determining a manner in which playback of the media content item on the media playback device is to be changed based on the detected gesture; and
causing playback of the media content item on the media playback device to be modified based on the determined manner in which playback of the media content item is to be changed.

US Pat. No. 11,070,864

AUTO PAUSE WHEN DISTURBED

Sony Group Corporation, ...


1. An electronic device comprising:
an electronic processor;
a memory operatively coupled to the processor; and
a media player module for executing media content on the electronic device, the media player module stored in the memory and executable by the processor, wherein when executed by the processor the media player module causes the electronic device to i) monitor sound within an ambient environment in which the electronic device resides;

ii) compare the monitored sound in the ambient environment to a prescribed sound threshold and compare a frequency of the sound in the ambient environment with a frequency of sound output by the electronic device; and
iii) pause execution of the media content upon the sound in the ambient environment exceeding the prescribed sound threshold when the frequency of the ambient sound is substantially different from the frequency of the sound output by the electronic device.
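The frequency comparison in the claim above is what keeps the device from pausing on its own output. A toy sketch using an RMS level test and a dominant-frequency test via the FFT; the sample rate, thresholds, and margin are illustrative assumptions:

```python
import numpy as np

def dominant_freq(samples, rate):
    # Dominant frequency of a signal via the magnitude spectrum.
    spectrum = np.abs(np.fft.rfft(samples))
    return np.fft.rfftfreq(len(samples), 1.0 / rate)[np.argmax(spectrum)]

def should_pause(ambient, playback, rate=8000,
                 level_threshold=0.2, freq_margin=50.0):
    # Pause only when the ambient sound is loud AND spectrally different
    # from the device's own output.
    loud = np.sqrt(np.mean(ambient ** 2)) > level_threshold
    different = abs(dominant_freq(ambient, rate)
                    - dominant_freq(playback, rate)) > freq_margin
    return loud and different

rate = 8000
t = np.arange(rate) / rate
playback = 0.5 * np.sin(2 * np.pi * 440 * t)   # device is playing a 440 Hz tone
voice = 0.5 * np.sin(2 * np.pi * 200 * t)      # someone talking nearby
echo = 0.5 * np.sin(2 * np.pi * 440 * t)       # device hearing its own output
```

`should_pause(voice, playback)` triggers a pause; `should_pause(echo, playback)` does not, because the ambient spectrum matches the device's output.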

US Pat. No. 11,070,863

METHOD AND SYSTEM TO SHARE A SNAPSHOT EXTRACTED FROM A VIDEO TRANSMISSION

NAGRAVISION S.A., Chesea...


1. A method of creating one or more snapshots, the method comprising:
receiving, by a first device, a video having a first quality level, the first device including a receiver device;
buffering, by the first device, the video in a buffer, the buffer containing a portion of the video during a predefined time;
converting, by the first device, the video into a lower-quality video having a second quality level, the second quality level being associated with a lower quality than the first quality level;
sending, by the first device, the lower-quality video to a second device;
receiving, by the first device from the second device as the second device presents the lower-quality video, a command to produce a snapshot using the video having the first quality level;
based on the command received by the first device, producing a plurality of downsampled snapshot images from the portion of the video stored in the buffer, each of the plurality of downsampled snapshot images representing a different portion of the video at a different time;
sending the plurality of downsampled snapshot images to a network;
receiving, from the second device, a selection of at least one snapshot image from the plurality of downsampled snapshot images; and
producing, by the first device at least based on the at least one selected snapshot image, a snapshot from the video having the first quality level.

US Pat. No. 11,070,862

SYSTEM AND METHOD FOR DYNAMICALLY PROVIDING PERSONALIZED TELEVISION SHOWS


1. A device comprising:
a processing system including a processor; and
a memory that stores executable instructions that, when executed by the processing system, facilitate performance of operations comprising:
obtaining a user profile for a user of a media service;
selecting, in accordance with the user profile, program components for inclusion in a media program to be presented via the media service, wherein the program components comprise backgrounds, characters, plot elements, spoken dialogue, music, props, product placements, or combinations thereof;
generating the media program from the program components selected, thereby creating a presentable media program that can be presented via the media service;
presenting the presentable media program at equipment of the user;
obtaining sensor data associated with the user, wherein the sensor data comprises real-time biometric data of the user generated by a biometric sensor worn by the user;
determining a user engagement level during presentation of the presentable media program, based on the sensor data; and
adjusting at least one of the program components during presentation of the presentable media program responsive to the user engagement level, thereby generating a modified media program for presentation to the user, the modified media program accordingly not available prior to determination of the user engagement level,
wherein the presentable media program is presented at equipment of a plurality of users, wherein the plurality of users includes the user, and wherein the adjusting is performed responsive to a composite user engagement level for the plurality of users, the modified media program thereby presented at the equipment of each of the plurality of users,
wherein the user profile includes data regarding the user's: personal background, home city, occupation, relationships, likes, dislikes, and tolerance for strong language,
wherein the presentable media program includes a first voice profile of the user, a second voice profile of the user's family or friends, a logo of the user's favorite sports team, or any combination thereof, and
wherein the adjusting of the at least one of the program components is further responsive to an interpretation of a response of the user during the presenting of the presentable media program in accordance with information from a social media feed of the user and a report regarding an occurrence of an event.

US Pat. No. 11,070,861

TECHNIQUES FOR GENERATING PROMOTIONAL PLANS TO INCREASE VIEWERSHIP

Disney Enterprises, Inc.,...


1. A computer-implemented method for scheduling promotionals for different pieces of scheduled content, the method comprising:
receiving, via at least one processor, predicted viewing behavior data;
receiving, via the at least one processor, historical respondent viewership data;
partitioning, via the at least one processor, the predicted viewing behavior data and the historical respondent viewership data according to a plurality of viewing patterns, the viewing patterns segmenting respondents based on at least one of an engagement level of each respondent in watching a scheduled content or a commitment level of each respondent in watching different episodes associated with the scheduled content;
generating, via the at least one processor, a statistical model based on differences between the partitioned predicted viewing behavior data and the partitioned historical respondent viewership data, wherein the statistical model predicts probabilities that respondents of each of the plurality of viewing patterns upon viewing promotionals targeting a first piece of scheduled content and associated with each of a plurality of conversion strategies will view the first piece of scheduled content and uncertainties in each of the probabilities;
generating an optimization objective based on a plurality of historical promotional schedules;
automatically determining a plurality of promotional schedules based on the statistical model, a broadcast schedule, planning objectives, and a first risk tolerance, each of the plurality of promotional schedules comprising a plurality of promotionals and a corresponding point in time when each of the plurality of promotionals should be displayed, each of the plurality of promotionals being associated with one of the plurality of conversion strategies based on the plurality of viewing patterns, wherein automatically determining the plurality of promotional schedules comprises evaluating the statistical model for a first promotional with respect to the optimization objective and the first risk tolerance;
displaying each of the plurality of promotional schedules on a graphical user interface;
receiving a selection of a first promotional schedule from the plurality of promotional schedules via the graphical user interface;
causing each of the promotionals in the first promotional schedule to be displayed at the corresponding point in time; and
subsequent to displaying one or more of the promotionals in the first promotional schedule at the corresponding point in time:
receiving, via the at least one processor, new respondent viewership data associated with the one or more of the promotionals;
determining, via the at least one processor, an error between viewership data predicted by the statistical model and the new respondent viewership data; and
updating, via the at least one processor, the statistical model based on the error.
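The closing error-feedback loop of the claim above (predict, observe, compute the error, update) can be shown with a deliberately simple update rule. This stands in for the patent's statistical model; the viewing-pattern names, rates, and learning rate are all invented for illustration:

```python
def update_model(predicted, observed, learning_rate=0.5):
    # Move each predicted viewing probability toward the newly observed
    # viewership rate, in proportion to the prediction error.
    return {pattern: p + learning_rate * (observed[pattern] - p)
            for pattern, p in predicted.items()}

# Predicted probability that respondents of each viewing pattern convert
# after seeing a promotional, vs. what the new viewership data showed.
predicted = {"casual": 0.10, "engaged": 0.40, "committed": 0.70}
observed = {"casual": 0.20, "engaged": 0.30, "committed": 0.70}

updated = update_model(predicted, observed)
```

Where the observation matches the prediction (the "committed" pattern here) the model is left unchanged; elsewhere it shifts toward the evidence.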


US Pat. No. 11,070,860

CONTENT DELIVERY

Comcast Cable Communicati...


18. A method comprising:
receiving, by a computing device, a request for content for output via a primary device;
determining, by the computing device, based on the request, and via wireless communication with a plurality of wireless devices associated with a plurality of users, an association of the request with the plurality of users;
determining, based on the request and the association of the request with the plurality of users, a time associated with the request;

determining, based on the determined time and content previously output, via the primary device, to the plurality of users at related times, content for output via the primary device; and
sending, to the primary device, the determined content.

US Pat. No. 11,070,859

APPARATUS FOR TRANSMITTING BROADCAST SIGNAL, APPARATUS FOR RECEIVING BROADCAST SIGNAL, METHOD FOR TRANSMITTING BROADCAST SIGNAL, AND METHOD FOR RECEIVING BROADCAST SIGNAL

LG ELECTRONICS INC., Seo...


1. An apparatus for receiving broadcast signals, the apparatus comprising:
a receiver configured to receive a broadcast signal;
a time deinterleaver configured to time deinterleave a Time Interleaving (TI) block in the broadcast signal, the TI block including one or more Forward Error Correction (FEC) blocks,
the broadcast signal includes information for a maximum number related to the one or more FEC blocks and information for a number of the one or more FEC blocks;
a decoder configured to decode the broadcast signal, the decoded broadcast signal including a signal frame including:
one or more components included in a content of a service and content information describing the content, the content information including component information including role information for at least one of an audio component, a video component or a closed caption component of the one or more components,
the role information for the video component including an alternative view; and
a display configured to display information related to the content based on the role information of the one or more components of the content in the decoded broadcast signal.

US Pat. No. 11,070,858

APPARATUS FOR TRANSMITTING BROADCAST SIGNALS, APPARATUS FOR RECEIVING BROADCAST SIGNALS, METHOD FOR TRANSMITTING BROADCAST SIGNALS AND METHOD FOR RECEIVING BROADCAST SIGNALS

LG ELECTRONICS INC., Seo...


7. A broadcast receiver for processing a broadcast signal, the broadcast receiver comprising:
a receiver to receive the broadcast signal including transport packets generated based on a Layered Coding Transport (LCT) format, wherein the transport packets are used to transport at least one delivery object and signaling data; and
a processor to process the transport packets,
wherein the signaling data includes a real-time attribute which is a boolean flag that is set to true when the transport packets carry a real-time content that is comprised of the at least one delivery object, and
wherein a header of at least one transport packet of the transport packets includes codepoint information for indicating a type of a payload that is carried by the at least one transport packet, transport object identifier information for identifying the at least one delivery object, information for identifying a starting position of a fragment of the at least one delivery object carried in the at least one transport packet, and extension information for delivering time information.

US Pat. No. 11,070,857

APPARATUS FOR TRANSMITTING BROADCAST SIGNALS, APPARATUS FOR RECEIVING BROADCAST SIGNALS, METHOD FOR TRANSMITTING BROADCAST SIGNALS AND METHOD FOR RECEIVING BROADCAST SIGNALS

LG ELECTRONICS INC., Seo...


1. A method of transmitting at least one broadcast signal in a digital transmitter, the method comprising:
generating at least one link layer packet based on input data, wherein the input data is transformed into a payload of the at least one link layer packet,
wherein a fixed header of the at least one link layer packet includes a packet type field being 3-bit for representing a type of the input data, further the type of the input data relates to an IP (Internet Protocol) packet being an IPv4 packet or a compressed IP packet,
wherein the fixed header of the at least one link layer packet includes a packet configuration field being 1-bit for indicating a configuration of the payload and a C/S (Concatenation/Segmentation) field for representing that the payload of the at least one link layer packet carries a segment of the IP packet or a plurality of concatenated IP packets,
wherein the at least one link layer packet includes an extended header that includes a segment sequence number field for representing an order of the segment carried in the payload of the at least one link layer packet when the payload of the at least one link layer packet carries the segment of the IP packet,
wherein the at least one link layer packet includes count information for indicating the number of the plurality of the concatenated IP packets when the payload of the at least one link layer packet carries the plurality of the concatenated IP packets; and
transmitting the at least one broadcast signal.
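A minimal sketch of the fixed-header bit fields named in the claim (3-bit packet type, 1-bit payload configuration, and a C/S field). The bit positions chosen here are an assumption for illustration, not the actual link layer layout:

```python
def pack_fixed_header(packet_type: int, payload_config: int, cs: int) -> int:
    # Hypothetical one-byte packing:
    # bits 7-5: 3-bit packet type; bit 4: payload configuration;
    # bits 3-2: C/S (segmentation vs. concatenation); bits 1-0: reserved.
    assert 0 <= packet_type < 8 and payload_config in (0, 1) and 0 <= cs < 4
    return (packet_type << 5) | (payload_config << 4) | (cs << 2)

def unpack_fixed_header(b: int) -> tuple:
    return (b >> 5) & 0x7, (b >> 4) & 0x1, (b >> 2) & 0x3
```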

US Pat. No. 11,070,856

DATA PROCESSING DEVICE AND DATA PROCESSING METHOD

Saturn Licensing LLC, Ne...


1. A data processing device comprising:processing circuitry configured to split an input stream into a split stream for each of a plurality of channels; and
generate a transmission frame based on the split stream of one of the plurality of channels and channel bonding signaling information for the one of the plurality of channels, wherein

the channel bonding signaling information includes signature information that uniquely identifies the input stream, and
the signature information is contained in a header of the transmission frame.

US Pat. No. 11,070,855

APPARATUS AND METHOD FOR CONFIGURING CONTROL MESSAGE IN BROADCASTING SYSTEM

Samsung Electronics Co., ...


1. A method of transmitting a signaling message based on a motion pictures experts group (MPEG) media transport (MMT) protocol by a content providing apparatus, the method comprising:identifying at least one table among a plurality of tables based on type information, the at least one table comprises signaling information required for consuming a content package at a content consuming apparatus;
generating the signaling message comprising a payload, the type information, length information and extension information; and
transmitting the signaling message to the content consuming apparatus,
wherein the type information indicates an identifier of the signaling message,
wherein the length information indicates a length of the signaling message,
wherein the extension information includes information on the at least one table included in the payload,
wherein the payload includes the at least one table,
wherein the information on at least one table comprises table identifier information and table version information, for each of the at least one table included in the payload,
wherein the type information, the length information and the extension information are located before the payload, and
wherein one of the at least one table is a device capability information table (DCIT) including information on device capabilities for consuming the content package.
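The claimed message shape (type, length, and extension located before a payload of tables, with per-table identifier and version carried in the extension) can be sketched as a serializer. The field widths here are assumptions for illustration:

```python
import struct
from dataclasses import dataclass

@dataclass
class Table:
    table_id: int
    version: int
    body: bytes

def build_signaling_message(msg_type: int, tables) -> bytes:
    # Extension lists (table_id, version) for each table in the payload;
    # type, length, and extension precede the payload, as the claim requires.
    extension = b"".join(struct.pack(">BB", t.table_id, t.version) for t in tables)
    payload = b"".join(t.body for t in tables)
    length = len(extension) + len(payload)  # hypothetical: length of what follows
    return struct.pack(">HI", msg_type, length) + extension + payload
```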

US Pat. No. 11,070,854

BROADCASTING SIGNAL TRANSMISSION DEVICE, BROADCASTING SIGNAL RECEPTION DEVICE, BROADCASTING SIGNAL TRANSMISSION METHOD, AND BROADCASTING SIGNAL RECEPTION METHOD

LG ELECTRONICS INC., Seo...


1. A method of processing a broadcast content, the method comprising:receiving a broadcast signal;
demodulating the broadcast signal including time interleaved data, wherein the time interleaved data is interleaved by a Time interleaved (TI) block including one or more Forward Error Correction (FEC) blocks and one or more virtual FEC blocks;
displaying a broadcast content of a broadcast service in the demodulated broadcast signal on a display screen; and
detecting a first user input,
in response to the first user input that is to display an Electronic Service Guide (ESG):
displaying the ESG overlaid on the broadcast content being displayed on the display screen;
determining whether information for controlling execution of an application, that is embedded in an audio component of the broadcast content, is received; and
controlling the execution of the application based on the determination; or
in response to the first user input that indicates a mute mode to mute sound of the broadcast content:
determining whether information for controlling execution of the application, that is embedded in a video component of the broadcast content, is received; and
controlling the execution of the application based on the determination.

US Pat. No. 11,070,853

INTERRUPTING PRESENTATION OF CONTENT DATA TO PRESENT ADDITIONAL CONTENT IN RESPONSE TO REACHING A TIMEPOINT RELATING TO THE CONTENT DATA AND NOTIFYING A SERVER

TiVo Solutions Inc., San...


1. A method, comprising:receiving, at a portable device, a set of timepoints relating to a media asset prior to generating, for display, any portion of the media asset, wherein each timepoint in the set of timepoints indicates a time in the media asset where a server is to be queried for additional content to be inserted into the display of the portion of the media asset;
sending, from the portable device, display signals corresponding to the media asset to a display device;
determining, by the portable device, that a timepoint among the set of timepoints has been reached in the media asset;
in response to determining that the timepoint has been reached in the media asset, querying the server for the additional content;
receiving, at the portable device, an identification of one or more particular additional content;
causing, by the portable device, an interruption of presentation of the media asset;
causing presentation of the received one or more particular additional content; and
after causing presentation of the received one or more particular additional content, causing, by the portable device, presentation of the media asset to resume.

US Pat. No. 11,070,852

METHOD AND SYSTEM FOR REMOTELY CONTROLLING CONSUMER ELECTRONIC DEVICES

Roku, Inc., San Jose, CA...


1. A method comprising:receiving, at a media system, a sequence of media content from a media content distributor over a communication network, the sequence of media content including steganographic data;
outputting, by the media system, the sequence of media content for presentation; and
while outputting the sequence of media content for presentation, (i) detecting, by the media system, the steganographic data in the sequence of media content and (ii) responsive to detecting the steganographic data in the sequence of media content, invoking, by the media system, an application on the media system,
wherein the steganographic data is encrypted, the method further comprising decrypting the steganographic data.

US Pat. No. 11,070,851

SYSTEM AND METHOD FOR PROVIDING IMAGE-BASED VIDEO SERVICE

Enswers Co., Ltd., Seoul...


1. A server comprising:a cache configured to:
store one or more records, wherein each of the one or more records associates a network address with video information, wherein the network address specifies a server that is configured to present a particular image that is related to the video information; and
a subsystem configured to:
receive, from a client device, image-related data that includes a first network address, wherein the first network address specifies a particular server configured to present a first image;
search the cache for a record that matches the first network address;
if a matching record is found, communicate video information associated with the matching record to the client device; and
if a matching record is not found:
communicate a network request to the particular server for the first image;
subsequently receive the first image from the particular server;
extract a fingerprint from the first image;
determine video information associated with the fingerprint; and
communicate the determined video information to the client device.
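The cache-hit/cache-miss flow of this claim can be sketched with the fetch, fingerprint, and resolve steps passed in as callables; writing the miss result back into the cache is a hypothetical addition, not something the claim recites:

```python
def lookup_video_info(cache: dict, address: str, fetch_image, extract_fp, resolve_fp):
    # The cache maps a network address to video information (the claim's records).
    if address in cache:              # matching record found
        return cache[address]
    image = fetch_image(address)      # request the image from its server
    fingerprint = extract_fp(image)   # extract a fingerprint from the image
    info = resolve_fp(fingerprint)    # map the fingerprint to video information
    cache[address] = info             # hypothetical: populate the cache for next time
    return info
```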

US Pat. No. 11,070,850

CACHE EVICTION DURING OFF-PEAK TRANSACTIONS

Tivo Corporation, San Jo...


1. A method, comprising:determining, by a computing device, for a time duration, request activity for content stored in at least one second content cache;
determining, based on the request activity and a high request activity threshold, that at least one time period of request activity satisfies the high request activity threshold for content stored in the at least one second content cache;
determining, based on the at least one time period of request activity satisfying the high request activity threshold, a future first time period associated with an expected increase in request activity for content stored in a first content cache;
determining, based on the future first time period associated with an expected increase in request activity for content stored in the first content cache, to cache content in the first content cache before the future first time period; and
causing first content to be cached in the first content cache before the future first time period.
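One hedged reading of the claimed logic: find the time periods whose request activity satisfies the high-activity threshold, then schedule caching into the first cache before the earliest period expected to repeat that activity. The one-period lead time is an illustrative policy, not something the claim specifies:

```python
def plan_prefetch(activity: dict, threshold: int):
    # activity maps a time period (e.g. hour index) to the observed
    # request count for content in the second content cache.
    peaks = sorted(t for t, count in activity.items() if count >= threshold)
    if not peaks:
        return None
    # Hypothetical policy: warm the first cache one period before the
    # earliest period of expected high request activity.
    return peaks[0] - 1
```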

US Pat. No. 11,070,849

EVENT PRODUCTION AND DISTRIBUTION NETWORKS, SYSTEMS, APPARATUSES, AND METHODS RELATED THERETO


1. An event production system, comprising:a performance server in data communication with one or more video cameras configured to capture video of a performance on a stage, one or more microphones configured to capture audio of the performance on the stage, one or more speakers configured to provide audio to the stage, and an audience display screen displaying content visible from the stage, and comprising a processor and memory storing computer readable instructions that, when executed by the processor, configure the event production system to perform:registering a plurality of users for a first event, wherein each registration is associated with a benefit level selected from a plurality of benefit levels, and wherein each benefit level within the plurality of benefit levels is associated with a different set of benefits;
at an event start time, initiating an event audiovisual stream from the one or more video cameras and one or more microphones for playback to, for each registered user, a device associated with that user;
displaying on the audience display screen a surrogate associated with each of the plurality of users during the event, wherein a first benefit level is associated with a first surrogate size for each user registered at the first benefit level, and wherein a second benefit level is associated with a second surrogate size for each user registered at the second benefit level, and wherein the system is configured to present each surrogate on the audience display screen at a size defined by each user's corresponding benefit level; and
providing an interactive feature through which one or more registered users dynamically interact with a performer providing the performance during the event.


US Pat. No. 11,070,848

METHOD FOR EFFICIENT SIGNALING OF VIRTUAL BOUNDARY FOR LOOP FILTERING CONTROL

TENCENT AMERICA LLC, Pal...


1. A method of coding a video sequence, executable by a processor, comprising:receiving video data containing one or more faces;
selecting one or more virtual boundaries between the one or more faces of the received video data, wherein a distance between virtual boundaries is a number of luma samples that is greater than or equal to a size of a coding tree unit associated with the video sequence; and
disabling in-loop filtering between the one or more selected virtual boundaries from the virtual boundaries.

US Pat. No. 11,070,847

INTRA-PREDICTION WITH FAR NEIGHBORING PIXELS

QUALCOMM Incorporated, S...


1. A method for decoding video data, the method comprising:determining a current coding unit of the video data is coded in an intra prediction mode;
determining samples for intra predicting the current coding unit, wherein the samples comprise a first set of samples that are already reconstructed and belong to a first coding unit that is adjacent to the current coding unit and a second set of samples that are already reconstructed and belong to a second coding unit that is not adjacent to the current coding unit, wherein determining the samples for intra predicting the current coding unit comprises locating, for a first sample to be predicted for the second set of samples, a first available sample on a line starting from the first sample along a direction defined by the intra-prediction mode of the current coding unit, wherein the first available sample does not border the current coding unit; and
generating a predictive block for the current coding unit based on the samples in accordance with the intra prediction mode.

US Pat. No. 11,070,846

MULTI-LAYERED VIDEO STREAMING SYSTEMS AND METHODS

REALNETWORKS, INC., Seat...


1. A method of encoding a video frame of a sequence of video frames to generate a multi-layer transport stream representative of the video frame, the video frame including an array of pixels and the multi-layer transport stream representative of the video frame, the method comprising:dividing the video frame along a plurality of horizontal and vertical axes, thereby defining a plurality of patches, including a first row of patches above a transverse horizontal axis of the video frame, a second row of patches above the first row, a third row of patches below the transverse horizontal axis, and a fourth row of patches below the third row, wherein each corresponding patch in the first row has a first height, each corresponding patch in the second row has a second height that is greater than the first height, each corresponding patch in the third row has a third height, and each corresponding patch in the fourth row has a fourth height that is greater than the third height;
selecting one or more of the plurality of patches for supplemental processing;
generating encoded frame data corresponding to the video frame, the encoded frame data being generated at a first resolution;
generating encoded patch data corresponding to the selected one or more of the plurality of patches, the encoded patch data being generated at a second resolution that is higher than the first resolution;
generating a base layer of the multi-layer transport stream, the base layer including the encoded frame data; and
generating a supplemental layer of the multi-layer transport stream, the supplemental layer including the encoded patch data.

US Pat. No. 11,070,845

SYSTEMS AND METHODS FOR SIGNALING OF MOTION-CONSTRAINED TILE SETS FOR VIRTUAL REALITY APPLICATIONS

SHARP KABUSHIKI KAISHA, ...


1. A method of signaling of a motion-constrained tile set, the method comprising:generating a motion-constrained tile set extraction information message including syntax elements that provide information corresponding to motion-constrained tile set sub-bitstream extraction including:a syntax element specifying the number of replacement picture parameter sets in an extraction information set, and
for each of the syntax elements specifying the number of replacement picture parameter sets in an extraction information set, an instance of a syntax element specifying a temporal identifier of the replacement picture parameter set; and

transmitting the generated message over a communications medium.

US Pat. No. 11,070,844

METHOD AND APPARATUS FOR VIDEO ENCODING AND/OR DECODING TO PREVENT START CODE CONFUSION

TEXAS INSTRUMENTS INCORPO...


1. A method comprising:inserting an emulation prevention byte in slice data;
generating a slice header of the slice data, wherein the inserting the emulation prevention byte in the slice data is performed prior to the generating the slice header;
determining whether the slice header is byte aligned;
when the slice header is not byte aligned, aligning bytes of the slice header;
inserting an emulation prevention byte in the slice header; and
combining the slice header and the slice data.
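The emulation prevention byte in this claim serves the same purpose as the 0x03 byte in H.264/HEVC start-code emulation prevention. A sketch of that standard mechanism, shown here as background for the claim:

```python
def insert_emulation_prevention(rbsp: bytes) -> bytes:
    # Insert 0x03 whenever two zero bytes would otherwise be followed by a
    # byte in 0x00..0x03, so the payload cannot imitate a start code prefix.
    out = bytearray()
    zero_run = 0
    for b in rbsp:
        if zero_run >= 2 and b <= 0x03:
            out.append(0x03)  # emulation prevention byte
            zero_run = 0
        out.append(b)
        zero_run = zero_run + 1 if b == 0 else 0
    return bytes(out)
```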

US Pat. No. 11,070,843

CODING OF LAST SIGNIFICANT COEFFICIENT FLAGS

GOOGLE LLC, Mountain Vie...


1. An apparatus for decoding a transform block, wherein the transform block is decoded using a scan order, the apparatus comprising:a processor configured to:decode, from an encoded bitstream, a first syntax element indicating a group of consecutive scan positions in the scan order, wherein the scan order is a one-dimensional structure that specifies an order of traversal of coefficients of the transform block, and wherein the group of consecutive scan positions includes a scan position of a last non-zero coefficient;
determine an offset within the group of consecutive scan positions of a last non-zero coefficient, wherein the offset indicates a distance from a first scan position of the group of consecutive scan positions; and
decode, from the encoded bitstream, coefficients up to the last non-zero coefficient of the transform block.
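The group-plus-offset scheme reduces to base-plus-offset arithmetic over the scan order. The power-of-two grouping below is a hypothetical example; the claim does not fix how consecutive scan positions are grouped:

```python
def last_nonzero_position(group_index: int, offset: int) -> int:
    # Hypothetical grouping of the one-dimensional scan order: group 0 holds
    # scan position 0; group g >= 1 holds positions 2**(g-1) .. 2**g - 1.
    # The offset is the distance from the group's first scan position.
    group_start = 0 if group_index == 0 else 1 << (group_index - 1)
    return group_start + offset
```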


US Pat. No. 11,070,842

VIDEO DECODING METHOD AND APPARATUS AND VIDEO ENCODING METHOD AND APPARATUS

SAMSUNG ELECTRONICS CO., ...


1. A video decoding method comprising:determining whether to add a motion vector to a motion vector candidate list of a current block;
when it is determined to add the motion vector, determining a base motion vector in the motion vector candidate list;
determining the motion vector candidate list comprising an additional motion vector candidate determined based on the base motion vector and predetermined offset information; and
determining a motion vector of the current block by using the additional motion vector candidate determined in the motion vector candidate list,
wherein, when a prediction mode of the current block is one of a skip affine mode, a merge affine mode, and an inter affine mode,
wherein the motion vector, the base motion vector, and the additional motion vector candidate of the motion vector candidate list are affine-transformed motion vectors, and
wherein the affine-transformed motion vectors are 4 affine parameters.

US Pat. No. 11,070,841

IMAGE PROCESSING APPARATUS AND METHOD FOR CODING SKIP INFORMATION

SONY CORPORATION, Tokyo ...


1. An image processing apparatus, comprising:an encoding section configured to skip, where secondary transform is to be performed for a primary transform coefficient obtained by primary transform of a prediction residual that is a difference between an image and a prediction image of the image, encoding of first information relating to skip of performing the primary transform on the prediction residual,
wherein the encoding section is implemented via at least one processor.

US Pat. No. 11,070,840

BIT DEPTH VARIABLE FOR HIGH PRECISION DATA IN WEIGHTED PREDICTION SYNTAX AND SEMANTICS

ARRIS Enterprises LLC, S...


1. A method for decoding a bitstream, the method comprising:identifying one or more weight flags signaled in the bitstream that indicates presence of weighting factors for at least one of a luma component and/or a chroma component;
determining a first weighting factor for performing weighted prediction for a current unit of a current picture, the first weighting factor for weighting pixels of a first reference unit of a first reference picture when performing motion compensation for the current unit;
determining a second weighting factor for weighting pixels of a second reference unit of a second reference picture when performing motion compensation for the current unit,
wherein when weighting factors for a luma component is present:determining from a signaled delta_luma_weight_l0 syntax element a difference of the first weighting factor and the second weighting factor applied to a luma prediction value for list 0 prediction using a variable RefPicList0[i] for a first luma component, and
deriving a variable LumaWeightL0 associated with the luma component weighting factors, wherein when the one or more weight flags indicates presence of the weighting factor for a luma component, LumaWeightL0 is derived to be equal to (1<<luma_log2_weight_denom)+delta_luma_weight_l0, wherein luma_log2_weight_denom is a base 2 logarithm of a denominator for all luma weighting factors, and BitDepthY is a bit depth for the luma component of the respective reference picture; and

wherein when weighting factors for a chroma component is present:determining from a delta_chroma_weight_l0[i][j] syntax element a difference of the first weighting factor and the second weighting factor applied to a chroma prediction value for list 0 prediction using a variable RefPicList0[i] with j equal to 0 for Cb or j equal to 1 for Cr for a second component; and
deriving a variable ChromaWeightL0 associated with the chroma component weighting factor, wherein when the one or more weight flags indicates presence of the weighting factor for a chroma component, ChromaWeightL0 is derived to be equal to ((1<<(luma_log2_weight_denom+delta_chroma_log2_weight_denom))+delta_chroma_weight_l0), with delta_chroma_weight_l0 in a range of -(1<<(BitDepthC-1)) to ((1<<(BitDepthC-1))-1), wherein delta_chroma_log2_weight_denom is a difference of a base 2 logarithm of a denominator for all chroma weighting factors, and BitDepthC is a bit depth for the chroma component of the respective reference picture;

wherein the delta_chroma_weight_l0[i][j] syntax element is within the range set by the ChromaWeightL0, and
wherein the second component comprises a chroma component of the first reference unit or the second reference unit.
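The luma weight derivation in this claim follows a familiar fixed-point pattern: a default weight of one, scaled by the denominator, plus a signaled delta when an explicit factor is present. A sketch under that reading:

```python
def luma_weight_l0(luma_log2_weight_denom: int, delta_luma_weight_l0: int,
                   weight_flag_present: bool) -> int:
    # Default weight is 1 in fixed point, i.e. (1 << denom); when the weight
    # flag signals an explicit factor, the signaled delta is added to it.
    base = 1 << luma_log2_weight_denom
    return base + delta_luma_weight_l0 if weight_flag_present else base
```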

US Pat. No. 11,070,839

HYBRID VIDEO CODING

FRAUNHOFER-GESELLSCHAFT Z...


1. Hybrid video decoder comprising a computer programmed to, or a microprocessor configured to:predict a reference frame of a video by intra prediction or motion-compensated prediction to obtain a prediction signal of the reference frame;
predict a residual signal of the reference frame, which relates to a prediction error of the prediction signal of the reference frame, by motion-compensated prediction from a reference residual signal of a further reference frame of the video to obtain a residual prediction signal of the reference frame;
entropy decode a final residual signal of the reference frame;
reconstruct the reference frame by summing:the prediction signal of the reference frame;
the residual prediction signal of the reference frame; and
the final residual signal of the reference frame,

predict a predetermined frame of the video by intra prediction or motion-compensated prediction to obtain a prediction signal of the predetermined frame;
predict a residual signal of the predetermined frame, which relates to a prediction error of the prediction signal of the predetermined frame, by motion-compensated prediction from a reference residual signal of the reference frame to obtain a residual prediction signal of the predetermined frame;
entropy decode a final residual signal of the predetermined frame; and
reconstruct the predetermined frame by summing:the prediction signal of the predetermined frame;
the residual prediction signal of the predetermined frame; and
the final residual signal of the predetermined frame.
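Both reconstruction steps in this claim reduce to the same element-wise sum of three decoded signals; a sketch over flat sample lists:

```python
def reconstruct(prediction, residual_prediction, final_residual):
    # The decoded frame is the element-wise sum of the three signals the
    # claim enumerates: prediction, predicted residual, and final residual.
    return [p + rp + fr for p, rp, fr in
            zip(prediction, residual_prediction, final_residual)]
```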


US Pat. No. 11,070,838

MOTION COMPENSATION AT A FINER PRECISION THAN MOTION VECTOR DIFFERENTIAL

InterDigital VC Holdings,...


1. A method, comprising:obtaining an additional precision value for a motion vector, having an initial precision, for a block of video data from at least one neighboring block previously encoded;
refining the initial precision of the motion vector for the block of video data by assigning the additional precision value to the initial precision of the motion vector for the block of video data;
refining a plurality of motion vectors for a first row and/or column of sub-blocks of the block of video data using the refined motion vector for the block of video data and template matching;
refining a plurality of motion vectors for a second row and/or column of sub-blocks of the block of video data using an average of spatial neighboring motion vectors and the refined motion vector for the block of video data;
performing motion compensation for the block of video data by using the refined motion vectors of the sub-blocks; and
encoding said motion compensated block of video data.

US Pat. No. 11,070,837

ENCODING METHOD, DECODING METHOD, ENCODER, AND DECODER

PANASONIC INTELLECTUAL PR...


1. A decoder comprising:circuitry; and
memory,
wherein, using the memory, the circuitry:
decodes data corresponding to a current block of an image;
inverse quantizes the data;
performs, depending on a size of the current block, either (i) an inverse secondary transform on a part of a result of the inverse quantization and an inverse primary transform on a result of the inverse secondary transform and another part of the result of the inverse quantization, or (ii) the inverse secondary transform on the result of the inverse quantization and the inverse primary transform on the result of the inverse secondary transform; and
derives the image based on a prediction error derived from a result of the inverse primary transform.

US Pat. No. 11,070,835

CONDITIONS FOR UPDATING LUTS

BEIJING BYTEDANCE NETWORK...


1. A method of coding video data, comprising:maintaining one or multiple tables, wherein each table includes one or more motion candidates derived from one or more video blocks that have been coded, and arrangement of the motion candidates in the each table is based on a sequence of addition of the motion candidates into the table;
constructing a motion candidate list for a current video block;
determining motion information of the current video block using the motion candidate list; and
coding the current video block based on the determined motion information;
wherein whether the determined motion information is used to update a table of the one or multiple tables is based on coded information of the current video block, wherein the table, after updating, is used to construct a motion candidate list of a subsequent video block, and
wherein using the table to construct the motion candidate list comprises checking at least one motion candidate of the table to determine whether to add the checked motion candidate from the table to the motion candidate list of the subsequent video block, and the coded information comprises a coding mode of the current video block.

US Pat. No. 11,070,834

LOW-COMPLEXITY METHOD FOR GENERATING SYNTHETIC REFERENCE FRAMES IN VIDEO CODING

CISCO TECHNOLOGY, INC., ...


1. A method comprising:obtaining at least a first reference frame and a second reference frame of a video signal;
generating a synthetic reference frame from the first reference frame and the second reference frame, wherein a temporal position of the synthetic reference frame with respect to the first reference frame and the second reference frame is determined by integer weights that are proportional to a distance between the synthetic reference frame and the first reference frame and the second reference frame, respectively, such that a temporally nearer reference frame has a larger weight, by:dividing the synthetic reference frame into a plurality of blocks, and further dividing the plurality of blocks into a plurality of sub-blocks;
searching for motion vectors in the first reference frame and the second reference frame for each of the plurality of blocks in the synthetic reference frame, wherein the searching comprises:determining that motion estimation cannot be bypassed;
in response to determining that the motion estimation cannot be bypassed, for each of the plurality of blocks in raster order, determining candidate motion vectors from lower layer blocks and from neighbor blocks; and
identifying, for each sub-block, a primary motion vector in a farther one of the first reference frame and the second reference frame;

deriving motion vector information for each of the plurality of blocks in the synthetic reference frame from motion vectors identified in each of the first reference frame and the second reference frame, including deriving a secondary motion vector in a nearer one of the first reference frame and the second reference frame from the primary motion vector by scaling the primary motion vector to an appropriate scale, wherein the primary motion vector is determined to pixel accuracy and the secondary motion vector is rounded to achieve a same level of accuracy as the primary motion vector;
identifying reference blocks in each of the first reference frame and the second reference frame using the motion vector information for each of the plurality of blocks in the synthetic reference frame; and
combining the reference blocks from each of the first reference frame and the second reference frame to derive an interpolated block of the synthetic reference frame; and

performing the motion estimation using the synthetic reference frame.

US Pat. No. 11,070,833

METHOD AND SYSTEM FOR ENCODING VIDEO WITH OVERLAY

Axis AB, Lund (SE)


1. A method of encoding video data performed in a camera, comprising:receiving an image sequence comprising a first not video encoded input image frame and a second not video encoded input image frame,
receiving an overlay to be applied to the image sequence, the overlay comprising a picture element and spatial coordinates for positioning the picture element in the first and second input image frames,
adding the picture element to the first and second input image frames in accordance with the spatial coordinates, thereby generating an overlaid image sequence comprising a first generated image frame and a second generated image frame,
encoding a video stream containing output image frames without overlay and corresponding output image frames with overlay, wherein:
the first input image frame is encoded as an intra-frame to form a first output image frame,
the second input image frame is encoded as an inter-frame with reference to the first output image frame to form a second output image frame,
the first generated image frame is encoded as an inter-frame with reference to the first output image frame to form a first overlaid output image frame,
the second generated image frame is encoded as an inter-frame to form a second overlaid output image frame, wherein a first part of the second generated image frame is encoded with reference to the first overlaid output image frame, and a second part of the second generated image frame is encoded with reference to the second output image frame,
whereby video data covered by the overlay in the overlaid output image frames is accessible in the output image frames without overlay.

US Pat. No. 11,070,832

SYNTAX AND SEMANTICS FOR BUFFERING INFORMATION TO SIMPLIFY VIDEO SPLICING

Microsoft Technology Lice...


1. One or more computer-readable memory or storage devices having stored thereon encoded video in a bitstream for at least part of a video sequence, the encoded video having been produced by operations comprising:setting a coded picture buffer removal delay (“CPBRD”) delta value for a given access unit for a current picture of the video sequence, the current picture having a buffering period SEI message associated with the current picture;
setting a value of a flag for the given access unit, wherein:if the value of the flag is a first value, a CPBRD value in a picture timing SEI message for the given access unit indicates an increment value specifying a nominal coded picture buffer (“CPB”) removal time of the current picture relative to a nominal CPB removal time of a first preceding picture in decoding order, the first preceding picture having a buffering period SEI message associated with the first preceding picture; and
if the value of the flag is a second value, the nominal CPB removal time of the current picture is indicated, by the CPBRD delta value, as an increment value relative to a nominal CPB removal time of a second preceding picture in decoding order; and

signaling, as part of the bitstream, the CPBRD delta value and the value of the flag for the given access unit in the buffering period SEI message associated with the current picture.

US Pat. No. 11,070,831

METHOD AND DEVICE FOR PROCESSING VIDEO SIGNAL

LG ELECTRONICS INC., Seo...


1. A method of decoding a bitstream for a video signal by a decoding apparatus, the method comprising:obtaining a first flag information from the bitstream, the first flag information indicating whether an intra linear interpolation prediction is performed for a current block;
skipping parsing of a second flag information indicating whether an intra prediction mode of the current block is derived from a neighboring block of the current block and obtaining a first index information from the bitstream based on the first flag information indicating that the intra linear interpolation prediction is performed for the current block, or
parsing the second flag information based on the first flag information indicating that the intra linear interpolation prediction is not performed for the current block;
constructing a candidate mode list based on an intra prediction mode of the neighboring block of the current block;
determining a candidate mode indicated by the first index information in the candidate mode list as the intra prediction mode of the current block; and
generating a predictor for the current block by performing an intra LIP based on the determined intra prediction mode.
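The flag-driven parsing order in this claim can be sketched as follows; the reader callbacks are placeholders (the claim does not name the bitstream-reading functions), and the mapping of flag values to parsing branches is as the claim describes:

```python
def parse_lip_signaling(read_flag, read_index):
    """Sketch of the claimed parsing order: when the first flag indicates
    linear interpolation prediction (LIP) is used, parsing of the second
    flag is skipped and an index into the candidate mode list is read;
    otherwise the second flag is parsed."""
    lip_flag = read_flag()            # first flag: is LIP performed?
    if lip_flag:
        mpm_flag = None               # parsing of the second flag is skipped
        index = read_index()          # first index into the candidate mode list
    else:
        mpm_flag = read_flag()        # second flag: mode derived from neighbor?
        index = None
    return lip_flag, mpm_flag, index
```

The returned tuple mirrors the three signaled quantities; a real decoder would go on to build the candidate mode list and select the mode the index points at.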

US Pat. No. 11,070,830

CODING AND DECODING METHOD WITH COLOR CONVERSION AND CORRESPONDING DEVICES

INTERDIGITAL MADISON PATE...


1. A decoding method comprising:decoding one standard dynamic range luminance component L′ and two standard dynamic range chrominance components U′ and V′ from a bitstream produced by an encoder;
color converting the standard dynamic range luminance component L′ into L and the two decoded standard dynamic range chrominance components U′ and V′ into Ur and Vr, respectively, as follows:L=L′+max(0, aU′+bV′), Ur=β(L)*U′, Vr=β(L)*V′

where a and b are constants and where β(L) is a parameter that depends on L;
applying independently:a dynamic expansion function on the color converted standard dynamic range luminance component L to obtain a high dynamic range luminance component Y, wherein the dynamic expansion function is an inverse of a dynamic reduction function applied to a high dynamic range luminance component on the encoder's side; and
a color transfer operation on the color converted standard dynamic range chrominance components Ur and Vr to obtain a first intermediate high dynamic range chrominance component and a second intermediate high dynamic range chrominance component in an intermediate color space; and

applying a color conversion matrix on the high dynamic range luminance component Y, the first intermediate high dynamic range chrominance component and the second intermediate high dynamic range chrominance component to obtain a high dynamic range picture in RGB color space.
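The claimed color-conversion formulas can be sketched directly; the constants a and b and the function β(L) are placeholders here (the claim leaves their actual values to the encoder side), and the sample values below are purely illustrative:

```python
def color_convert(L_prime, U_prime, V_prime, a, b, beta):
    """Convert decoded SDR components (L', U', V') to (L, Ur, Vr)
    per the claimed formulas:
        L  = L' + max(0, a*U' + b*V')
        Ur = beta(L) * U'
        Vr = beta(L) * V'
    a, b, and beta are illustrative placeholders."""
    L = L_prime + max(0.0, a * U_prime + b * V_prime)
    return L, beta(L) * U_prime, beta(L) * V_prime

# Illustrative call with a made-up beta(L):
L, Ur, Vr = color_convert(0.5, 0.1, -0.2, a=0.25, b=0.25, beta=lambda L: 1.0 + L)
```

The dynamic expansion of L and the color transfer on Ur, Vr then proceed independently, as the claim recites.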

US Pat. No. 11,070,829

REDUCING LATENCY IN WIRELESS VIRTUAL AND AUGMENTED REALITY SYSTEMS

ATI Technologies ULC, Ma...


1. A system comprising:a rendering unit configured to:render a first slice of a frame, wherein the first slice is a portion of the frame; and
render a second slice of a frame; and

an encoder configured to encode a first slice of a frame to generate an encoded version of the first slice, wherein the encoded version of the first slice is generated in parallel with the rendering unit rendering the second slice;
wherein the rendering unit is configured to generate a first indication to notify the encoder that a particular slice is being conveyed to the encoder, and the encoder is configured to generate a second indication to notify a decoder that an encoded version of a particular slice is being conveyed to the decoder.

US Pat. No. 11,070,828

METHOD FOR DECODING DATA, DATA DECODING DEVICE, AND METHOD FOR TRANSMITTING DATA

SUN PATENT TRUST, New Yo...


1. A playback method comprising:receiving a first packet group transmitted using a specified first transmission channel and a second packet group transmitted using a specified second transmission channel,
wherein in each of packets of the first packet group and the second packet group, encoded data is stored on a predetermined unit basis, the encoded data being generated by encoding a video signal,
wherein the encoded data is divided into a plurality of MPUs (Media Processing Units), wherein each of the plurality of MPUs is formed of a plurality of access units to be played back during a predetermined playback period,
wherein in each of the packets, the encoded data is stored on an access unit basis or on a unit basis, the unit being one of a plurality of units into which an access unit is divided, wherein in each of packets included in the first packet group, sequence number information is stored, the sequence number information indicating a sequence number for identifying an MPU to which access units stored in the packet belong, wherein in each of packets included in the second packet group, sequence number information is stored, the sequence number information indicating a sequence number for identifying an MPU to which access units stored in the packet belong, and
wherein in a packet included in the first packet group and a packet included in the second packet group, a same value is stored as the sequence number information, the packets storing access units to be played back during a same playback period;

receiving a control information packet to be transmitted using the first transmission channel or the second transmission channel, the control information packet including time information used to derive decoding times of the access units;deriving a decoding time for each of the access units using the time information; and
when a video signal to be played back during a certain playback period is decoded, decoding access units used for the playback in order of the decoding times from among a plurality of access units transmitted by a packet of the first packet group and a packet of the second packet group, the packets having a same sequence number associated with the playback period, each packet includes an ID indicating an asset,
wherein the playback method further comprises sorting each of the packets for each asset using an ID by storing each of the packets in buffers, each of the buffers being associated with a different one of the assets, and wherein the playback method further comprises
obtaining flag information indicating whether rearranging for each of the packets using the sequence number is required;
performing the rearranging of each of the packets when the flag information indicates the rearranging is required; and
avoiding performing the rearranging of each of the packets when the flag information indicates the rearranging is not required.
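The per-asset sorting and flag-controlled rearranging at the end of this claim can be sketched as follows; the packet fields ("asset_id", "seq") are hypothetical names for the asset ID and sequence number the claim describes:

```python
from collections import defaultdict

def sort_packets(packets, rearrange_flag):
    """Sketch of the claimed sorting: packets (dicts with hypothetical
    'asset_id' and 'seq' keys) are binned into per-asset buffers using
    the asset ID; each buffer is reordered by sequence number only when
    the flag information indicates rearranging is required."""
    buffers = defaultdict(list)
    for p in packets:
        buffers[p["asset_id"]].append(p)   # one buffer per asset
    if rearrange_flag:
        for buf in buffers.values():
            buf.sort(key=lambda p: p["seq"])
    return dict(buffers)
```

When the flag is not set, packets stay in arrival order, matching the claim's "avoiding performing the rearranging" branch.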

US Pat. No. 11,070,827

TRANSMISSION APPARATUS, TRANSMISSION METHOD, AND PROGRAM

SONY CORPORATION, Tokyo ...


1. A transmission apparatus comprising:circuitry configured to:generate a map of statistical values based on an actual transfer rate of a moving image for each transfer path of a plurality of transfer paths,
calculate prediction information including a predicted transfer rate and a predicted latency for each transfer path based on the generated map,
select a transfer path among the plurality of transfer paths to use for transfer of the moving image based on the calculated prediction information, and
switch a reference layer for encoding of an enhancement layer of the moving image to an interlayer of the moving image according to the predicted transfer rate.


US Pat. No. 11,070,826

CONSTRAINED POSITION DEPENDENT INTRA PREDICTION COMBINATION (PDPC)

ARRIS Enterprises LLC, S...


1. A method for coding a video bitstream, the method comprising:receiving the video bitstream including at least one coding block;
generating a first level predictor for a pixel within said at least one coding block based on neighboring pixels and an intra prediction mode selected from sixty-seven intra prediction modes during encoding of said pixel;
determining if a second level prediction mode was combined with the at least one of the sixty-seven intra prediction modes selected during encoding of the pixel in the at least one coding block;
if the second level prediction mode was combined with the at least one of the sixty-seven intra prediction modes selected during encoding, deriving the pixel in the at least one coding block using the first level predictor generated based on a selected at least one intra prediction mode and a second level prediction process based on the second level prediction mode, else deriving the pixel using the generated first level predictor;
decoding the at least one coding block based on the derived pixel values within the coding block wherein the second level prediction mode is a position dependent intra prediction combination (PDPC) mode that combines the second level prediction process generated using the PDPC mode with the first level predictor;
wherein determining if the second level intra prediction mode was combined with the at least one of the intra prediction modes is determined based on a list of first level prediction modes for which the second level intra prediction mode is enabled and/or disabled;
wherein the list identifies constrained first level prediction modes, wherein constrained first level prediction modes are excluded from combination with the second level intra prediction mode;
wherein planar mode and DC mode are first level intra prediction modes included in the list of constrained modes and a position dependent intra prediction combination (PDPC) mode is the second level prediction mode, the list thereby excluding combination of either the planar mode or the DC mode with the PDPC mode;
wherein the position dependent intra prediction combination (PDPC) mode is further constrained for decoding I slices and excluded for P and/or B slices;
and wherein when the list indicates that a position dependent intra prediction combination (PDPC) mode is disabled for the selected at least one of the intra prediction modes used to generate the first level predictor, only the first level predictor is used for intra prediction of the pixel.

US Pat. No. 11,070,825

METHOD AND APPARATUS OF ENCODING OR DECODING VIDEO DATA WITH ADAPTIVE COLOUR TRANSFORM

MEDIATEK INC., Hsinchu (...


1. A method of processing video data in a video coding system, comprising:receiving input data associated with a current block in a current picture;
determining if a luma component and chroma components of the current block are coded using different splitting trees;
determining to disable Adaptive Colour Transform (ACT) on the current block when the luma component and the chroma components of the current block are coded using different splitting trees, wherein a colour space of the current block is converted to another colour space when ACT is performed on the current block; and
encoding or decoding the current block.

US Pat. No. 11,070,824

IMAGE PROCESSING DEVICE AND METHOD

SONY CORPORATION, Tokyo ...


1. An image processing device comprising:circuitry configured to:predict a pixel value of a chroma component of encoded data by linear prediction from a pixel value of a luma component of the encoded data at a pixel location of the luma component that is determined using a filter selected on a basis of information regarding a pixel location of the chroma component and information regarding a color format of the encoded data;
generate a prediction image of the chroma component based on the predicted pixel value of the chroma component; and
decode the chroma component of the encoded data using the prediction image.


US Pat. No. 11,070,823

DC COEFFICIENT SIGNALING AT SMALL QUANTIZATION STEP SIZES

Microsoft Technology Lice...


1. In a computer system that implements a video decoder, a method comprising:receiving encoded data in a bit stream for at least part of a video sequence, wherein the encoded data includes, for a block of a picture of the video sequence:a first code that indicates a value for a DC differential for a DC coefficient of the block of the picture, wherein the DC differential represents a difference between the DC coefficient and a DC predictor; and
a second code that indicates a refinement of the value for the DC differential; and

decoding the encoded data, including, for the block of the picture of the video sequence:decoding the first code to determine the value for the DC differential;
decoding the second code to determine the refinement of the value for the DC differential; and
reconstructing the DC differential using the value for the DC differential and the refinement, including adding the refinement to a multiple of the value for the DC differential, wherein the multiple of the value for the DC differential is determined by multiplying the value for the DC differential by a factor that depends on a quantization step size.
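The reconstruction step in this claim is simple arithmetic and can be sketched as follows; the factor table mapping quantization step sizes to multipliers is hypothetical (the claim only states that the factor depends on the quantization step size):

```python
# Hypothetical factor table keyed by quantization step size; the claim
# only says the multiplier depends on the step size, not what it is.
FACTOR_BY_QSTEP = {1: 4, 2: 2}

def reconstruct_dc_differential(value, refinement, qstep):
    """Reconstruct the DC differential as claimed: the refinement is
    added to a multiple of the value, where the multiple is the value
    times a factor that depends on the quantization step size."""
    factor = FACTOR_BY_QSTEP[qstep]
    return value * factor + refinement
```

The reconstructed differential would then be added to the DC predictor to recover the DC coefficient.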


US Pat. No. 11,070,822

CODING OF SIGNIFICANCE MAPS AND TRANSFORM COEFFICIENT BLOCKS

GE Video Compression, LLC...


1. An apparatus for decoding a transform coefficient block, the apparatus comprising:a decoder configured to extract, from a data stream,a significance map that indicates positions of significant transform coefficients within the transform coefficient block, and
the significant transform coefficients within the transform coefficient block; and

an associator configured to associate the significant transform coefficients with their respective positions in the transform coefficient block, wherein a plurality of sub-blocks of the transform coefficient block are scanned in a sub-block scan order, and transform coefficients within each of the plurality of sub-blocks are scanned in a position sub-scan order, which includes scanning from a position of the respective sub-block corresponding to a highest frequency in both vertical and horizontal directions to a position of the respective sub-block corresponding to a lowest frequency in both the vertical and horizontal directions, and each sub-block includes information about a plurality of pixels,
wherein the decoder is configured to use, in extracting the significant transform coefficients, context-adaptive entropy decoding such that each of the plurality of sub-blocks is entropy decoded using one or more contexts determined for that sub-block separately from the other of the plurality of sub-blocks, and
wherein, for entropy decoding the significant transform coefficients of a sub-block of the plurality of sub-blocks, a selected set of contexts from a plurality of sets of contexts is determined based on a specific value related to one or more of the significant transform coefficients of a previously traversed sub-block of the transform coefficient block and the sub-block scan order which is selected from a plurality of scan orders including a diagonal scan order.

US Pat. No. 11,070,821

SIDE INFORMATION SIGNALING FOR INTER PREDICTION WITH GEOMETRIC PARTITIONING

BEIJING BYTEDANCE NETWORK...


1. A method of coding video data, comprising:determining, for a conversion between a current block of a video and a bitstream of the video, whether the current block is coded with a geometric partitioning mode based on a size condition, wherein the geometric partitioning mode is disabled in case that the current block satisfies the size condition; and
performing the conversion based on the determining;
wherein in case that the current block is coded with the geometric partitioning mode, the bitstream includes multiple syntax elements among which one syntax element indicating a splitting pattern of the geometric partitioning mode for the current block and other syntax elements indicating multiple merge indices for the current block.

US Pat. No. 11,070,820

CONDITION DEPENDENT INTER PREDICTION WITH GEOMETRIC PARTITIONING

BEIJING BYTEDANCE NETWORK...


1. A method of coding video data, comprising:determining, for a conversion between a current block of a video and a bitstream of the video, whether use of a geometric partitioning mode is enabled for the current block based on a rule that uses a characteristic of the current block, wherein the current block is a coding block within a coding unit; and
performing the conversion according to the determining;
wherein the rule specifies that the geometric partitioning mode for the current block is disabled in case that the current block has a height-width ratio or a width-height ratio greater than a first threshold, where the first threshold is an integer;
wherein the rule further specifies that the geometric partitioning mode for the current block is disabled in case that the current block has a specific size in width and/or height.
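The claimed enabling rule can be sketched as a small predicate; the threshold value and the set of disallowed block sizes are placeholders (the claim only states that the threshold is an integer and that certain specific sizes are excluded):

```python
def gpm_enabled(width, height, ratio_threshold, disallowed_sizes):
    """Sketch of the claimed rule: geometric partitioning mode is
    disabled when the width/height or height/width ratio exceeds the
    threshold, or when the block has one of a set of specific sizes.
    ratio_threshold and disallowed_sizes are illustrative placeholders."""
    if width / height > ratio_threshold or height / width > ratio_threshold:
        return False                       # aspect ratio too extreme
    if (width, height) in disallowed_sizes:
        return False                       # specifically excluded size
    return True
```

For example, with a threshold of 4, a 64x8 block (ratio 8) would be disallowed while a 16x8 block (ratio 2) would not.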

US Pat. No. 11,070,819

METHOD AND APPARATUS OF HEVC DE-BLOCKING FILTER

TEXAS INSTRUMENTS INCORPO...


1. A de-blocking filter comprising:a boundary strength circuit configured to estimate a boundary strength index at vertical edges and at horizontal edges of a current block of a plurality of blocks, wherein the vertical edges comprise vertical pixel edges and horizontal edges comprise horizontal pixel edges;
a de-blocking filter engine comprising multiple cores, the de-blocking filter coupled to the boundary strength circuit and configured to:filter the vertical pixel edges of the vertical edges based on the boundary strength index in parallel;
filter the horizontal pixel edges of the horizontal edges of the current block based on the boundary strength index in parallel; and
filter a set of sub-blocks in a processing block and not filter remaining sub-blocks in the processing block, the processing block based on a collation of the current block and a set of sub-blocks of blocks adjacent to the current block.


US Pat. No. 11,070,818

DECODING A BLOCK OF VIDEO SAMPLES

Telefonaktiebolaget LM Er...


1. A method of decoding a block of video samples having non-zero coefficients from a video bitstream, the method being performed by a video decoder and comprising:decoding a Delta Quantization Parameter Resolution Parameter, DQPRP, value from the video bitstream;
deriving a Predicted Quantization Parameter, PQP, value for a block having a non-zero residual;
decoding a Decoded Block Delta Quantization Parameter, DDQP, value for the block;
deriving a Scaled Block Delta Quantization Parameter, SDQP, value for the block from the DDQP value and the DQPRP value;
deriving a block Quantization Parameter, QP, value for the block from the SDQP value and the PQP value;
decoding at least one non-zero coefficient of the block from the video bitstream;
deriving values of a residual block for the block by first scaling the at least one non-zero coefficient using the QP value and thereafter applying an inverse transform;
deriving values of a prediction block for the block by spatial or temporal prediction; and
deriving decoded sample values for the block by adding the values of the residual block and the values of the prediction block.
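The QP derivation chain in this claim can be sketched as follows; note the claim only says the SDQP is derived "from the DDQP value and the DQPRP value", so the multiplicative combination below is an assumption:

```python
def derive_block_qp(pqp, ddqp, dqprp):
    """Sketch of the claimed QP derivation chain:
      SDQP = DDQP scaled by DQPRP  (multiplication is an assumption;
                                    the claim does not fix the operation)
      QP   = PQP + SDQP            (block QP from the scaled delta and
                                    the predicted QP)"""
    sdqp = ddqp * dqprp   # Scaled Block Delta Quantization Parameter
    return pqp + sdqp     # block Quantization Parameter
```

The resulting QP is then used to scale the decoded non-zero coefficients before the inverse transform, as the claim recites.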

US Pat. No. 11,070,817

VIDEO ENCODING METHOD, COMPUTER DEVICE, AND STORAGE MEDIUM FOR DETERMINING SKIP STATUS

TENCENT TECHNOLOGY (SHENZ...


1. A video encoding method, comprising:obtaining, by processing circuitry, a plurality of subunits by performing preset division on a current coding unit;
determining, by the processing circuitry, a skip status of an intra-frame prediction mode corresponding to the current coding unit according to differences between the plurality of subunits of the current coding unit, the skip status indicating whether the intra-frame prediction mode is skipped for the current coding unit; and
in a case that the skip status of the intra-frame prediction mode corresponding to the current coding unit is determined to be skipped according to the differences between the plurality of subunits,
skipping, by the processing circuitry, execution of the intra-frame prediction mode for the current coding unit, and encoding the current coding unit according to an inter-frame prediction mode.

US Pat. No. 11,070,816

CONVERSION OF DECODED BLOCK VECTOR FOR INTRA PICTURE BLOCK COMPENSATION

TENCENT AMERICA LLC, Pal...


1. A method of video decoding, comprising:receiving a coded video bitstream including a current picture;
determining whether a current block in a current coding tree unit (CTU) included in the current picture is coded in intra block copy (IBC) mode based on a flag included in the coded video bitstream; and
in response to the current block being determined as coded in IBC mode,determining a block vector that points to a first reference block of the current block;
performing a modification operation on the block vector when the first reference block is not valid for the current block, the block vector is modified by the modification operation to point to a second reference block that is valid for the current block; and
decoding the current block based on the modified block vector.


US Pat. No. 11,070,815

METHOD AND APPARATUS OF INTRA-INTER PREDICTION MODE FOR VIDEO CODING

MEDIATEK INC., Hsinchu (...


4. A method of video coding performed by a video encoder or a video decoder, the method comprising:receiving input data associated with a current block in a current picture at an encoder side or receiving a video bitstream including compressed data of the current block at a decoder side;
disabling an Intra-Inter prediction mode for the current block if a size of the current block meets one of conditions including (i) being greater than a maximum block size and (ii) being smaller than a minimum block size;
in a case that the Intra-Inter prediction mode is not disabled, determining whether the Intra-Inter prediction mode is selected for the current block;
when the Intra-Inter prediction mode is determined as selected for the current block:deriving an Intra predictor from Intra reference pixels, the Intra reference pixels being in the current picture located above a top boundary of the current block or on a left side of a left boundary of the current block, and the Intra reference pixels being coded prior to the current block;
deriving an Inter predictor comprising Inter reference pixels located in a reference block in a reference picture, the reference block being coded prior to the current block;
generating an Intra-Inter predictor by blending the Intra predictor and the Inter predictor; and
encoding or decoding the current block using the Intra-Inter predictor; and

when the Intra-Inter prediction mode is determined as disabled or not selected for the current block:deriving either the Intra predictor or the Inter predictor; and
encoding or decoding the current block using the derived Intra predictor or the derived Inter predictor.


US Pat. No. 11,070,814

APPARATUS AND METHOD FOR VIDEO IMAGE ENCODING AND VIDEO IMAGE DECODING

FUJITSU LIMITED, Kawasak...


16. A non-transitory computer-readable medium storing a video image decoding program causing a computer to perform a process comprising:extracting a prediction error of a block to be decoded in a picture to be decoded and information indicating a candidate of a prediction vector in a list of candidates of the prediction vector of an in-picture motion vector indicating a spatial movement amount between the block to be decoded in the picture to be decoded and a decoded reference block in the picture to be decoded;
generating a list of candidates of a prediction vector of the in-picture motion vector based on at least one ofa first in-picture motion vector calculated in a first reference available region including a plurality of blocks including a block which is decoded before the block to be decoded is decoded in the picture to be decoded and which is not adjacent to the block to be decoded, and
a second in-picture motion vector calculated in a second reference available region including a plurality of blocks including a block in a position corresponding to the block to be decoded in a reference picture decoded before the picture to be decoded is decoded;

specifying a prediction vector based on the list and information indicating candidates of the prediction vector;
decoding the in-picture motion vector based on the specified prediction vector; and
decoding the block to be decoded based on the reference block specified based on the decoded in-picture motion vector and the prediction error.

US Pat. No. 11,070,813

GLOBAL MOTION ESTIMATION AND MODELING FOR ACCURATE GLOBAL MOTION COMPENSATION FOR EFFICIENT VIDEO PROCESSING OR CODING

Intel Corporation, Santa...


1. A system to perform efficient motion based video processing using global motion, comprising:a global motion analyzer, the global motion analyzer including one or more substrates and logic coupled to the one or more substrates, wherein the logic is to:obtain a plurality of block motion vectors for a plurality of blocks of a current frame with respect to a reference frame;
modify the plurality of block motion vectors, wherein the modification of the plurality of block motion vectors includes one or more of the following operations: smoothing of one or more of the plurality of block motion vectors, merging of one or more of the plurality of block motion vectors, and discarding of one or more of the plurality of block motion vectors;
restrict the modified plurality of block motion vectors by excluding a portion of the frame;
compute a plurality of candidate global motion models based on the restricted-modified plurality of block motion vectors for the current frame with respect to the reference frame, wherein each candidate global motion model comprises a set of candidate global motion model parameters representing global motion of the current frame, wherein the computation of the plurality of candidate global motion models further comprises operations to:choose a global motion region and a local motion region segmentation for selection of a valid region for choosing candidate motion vectors for global motion model computation, wherein the global motion region includes an area of blocks whose motion corresponds to globally moving objects, wherein the local motion region includes an area of blocks whose motion corresponds to locally moving objects, and
choose a set of global motion models in a first mode selected from among four parameter models, six parameter models, and eight parameter models as well as in a second mode selected from among six parameter models, eight parameter models, and twelve parameter models, wherein the first mode is selected for first definition scene sequences and the second mode is selected for second definition scene sequences wherein the first definition scene sequences and the second definition scene sequences have different levels of resolution;

determine a final global motion model from the computed plurality of candidate global motion models on a frame-by-frame basis, wherein each final global motion model comprises a set of final global motion model parameters representing global motion of the current frame;
modify a precision of the final global motion model parameters in response to one or more application parameters, wherein the application parameters, include one or more of the following application parameter types: coding bit-rate, resolution, and required visual quality, wherein the modification of the precision of the final global motion model parameters includes assigning a different accuracy to one or more model parameter of the final global motion model parameters;
map the modified-precision final global motion model parameters to a pixel-based coordinate system to determine a plurality of mapped global motion warping vectors for a plurality of reference frame control-grid points;
predict and encode the plurality of mapped global motion warping vectors for the current frame with respect to a plurality of previous mapped global motion warping vectors;
determine a final sub-pel filter to use for interpolation at a 1/8th pel location or a 1/16th pel location from among two or more sub-pel filter choices per frame;
apply the plurality of mapped global motion warping vectors at sub-pel locations to the reference frame and perform interpolation of pixels based on the determined final sub-pel filter to generate a global motion compensated warped reference frame; and

a power supply to provide power to the global motion analyzer.

US Pat. No. 11,070,812

COEFFICIENT DOMAIN BLOCK DIFFERENTIAL PULSE-CODE MODULATION IN VIDEO CODING

QUALCOMM Incorporated, S...


1. A method of decoding video data, the method comprising:determining, based on syntax elements in a bitstream that comprises an encoded representation of the video data, residual quantized samples of a block of the video data;
determining quantized residual values based on the residual quantized samples, wherein determining the quantized residual values comprises one of:based on the intra prediction being vertical prediction, determining the quantized residual values as:

Q(ri,j)=Σ(k=0 to i) rk,j, for 0≤i≤(M−1) and 0≤j≤(N−1),

where Q(ri,j) is a quantized residual value at position i,j, rk,j is one of the residual quantized samples, M is a number of rows of the block, and N is a number of columns of the block, or

based on the intra prediction being horizontal prediction, determining the quantized residual values as:

Q(ri,j)=Σ(k=0 to j) ri,k, for 0≤i≤(M−1) and 0≤j≤(N−1),

where ri,k is one of the residual quantized samples;
after determining the quantized residual values, inverse quantizing the quantized residual values;
generating predicted values by performing intra prediction for the block using unfiltered samples from above or left block boundary samples; and
reconstructing original sample values of the block based on the inverse-quantized quantized residual values and the predicted values.
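The claimed running-sum recovery of the quantized residuals is a prefix sum down columns (vertical prediction) or along rows (horizontal prediction), and can be sketched as:

```python
def inverse_bdpcm(residual_quantized, vertical):
    """Recover quantized residual values Q(r[i][j]) from the residual
    quantized samples rq by the claimed running sums:
      vertical:   Q(r[i][j]) = sum of rq[k][j] for k = 0..i  (down columns)
      horizontal: Q(r[i][j]) = sum of rq[i][k] for k = 0..j  (along rows)
    """
    M = len(residual_quantized)        # number of rows of the block
    N = len(residual_quantized[0])     # number of columns of the block
    Q = [[0] * N for _ in range(M)]
    for i in range(M):
        for j in range(N):
            if vertical:
                Q[i][j] = residual_quantized[i][j] + (Q[i - 1][j] if i > 0 else 0)
            else:
                Q[i][j] = residual_quantized[i][j] + (Q[i][j - 1] if j > 0 else 0)
    return Q
```

The recovered values Q(ri,j) would then be inverse quantized and added to the intra prediction, per the remaining steps of the claim.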

US Pat. No. 11,070,811

METHOD AND DEVICE FOR ENCODING INTRA PREDICTION MODE FOR IMAGE PREDICTION UNIT, AND METHOD AND DEVICE FOR DECODING INTRA PREDICTION MODE FOR IMAGE PREDICTION UNIT

SAMSUNG ELECTRONICS CO., ...


1. A method of decoding an image, the method comprising:splitting the image into a plurality of maximum coding units;
hierarchically splitting a maximum coding unit from among the plurality of maximum coding units into at least one coding unit including a current coding unit based on split information;
obtaining intra prediction mode information regarding a prediction unit of a first image component of the image from a bitstream;
when an intra prediction mode indicated by the intra prediction mode information regarding the prediction unit of the first image component is one of a planar mode, a DC mode, a horizontal mode and a vertical mode, obtaining an intra prediction mode candidate group regarding a second image component, wherein the intra prediction mode candidate group regarding the second image component includes a diagonal mode;
obtaining intra prediction mode information regarding a prediction unit of the second image component;
determining an intra prediction mode regarding the prediction unit of the second image component from among a plurality of intra prediction modes which are included in the intra prediction mode candidate group regarding the second image component based on the obtained intra prediction mode information regarding the prediction unit of the second image component;
performing intra prediction on the prediction unit of the first image component according to the intra prediction mode indicated by the intra prediction mode information regarding the prediction unit of the first image component; and
performing the intra prediction on the prediction unit of the second image component according to the determined intra prediction mode,
wherein the performing the intra prediction on the prediction unit of the first image component comprises, when the intra prediction mode indicated by the intra prediction mode information regarding the prediction unit of the first image component is the planar mode, obtaining a prediction value of a current pixel included in a block in the prediction unit of the first image component using four neighboring pixels of the block in the prediction unit of the first image component,
wherein the current coding unit is a coding unit which is not split according to the split information,
wherein the prediction unit of the first image component is determined by splitting the current coding unit based on partition type information,
wherein the partition type information indicates a partition type of symmetrically splitting the current coding unit along at least one of a height and a width of the current coding unit, and
wherein the first image component is a luma component and the second image component is a chroma component.

US Pat. No. 11,070,810

MODIFYING BIT DEPTHS IN COLOR-SPACE TRANSFORM CODING

Qualcomm Incorporated, S...


1. A method of encoding video data, the method comprising:determining a bit-depth of a luma component of the video data and a bit-depth of a chroma component of the video data;
in response to the bit-depth of the luma component being different than the bit depth of the chroma component, modifying one or both of the bit depth of the luma component and the bit depth of the chroma component such that the bit depths are equal comprising performing a bitwise shift operation on the video data of one or both of the luma component and the chroma component before applying a color-space transform process; and
applying the color-space transform process to the modified video data.
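The bit-depth equalization step (a bitwise shift applied before the color-space transform) can be sketched as follows; the function name and the choice of shifting up to the larger depth are assumptions for illustration:

```python
def align_bit_depths(luma, chroma, bd_luma, bd_chroma):
    # Left-shift the samples of whichever component has the lower bit
    # depth so both components share one depth before the transform.
    target = max(bd_luma, bd_chroma)
    if bd_luma < target:
        luma = [s << (target - bd_luma) for s in luma]
    if bd_chroma < target:
        chroma = [s << (target - bd_chroma) for s in chroma]
    return luma, chroma, target
```

For example, 8-bit luma samples paired with 10-bit chroma are shifted left by two bits so both components are 10-bit.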

US Pat. No. 11,070,809

SYSTEM AND METHOD FOR RESHAPING AND ADAPTATION OF HIGH DYNAMIC RANGE VIDEO DATA

ARRIS Enterprises LLC, S...


1. A method for generating High Dynamic Range (HDR) and/or Wide Color Gamut (WCG) video data from an encoded video data stream, the method comprising:extracting, by a non-HDR and/or non-WCG video decoder, a self-referential metadata structure signaled for a video data set in an encoded video data stream comprising encoded non-HDR and/or non-WCG video data;
wherein said signaled self-referential metadata structure includes reshaping parameters for a video data reshaping transfer function signaled in a supplemental enhancement information message of said encoded video data stream and/or video usability information message of said encoded video data stream;
wherein said signaled self-referential metadata structure includes data defining a plurality of segments of the video data reshaping transfer function;
wherein said signaled self-referential metadata structure includes a smoothness parameter for each of said plurality of segments, where the smoothness parameter selectively indicates one selected from the group comprising:(i) a first value for said smoothness parameter indicating all polynomial coefficients are provided in said signaled self-referential metadata structure for one of said plurality of segments;
(ii) a second value for said smoothness parameter indicating said one of said plurality of segments is contiguous with an immediately prior one of said plurality of segments;
(iii) a third value for said smoothness parameter indicating a slope of said video data reshaping transfer function at said one of said pivot points is the same for both a current segment associated with said one of said pivot points and an immediately prior segment associated with said one of said pivot points, and indicating said one of said plurality of segments is contiguous with an immediately prior segment related to said one of said plurality of segments;

wherein said reshaping parameters are relevant to the video data set signaled at a picture level in the encoded video data stream;
decoding, by the non-HDR and/or non-WCG video decoder, the encoded non-HDR and/or non-WCG video data to produce decoded non-HDR and/or non-WCG video data;
determining the video data reshaping transfer function based on the extracted metadata structure;
generating reshaped HDR and/or WCG video data as output data by applying the decoded non-HDR and/or non-WCG video data to the video data reshaping transfer function.

US Pat. No. 11,070,808

SPATIALLY ADAPTIVE QUANTIZATION-AWARE DEBLOCKING FILTER

GOOGLE LLC, Mountain Vie...


1. A method for decoding an encoded frame, the method comprising:decoding, from a bitstream to which the encoded frame is encoded, quantized transform coefficients of encoded blocks of the encoded frame and adaptive quantization field data used to encode the encoded blocks;
producing a reconstructed frame, wherein producing the reconstructed frame includes dequantizing and inverse transforming the quantized transform coefficients;
filtering the reconstructed frame according to the adaptive quantization field data to produce a filtered frame based on the reconstructed frame, wherein filtering the reconstructed frame includes modulating, according to the adaptive quantization field data, one or more of a non-linearity selection filter parameter, a filter size parameter, or a directional sensitivity filter parameter; and
outputting the filtered frame for storage or display.

US Pat. No. 11,070,807

DATA ENCODING APPARATUS AND DATA ENCODING METHOD

Samsung Electronics Co., ...


1. A data encoding apparatus, comprising:a memory storing computer-readable instructions; and
one or more processors configured to execute the computer-readable instructions such that the one or more processors are configured to,receive a residual block including video data,
transform the video data from a spatial domain into a frequency domain to create a transformed coefficient,
quantize the transformed coefficient to create a first quantized coefficient,
based on the first quantized coefficient, quantize the transformed coefficient to create a second quantized coefficient different from the first quantized coefficient, and
perform an entropy coding for the second quantized coefficient to generate an output bit stream,

wherein,
the first quantized coefficient is created by applying a first rounding offset value to the transformed coefficient, and
the second quantized coefficient is created by applying a second rounding offset value to the transformed coefficient.
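The two-offset quantization in the claim can be illustrated with uniform scalar quantization, where the rounding offset (as a fraction of the step size) determines the result; the function name and the specific offset values are illustrative, not from the patent:

```python
def quantize(coeff, qstep, rounding_offset):
    # Uniform scalar quantization; the rounding offset decides how
    # aggressively a coefficient magnitude is rounded before division.
    sign = -1 if coeff < 0 else 1
    return sign * int((abs(coeff) + rounding_offset * qstep) // qstep)

# Applying two different rounding offsets to the same transformed
# coefficient can produce two different quantized coefficients:
first = quantize(10, 4, 0.5)     # round-to-nearest style offset -> 3
second = quantize(10, 4, 1 / 6)  # smaller offset -> 2
```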

US Pat. No. 11,070,806

METHOD AND APPARATUS FOR PERFORMING LOW COMPLEXITY COMPUTATION IN TRANSFORM KERNEL FOR VIDEO COMPRESSION

LG ELECTRONICS INC., Seo...


1. A method for processing a video signal, comprising:determining, by a processor, a prediction mode of a current block;
parsing, by the processor, a transform combination index corresponding to one of a plurality of transform combinations from the video signal based on the prediction mode;
deriving, by the processor, a transform combination corresponding to the transform combination index, wherein the transform combination is composed of a horizontal transform and a vertical transform, and includes DST-7 or DCT-8;
performing, by the processor, an inverse-transform on the current block based on the transform combination; and
reconstructing, by the processor, the video signal by using the inverse-transformed current block.
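The derivation of a transform combination from a parsed index can be sketched as a small lookup table over DST-7 and DCT-8 kernels; the table below is a hypothetical MTS-style mapping for illustration, not the patent's actual signaling:

```python
# Hypothetical 2-bit index -> (horizontal, vertical) kernel table built
# only from DST-7 and DCT-8.
TRANSFORM_COMBINATIONS = {
    0: ("DST-7", "DST-7"),
    1: ("DCT-8", "DST-7"),
    2: ("DST-7", "DCT-8"),
    3: ("DCT-8", "DCT-8"),
}

def derive_transform_combination(index):
    # Map the parsed transform combination index to its kernel pair.
    return TRANSFORM_COMBINATIONS[index]
```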

US Pat. No. 11,070,805

CROSS-COMPONENT CODING ORDER DERIVATION

BEIJING BYTEDANCE NETWORK...


1. A method of coding video data, comprising:generating a luma parent coding block from a luma coding tree block (CTB) of a video, based on a first luma partition scheme, wherein the first luma partition scheme includes recursive partition operations;
generating a chroma parent coding block from a chroma coding tree block (CTB) based on a first chroma partition scheme, wherein the first chroma partition scheme has the same recursive partition operations as the first luma partition scheme;
determining to apply a further partition operation on the luma parent coding block based on a color format of the luma and chroma CTBs, a coding mode and a dimension of the luma parent coding block meeting certain conditions to generate multiple luma coding blocks and not to apply the further partition operation on the chroma parent coding block; and
performing a conversion between the multiple luma coding blocks and a bitstream of the video and a conversion between the chroma parent coding block and the bitstream.

US Pat. No. 11,070,804

VIDEO ENCODER, A VIDEO DECODER AND CORRESPONDING METHODS WITH IMPROVED BLOCK PARTITIONING

HUAWEI TECHNOLOGIES CO., ...


1. A method for decoding a video bitstream implemented by a decoding device, wherein the video bitstream comprises an image region header of an image region and data representing the image region, the video bitstream further comprises a parameter set of the video bitstream, the method comprising:obtaining an override flag from the video bitstream, wherein the override flag indicates whether first partition constraint information from the image region header or second partition constraint information from the parameter set is for partitioning a block of the image region;
when a value of the override flag is an overriding value, obtaining first partition constraint information for the image region from the image region header; partitioning a block of the image region into a plurality of sub-blocks according to the first partition constraint information;
when the value of the override flag is not the overriding value, partitioning the block of the image region into a plurality of sub-blocks according to second partition constraint information; and
reconstructing the block of the image region by reconstructing the plurality of sub-blocks.

US Pat. No. 11,070,803

METHOD AND APPARATUS FOR DETERMINING CODING COST OF CODING UNIT AND COMPUTER-READABLE STORAGE MEDIUM

TENCENT TECHNOLOGY (SHENZ...


1. A method for determining a coding cost of a coding unit (CU), performed by a server, the method comprising:determining, with circuitry of the server, a CU subject to predictive coding in an intra-frame prediction mode;
determining, with the circuitry of the server, pixel gradient information corresponding to the CU, the pixel gradient information including a pixel gradient variance, and the pixel gradient variance being a variance of image gradients of at least some pixels in the CU;
making, with the circuitry of the server, a division predecision on the CU according to the pixel gradient information; and
determining, with the circuitry of the server, a first coding cost as a second coding cost of the CU in a case that a result of the division predecision on the CU is negative, the second coding cost being used for determining a division policy of a coding tree unit (CTU) corresponding to the CU, and the first coding cost being a coding cost obtained by performing predictive coding on the CU by using a current size of the CU as a size of a prediction unit (PU).

US Pat. No. 11,070,802

MOVING IMAGE CODING DEVICE, MOVING IMAGE DECODING DEVICE, MOVING IMAGE CODING/DECODING SYSTEM, MOVING IMAGE CODING METHOD AND MOVING IMAGE DECODING METHOD

SHARP KABUSHIKI KAISHA, ...


1. A moving image coding method for a moving image coding device, the moving image coding method comprising:receiving a frame of an input moving image;
dividing, via a central processing unit of the moving image coding device, the frame into a plurality of macro blocks;
further dividing, via the central processing unit, each macro block of the plurality of macro blocks into a plurality of partitions, the partitions including at least one partition of which size is different from another of the partitions;
dividing, via the central processing unit, each macro block of the plurality of macro blocks into a plurality of transformation target regions, at least one transformation target region of the plurality of transformation target regions crossing over a boundary between first and second adjacent partitions of the plurality of partitions (i) so as to include only a part of the first partition and only a part of the second partition and (ii) so as not to include a whole part of the first partition and a whole part of the second partition; wherein vertical and horizontal sizes of each of the plurality of transformation target regions are integer multiples of vertical and horizontal sizes of minimum frequency transformation target region,
applying, via the central processing unit, a frequency transformation to each of the plurality of transformation target regions to generate a transformation coefficient; and
outputting coded data obtained by variable-length coding the transformation coefficient.

US Pat. No. 11,070,801

IMAGE CODER/DECODER RESTRICTING BINARY TREE SPLITTING OF TARGET NODE OBTAINED BY TERNARY TREE SPLIT

SHARP KABUSHIKI KAISHA, ...


1. An image decoding apparatus for decoding a picture for each of coding tree units, the image decoding apparatus comprising:coding node (CN) information decoding circuitry configured to:perform a binary tree (BT) split possibility determination based on a width of a target node, a height of the target node, a first minimum size of a coding unit (CU), a first maximum size of the CU, and a level of hierarchy of the target node,
perform a ternary tree (TT) split possibility determination based on the width of the target node, the height of the target node, a second minimum size of the CU, a second maximum size of the CU, and the level of hierarchy of the target node,
determine whether or not decoding of a partition tree (PT) split flag is required, wherein the PT split flag indicates whether or not the target node is to be split by a binary tree split or a ternary tree split,
decode the PT split flag in a case that decoding of the PT split flag is required,
determine whether or not decoding of a split direction flag is required in a case that the PT split flag indicates that the target node is to be split by the binary tree split or the ternary tree split, wherein the split direction flag indicates a split direction,
decode the split direction flag in a case that decoding of the split direction flag is required,
determine whether or not decoding of a split mode selection flag is required, wherein the split mode selection flag indicates a split method which is the binary tree split or the ternary tree split,
decode the split mode selection flag in a case that decoding of the split mode selection flag is required, and
split the target node of the coding tree units by the binary tree split or the ternary tree split based on the split direction flag and the split mode selection flag, wherein

the CN information decoding circuitry further determines additional conditions of the BT split possibility determination,
in a case that the target node is a middle node of three nodes obtained by performing the ternary tree split on an immediately higher node, the CN information decoding circuitry prohibits split of the target node by the binary tree split in a direction identical to a direction of the ternary tree split performed on the immediately higher node, and
in a case that the binary tree split is prohibited, the CN information decoding circuitry does not decode the split mode selection flag and sets the split mode selection flag to a value indicating the ternary tree split.

US Pat. No. 11,070,800

VIDEO CODING METHOD AND APPARATUS USING ANY TYPES OF BLOCK PARTITIONING

Intellectual Discovery Co...


1. A video decoding method, comprising:acquiring quad-partitioning information of a block;
acquiring non-quad-partitioning information of the block, when the acquired quad-partitioning information of the block does not indicate that the block is divided into four partitions,
wherein the non-quad-partitioning information includes partitioning direction information indicating a partitioning direction of a bi-partitioning and a tri-partitioning for the block, and partitioning number information indicating either the bi-partitioning or the tri-partitioning,
wherein the partitioning direction indicates either a vertical direction or a horizontal direction of the bi-partitioning and the tri-partitioning,
wherein the partitioning number information is acquired after acquiring the partitioning direction information, and
wherein the partitioning direction information and the partitioning number information are both 1 bit flags; and
dividing, based on the non-quad-partitioning information, the block into a plurality of partitions,
wherein the bi-partitioning is a partitioning type of dividing the block into two partitions of a same size and partitioning depth and the tri-partitioning is a partitioning type of dividing the block into three partitions of a same partitioning depth, and
wherein a center partition of the three partitions has a size equal to a sum of a size of the other two of the three partitions, and the other two of the three partitions have a same size.
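The size constraints recited above (two equal halves for bi-partitioning; for tri-partitioning, equal side partitions whose sum equals the center) imply a quarter/half/quarter split. A minimal sketch, with an assumed function name:

```python
def partition_sizes(size, parts):
    # Bi-partitioning: two partitions of the same size and depth.
    # Tri-partitioning: quarter / half / quarter, so the center partition
    # equals the sum of the two equal side partitions.
    if parts == 2:
        return [size // 2, size // 2]
    if parts == 3:
        return [size // 4, size // 2, size // 4]
    raise ValueError("only bi- and tri-partitioning are supported")
```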

US Pat. No. 11,070,799

ENCODER, DECODER AND CORRESPONDING METHODS FOR INTRA PREDICTION

HUAWEI TECHNOLOGIES CO., ...


1. A method of intra prediction implemented by a decoding device, comprising:obtaining a value of a first indication information of a current block, the value of the first indication information indicating whether an intra prediction mode of the current block is comprised in a set of most probable modes, wherein the set of most probable modes comprises 5 candidate intra prediction modes and Planar mode;
obtaining a value of a reference index line of the current block; and
when the value of the first indication information indicates the intra prediction mode of the current block is comprised in the set of most probable modes, and when the value of the reference index line indicates a closest neighboring reference line to the current block is used, obtaining a value of a second indication information of the current block, the value of the second indication information of the current block indicating whether the intra prediction mode of the current block is Planar mode or not;
wherein when an intra prediction mode of a left neighbor block and an intra prediction mode of an above neighbor block are both non-angular modes, the candidate intra prediction modes comprise:
candModeList[0]=INTRA_DC,
candModeList[1]=INTRA_ANGULAR50,
candModeList[2]=INTRA_ANGULAR18,
candModeList[3]=INTRA_ANGULAR46, and
candModeList[4]=INTRA_ANGULAR54.
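The candidate list for the case recited in the claim (both neighboring modes non-angular) can be written out directly; the mode numbers below follow common HEVC/VVC-style numbering (0 = Planar, 1 = DC, angular modes from 2) and are assumptions for illustration:

```python
# Symbolic identifiers mirroring the claim's candModeList entries.
INTRA_PLANAR, INTRA_DC = 0, 1
INTRA_ANGULAR18, INTRA_ANGULAR46 = 18, 46
INTRA_ANGULAR50, INTRA_ANGULAR54 = 50, 54

def build_candidate_list(left_mode, above_mode):
    # Only the case recited in the claim: both neighbors non-angular.
    if left_mode < 2 and above_mode < 2:
        return [INTRA_DC, INTRA_ANGULAR50, INTRA_ANGULAR18,
                INTRA_ANGULAR46, INTRA_ANGULAR54]
    raise NotImplementedError("other neighbor cases are outside this claim")
```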

US Pat. No. 11,070,797

IMAGE DECODING METHOD AND APPARATUS BASED ON INTER PREDICTION IN IMAGE CODING SYSTEM

LG ELECTRONICS INC., Seo...


1. An image decoding method performed by a decoding device, the method comprising:constructing a merge candidate list based on a neighboring block of a current block;
deriving costs for merge candidates included in the merge candidate list;
deriving a modified merge candidate list based on the costs for the merge candidates;
deriving motion information of the current block based on the modified merge candidate list; and
performing prediction of the current block based on the motion information,
wherein the neighboring block includes a spatial neighboring block and a temporal neighboring block, and
wherein the deriving of the modified merge candidate list based on the costs for the merge candidates includes:
deriving a reordered merge candidate list by reordering the merge candidates in an order from small to large costs thereof;
deriving a refine merge candidate based on a predetermined merge candidate among the merge candidates; and
deriving the modified merge candidate list by adding the refine merge candidate to the reordered merge candidate list.
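The reorder-then-append derivation above can be sketched in a few lines; the function name and the representation of candidates and costs are illustrative assumptions:

```python
def modify_merge_list(candidates, costs, refine_candidate):
    # Reorder merge candidates from smallest to largest cost, then append
    # a refined candidate derived from a predetermined candidate.
    order = sorted(range(len(candidates)), key=lambda i: costs[i])
    return [candidates[i] for i in order] + [refine_candidate]
```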

US Pat. No. 11,070,796

ULTIMATE MOTION VECTOR EXPRESSION BASED PRUNING FOR VIDEO CODING

Qualcomm Incorporated, S...


1. An apparatus configured to decode video data, the apparatus comprising:a memory configured to store video data; and
one or more processors implemented in circuitry and in communication with the memory, the one or more processors configured to:add one or more motion vector candidates to a merge candidate list for motion vector prediction for a current block of the video data;
determine whether to add a next motion vector candidate to the merge candidate list based on an ultimate motion vector expression (UMVE) candidate of a respective candidate of the one or more candidates;
perform pruning of the next motion vector candidate, where the next motion vector candidate is a history-based motion vector prediction (HMVP) candidate, from a history table, using the one or more candidates in the merge candidate list;
terminate the pruning if the HMVP candidate matches the UMVE candidate of the respective candidate of the one or more candidates;
discard the HMVP candidate; and
decode the current block of the video data using the merge candidate list.

US Pat. No. 11,070,795

INTERACTION BETWEEN LOOK UP TABLE AND SHARED MERGE LIST

BEIJING BYTEDANCE NETWORK...


1. A method of coding video data, comprising:maintaining a table, wherein the table includes one or more candidates and the table is updated due to coding of one or more video blocks, wherein each candidate in the table comprises intra mode information;
coding, based on at least one candidate of the table, a current video block coded with an intra mode; and
resetting the table at the beginning of coding a region which comprises at least one video block.

US Pat. No. 11,070,794

VIDEO QUALITY ASSESSMENT METHOD AND APPARATUS

HUAWEI TECHNOLOGIES CO., ...


1. A method comprising:obtaining a video parameter set of a video;
generating an assessment model, wherein the assessment model is based on a subjective assessment result of a user on each sample in a sample set and based on a sample parameter set of each sample, wherein the video parameter set and the sample parameter set have a same parameter type, and wherein the parameter type comprises at least one of a jitter, a delay, or a packet loss rate, wherein the assessment model is generated by:classifying N samples into a test set and a plurality of training sets; wherein the N samples correspond to N test videos, N being an integer greater than or equal to 2;
classifying samples in each of the plurality of training sets based on the different parameter types in the parameter set, to generate a plurality of initial decision tree models, wherein the plurality of initial decision tree models comprise an initial decision tree model corresponding to each of the plurality of training sets;
testing the plurality of initial decision tree models based on at least one sample in the test set to obtain a plurality of groups of test results, wherein the plurality of groups of test results have a one-to-one correspondence with the plurality of initial decision tree models and each group of test results comprise at least one test result that has a one-to-one correspondence with the at least one sample; and
determining a decision tree model from the plurality of initial decision tree models as the assessment model based on the plurality of groups of test results, wherein the decision tree model is an initial decision tree model corresponding to a first group of test results, and a test result corresponding to each sample in the first group of test results is the same as a subjective assessment result of the sample; and

assessing a video quality of the video using the assessment model and the video parameter set to obtain an assessment result.

US Pat. No. 11,070,793

MACHINE VISION SYSTEM CALIBRATION

Cognex Corporation, Nati...


1. A machine vision system comprising:one or more interfaces configured to provide communication with a motion rendering device, a first image sensor, and a second image sensor, wherein:
the motion rendering device is configured to provide at least one of a translational movement and an in-plane rotational movement, and is associated with a first coordinate system;
the motion rendering device is configured to directly or indirectly carry a first calibration plate and a second calibration plate, and the first calibration plate and the second calibration plate comprise a first plurality of known features with known physical positions relative to the first calibration plate and a second plurality of known features with known physical positions relative to the second calibration plate, respectively; and
the first image sensor and the second image sensor are configured to capture an image of the first calibration plate and the second calibration plate, respectively, and the first image sensor and the second image sensor are associated with a second coordinate system and a third coordinate system, respectively, wherein each of the first coordinate system, second coordinate system, and third coordinate system are different from each other; and
a processor configured to run a computer program stored in memory that is configured to:
send, via the one or more interfaces to the motion rendering device, first data configured to cause the motion rendering device to move to a requested first pose;
receive, via the one or more interfaces from the motion rendering device, a reported first pose;
receive, via the one or more interfaces from the first image sensor, a first image of the first calibration plate for the reported first pose;
receive, via the one or more interfaces from the second image sensor, a second image of the second calibration plate for the reported first pose;
determine a first plurality of correspondences between the known physical positions of the first plurality of features relative to the first calibration plate and first positions of the first plurality of features detected in the first image;
determine a second plurality of correspondences between the known physical positions of the second plurality of features relative to the second calibration plate and second positions of the second plurality of features detected in the second image;
determine, based on the first plurality of correspondences, a first transformation that allows mapping between the first coordinate system associated with the motion rendering device and the second coordinate system associated with the first image sensor; and
determine, based on the second plurality of correspondences, a second transformation that allows mapping between the first coordinate system associated with the motion rendering device and the third coordinate system associated with the second image sensor,
wherein the first and second transformations allow the machine vision system to establish correspondences between features found in separate images taken by the first and second image sensors and the first coordinate system.

US Pat. No. 11,070,792

SURFACE SELECTION FOR PROJECTION-BASED AUGMENTED REALITY SYSTEM

Facebook, Inc., Menlo Pa...


1. A method comprising:by a computing device, retrieving a media content item for display to a user in an environment;
by the computing device, receiving, from one or more interaction devices each comprising a projector and a camera, one or more media objects captured by one or more cameras, wherein one or more of the media objects comprise images of the user;
by the computing device, determining a location of the user in the environment based on an analysis of the one or more of the media objects;
by the computing device, identifying a plurality of projectable surfaces in the environment based on the analysis of the one or more of the media objects, wherein the media content item is to be projected to one of the plurality of projectable surfaces;
by the computing device, selecting a projectable surface among the plurality of projectable surfaces based on one or more characteristics associated with the media content item and the determined location of the user;
by the computing device, identifying one of the one or more interaction devices based on the selected projectable surface; and
by the computing device, sending, to the identified interaction device, the media content item and instructions causing the projector associated with the identified interaction device to project the media content item on the selected projectable surface.

US Pat. No. 11,070,791

PRIVACY DISPLAY APPARATUS

RealD Spark, LLC, Beverl...


1. A display apparatus comprising:a display device arranged to display an image, the display device having a selectively operable luminance-privacy optical arrangement arranged on operation to reduce the luminance of the image to an off-axis viewer, and a selectively operable contrast-privacy optical arrangement arranged on operation to reduce the contrast of the image to an off-axis viewer; and
a control system arranged to control the display device, the control system being capable of selectively operating either one or both of the luminance-privacy arrangement and the contrast-privacy arrangement, wherein the display device comprises:
a spatial light modulator; and
a display polarizer arranged on a side of the spatial light modulator, and
the luminance-privacy optical arrangement comprises:
an additional polarizer arranged on the same side of the spatial light modulator as the display polarizer; and
at least one retarder arranged between the additional polarizer and the display polarizer, the at least one retarder including a switchable liquid crystal retarder comprising a layer of liquid crystal material and electrodes arranged to apply a voltage for switching the layer of liquid crystal material, wherein the at least one retarder further includes at least one passive compensation retarder.

US Pat. No. 11,070,790

PASSIVE THREE-DIMENSIONAL IMAGE SENSING BASED ON SPATIAL FILTERING

SHENZHEN GOODIX TECHNOLOG...


1. A passive three-dimensional imaging system comprising:a spatial filter disposed in a filter plane and comprising a filter pair having a mask element paired with a reference element;
an image sensor comprising a plurality of photodetector elements arranged in an array forming a detection plane substantially parallel to the filter plane, the plurality of photodetector elements comprising a signal pixel set of the photodetector elements that spatially corresponds to the mask element, and a reference pixel set of the photodetector elements that spatially corresponds to the reference element;
a lens configured to form, onto the image sensor through the spatial filter, an image of a scene object located an object distance away from the lens, such that object light from the scene object is focused by the lens onto the signal pixel set and the reference element, receipt of the object light by the signal pixel set is optically influenced by the mask element, and receipt of the object light by the reference pixel set is not optically influenced by the mask element;
a non-transient memory comprising a lookup table of calibration mappings, each calibration mapping indicating one of a plurality of pre-calibrated object distances associated with a respective one of a plurality of pre-calibrated ratios, each between a respective measured signal brightness and a respective measured reference brightness; and
a processor configured to determine a signal brightness according to an optical response by the signal pixel set to the object light, to determine a reference brightness according to an optical response by the reference pixel set to the object light, and to compute the object distance of the scene object as a function of the signal brightness and the reference brightness by computing a ratio of the signal brightness and the reference brightness, and obtaining the object distance as the one of the plurality of pre-calibrated object distances associated with the respective one of the plurality of pre-calibrated ratios closest to the computed ratio according to the calibration mapping.
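The distance computation described in the claim (a brightness ratio matched against a lookup table of pre-calibrated ratios) can be sketched as a nearest-ratio lookup; the function name and dict representation of the calibration table are assumptions:

```python
def estimate_distance(signal_brightness, reference_brightness, calibration):
    # calibration: pre-calibrated brightness ratio -> object distance.
    # Return the distance whose calibrated ratio is closest to the
    # measured signal/reference brightness ratio.
    ratio = signal_brightness / reference_brightness
    closest = min(calibration, key=lambda r: abs(r - ratio))
    return calibration[closest]
```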

US Pat. No. 11,070,789

SWITCHABLE FRINGE PATTERN ILLUMINATOR

Facebook Technologies, LL...


1. An illuminator comprising:an optical path switch configured to receive light and dynamically control an amount of the light that is provided to a first waveguide and an amount of the light that is provided to a second waveguide;
a first projector configured to generate a first interference fringe pattern using light from the first waveguide, the first projector comprising:a third waveguide,
a fourth waveguide, and an entrance of the fourth waveguide is coupled to an entrance of the third waveguide such that the received light from the optical path switch is split between the third waveguide and the fourth waveguide, and
a first phase delay device that is coupled to the fourth waveguide, the first phase delay device configured to introduce a phase shift in light propagating through the fourth waveguide relative to light in the third waveguide such that light exiting the third waveguide and light exiting the fourth waveguide combine to form the first interference fringe pattern, wherein the first interference fringe pattern illuminates a first portion of a target area; and

a second projector configured to generate a second interference fringe pattern using light from the second waveguide, wherein the second interference fringe pattern illuminates a second portion of the target area.

US Pat. No. 11,070,788

METHOD AND SYSTEM FOR DETERMINING OPTIMAL EXPOSURE TIME AND NUMBER OF EXPOSURES IN STRUCTURED LIGHT-BASED 3D CAMERA


1. A method for determining an exposure parameter for a structured light-based 3D camera system, the method comprising:capturing one or more images;
calculating a pixel distribution from the one or more images, wherein the calculating the pixel distribution further includes calculating an upper loss pixel distribution and an accumulation pixel distribution from a number of upper loss pixels and a number of accumulation pixels; and
determining the exposure parameter using the pixel distribution,
wherein the calculating the pixel distribution includes calculating one or more slopes related to a brightness and an exposure time for selected pixels of the one or more images, and categorizing the one or more slopes.
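The slope calculation and categorization in the claim can be illustrated as follows. This is a speculative sketch: the least-squares fit, the "upper loss = saturated" and "accumulation = near-zero slope" readings, and the threshold value are all assumptions, since the claim does not define them.

```python
# Assumed interpretation: fit brightness vs. exposure time per pixel, then
# bucket pixels into the claim's categories.

def pixel_slope(brightness, exposure):
    """Least-squares slope of brightness vs. exposure time for one pixel."""
    n = len(exposure)
    mean_e = sum(exposure) / n
    mean_b = sum(brightness) / n
    num = sum((e - mean_e) * (b - mean_b) for e, b in zip(exposure, brightness))
    den = sum((e - mean_e) ** 2 for e in exposure)
    return num / den

def categorize(slope, saturated):
    # "Upper loss" pixels are taken to be saturated ones; pixels whose
    # brightness barely grows with exposure are taken as "accumulation".
    if saturated:
        return "upper_loss"
    if abs(slope) < 0.1:  # assumed threshold
        return "accumulation"
    return "usable"

print(pixel_slope([10, 20, 30], [1, 2, 3]))  # 10.0 — brightness grows 10/unit
```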

US Pat. No. 11,070,787

AUGMENTED OPTICAL IMAGING SYSTEM FOR USE IN MEDICAL PROCEDURES

Synaptive Medical Inc., ...


1. An optical imaging system for imaging a target during a medical procedure, the system comprising:a first camera for capturing a first image of the target;
a second wide-field camera;
at least one path folding mirror disposed in an optical path between the target and a lens of the second camera, the second camera configured to capture a second image of the target that is reflected by the at least one path folding mirror to the lens of the second camera; and
a processing unit for receiving the first image and the second image, the processing unit being configured to:apply an image transform to one of the first image and the second image; and
combine the transformed image with the other one of the images to produce a stereoscopic image of the target.


US Pat. No. 11,070,786

ILLUMINATION-BASED SYSTEM FOR DISTRIBUTING IMMERSIVE EXPERIENCE CONTENT IN A MULTI-USER ENVIRONMENT

DISNEY ENTERPRISES, INC.,...


1. An immersive experience system comprising:a processor communicatively coupled to a first optical emission device and a second optical emission device and located separately from a first head-mounted display and a second head-mounted display, the processor being configured to:determine a position of the first head-mounted display,
determine a position of the second head-mounted display,
generate a first image for a first immersive experience corresponding to the position of the first head-mounted display,
encode the first image into a first infrared spectrum illumination having a first wavelength,
generate a second image for a second immersive experience corresponding to the position of the second head-mounted display, and
encode the second image into a second infrared spectrum illumination having a second wavelength, the first wavelength being different from the second wavelength;

the first optical emission device, located separately from the first head-mounted display, configured to emit the first infrared spectrum illumination for reception by the first head-mounted display such that the first head-mounted display projects the first image onto one or more display portions of the first head-mounted display; and
the second optical emission device, located separately from the second head-mounted display, configured to emit the second infrared spectrum illumination for reception by the second head-mounted display such that the second head-mounted display projects the second image onto one or more display portions of the second head-mounted display.

US Pat. No. 11,070,785

DYNAMIC FOCUS 3D DISPLAY

Apple Inc., Cupertino, C...


1. An apparatus, comprising:one or more projectors configured to scan a frame of a scene to a subject's eyes, wherein the projector comprises a focusing mechanism, wherein the frame comprises a left image scanned to the subject's left eye and a right image scanned to the subject's right eye, wherein objects in the scene are shifted in the two images as a function of triangulation of distance where nearer objects are shifted more than more distant objects; and
a controller configured to:determine, from depth maps for the two images of the scene generated according to shift data obtained from the two images, a depth at each pixel in the scene; and
cause the focusing mechanism to focus each pixel at a focus distance that corresponds to the determined depth at the pixel as the frame is being scanned by the one or more projectors;

wherein focusing each pixel at a focus distance that corresponds to the determined depth at the pixel as the frame is being scanned by the one or more projectors causes the objects in the scene that are intended to appear at different depths to be projected to the subject's eyes at correct depths.

US Pat. No. 11,070,783

3D SYSTEM

VEFXi Corporation, Hills...


1. A method for conversion of a series of two dimensional images into a series of three dimensional images comprising:(a) receiving said series of two dimensional images having pixels where each of said series of two dimensional images are from the same viewpoint;
(b) processing said series of two dimensional images and an associated depth map to determine respective displacements for each eye-view associated with each pixel of said series of two dimensional images for a display, where said respective displacements is an output pixel line data structure for each eye-view that further provides a mapping that includes a left image view and a right image view, where a displacement first pixel for said left image view and a displacement first pixel for said right image view included in said output pixel line data structure are included at a location equal distant from a center line, and where a subsequent displacement second pixel for said left image view and a subsequent displacement second pixel for said right image view are included at a location equal distant from said center line after shifting said displacement first pixel for said left image view and said displacement first pixel for said right image view both in a first direction relative to said center line;
(c) processing said respective displacements to render said two dimensional images as said series of three dimensional images where each of said series of two dimensional images are from said same viewpoint;
(d) wherein said data structure is a queue;
(e) wherein said pixel is written to each of said queues by direct memory accesses of the queue with a displacement offset relative to a middle of the queue for the associated eye-view; and
(f) wherein a resulting output of each view queue is the pixel stream that represents the pixels for the associated eye-view.
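The per-eye queue writes described in elements (b), (d) and (e) can be illustrated with a toy example. This is not the patent's DMA-based implementation; the function, list-backed "queues", and the `disparity` offset are illustrative assumptions.

```python
# Illustrative sketch: write one source pixel into left- and right-eye output
# line queues at positions equidistant from the centre line, mirrored about it.

def place_pixel(width, x_center_offset, disparity, pixel, left, right):
    middle = width // 2  # the claim's "middle of the queue"
    left[middle + x_center_offset + disparity] = pixel   # shifted one way
    right[middle + x_center_offset - disparity] = pixel  # mirrored the other way

left = [None] * 8
right = [None] * 8
place_pixel(8, 0, 1, "p", left, right)
print(left.index("p"), right.index("p"))  # 5 3 — equidistant from index 4
```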

US Pat. No. 11,070,782

METHOD OF OUTPUTTING THREE-DIMENSIONAL IMAGE AND ELECTRONIC DEVICE PERFORMING THE METHOD

SAMSUNG ELECTRONICS CO., ...


1. A method of outputting a three-dimensional (3D) image, the method comprising:obtaining eye position information of a user in a coordinate system of a display panel;
calculating virtual viewpoints of the user with respect to an optical system based on the eye position information and optical system information corresponding to the optical system;
transforming the coordinate system of the display panel into a graphic space coordinate system;
setting at least two viewpoints within the graphic space coordinate system based on the eye position information;
generating a stereo image with respect to an object to be output on a virtual screen based on the at least two viewpoints set within the graphic space coordinate system;
rendering a 3D image based on the virtual viewpoints and the stereo image to the user through the optical system; and
setting the virtual screen comprising:determining a difference between a preset first eye position and a target eye position;
determining a first weight with respect to first virtual screen information preset with respect to the preset first eye position based on the difference between the preset first eye position and the target eye position;
determining target virtual screen information with respect to the target eye position based on the first virtual screen information and the first weight; and
setting the virtual screen based on the target virtual screen information.
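The weighting step in the virtual-screen setting can be sketched numerically. The claim does not specify the weight function, so the inverse-distance form below, the scalar eye positions, and the `falloff` parameter are all assumptions made for illustration.

```python
# Assumed scheme: weight a preset virtual-screen parameter by how far the
# target eye position is from the preset eye position it was calibrated for.

def target_screen_info(preset_pos, preset_info, target_pos, falloff=1.0):
    """Hypothetical inverse-distance weighting; not specified by the patent."""
    diff = abs(target_pos - preset_pos)          # claim's position difference
    weight = 1.0 / (1.0 + falloff * diff)        # claim's "first weight"
    return weight * preset_info                  # target virtual screen info

print(target_screen_info(0.0, 2.0, 0.0))  # eye at the preset → full weight, 2.0
print(target_screen_info(0.0, 2.0, 1.0))  # one unit away → half weight, 1.0
```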


US Pat. No. 11,070,781

RENDERING EXTENDED VIDEO IN VIRTUAL REALITY

Warner Bros. Entertainmen...


1. A method for transforming extended video data for display in virtual reality, the method comprising:receiving digital extended video data for display on a center screen and two auxiliary screens of a real extended video cinema;
accessing, by a computer executing a rendering application, data that defines virtual screens including a center screen and auxiliary screens, wherein tangent lines to each of the auxiliary screens at their respective centers of area intersect with a tangent line to the center screen at its center of area at equal angles in a range of 75 to 105 degrees, each of the virtual screens is characterized by having a cylindrical radius of curvature ‘R’ in a plane parallel to top and bottom edges of each virtual screen and a ratio R/z2, wherein ‘z2’ indicates height of each of the virtual screens, is in a range of 1 to 4;
accessing further data that defines a virtual floor plane parallel to and located a distance ‘z1’ below the bottom edge of each virtual screen, wherein z1 is equal to or within 20% of 0.9*z2;
preparing virtual extended video data at least in part by rendering the digital extended video on corresponding ones of the virtual screens; and
saving the virtual extended video data in a computer memory.

US Pat. No. 11,070,780

COMMERCIALS ON MOBILE DEVICES

Penthera Partners, Inc., ...


1. A non-transitory recording medium bearing instructions to cause a wireless device to:receive a first signal indicative of a user input specifying at least one criterion for selecting one or more video items;
transmit, via a wide area network (WAN) to a remote server, a request for one or more video items satisfying said at least one user-specified criterion;
receive, via said WAN and responsive to said request, a download from one or more remote servers containing one or more selected video items satisfying said at least one user-specified criterion and encoded in a digital video encoding format;
store said downloaded one or more selected video items as one or more encoded video files in a non-volatile memory of said wireless device;
receive, via said WAN from one or more servers, first auxiliary content that is distinct from said content satisfying said at least one criterion,
store said received first auxiliary content in one or more auxiliary files in said non-volatile memory of said wireless device;
wherein said at least one of said one or more downloaded video items is received over said WAN from a first server, and said first auxiliary content is received via said WAN from a second server, said second server being distinct from said first server;
receive a second signal indicative of a user input requesting playback of one or more video items from said one or more selected video items;
retrieve, from said non-volatile memory, one or more encoded video files containing said requested one or more video items;
retrieve one or more of said auxiliary files containing said first auxiliary content from said non-volatile memory of said wireless device, and play back said at least one of said one or more requested video items in coordination with said first auxiliary content, using said retrieved one or more encoded video files and said retrieved one or more auxiliary files containing first auxiliary content.

US Pat. No. 11,070,779

YCBCR PULSED ILLUMINATION SCHEME IN A LIGHT DEFICIENT ENVIRONMENT

DePuy Synthes Products, I...


1. A system for digital imaging in an ambient light deficient environment comprising:an emitter configured to emit a pulse of electromagnetic radiation;
an imaging sensor comprising an array of pixels for sensing electromagnetic radiation, wherein the imaging sensor comprises pixels having a plurality of pixel sensitivities, and wherein the plurality of pixel sensitivities comprises a long exposure and a short exposure; and
a control unit comprising a processor and wherein the control unit is in electrical communication with the imaging sensor and the emitter;
wherein the control unit is configured to synchronize the emitter and the imaging sensor to produce a plurality of image reference frames;
wherein the imaging sensor is configured to produce a sequence of frames comprising:a luminance frame comprising luminance image data of both long exposure pixel data and short exposure pixel data in a single luminance frame; and
a chrominance frame comprising chrominance data;

wherein the luminance frame and the chrominance frame are combined to form a color image frame.

US Pat. No. 11,070,778

MULTI-PROJECTOR DISPLAY ARCHITECTURE

Facebook Technologies, LL...


1. A headset display device comprising a central processor and a plurality of projector integrated circuits for each eye of a wearer of the headset display device, wherein each projector integrated circuit is coupled to the central processor, wherein each projector integrated circuit is configured to process image data and generate output image data for display, and wherein each projector integrated circuit comprises:a plurality of first integrated circuits, each of the plurality of first integrated circuits comprising a light emitter array; and
a second integrated circuit coupled to the plurality of first integrated circuits, wherein the second integrated circuit (1) comprises a graphics processor configured to generate transformed image data correcting for geometrical or brightness distortions and (2) is configured to provide the transformed image data to the plurality of first integrated circuits, wherein the plurality of first integrated circuits are configured to output the transformed image data using the respective light emitter array.

US Pat. No. 11,070,777

PROJECTION APPARATUS AND OPERATION METHOD THEREOF

Coretronic Corporation, ...


1. A projection apparatus comprising a light-emitting device, a driving circuit, and a control circuit, wherein the driving circuit is coupled to the light-emitting device and configured to drive the light-emitting device according to at least one control signal, so as to generate a projected beam, and
the control circuit is configured to receive at least one video frame and analyze color content of the at least one video frame, wherein the control circuit selects one of a highlight mode and a normal mode as a selected mode according to the color content and correspondingly sets the at least one control signal to the driving circuit according to the selected mode,
wherein a brightness of the projected beam of the light-emitting device in the highlight mode is greater than the brightness of the projected beam of the light-emitting device in the normal mode,
wherein a ratio of the number of red sub-pixels satisfying a condition that a gray scale of the red sub-pixel is greater than a red threshold in the at least one video frame to the number of all red sub-pixels in the at least one video frame is a first ratio,
a ratio of the number of green sub-pixels satisfying a condition that a gray scale of the green sub-pixel is greater than a green threshold in the at least one video frame to the number of all green sub-pixels in the at least one video frame is a second ratio,
a ratio of the number of blue sub-pixels satisfying a condition that a gray scale of the blue sub-pixel is greater than a blue threshold in the at least one video frame to the number of all blue sub-pixels in the at least one video frame is a third ratio,
when the first ratio is greater than a first ratio threshold, the second ratio is greater than a second ratio threshold, and the third ratio is greater than a third ratio threshold, the control circuit selects the highlight mode as the selected mode, and
when the first ratio is less than the first ratio threshold, the second ratio is less than the second ratio threshold, or the third ratio is less than the third ratio threshold, the control circuit selects the normal mode as the selected mode.
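The highlight/normal mode decision is a straightforward three-channel threshold test and can be sketched as follows. The data layout (per-channel lists of sub-pixel gray levels) and all threshold values are illustrative assumptions.

```python
# Sketch of the claim's mode selection: per channel, compute the fraction of
# sub-pixels whose gray scale exceeds that channel's threshold; pick highlight
# mode only when all three fractions exceed their ratio thresholds.

def select_mode(frame, thresholds, ratio_thresholds):
    """frame: dict mapping channel name -> list of sub-pixel gray levels."""
    ratios = {}
    for ch in ("red", "green", "blue"):
        pixels = frame[ch]
        ratios[ch] = sum(g > thresholds[ch] for g in pixels) / len(pixels)
    if all(ratios[ch] > ratio_thresholds[ch] for ch in ratios):
        return "highlight"
    return "normal"

frame = {"red": [200, 250, 90], "green": [210, 220, 230], "blue": [240, 245, 250]}
th = {"red": 180, "green": 180, "blue": 180}      # assumed gray thresholds
rth = {"red": 0.5, "green": 0.5, "blue": 0.5}     # assumed ratio thresholds
print(select_mode(frame, th, rth))  # ratios 2/3, 1.0, 1.0 all > 0.5 → highlight
```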

US Pat. No. 11,070,776

LIGHT SOURCE DRIVE DEVICE, LIGHT SOURCE DRIVE METHOD, AND DISPLAY APPARATUS

JVC KENWOOD Corporation, ...


1. A light source drive device comprising:a divided period production unit configured to produce divided periods as subframes by dividing a frame period of a video signal; and
a light source control unit configured to drive a light source for emitting light to display elements that are ON-OFF controlled in each of the subframes according to the video signal on a pixel-by-pixel basis to represent respective gradations of the video signal, wherein
wherein the subframes comprise a first subframe corresponding to a lowest gradation of the video signal except black display and second subframes being other than the first subframe, and
the light source control unit is further configured to control a first light amount emitted by the light source before being modulated by the display elements in the first subframe to become a light amount before being modulated by the display elements which is more than the light amount for black display and lower than a second light amount emitted by the light source before being modulated by the display elements in the second subframes, wherein
any gradation except the black display is generated by both of an ON-OFF control of the display elements in each of the subframes and a light amount control of the light amount emitted by the light source in each of the subframes such that the first light amount is emitted in all the subframes and the second light amount is emitted in the subframes except the first subframe, and which is controlled to be reduced uniformly at a predetermined reduction ratio in the first subframe, and
a minimum resolution at any gradation value except the black display is calculated by an expression:L?(n)=((b×n?(b?a))/(b×n))×L(n)

wherein a is the first light amount, b is the second light amount, n is a gradation value more than 0, and L(n) is the minimum resolution in a case in which the first light amount and the second light amount are the same and is assumed to be 1,
wherein L′(n) is controlled to be less than 1 and to become smaller as the gradation value becomes smaller, and
controlled to change a range of the gradation value for which a predetermined improvement of a display quality in a darker image area can be obtained by changing a characteristic of a curve which shows a relationship between the gradation value and a rate L′(n)/L(n) by controlling the first light amount.
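The claimed expression reduces, for L(n)=1, to the factor (b·n−(b−a))/(b·n), which equals 1 when a=b and drops below 1 (more steeply at low gradations) when a<b. A minimal numeric check, with example values of a, b, and n assumed here:

```python
# The claim's minimum-resolution factor L'(n)/L(n) = (b*n - (b - a)) / (b*n),
# where a is the first light amount, b the second, and n > 0 a gradation value.

def relative_resolution(a, b, n):
    return (b * n - (b - a)) / (b * n)

# With a == b the factor is 1; lowering a shrinks it, more so at small n.
print(relative_resolution(1.0, 1.0, 4))  # 1.0
print(relative_resolution(0.5, 1.0, 4))  # 0.875
print(relative_resolution(0.5, 1.0, 2))  # 0.75
```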

US Pat. No. 11,070,775

LIGHT MODULATING DEVICE AND ELECTRONIC APPARATUS INCLUDING THE SAME

SAMSUNG ELECTRONICS CO., ...


1. A light modulating device comprising:a metal layer;
a variable resistance material layer above the metal layer, wherein a resistance of the variable resistance material layer varies depending on a voltage applied thereto;
a meta surface layer above the variable resistance material layer, the meta surface layer comprising a plurality of conductive nano-antennas having a sub-wavelength dimension; and
a signal controller configured to apply the voltage between the metal layer and the meta surface layer comprising the plurality of conductive nano-antennas,
wherein the variable resistance material layer comprises a material that forms a number of conductive nano-filaments in the variable resistance material layer, which varies according to the voltage applied to the variable resistance material layer.

US Pat. No. 11,070,774

TEMPORAL MODELING OF PHASE MODULATORS IN MULTI-MODULATION PROJECTION

Dolby Laboratories Licens...


1. A method for generating a lightfield simulation, said method implemented in a controller for controlling a projection system, said method comprising:driving a phase modulating spatial light modulator (SLM) with a first set of drive values to place said phase modulating SLM in a first phase state at a first time;
generating a lightfield simulation of a lightfield generated by said phase modulating SLM at said first time based at least in part on said first phase state of said phase modulating SLM;
driving said phase modulating SLM with a second set of drive values that cause said phase modulating SLM to transition from said first phase state of said phase modulating SLM to a second phase state of said phase modulating SLM at a second time;
generating a lightfield simulation of a lightfield generated by said phase modulating SLM at said second time based at least in part on said second phase state of said phase modulating SLM;
modeling said transition of said phase modulating SLM from said first phase state to said second phase state by generating a transition function, said transition function being indicative of a relationship between time and a phase delay imparted on an incident lightfield by a pixel of said phase modulating SLM, and determining a value of said transition function at a third time;
determining a third phase state of said phase modulating SLM at said third time based at least in part on said model of said transition of said phase modulating SLM, said third time occurring between said first time and said second time; and
generating a lightfield simulation of a lightfield generated by said phase modulating SLM at said third time based at least in part on said third phase state of said phase modulating SLM.

US Pat. No. 11,070,773

SYSTEMS AND METHODS FOR CREATING FULL-COLOR IMAGE IN LOW LIGHT

Chromatra, LLC, Beverly,...


1. A color imaging system, comprising:a radiation sensitive sensor configured to receive multiple spectrums of wavelengths of electromagnetic radiation from a low-light scene and generate corresponding electrical signals, wherein the radiation sensitive sensor includes a first plurality of photosensitive pixels having clear, unfiltered pixels, and the radiation sensitive sensor includes a second plurality of photosensitive pixels having a filter, each of the photosensitive pixels configured to generate a corresponding electrical signal in response to receiving the electromagnetic radiation; and
an image processor coupled to the radiation sensitive sensor and having circuitry configured to:generate a full-color image of the low-light scene based on the electrical signals generated by both the first plurality of photosensitive pixels and the second plurality of photosensitive pixels; and
display the full-color image.


US Pat. No. 11,070,772

ENDOSCOPE IMAGE-CAPTURING DEVICE AND ENDOSCOPE DEVICE

SONY OLYMPUS MEDICAL SOLU...


1. An endoscope image-capturing device comprising:a first case having an outer surface to be held by a user when the user operates the endoscope image-capturing device, an inside of the first case being sealed;
an image sensor arranged inside the first case;
a second case that partially overlaps with the first case;
an electro-optic conversion element arranged inside the second case and configured to convert an electric image signal output from the image sensor into an optical signal, the image sensor inside the first case and the electro-optic conversion element inside the second case being electrically connected with each other when in operation; and
a sealing member sealing the electro-optic conversion element arranged inside the second case, wherein
the image sensor is not arranged inside the second case, and
the second case is not arranged entirely inside the first case.

US Pat. No. 11,070,771

ACTIVE/INACTIVE STATE DETECTION METHOD AND APPARATUS

Advanced New Technologies...


1. A computer-implemented method for state detection, comprising:monitoring, as a monitored distance, a distance between an object and a target object within a distance detection range;
when the monitored distance satisfies a first predetermined condition, sending a first instruction to an image acquisition system corresponding to the distance detection range, so as to activate the image acquisition system to obtain an image in an image acquisition area of the image acquisition system;
determining a state of the target object based on a recognition result obtained by performing object recognition on the image in the image acquisition area, wherein the state of the target object comprises an active state or an inactive state;
after determining the state of the target object, monitoring the monitored distance; and
when the monitored distance satisfies a second predetermined condition, sending a second instruction to the image acquisition system corresponding to the distance detection range, so as to shut down the image acquisition system or switch the image acquisition system to a standby mode, wherein the second predetermined condition comprises:a change in the monitored distance during a second predetermined time interval is less than a sixth predetermined threshold.
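The second predetermined condition (distance change over an interval below a threshold) amounts to a simple static-scene test. The sample-based formulation and the numbers below are assumptions for illustration.

```python
# Assumed reading of the claim's second condition: shut down or standby the
# image acquisition system when the monitored distance barely changes over the
# second predetermined time interval (the sixth predetermined threshold).

def should_shut_down(distance_samples, threshold):
    """distance_samples: distances monitored during the interval (assumed)."""
    change = max(distance_samples) - min(distance_samples)
    return change < threshold

print(should_shut_down([1.50, 1.51, 1.49], 0.05))  # True — nearly static
print(should_shut_down([1.50, 1.90, 1.20], 0.05))  # False — object moving
```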


US Pat. No. 11,070,770

METHOD AND SYSTEM FOR AUTO-CALIBRATION OF MULTIPLE SENSORS FOR EVENT TRACKING

ANTS TECHNOLOGY (HK) LIMI...


1. A method for multiple sensor calibration and tracking, the method comprising:identifying a non-permanent object in a first field of detection (FOD) of a sensor, wherein the sensor is one of a plurality of sensors positioned in various locations at a site, each sensor has a corresponding FOD;
tracking the identified object by single-sensor tracking in the first FOD;
learning to predict, in which FOD from at least one other FOD, an object will reappear after disappearing from a section of the first FOD, by:
detecting when any non-permanent object disappears from a certain section of the first FOD;
after the disappearance, detecting appearance of the same or another non-permanent object in at least one other section of the same or another FOD, within a certain range of time after the disappearance;
counting, separately for each of the FOD sections, each time any non-permanent object appears in this FOD section within a certain range of time after any non-permanent object disappears from the certain section; and
based on the counting, in case of high counts in a specific FOD section, identifying the specific FOD section as having high probability that any non-permanent object will reappear in this specific FOD section within the certain range of time after disappearing from the certain section;
predicting based on the learning, in which FOD from the at least one other FOD, an object will reappear after disappearing from a section of the first FOD;
based on the predictions, initiating single-sensor tracking in the at least one other FOD where it is predicted that the object will reappear; and
outputting data captured from the tracked FODs.
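The counting-based learning step can be sketched compactly. This is a simplification, not the patent's method: the event-tuple format, the assumption that all disappearances are from the watched section, and the window and count values are illustrative.

```python
# Minimal sketch of the claim's learning step: count, per FOD section, how
# often an object appears there within a time window after a disappearance
# from the watched section, then flag high-count sections as likely
# reappearance locations.

from collections import Counter

def learn_transitions(events, window, min_count):
    """events: (time, kind, section) tuples, kind in {'disappear', 'appear'}."""
    counts = Counter()
    disappear_times = [t for t, kind, _ in events if kind == "disappear"]
    for t, kind, section in events:
        if kind == "appear" and any(0 < t - d <= window for d in disappear_times):
            counts[section] += 1  # separate count per FOD section
    return {s for s, c in counts.items() if c >= min_count}

events = [(0, "disappear", "A"), (2, "appear", "B"),
          (10, "disappear", "A"), (11, "appear", "B"), (30, "appear", "C")]
print(learn_transitions(events, window=5, min_count=2))  # {'B'}
```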

US Pat. No. 11,070,769

COLLABORATIVE SECURITY CAMERA SYSTEM AND METHOD FOR USING


1. A collaborative security camera system comprising:a primary vehicle comprising:one or more sensors configured to obtain object data related to an object; and
a first controller configured to:classify the object as suspicious activity or non-suspicious activity based on the obtained object data related to the object;
determine a moving path of the object based on the obtained object data related to the object in response to the object being classified as suspicious activity; and
broadcast a signal including the moving path of the object in response to the object being classified as suspicious activity; and


a plurality of secondary vehicles, each secondary vehicle comprising:one or more sensors configured to obtain image data related to the object; and
a second controller configured to:receive the broadcasted signal;
determine if the secondary vehicle is within the moving path of the object;
capture image data of the object in response to a determination that the secondary vehicle is proximate the moving path of the object; and
transmit image data of the object captured by the secondary vehicle to a server,


wherein the server is configured to receive the image data from each of the secondary vehicles, compile the image data into a single video file, and transmit the video file to the primary vehicle.

US Pat. No. 11,070,768

VOLUME AREAS IN A THREE-DIMENSIONAL VIRTUAL CONFERENCE SPACE, AND APPLICATIONS THEREOF

Katmai Tech Holdings LLC,...


13. A system for providing audio for a virtual conference, comprising:a processor coupled to memory;
a renderer implemented on the processor and configured to, from a perspective of a virtual camera of a first user, render for display to the first user at least a portion of a three-dimensional virtual space, the three-dimensional virtual space including an avatar representing a second user, the virtual camera at a first position in the three-dimensional virtual space and the avatar at a second position in the three-dimensional virtual space, wherein the three-dimensional virtual space is segmented into a plurality of areas structured as a hierarchy;
a network interface configured to receive an audio stream from a microphone of a device of the second user, the microphone positioned to capture speech of the second user; and
an audio processor configured to determine whether the virtual camera and the avatar are located in a same area in the plurality of areas and whether the avatar is in a podium area in the plurality of areas, and, when the virtual camera and the avatar are determined not to be located in the same area and the avatar is determined not to be in the podium area, attenuate the audio stream, and output the audio stream to be played to the first user.
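The audio processor's rule is a two-way test over the area hierarchy and can be sketched as follows. The flat area labels, sample-list representation, and attenuation factor are assumptions; the patent does not specify them.

```python
# Sketch of claim 13's attenuation rule: pass the stream through unchanged if
# the listener's virtual camera shares the speaker's area, or if the speaker's
# avatar is in the podium area; otherwise attenuate it.

def process_stream(camera_area, avatar_area, avatar_in_podium, samples,
                   attenuation=0.25):
    """attenuation=0.25 is an assumed factor, not from the patent."""
    if camera_area == avatar_area or avatar_in_podium:
        return samples
    return [s * attenuation for s in samples]

print(process_stream("lobby", "stage", False, [1.0, -0.8]))  # attenuated
print(process_stream("stage", "stage", False, [1.0, -0.8]))  # unattenuated
```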

US Pat. No. 11,070,767

THIRD WITNESS VIDEO PARTICIPATION SYSTEM AND METHOD


1. A video conferencing and law enforcement corroboration system, comprising:a smartphone including a camera and a mobile application software having an application programming interface and control logic stored therein for causing said smartphone to:establish a video feed between actors including a law enforcement officer and a perpetrator, thereby forming a remote, real time communication between said law enforcement officer and said perpetrator;
locate and identify a location and proximity of nearby users of the same said mobile application software;

notify said perpetrator of said video feed;
allow for an exchange of pre-scripted auto greetings, wherein said perpetrator may remotely ask for a reason for a stop using impartial, prompted language, to thereby objectivize initial contact between said law enforcement officer and said perpetrator;
prompt said law enforcement officer or said perpetrator to invite participation by a corroborating third party, wherein a two-way, real-time communication of said video feed can generate conference information;
view an identity of said perpetrator, a driver document and a vehicle search remotely via said real time communication between said law enforcement officer and said perpetrator, wherein an interaction between said law enforcement officer and said perpetrator is documented without physical conflict and without exposing said perpetrator and said law enforcement officer to physical harm and violence;
share said conference information, wherein said interaction is corroborated without exposing said law enforcement officer and said perpetrator to physical harm and violence;

a use case and actor designation module as part of said mobile application software, wherein all said actors associated with said mobile application software are vetted for purposes of registration;
a contact management module as part of said mobile application software, wherein said law enforcement officer is the only of said actors who is able to initiate an initial contact with said perpetrator; and,
a recording module as part of said mobile application software, wherein said communication is automatically stored on said smartphone and activated upon touching an icon; wherein said communication is adapted to be transmitted to a central storage even upon an involuntary interruption.

US Pat. No. 11,070,766

METHOD AND APPARATUS FOR REDUCING ISOLATION IN A HOME NETWORK

PPC BROADBAND, INC., Eas...


1. A community access television (CATV) signal distribution system comprising:a CATV signal input port configured to connect to a CATV network;
one or more direct signal output ports;
a first plurality of broadband signal output ports configured to connect to a first plurality of in-home entertainment devices;
a second plurality of broadband signal output ports configured to connect to a second plurality of in-home entertainment devices;
a first multi-port signal splitter comprising a first common terminal and a first plurality of leg terminals, the first common terminal configured to connect to a CATV network;
a second multi-port signal splitter comprising a second common terminal and a second plurality of leg terminals;
a third multi-port signal splitter comprising a third common terminal and a third plurality of leg terminals;
a first diplexer comprising a first common node, a first low-pass node, and a first high-pass node;
a second diplexer comprising a second common node, a second low-pass node, and a second high-pass node; and
a third diplexer comprising a third common node, a third low-pass node, and a third high-pass node,
wherein:the CATV signal input port is configured to connect to a CATV network;
the one or more direct signal output ports connect to the first common node of the first diplexer and are configured to connect to one or more embedded multimedia terminal adapter (eMTA) devices;
the first plurality of broadband signal output ports connect to the second plurality of legs of the second multi-port splitter;
the second plurality of broadband signal output ports connect to the third plurality of legs of the third multi-port splitter;
the first common node of the first multi-port signal splitter connects to the signal input port and is configured to receive the CATV signal from the CATV network as an input;
a first leg of the first multi-port signal splitter connects to the first low-pass node of the first diplexer;
a second leg of the first multi-port signal splitter connects to the second low-pass node of the second diplexer;
a third leg of the first multi-port signal splitter connects to the third low-pass node of the third diplexer;
the first high-pass node of the first diplexer connects to the second high-pass node of the second diplexer, and connects to the third high-pass node of the third diplexer;
the first high-pass node of the first diplexer, the second high-pass node of the second diplexer, and the third high-pass node of the third diplexer are configured to communicate in-home entertainment band signals in a predetermined high-frequency band between the one or more direct signal output ports, the first plurality of broadband signal output ports, and the second plurality of broadband signal output ports;
the first diplexer, the second diplexer, and the third diplexer are configured to prevent transmission of in-home entertainment band signals to the CATV signal input port from the one or more direct signal output ports, the first plurality of broadband signal output ports, and the second plurality of broadband signal output ports; and
the second common node of the second diplexer is connected to the second common terminal of the second multiport splitter.


US Pat. No. 11,070,765

METHOD AND APPARATUS FOR IN-CAMERA NIGHT LAPSE VIDEO

GoPro, Inc., San Mateo, ...


1. An image capture device comprising:an image sensor configured to capture image data at night, wherein the image data includes a first image that is temporally precedent to a second image;
an image processor configured to:
determine a motion estimation based on a comparison of a portion of the first image and a portion of the second image, wherein the portion of the first image corresponds to the portion of the second image;
obtain a mask based on the motion estimation; and
subtract the mask from the second image to obtain a denoised image; and
a video encoder configured to:
dynamically select a bitrate based on an ISO setting of the image capture device prior to an image capture or during the image capture;
receive the denoised image from the image processor;
encode the denoised image in a video format; and
output a video file in the video format.
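The denoising pipeline this claim recites (motion estimation from corresponding portions of two frames, a mask derived from that estimation, and subtraction of the mask from the second image) can be sketched as follows. The per-pixel difference metric and the `noise_floor` threshold are illustrative assumptions for this sketch, not the patented method:

```python
import numpy as np

def denoise_night_frame(prev_frame, curr_frame, noise_floor=8):
    """Sketch of mask-based temporal denoising for night-lapse capture.

    prev_frame / curr_frame: 2-D uint8 luma arrays of equal shape, where
    prev_frame is temporally precedent.  noise_floor is an assumed
    threshold separating sensor noise from true scene motion.
    """
    prev = prev_frame.astype(np.int16)
    curr = curr_frame.astype(np.int16)

    # Motion estimation: compare corresponding portions of the first
    # (temporally precedent) and second images.
    diff = curr - prev

    # Mask: keep only low-magnitude differences, treated as noise
    # rather than motion.
    mask = np.where(np.abs(diff) < noise_floor, diff, 0)

    # Subtract the mask from the second image to obtain a denoised image.
    return np.clip(curr - mask, 0, 255).astype(np.uint8)
```

With this sketch, a pixel that flickers slightly between frames is pulled back to its previous value, while a pixel that changes strongly (real motion) passes through untouched.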

US Pat. No. 11,070,764

VIDEO COMMUNICATION DEVICE, VIDEO COMMUNICATION METHOD, VIDEO PROCESSING DEVICE, AND INFORMATION PROCESSING METHOD

SONY CORPORATION, Tokyo ...


1. A video communication device, comprising:a central processing unit (CPU) configured to:control reception, from at least one video processing device, of device information of each of the at least one video processing device;
determine, based on the received device information, an availability of a high image quality function for each of the at least one video processing device, wherein the high image quality function supports high image quality of 4K or more than 4K;
generate, based on a result of the determination of the availability of the high image quality function, a video-processing-device selection screen for selection of a first video processing device of the at least one video processing device, wherein the first video processing device has the high image quality function;
control reception, from the selected first video processing device, of medium information related to a first recording medium of at least one recording medium that is a recording destination of recording by the first video processing device;
determine, based on the medium information, recordability of video content of the high image quality on the first recording medium;
generate, based on a determination that the video content of the high image quality is recordable on the first recording medium, a selection screen for selection of the first recording medium that is the recording destination of recording of the video content; and
control transmission, to the first video processing device, of the video content and information that designates the selected first recording medium.


US Pat. No. 11,070,763

METHOD AND SYSTEM FOR DISPLAYING IMAGES CAPTURED BY A COMPUTING DEVICE INCLUDING A VISIBLE LIGHT CAMERA AND A THERMAL CAMERA

Snap-on Incorporated, Ke...


1. A method comprising:generating, by one or more processors, a first image file representing a thermal image captured by a thermal camera operatively coupled to the one or more processors;
generating, by the one or more processors, a second image file representing a visible light image captured by a visible light camera operatively coupled to the one or more processors;
generating, by the one or more processors, a first blended image based on the thermal image, the visible light image, and one or more user-selectable display settings;
displaying, on a display operatively coupled to the one or more processors, the first blended image according to the one or more user-selectable display settings;
generating, by the one or more processors, a tag file including a tag pertaining to both the first image file and the second image file, wherein the tag pertaining to both the first image file and the second image file includes one or more tags corresponding to at least a portion of a vehicle identifier; and
transmitting, by the one or more processors via a network interface to a server, an image file upload, wherein the image file upload includes: (i) the first image file, (ii) the second image file, (iii) the one or more user-selectable display settings, and (iv) the tag file, whereby the server can generate a second blended image based on the thermal image, the visible light image, and the one or more user-selectable display settings used to display the first blended image.
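Because the upload carries both source image files plus the user-selectable display settings, the server can regenerate the blend deterministically. A minimal sketch of one such blend, assuming a single per-pixel alpha setting stands in for the claim's "one or more user-selectable display settings":

```python
import numpy as np

def blend_images(thermal, visible, alpha=0.5):
    """Sketch of generating a blended image from a thermal image and a
    visible light image.  A per-pixel alpha blend is an illustrative
    assumption; `alpha` plays the role of a user-selectable display
    setting that travels with the image file upload.
    """
    t = thermal.astype(np.float32)
    v = visible.astype(np.float32)
    blended = alpha * t + (1.0 - alpha) * v
    return np.clip(blended, 0, 255).astype(np.uint8)
```

Running the same function with the same `alpha` on the device and on the server yields the same second blended image, which is the point of shipping the settings alongside the two image files.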

US Pat. No. 11,070,762

IMAGING APPARATUS FOR USE IN A ROBOTIC SURGERY SYSTEM

TITAN MEDICAL INC., Toro...


1. A stereoscopic imaging apparatus for use in a robotic surgery system, the apparatus comprising:an elongate sheath with a bore extending therethrough, the sheath terminating in a distal end sized for insertion into a body cavity of a patient;
first and second image sensors adjacently mounted at the distal end of the sheath and oriented to capture high definition images of an object field from different perspective viewpoints for generating three-dimensional image information, each of the first and second image sensors configured to produce an unprocessed digital data signal representing the captured images;
a wired signal line configured to transmit the unprocessed digital data signals from each of the first and second image sensors along the sheath to a proximal end thereof; and
processing circuitry disposed at the proximal end of the sheath and connected to the wired signal line to receive the unprocessed digital data signals from each of the first and second image sensors, the processing circuitry configured to perform processing operations on each of the unprocessed digital data signals to produce respective video signals for transmission to a host system and driving a display configured to display three-dimensional information, wherein at least one unprocessed digital data signal includes a signal in accordance with a Mobile Industry Processor Interface (MIPI) Camera Serial Interface protocol, and wherein the length of the sheath is greater than 30 millimeters.

US Pat. No. 11,070,761

SOLID-STATE IMAGING DEVICE, METHOD FOR DRIVING SOLID-STATE IMAGING DEVICE, AND ELECTRONIC APPARATUS HAVING AN AMPLIFIER WITH AN INVERTING INPUT TERMINAL HAVING AT LEAST A FIRST INVERTING INPUT CHANNEL AND A SECOND INVERTING INPUT CHANNEL

BRILLNICS SINGAPORE PTE. ...


1. A solid-state imaging device providing an extended dynamic range by combining a plurality of read-out signals each formed of a signal at a reference level and a signal at a signal level, the solid-state imaging device comprising:a pixel part including a pixel arranged therein; and
a reading part including a column signal processing part arranged so as to correspond to a column output of the pixel part,
wherein the column signal processing part includes an amplifying part for amplifying the plurality of read-out signals read out from the pixel, and
wherein the amplifying part includes:an amplifier including an inverting input terminal and a non-inverting input terminal, the inverting input terminal having at least a first inverting input channel and a second inverting input channel;
a first node to which the plurality of read-out signals are input;
a second node connected to the first inverting input channel;
a third node connected to the second inverting input channel;
a fourth node connected to an output terminal of the amplifier;
a fifth node connected to the fourth node via a first output switch;
a sixth node connected to the fourth node via a second output switch;
a first input switch and a first sampling capacitor connected in series between the first node and the second node;
a second input switch and a second sampling capacitor connected in series between the first node and the third node;
a first feedback capacitor connected between the second node and the fifth node; and
a second feedback capacitor connected between the third node and the sixth node.


US Pat. No. 11,070,760

SOLID-STATE IMAGING DEVICE AND METHOD OF OPERATING THE SAME, AND ELECTRONIC APPARATUS AND METHOD OF OPERATING THE SAME

Sony Semiconductor Soluti...


1. An imaging device, comprising:a plurality of pixels, respectively including:a photoelectric conversion unit,
a plurality of charge holding units arranged in series,
a first transistor disposed between the photoelectric conversion unit and a first charge holding unit of the plurality of charge holding units, and
a second transistor disposed adjacent to the photoelectric conversion unit.


US Pat. No. 11,070,759

IMAGE SENSING DEVICE COMPRISING DUMMY PIXEL ROW AND ACTIVE PIXEL ROWS COUPLED TO READ-OUT COLUMN LINE, AND USING A HIGH SPEED ACTIVE PIXEL ROW READ-OUT METHOD THAT REDUCES SETTLING TIME BY PRECHARGING THE COLUMN LINE DURING A DUMMY PIXEL ROW SELECTION TIME

SK hynix Inc., Gyeonggi-...


1. A read-out method of an image sensing device, comprising:reading out a first pixel signal corresponding to a voltage level charged in a floating diffusion node of a first active pixel through a column line during a first row selection section;
precharging, by using a dummy pixel, a column line to a voltage level corresponding to a reset level of pixel signals during a dummy row selection section;
charging a floating diffusion node of a second active pixel with a predetermined voltage in advance during the dummy row selection section; and
reading out a second pixel signal corresponding to a voltage level charged in a floating diffusion node of the second active pixel through the column line during a second row selection section,
wherein the precharging of the column line includes:
charging a dummy floating diffusion node of the dummy pixel with the predetermined voltage; and
precharging the column line to the reset level of the pixel signals corresponding to a voltage level charged in the dummy floating diffusion node, and
wherein the charging of the dummy floating diffusion node of the dummy pixel is carried out from a latter portion of the dummy row selection section until an initial portion of the second row selection section.

US Pat. No. 11,070,758

SOLID STATE IMAGING DEVICE AND ELECTRONIC DEVICE

SONY CORPORATION, Tokyo ...


1. A solid state imaging device comprising:a plurality of pixels, wherein each pixel of the plurality of pixels comprises:a light receiving unit configured to receive light input to the respective pixel for photoelectrical conversion;
a dividing unit configured to receive a pixel signal from the light receiving unit and divide the pixel signal into a background light pixel signal and a reflection light pixel signal;
a first analog-digital converter configured to convert the background light pixel signal to a first digital signal; and
a second analog-digital converter configured to convert the reflection light pixel signal to a second digital signal,
wherein the first digital signal and the second digital signal are associated with the background light pixel signal and the reflection light pixel signal, respectively, and are sequentially output from each pixel of the plurality of pixels to a data bus.


US Pat. No. 11,070,757

IMAGE SENSOR WITH DISTANCE SENSING FUNCTION AND OPERATING METHOD THEREOF

Guangzhou Tyrafos Semicon...


1. An image sensor with a distance sensing function, comprising:a pixel array, comprising a plurality of sub-pixel groups arranged in an array, wherein the sub-pixel groups are spaced apart from each other by a circuit layout area;
a cluster analog to digital converter readout circuit, disposed in the circuit layout area of the pixel array, coupled to a distance sensing pixel of each of the sub-pixel groups, wherein the distance sensing pixel of each of the sub-pixel groups is configured to perform time-of-flight ranging; and
a column readout circuit, disposed adjacent to the pixel array, coupled to a plurality of image sensing pixels of each of the sub-pixel groups, wherein the image sensing pixels of each of the sub-pixel groups are configured to perform image sensing,
wherein the cluster analog to digital converter readout circuit completes a readout of a plurality of pieces of distance sensing data and the column readout circuit completes a readout of a plurality of pieces of digital image data in a same frame period.

US Pat. No. 11,070,756

METHOD, DEVICE AND SYSTEM FOR IMAGE ACQUISITION CONTROL

TUSIMPLE, INC., San Dieg...


1. A method for image capturing control, comprising:receiving environment information transmitted from one or more sensors;
extracting, based on the environment information, one or more environmental features;
determining, using at least one preconfigured neural network model, a current environment type based on the one or more environmental features;
determining whether the current environment type is a predetermined harsh environment type; and
capturing an image by controlling one or more Time-of-Flight (TOF) cameras to capture the image when it is determined that the current environment type is the harsh environment type; and
capturing an image by controlling one or more ordinary cameras when it is determined that the current environment type is not the harsh environment type,
wherein the controlling of the one or more TOF cameras includes:determining a target distance range corresponding to the harsh environment type;
estimating a first time length required for the one or more TOF cameras to receive infrared light reflected by an object at a lower limit distance of the target distance range;
estimating a second time length required for the one or more TOF cameras to receive infrared light reflected by the object at an upper limit distance of the target distance range; and
determining an exposure start time and an exposure end time for the one or more TOF cameras based on the first time length and the second time length.
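The exposure-window arithmetic in the last three steps follows from round-trip light travel time: for a target at distance d, reflected infrared light returns after 2d/c. A minimal sketch of that calculation (function and parameter names are illustrative assumptions):

```python
C = 299_792_458.0  # speed of light in m/s

def tof_exposure_window(d_min_m, d_max_m, pulse_start_s=0.0):
    """Sketch of deriving a TOF camera exposure window from a target
    distance range.  The exposure opens when light reflected from the
    lower limit distance can first return and closes once light from
    the upper limit distance has returned.
    """
    t_first = 2.0 * d_min_m / C   # first time length (lower limit distance)
    t_second = 2.0 * d_max_m / C  # second time length (upper limit distance)
    exposure_start = pulse_start_s + t_first
    exposure_end = pulse_start_s + t_second
    return exposure_start, exposure_end
```

For example, a target range of 15 m to 150 m gives a window of roughly 100 ns to 1 microsecond after the illumination pulse, which rejects returns (e.g. from nearby fog or rain) outside the range of interest.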


US Pat. No. 11,070,755

IMAGING DEVICE AND SIGNAL PROCESSING DEVICE

CANON KABUSHIKI KAISHA, ...


1. An imaging device comprising:a pixel unit in which a plurality of unit pixels each including a plurality of photoelectric converters are arranged in a matrix; and
a signal processing unit that processes signals read out from the pixel unit,
wherein the pixel unit includesa first reference pixel region including the unit pixels each driven in a first mode to read out a signal in accordance with a combined charge obtained by combining charges generated by the plurality of photoelectric converters, and
a second reference pixel region including the unit pixels each driven in a second mode to read out signals, the number of which is greater than the number of the signal read out in the first mode, including at least a signal in accordance with charges generated by a part of the plurality of photoelectric converters and a signal in accordance with a combined charge obtained by combining charges generated by the plurality of photoelectric converters, and

wherein the signal processing unit is configured tocalculate a first correction value in accordance with an average value of a first data group read out from the first reference pixel region in the first mode, and
calculate a second correction value in accordance with an average value of a second data group read out from the second reference pixel region in the second mode by using the first correction value as an initial value.
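The two-stage correction above (a first correction value from a plain average, then a second correction value computed over the second data group with the first value as its initial value) can be sketched as a seeded running average. The smoothing factor `k` is an illustrative assumption, not from the claim:

```python
def first_correction(first_data_group):
    """First correction value: the average of the data group read out
    from the first reference pixel region in the first mode."""
    return sum(first_data_group) / len(first_data_group)

def second_correction(second_data_group, initial_value, k=0.25):
    """Second correction value: a running average over the second data
    group, seeded with the first correction value as its initial value.
    The update rule and smoothing factor k are illustrative assumptions.
    """
    value = initial_value
    for sample in second_data_group:
        value += k * (sample - value)
    return value
```

Seeding the second average with the first correction value means the second estimate starts near a plausible dark-level baseline instead of converging from zero, which is the apparent motivation for the "initial value" language in the claim.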


US Pat. No. 11,070,754

SLEW RATE CONTROL CIRCUIT FOR AN IMAGE SENSOR

STMicroelectronics Asia P...


1. An image sensor comprising:first and second voltage rails;
a first regulator having an output coupled to the first voltage rail and configured to generate a first regulated voltage;
a second regulator having an output coupled to the second voltage rail and configured to generate a second regulated voltage lower than the first regulated voltage; and
a plurality of pixels coupled to the first and second voltage rails, wherein each pixel of the plurality of pixels comprises:first and second storage capacitors,
a first transistor having a current path coupled to the first storage capacitor,
a second transistor having a current path coupled to the second storage capacitor, and
a third transistor coupled between a control terminal of the first transistor and the first or second voltage rails, wherein the third transistor is configured to limit a slew rate of current flowing between the control terminal of the second transistor and the first or second voltage rails to a first slew rate when the image sensor operates in global shutter mode, and to a second slew rate when the image sensor operates in rolling mode, the first slew rate being smaller than the second slew rate.


US Pat. No. 11,070,753

IMAGING DEVICE AND METHOD OF DRIVING IMAGING DEVICE

CANON KABUSHIKI KAISHA, ...


1. An imaging device comprising:a plurality of pixels which are arranged to form a plurality of columns and each of which includes a photoelectric conversion unit that generates charges by photoelectric conversion;
a plurality of column circuits which are provided to the plurality of columns, respectively, and each of which receives a signal from a part of the plurality of pixels;
a first common control line connected to each of the plurality of column circuits; and
a control unit that controls the plurality of column circuits,
wherein each of the plurality of column circuits includes an amplifier circuit whose gain is switchable and a first transistor that controls a current flowing in the amplifier circuit, and
wherein the control unit controls the column circuit so that the first transistor supplies a current of a first current value when the amplifier circuit is at a first gain and the first transistor supplies a current of a second current value which is smaller than the first current value when the amplifier circuit is at a second gain which is larger than the first gain.

US Pat. No. 11,070,752

IMAGING DEVICE INCLUDING FIRST AND SECOND IMAGING CELLS AND CAMERA SYSTEM

Panasonic Intellectual Pr...


1. An imaging device comprising:a first imaging cell includinga first photoelectric converter that generates a first signal by photoelectric conversion, and
a first circuit that is electrically connected to the first photoelectric converter and detects the first signal; and

a second imaging cell includinga second photoelectric converter that generates a second signal by photoelectric conversion, and
a second circuit that is electrically connected to the second photoelectric converter and detects the second signal, wherein

sensitivity of the first imaging cell is higher than sensitivity of the second imaging cell,
a frame rate of the first circuit is lower than a frame rate of the second circuit, and
the first circuit is configured to reduce first random noise that occurs when the first signal is reset once, and an amount of reducing the first random noise by the first circuit is greater than an amount of reducing second random noise that occurs when the second signal is reset once by the second circuit.

US Pat. No. 11,070,751

ELECTRONIC DEVICE AND IMAGE UP-SAMPLING METHOD FOR ELECTRONIC DEVICE

Samsung Electronics Co., ...


1. An electronic device comprising:an image sensor; and
a processor,
wherein the image sensor comprises a micro lens and a light-receiving sensor pixel converting light passed through the micro lens to an electric signal, the light-receiving sensor pixel comprising a first floating diffusion region and a second floating diffusion region, and is configured to divide the light-receiving sensor pixel into a first area and a second area based on one of the first and second floating diffusion regions being activated, the first and second areas being different in size, and read out signals generated by the light-receiving sensor pixel, the signals comprising a first signal corresponding to the first area and a second signal corresponding to the second area; and
the processor is configured to control the image sensor to activate the first floating diffusion region to acquire a first image of an external object and activate the second floating diffusion region to acquire a second image of the external object and to synthesize at least part of the first image and at least part of the second image to generate a third image having a resolution higher than the resolution of the first image or the second image,
wherein the image sensor is configured to form the first area and the second area that is smaller than the first area based on the first floating diffusion region being activated and form the first area and the second area that is larger than the first area based on the second floating diffusion region being activated.