US Pat. No. 10,368,178

APPARATUS AND METHODS FOR MONITORING A MICROPHONE

Cirrus Logic, Inc., Aust...

20. A temperature sensor comprising: a microphone device; a spectrum peak detect module for processing a microphone signal produced by the microphone device to identify a Helmholtz resonance peak frequency and a quality factor of the resonance from the microphone signal; and a temperature estimation module for estimating air temperature within the microphone device based on said determined Helmholtz resonance frequency and the quality factor.
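The estimation step in this claim rests on the fact that the Helmholtz resonance frequency of a cavity depends on the speed of sound, which in turn depends on air temperature. A minimal sketch of the inversion follows; the geometry values in the test are hypothetical, and this sketch uses only the resonance frequency, not the claimed quality factor:

```python
import math

def estimate_temperature_c(f_res, neck_area, neck_length, cavity_volume):
    """Estimate air temperature (deg C) from a measured Helmholtz
    resonance frequency.

    Helmholtz model: f_res = (c / 2*pi) * sqrt(A / (V * L)), so the
    speed of sound is c = 2*pi*f_res*sqrt(V*L/A); for dry air,
    c ~= 20.047 * sqrt(T_kelvin).
    """
    c = 2.0 * math.pi * f_res * math.sqrt(cavity_volume * neck_length / neck_area)
    t_kelvin = (c / 20.047) ** 2
    return t_kelvin - 273.15
```

A full implementation along the lines of the claim would also fold in the quality factor (which varies with air viscosity, itself temperature-dependent) to refine the estimate.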

US Pat. No. 10,368,177

ABNORMALITY DETECTING DEVICE, ABNORMALITY DETECTION METHOD, AND RECORDING MEDIUM STORING ABNORMALITY DETECTION COMPUTER PROGRAM

FUJITSU LIMITED, Kawasak...

1. An abnormality detecting device comprising: a memory; and
a processor coupled to the memory and configured to:
detect an envelope of an audio signal indicating a periodic sound emitted by a target object and a periodic sound emitted by another object;
execute time-to-frequency conversion on the envelope to calculate a frequency spectrum of the audio signal; and
determine whether or not the target object has an abnormality, based on a frequency component included in the frequency spectrum and corresponding to a time interval between time points when the sound is emitted by the target object,
wherein the target object is a rotating device having a predetermined number of blades, and
wherein the processor is further configured to detect multiple peaks of the frequency spectrum, calculate, for each of combinations, each of which includes two peaks among the multiple peaks, the ratio of a frequency corresponding to one of two peaks included in the combination to a frequency corresponding to the other of the two peaks included in the combination, and estimate, as a frequency corresponding to the time interval between the time points when the sound is emitted by the target object, lower one of frequencies corresponding to two peaks included in a combination that is among the combinations and causes the difference between the ratio of the frequencies corresponding to the peaks of the combination and the predetermined number of blades to be the smallest among differences between the ratios calculated for the combinations and the predetermined number of blades.
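The peak-pairing rule in the final clause can be restated compactly: among all pairs of spectral peaks, pick the pair whose frequency ratio is closest to the blade count, and take the lower frequency of that pair. A minimal sketch (function name and example frequencies are illustrative):

```python
from itertools import combinations

def estimate_blade_pass_base_frequency(peak_freqs, num_blades):
    """For every pair of detected spectral peaks, compute the ratio of
    the higher frequency to the lower; select the pair whose ratio is
    closest to the predetermined number of blades, and return the lower
    frequency of that pair as the per-revolution base frequency."""
    best_pair = min(
        combinations(peak_freqs, 2),
        key=lambda pair: abs(max(pair) / min(pair) - num_blades),
    )
    return min(best_pair)
```

For a four-blade rotor with peaks at 25 Hz, 100 Hz, and 180 Hz, the 25/100 pair has ratio 4 and wins, so 25 Hz is taken as the rotation-rate frequency.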

US Pat. No. 10,368,176

EARPIECE FOR COUPLING A HEARING AID TO A USER'S EAR CANAL AND A METHOD FOR MANUFACTURING SUCH AN EARPIECE

1. An earpiece for a hearing aid, the earpiece comprising: a seal configured to deform to a shape of a human ear canal,
wherein the seal is dome-shaped,
wherein the seal has a first section with a sound output bore and the sound output bore is configured to physically couple to a receiver for a hearing aid,
wherein the seal has a second section and the second section includes 50 to 2000 openings,
wherein the openings are laser-cut; and
wherein the openings have diameters from 80 to 150 micrometers.

US Pat. No. 10,368,175

HEARING DEVICE COMPRISING A FEEDBACK DETECTION UNIT

1. A hearing device comprising a forward path for processing an electric signal representing sound, the forward path comprising an input unit for receiving or providing an electric input signal representing sound,
a signal processing unit for applying a frequency- and/or level-dependent gain to an input signal of the forward path and providing a processed output signal, and
an output transducer for generating stimuli perceivable as sound to a user;
the hearing device further comprising
a feedback detection unit configured to detect feedback or evaluate a risk of feedback via an acoustic or mechanical or electrical feedback path from said output transducer to said input unit,
a loop consisting of said forward path and said feedback path being defined, the loop exhibiting a loop delay D,
wherein said feedback detection unit comprises
a magnitude and phase analysis unit for repeatedly determining magnitude and phase of said electric input signal or a processed version thereof, and further configured to determine values of loop magnitude, loop phase, loop magnitude difference, and loop phase difference signals, respectively, based thereon and on said loop delay D, where said loop magnitude difference and said loop phase difference are the differences between values of the parameters, loop magnitude and loop phase, respectively, at a given time instant, m, and a time instant, m-D, one feedback loop delay D earlier;
a feedback conditions and detection unit configured to check criteria for magnitude and phase feedback condition, respectively, based on said values of loop magnitude, loop phase, loop magnitude difference, and loop phase difference signals, respectively, and to provide a feedback detection signal indicative of feedback or a risk of feedback.
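The core comparison in this claim is between the loop magnitude/phase at time m and at time m-D, one loop delay earlier: sustained magnitude with stable magnitude and phase across a loop delay indicates howl risk. A minimal sketch, with purely illustrative thresholds (the patent does not give numeric criteria):

```python
def detect_feedback(loop_mag, loop_phase, d,
                    mag_thresh=1.0, mag_diff_tol=0.1, phase_diff_tol=0.2):
    """Flag feedback at each time index m by comparing loop magnitude
    and loop phase with their values one loop delay D earlier (m - D).

    loop_mag, loop_phase: sequences indexed by time instant m.
    Returns a list of booleans (False for the first d samples, where no
    delayed value is available)."""
    flags = []
    for m in range(len(loop_mag)):
        if m < d:
            flags.append(False)
            continue
        mag_diff = loop_mag[m] - loop_mag[m - d]      # loop magnitude difference
        phase_diff = loop_phase[m] - loop_phase[m - d]  # loop phase difference
        flags.append(loop_mag[m] >= mag_thresh
                     and abs(mag_diff) <= mag_diff_tol
                     and abs(phase_diff) <= phase_diff_tol)
    return flags
```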

US Pat. No. 10,368,174

DISTRIBUTED PHASE LOCKED LOOP IN HEARING INSTRUMENTS

SEMICONDUCTOR COMPONENTS ...

1. A system, comprising: a receiver configured to receive wireless signals from an electronic device, the receiver comprising:
receiver logic operable to receive an input signal at a source clock frequency from the electronic device;
a phase detector coupled to the receiver logic and operable to:
receive a data sampling clock; and
compute an error signal indicating a difference between the data sampling clock and the source clock; and
a first communication interface coupled to the phase detector and operable to transmit the input signal; and
a signal processor (SP) coupled to the receiver and comprising:
a second communication interface operable to couple to the first communication interface to communicatively couple the SP to the receiver;
a digitally-controlled oscillator (DCO) coupled to the second communication interface and operable to generate a system clock;
a clock divider coupled to the DCO and the phase detector and operable to generate the data sampling clock based at least partially on the system clock; and
digital signal processing logic coupled to the DCO and the clock divider and operable to process the input signal at a frequency specified by the data sampling clock.
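The control loop distributed across this claim (phase detector in the receiver, DCO and clock divider in the signal processor) behaves like a first-order frequency-locked loop: the error between the source clock and the divided system clock steers the DCO until the data sampling clock matches the source. A minimal behavioral sketch, with illustrative frequencies, divider ratio, and loop gain:

```python
def track_source_clock(source_freq, dco_freq, divider, gain=0.2, steps=50):
    """First-order loop sketch: the phase detector's error (source clock
    minus divided system clock) steers the DCO control until the divided
    clock converges to the source clock frequency.  Returns the final
    data sampling clock frequency."""
    for _ in range(steps):
        sampling_freq = dco_freq / divider    # clock divider output
        error = source_freq - sampling_freq   # phase/frequency detector
        dco_freq += gain * divider * error    # DCO frequency update
    return dco_freq / divider
```

With gain 0.2 the residual error shrinks by a factor of 0.8 per step, so a 48 kHz source is matched to well under 0.1 Hz within 50 iterations.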

US Pat. No. 10,368,173

SYSTEMS AND METHODS FOR MINIMIZING AN EFFECT OF SYSTEM NOISE GENERATED BY A COCHLEAR IMPLANT SYSTEM

Advanced Bionics AG, Sta...

1. A sound processor included in a cochlear implant system used by a patient, the sound processor comprising: at least one physical computing component that
generates a spectral input signal, the spectral input signal representative of spectral energy contained within a frequency band in a plurality of frequency bands of an audio signal presented to the patient,
receives a predetermined system noise threshold that is determined prior to the audio signal being presented to the patient and that is based on a predicted or measured spectral energy level of system noise generated by a theoretical or test cochlear implant system associated with, but distinct from, the cochlear implant system,
determines whether a spectral energy level of the spectral input signal exceeds the predetermined system noise threshold, and
generates, based on the determination of whether the spectral energy level of the spectral input signal exceeds the predetermined system noise threshold, a spectral output signal by
including the spectral input signal in the spectral output signal if the spectral energy level of the spectral input signal exceeds the predetermined system noise threshold, and
excluding the spectral input signal from the spectral output signal if the spectral energy level of the spectral input signal does not exceed the predetermined system noise threshold.
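The include/exclude decision per frequency band reduces to a per-band gate against a fixed threshold. A minimal sketch (per-band threshold vector and zeroing of excluded bands are illustrative choices):

```python
def gate_spectral_bands(band_energies, noise_thresholds):
    """Pass a band's spectral input signal to the output only when its
    energy exceeds the predetermined per-band system-noise threshold;
    otherwise exclude it (emit 0)."""
    return [e if e > t else 0.0
            for e, t in zip(band_energies, noise_thresholds)]
```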

US Pat. No. 10,368,172

DIAPHRAGM SUSPENSION FOR A LOUDSPEAKER

PSS BELGIUM N.V., Dender...

1. A loudspeaker including a chassis, a drive unit and a diaphragm; wherein the drive unit has a stationary part secured to the chassis and a translatable part secured to the diaphragm;
wherein an outer edge of the diaphragm is suspended from the chassis by an edge suspension;
wherein the edge suspension has a plurality of straight portions, each straight portion having a respective first surface and a respective second surface which meet along an edge to provide a spring which permits the diaphragm to be moved relative to the chassis by the drive unit;
wherein the edge suspension has at least one corner portion, wherein the/each corner portion joins two of the straight portions together and includes at least one geometrical interruption formed therein;
wherein the/each geometrical interruption formed in the/each corner portion includes a first corrugation which varies in height along a first path which extends circumferentially around the edge suspension, and a second corrugation formed within the first corrugation which varies in height along a second path which extends across the first path;
wherein each straight portion includes one or more stiffening elements;
wherein the one or more stiffening elements include one or more geometrical interruptions formed in each straight portion;
wherein the/each geometrical interruption formed in each straight portion includes a first corrugation which varies in height along a first path which extends circumferentially around the edge suspension, and a second corrugation formed within the first corrugation which varies in height along a second path which extends across the first path.

US Pat. No. 10,368,171

AUDIO APPARATUS AND AUDIO OUTPUT PORT

RADSONE INC., Sungnam (K...

1. An audio apparatus comprising: an analog module configured to receive a digital left (L) channel signal and a digital right (R) channel signal, and output first and second analog L signals and first and second analog R signals;
a first output port including first to fifth conductors, to which the first and second analog L signals, the first and second analog R signals, and a ground voltage are provided, respectively; and
a second output port including sixth to ninth conductors, to which the first and second analog L signals and the first and second analog R signals are provided, respectively,
wherein, while an audio jack including first to third terminals remains inserted in the first output port, the first and second conductors remain connected to the first terminal, the third and fourth conductors remain connected to the second terminal, and the fifth conductor remains connected to the third terminal, and
wherein, while the audio jack remains inserted in the first output port, the first and second analog L signals and the first and second analog R signals are operated in a single-ended mode, and during at least a portion of a time during which the audio jack is not inserted into the first output port, the first and second analog L signals and the first and second analog R signals are operated in a differential mode.

US Pat. No. 10,368,170

SMART HEADSET AND METHOD OF ROUTING SERVICE IN RESPONSE TO PROXIMITY TO MOBILE DEVICE

Kyocera Corporation, Kyo...

1. A smart headset comprising: a speaker configured to generate output sounds based on received information received at the smart headset;
a microphone configured to generate microphone signals, based on input sounds, for forming transmission information transmitted from the smart headset;
a cellular transceiver configured to wirelessly communicate with a cellular communication network;
a Bluetooth transceiver configured to wirelessly communicate with a smart handset; and
a controller configured to place the smart headset in a selected operation mode of at least two modes comprising a first mode and a second mode,
the smart headset communicating through the cellular transceiver with the cellular communication network when the smart headset is in the first mode such that the received information is received from the cellular communication network through the cellular transceiver and the transmission information is transmitted to the cellular communication network through the cellular transceiver,
the smart headset communicating through the Bluetooth transceiver with the smart handset when the smart headset is in the second mode such that the received information is received from the smart handset through the Bluetooth transceiver and the transmission information is transmitted to the smart handset through the Bluetooth transceiver,
the controller configured to place the smart headset in the first mode when the smart handset is determined to be farther than a maximum proximity from the smart headset and configured to place the smart headset in the second mode when the smart handset is determined to be within the maximum proximity to the smart headset.
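The routing rule in the final clause is a straightforward proximity comparison. A minimal sketch of the controller's decision; the 10 m default is a hypothetical value, not taken from the patent:

```python
def select_mode(handset_distance_m, max_proximity_m=10.0):
    """Place the headset in the second (Bluetooth-to-handset) mode when
    the smart handset is within the maximum proximity, and in the first
    (direct cellular) mode when it is farther away."""
    if handset_distance_m <= max_proximity_m:
        return "bluetooth"   # second mode: route via the smart handset
    return "cellular"        # first mode: route via the cellular transceiver
```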

US Pat. No. 10,368,169

POWER AND BANDWIDTH EFFICIENT TRUE WIRELESS STEREO SPEAKERS

QUALCOMM Incorporated, S...

1. A method for wireless communication at a first speaker, comprising: establishing a control communication link with a second speaker over a first piconet;
receiving, in a first slot of an extended synchronous connection-oriented (eSCO) window on a second piconet, a first null signal sent from a wireless device to the second speaker;
switching, in a second slot of the eSCO window that is consecutive to the first slot, and based at least in part on the first null signal, from the second piconet to the first piconet;
transmitting to the second speaker, on the first piconet, a second null signal based at least in part on the first null signal; and
closing the eSCO window based at least in part on the second null signal.

US Pat. No. 10,368,168

METHOD OF DYNAMICALLY MODIFYING AN AUDIO OUTPUT

Skullcandy, Inc., Park C...

1. A method of dynamically modifying an audio output, comprising: receiving image data depicting an audio reproduction device and a user wearing the audio reproduction device;
determining one or more deteriorating factors that deteriorate a sound quality of the audio reproduction device as worn by the user based on the image data;
estimating a sound leakage caused by the one or more deteriorating factors;
determining one or more sound profiles based on at least the sound leakage; and
generating tuning data based on the one or more sound profiles, the tuning data configured to sonically customize the audio reproduction device.

US Pat. No. 10,368,167

AUDIO POWER CIRCUIT AND METHOD

MOTOROLA SOLUTIONS, INC.,...

1. An audio power circuit comprising: an audio amplifier having a power input;
a speaker connected to the audio amplifier;
a load switch coupled between a battery and the power input to selectively provide a supply of power from the battery to the power input;
a second load switch coupled between the battery and the power input in series with the load switch to selectively provide the supply of power from the battery to the power input;
a control circuit configured to be connected to the battery and the power input and to control the supply of power from the battery to the power input by controlling the load switch;
a thermal protection circuit connected between the audio amplifier and the speaker, the thermal protection circuit configured to generate a thermal protection signal and provide the same directly to the load switch to control the supply of power from the battery to the power input,
wherein the thermal protection circuit includes:
a comparator having a reference voltage input to receive a reference voltage, and a speaker voltage input to receive a speaker voltage, wherein the thermal protection circuit disables the supply of power from the battery to the power input by opening the load switch when the comparator indicates that the speaker voltage exceeds the reference voltage;
a rectifier connected to the speaker voltage input and configured to convert an alternating current (AC) voltage across the speaker to a direct current (DC) voltage, wherein the AC voltage is provided by the audio amplifier; and
a second thermal protection circuit connected between the audio amplifier and the speaker and in parallel to the thermal protection circuit, the second thermal protection circuit configured to generate a second thermal protection signal and provide the same directly to the second load switch to control the supply of power from the battery to the power input.

US Pat. No. 10,368,166

VOLTAGE REGULATOR AND CONTROL CIRCUIT FOR SILVER-ZINC BATTERIES IN HEARING INSTRUMENTS

ZPower, LLC, Camarillo, ...

1. An apparatus for managing power within a voltage regulating circuit of a battery-powered hearing aid device, comprising: an input terminal of a voltage regulator receiving an input voltage (VIN) supplied by a battery;
an output terminal of the voltage regulator providing an output voltage (VOUT) to a hearing aid terminal electrically connected to one or more electrical components of the hearing aid device, the output voltage (VOUT) based on the input voltage (VIN);
a sensing terminal of the voltage regulator for sensing a charging current (VSENSE) between a charging device and charging contacts of the voltage regulating circuit; and
a switch device configured to:
transition to an ON state to allow the charging device to charge the battery based on the sensing terminal of the voltage regulator sensing the charging current (VSENSE) between the charging device and the charging contacts; and
transition to an OFF state to block the charging contacts from receiving voltage from the battery when the output voltage (VOUT) is present;
wherein the voltage regulator is configured to reduce a magnitude of the input voltage (VIN) when the magnitude of the input voltage (VIN) exceeds an input voltage threshold (Vin_thresh) to generate the output voltage (VOUT) having a magnitude that is less than a maximum output voltage (Vout_max) and is further configured not to downregulate the input voltage (VIN) when the magnitude of the input voltage (VIN) is not greater than the input voltage threshold (Vin_thresh).

US Pat. No. 10,368,165

METHOD FOR ELIMINATING MOTOR VEHICLE AND WATER CRAFT HORN EMC INTERFERENCE AND HORN

11. A method of eliminating electromagnetic interference of an electronic device in a motor vehicle or a water craft, comprising: coupling at least one capacitor of about 220-10000 μF in parallel with input terminals of a power supply of the electronic device to eliminate outbound electrical interference caused by electromagnetic radiation, conduction, or coupling that is generated by the electronic device.

US Pat. No. 10,368,164

APPROACH FOR PARTIALLY PRESERVING MUSIC IN THE PRESENCE OF INTELLIGIBLE SPEECH

HARMAN INTERNATIONAL INDU...

1. An audio processing system, comprising: an input device that receives a first audio signal;
a voice activity detector that:
receives a first control signal from a voice separator;
determines that a signal of interest is present in the first audio signal based on the first control signal exceeding a ducker threshold; and
generates a second control signal in response to the first audio signal; and
a ratio-based attenuator that:
receives the second control signal from the voice activity detector,
determines whether a first signal level associated with the first audio signal exceeds a second signal level associated with a second audio signal received from an audio playback device, and
if the first signal level exceeds the second signal level, then maintains an audio level of the second audio signal, or
if the first signal level does not exceed the second signal level, then causes the audio level of the second audio signal to be adjusted from a first value to a second value.
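The attenuator's branch structure reduces to a single comparison between the two signal levels. A minimal sketch that follows the claim's branches literally (function and argument names are illustrative):

```python
def attenuate_playback(first_level, playback_level, adjusted_level):
    """Ratio-based attenuator sketch: keep the playback (second) signal
    at its current level when the first signal level exceeds it;
    otherwise adjust the playback level from its first value to a
    second value, as recited in the claim."""
    if first_level > playback_level:
        return playback_level    # maintain the second audio signal's level
    return adjusted_level        # duck to the second value
```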

US Pat. No. 10,368,163

HEADSET POWER SUPPLY AND INPUT VOLTAGE RECOGNITION

QUALCOMM Incorporated, S...

1. An electronic apparatus, comprising: a detection circuit configured to recognize a user input voltage generated by an electronic user input, the detection circuit comprising a reference voltage generator including a multiplexer configured to generate a set of reference voltages in a sequence for comparing with the user input voltage generated by the electronic user input, the multiplexer controlled by multiplexer selection signals from a counter to cause the multiplexer to multiplex the set of reference voltages, the detection circuit including a comparator comprising an auto-zero amplifier configured to compare the set of reference voltages and the user input voltage, the auto-zero amplifier comprising:
an operational amplifier;
a first switch coupled between an output of the operational amplifier and an input of the operational amplifier;
a capacitor having a first lead coupled to the input of the operational amplifier and coupled to the first switch;
a second switch configured to selectively couple the reference voltages to a second lead of the capacitor; and
a third switch configured to selectively couple a user input voltage to the second lead of the capacitor; and
a power supply configured to supply power to the detection circuit.

US Pat. No. 10,368,162

METHOD AND APPARATUS FOR RECREATING DIRECTIONAL CUES IN BEAMFORMED AUDIO

GOOGLE LLC, Mountain Vie...

1. A method for recreating directional cues in beamformed audio, the method comprising: receiving at least one first audio signal via a microphone array;
receiving at least one second audio signal via the microphone array;
receiving at least one third audio signal via at least one reference microphone;
transforming the at least one first audio signal, the at least one second audio signal and the at least one third audio signal to a frequency domain representation;
beamforming amplitude data of the at least one transformed first audio signal, the at least one transformed second audio signal and the at least one transformed third audio signal to generate a beamformed monophonic audio signal;
deriving phase offset information based on a frequency extracted during the transforming of the at least one third audio signal and the beamformed monophonic audio signal; and
generating a multi-channel audio signal with directional cues by applying the derived phase offset information to the beamformed monophonic audio signal.
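The final two steps amount to carrying the reference microphone's per-bin phase over to the amplitude-beamformed monophonic spectrum, so the output regains spatial phase cues the beamformer discarded. A minimal per-channel sketch over frequency-domain bins (a full method would repeat this per reference microphone and per STFT frame):

```python
import cmath

def apply_directional_cues(mono_spectrum, ref_spectrum):
    """Combine the beamformed mono magnitude with the reference
    microphone's phase, bin by bin, producing one output channel that
    carries the reference mic's directional phase cues."""
    out = []
    for mono_bin, ref_bin in zip(mono_spectrum, ref_spectrum):
        phase = cmath.phase(ref_bin)            # phase offset from reference mic
        out.append(abs(mono_bin) * cmath.exp(1j * phase))
    return out
```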

US Pat. No. 10,368,161

AMPLIFIER AND ELECTRONIC DEVICE USING THE SAME

ACER INCORPORATED, New T...

1. An amplifier, comprising: a first speaker;
a second speaker;
a third speaker;
an acoustic box, wherein the first speaker, the second speaker and the third speaker are disposed on a continuous surface inside the acoustic box; and
three partitions disposed on the continuous surface, one of the partitions is interposed between the first speaker and the second speaker, another one of the partitions is interposed between the second speaker and the third speaker, and another one of the partitions is interposed between the third speaker and the first speaker;
wherein one end of each partition is connected to the continuous surface where the first speaker, the second speaker and the third speaker are disposed, and another end of the partition is separated from an opposite surface facing the continuous surface by a gap;
wherein three connection lines of the first speaker, the second speaker and the third speaker form a triangle, and the partitions are connected to form a Y-shaped structure.

US Pat. No. 10,368,160

SPEAKER BOX

AAC TECHNOLOGIES PTE. LTD...

1. A speaker box, comprising: a lower cover;
an upper cover engaging with the lower cover for forming an accommodating space;
a speaker accommodated in the accommodating space, and including a diaphragm with a dome attached to the diaphragm;
a front sound cavity formed by the diaphragm and the upper cover;
air adsorbent particles received in the front sound cavity for absorbing the high frequency harmonic and noises; wherein
the dome forms at least one recess communicating with the front sound cavity, and the air adsorbent particles are received in the recesses.

US Pat. No. 10,368,159

WATER RESISTANT LOUDSPEAKER

18. A water resistant loudspeaker comprising: a. a water-impermeable spider having a first water-impermeable attachment to a former;
b. a spider support supported by said basket, wherein said spider support has at least one vent;
c. a water impermeable diaphragm having a second water-impermeable attachment to said former;
d. an acoustic chamber housing enclosing a rear acoustic chamber attached and sealed to a basket of said water resistant loudspeaker behind said spider and having a size adapted to maintain speaker performance at low frequencies; and
e. wherein said first attachment is adjacent said second attachment.

US Pat. No. 10,368,158

EARPHONE DEVICE THAT SWITCHED TO AN OPEN-TYPE OR A CLOSED-TYPE EARPHONE DEVICE

COOLER MASTER TECHNOLOGY ...

1. An earphone device, comprising: a casing including an opening and an accommodating space, the opening communicating with the accommodating space and an outside of the casing, wherein a groove is disposed at a periphery of the casing, and wherein the casing further includes at least one positioning pillar located in the opening;
an earphone body disposed in the accommodating space, wherein a plurality of through holes are disposed outside the earphone body; and
a detachable cover having a protruding bar along a periphery of the detachable cover, wherein the detachable cover includes at least one fixing portion;
wherein the detachable cover and the casing are combined together, the protruding bar is embedded into the groove, and the at least one positioning pillar is inserted into the at least one fixing portion such that the detachable cover covers the earphone body and the through holes,
wherein the through holes are disposed outside the earphone body to make an inside and an outside of the earphone body communicate with each other, and the detachable cover includes a plurality of protruding pillars respectively sealing the through holes.

US Pat. No. 10,368,157

ADJUSTABLE EARCUP IN CONTINUOUS HEADBAND-SPRING HEADPHONE SYSTEM

BOSE CORPORATION, Framin...

1. A headphone system comprising: a pair of earcups;
a continuous headband spring connecting the pair of earcups, the continuous headband spring having an internal slot with an opening along an inner surface thereof; and
an adjustment apparatus coupled with one of the pair of earcups and the continuous headband spring, the adjustment apparatus comprising:
a shoe coupled with the one of the pair of earcups and positioned in the internal slot;
a tongue coupled with the shoe and extending at least partially along the continuous headband spring; and
a resistance member coupled with the tongue for resisting movement of the tongue relative to the continuous headband spring, wherein the resistance member comprises a friction box, and wherein the friction box comprises:
a housing coupled to the continuous headband spring; and
at least a set of damping pads for engaging the tongue as the tongue moves relative to the continuous headband spring.

US Pat. No. 10,368,156

CONFIGURABLE EARBUD RETENTION AND STABILIZATION SYSTEM

1. A configurable earbud retention and stabilization system, comprising: a bendable earloop configured to fit over an ear of a user, the bendable earloop having a first end and a second end oppositely disposed relative to the first end, the bendable earloop defining a clip aperture proximate to the first end, the bendable earloop configured to be deformed into a plurality of different positions so as to provide a customized fit for the user; and
an earbud clip, the earbud clip forming a cavity configured to receive an earbud headphone therein, the earbud clip including a connecting member received within the clip aperture of the bendable earloop so as to detachably couple the earbud clip to the bendable earloop, the connecting member of the earbud clip comprising a shaft portion and a cap portion, the shaft portion of the earbud clip received within the clip aperture of the bendable earloop, and the cap portion of the earbud clip retaining the earbud clip in engagement with the bendable earloop while restricting the earbud clip to rotation about a single rotational axis which extends longitudinally through the connector member.

US Pat. No. 10,368,155

SYSTEM WITH WIRELESS EARPHONES

Koss Corporation, Milwau...

1. A wireless headphone assembly comprising: first and second earphones, wherein each of the first and second earphones comprises an acoustic transducer;
an antenna for receiving wireless signals;
a wireless communication circuit connected to the antenna, wherein the wireless communication circuit is for receiving and transmitting wireless signals to and from the wireless headphone assembly;
a processor in communication with the wireless communication circuit; and
a rechargeable battery for powering the wireless headphone assembly,
wherein the headphone assembly is configured, with the processor, to transition automatically from playing digital audio content received wirelessly by the headphone assembly via a first wireless network to playing digital audio content received wirelessly by the headphone assembly via a second wireless network.

US Pat. No. 10,368,154

SYSTEMS, DEVICES AND METHODS FOR EXECUTING A DIGITAL AUDIOGRAM

Listening Applications Lt...

1. A system for providing audio output, the system comprising: a remote computing device including a processor and an audio unit configured to generate one or more output signals of arbitrary amplitude;
earphones connectable to the computing device, configured to output said output signals; and
a microphone configured to record the power level of said output signals and calculate a proportionality constant for each frequency of said output signals; wherein said processor is further configured to:
analyze the proportionality constant for each frequency of one or more feedback signals from said one or more earphones to yield calibration data;
adjust the amplitude or frequency based at least on the calibration data to calibrate the device;
generate one or more audiograms resulting from a hearing test using the calibrated device; and
adjust a device power level according to said one or more audiograms.
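The calibration step turns on a per-frequency proportionality constant: the ratio of what the microphone actually recorded to what the earphones were asked to play. A minimal sketch of that constant and its use to set test-tone amplitudes (function names and level units are illustrative):

```python
def calibration_constants(played_levels, recorded_levels):
    """Per-frequency proportionality constant: recorded power level
    divided by the commanded output level at each test frequency (Hz)."""
    return {freq: recorded_levels[freq] / played_levels[freq]
            for freq in played_levels}

def calibrated_amplitude(target_level, freq, constants):
    """Drive amplitude needed so the listener actually receives
    target_level at the given frequency on the calibrated device."""
    return target_level / constants[freq]
```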

US Pat. No. 10,368,153

WATERPROOF SOUND-TRANSMITTING MEMBRANE AND WATERPROOF SOUND-TRANSMITTING STRUCTURE USING THE SAME

NITTO DENKO CORPORATION, ...

1. A waterproof sound-transmitting membrane comprising a sound-transmitting region consisting of a single porous membrane of polytetrafluoroethylene, the porous membrane having a through-thickness air permeability of 2 cm3/cm2/s or more as measured by Method A (Frazier method) for air permeability measurement according to Japanese Industrial Standards (JIS) L 1096 and a water entry pressure of 3 kPa or more as measured by Method B (high hydraulic pressure method) for waterproofness testing according to JIS L 1092, wherein the waterproof sound-transmitting membrane has a sound distortion of 60.2% or less, and
wherein the porous membrane has a water entry pressure of 20 kPa or more and 50 kPa or less.

US Pat. No. 10,368,152

MICROPHONE ARRANGEMENT

1. A microphone arrangement comprising at least three groups of microphones that are mounted on a head-wearable support structure including first and second earpads to be respectively worn next to first and second ears of a user, the at least three groups of microphones comprising a first group of microphones with one or more microphones, a second group of microphones with one or more microphones, and a third group of microphones with one or more microphones, wherein the first group is mounted to a first casing that accommodates signal transmission circuitry, the first casing being formed as part of the first earpad, the second group is mounted to slide with respect to the first casing, and the third group comprises a first microphone mounted on the first casing,
a second microphone mounted on a second casing, the second casing being formed as part of the second earpad,
wherein the first and second microphones are arranged symmetrically with respect to a user's head when the microphone arrangement is head-worn, and provide for a directionality that is orientated to the direction of a user's vision.

US Pat. No. 10,368,151

EXTERIOR COVER WITH SPEAKER

Samsung Electronics Co., ...

1. An exterior cover for protecting a display of an electronic device, the exterior cover comprising:
a cover portion configured to foldably connect to the electronic device in a first end of the cover portion, the cover portion comprising:
segments comprising:
a first segment located at a first end of the cover portion,
a second segment located at a second end of the cover portion opposite the first end, and
a third segment located between the first segment and the second segment; and
a folding axis allowing the segments to fold with respect to each other such that the folded segments support the electronic device in a cradle position when the electronic device is inclined at a first angle,
wherein the first segment comprises a first speaker, the second segment comprises a second speaker, and the third segment comprises a third speaker,
wherein the first speaker comprises a vibration plate and the first segment is in contact with a ground when the electronic device is in the cradle position, and
wherein the vibration plate reproduces sound at a low frequency band, and the second speaker and the third speaker reproduce sound at higher frequency bands.

US Pat. No. 10,368,150

CARRYING HIGH CAPACITY BIT TRANSPARENT LEASED LINE SERVICES OVER INTERNET PROTOCOL/MULTIPROTOCOL LABEL SWITCHING NETWORKS

Fujitsu Limited, Kawasak...

1. A leased line appliance (LLA) network switching system comprising:
an Internet Protocol (IP) switch fabric including M parallel paths;
a first LLA coupled to a first set of leased lines and coupled to the IP switch fabric to:
receive first leased line circuits (LLCs) over a first leased line of the first set of leased lines;
convert the first LLCs to first optical data unit (ODU) cells;
map the first ODU cells into first Internet Protocol (IP) packets using user datagram protocol (UDP), each of the first IP packets having a corresponding header comprising a UDP source port number including a PCS code block ID source, a UDP destination port number including a PCS code block ID destination, a sequence number, and a timestamp; and
transmit each of the first IP packets over the IP switch fabric via a respective parallel path of the M parallel paths corresponding to each of the first IP packets; and
a second LLA coupled to a second set of leased lines and coupled to the IP switch fabric to:
receive each of the first IP packets over the IP switch fabric via the parallel path of the M parallel paths corresponding to each of the first IP packets;
de-map the first IP packets into the first ODU cells based on each corresponding header of each of the first IP packets;
convert the first ODU cells to the first LLCs; and
transmit the first LLCs over a second leased line of the second set of leased lines.
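The mapping step in this claim — ODU cells carried in IP/UDP packets whose headers carry a PCS code block ID in each port field plus a sequence number and timestamp — can be sketched as follows. The field widths, function names, and path-selection rule are illustrative assumptions; the claim does not specify them.

```python
import struct

# Illustrative header layout for the claimed per-packet fields:
# UDP source port (PCS code block ID source), UDP destination port
# (PCS code block ID destination), sequence number, timestamp.
# Field widths are assumptions, not taken from the patent.
HEADER_FMT = "!HHIQ"  # 16b src port, 16b dst port, 32b seq, 64b timestamp

def map_odu_cell(cell: bytes, pcs_src: int, pcs_dst: int,
                 seq: int, timestamp_ns: int) -> bytes:
    """Prepend the illustrative header to one ODU cell payload."""
    header = struct.pack(HEADER_FMT, pcs_src, pcs_dst, seq, timestamp_ns)
    return header + cell

def demap_packet(packet: bytes):
    """Recover the header fields and the ODU cell from one packet."""
    size = struct.calcsize(HEADER_FMT)
    pcs_src, pcs_dst, seq, ts = struct.unpack(HEADER_FMT, packet[:size])
    return {"pcs_src": pcs_src, "pcs_dst": pcs_dst,
            "seq": seq, "timestamp": ts}, packet[size:]

def choose_path(seq: int, m_paths: int) -> int:
    """Assumed spreading rule: distribute packets over the M parallel
    paths by sequence number (the claim leaves the rule open)."""
    return seq % m_paths
```

The second LLA would apply `demap_packet` and reorder by `seq` before converting the ODU cells back to leased line circuits.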

US Pat. No. 10,368,149

METHODS AND APPARATUS FOR A COLORLESS DIRECTIONLESS AND SUPER-CHANNEL CONTENTIONLESS (CDSC) OPTICAL NETWORK ARCHITECTURE

Juniper Networks, Inc., ...

1. A system, comprising:
a super-channel multiplexer configured to multiplex a plurality of optical signals into a super-channel optical signal, each optical signal from the plurality of optical signals having a wavelength from a plurality of wavelengths, the plurality of wavelengths having a first wavelength bandwidth, the super-channel optical signal being a multi-carrier optical signal having a second wavelength bandwidth;
a first optical cross connect switch configured to be operatively coupled to the super-channel multiplexer and a reconfigurable optical add-drop multiplexer (ROADM) degree,
the first optical cross connect switch configured to be located between the super-channel multiplexer and the ROADM degree, the first optical cross connect switch, the super-channel multiplexer, and the ROADM degree configured to be included in a colorless, directionless, and contentionless optical network,
the first optical cross connect switch configured to switch, based on the second wavelength bandwidth, the super-channel optical signal to an output port from a plurality of output ports of the first optical cross connect switch,
the first optical cross connect switch configured to transmit the super-channel optical signal from the output port to the ROADM degree,
a second optical cross connect switch configured to be operatively coupled to the super-channel multiplexer and the ROADM degree, the second optical cross connect switch configured to receive the super-channel optical signal and transmit the super-channel optical signal to the ROADM degree for redundancy protection.

US Pat. No. 10,368,148

CONFIGURABLE COMPUTING RESOURCE PHYSICAL LOCATION DETERMINATION

Intel Corporation, Santa...

1. A system comprising:
a rack comprising a plurality of sled spaces, each of the plurality of sled spaces arranged to receive a sled having a beacon sensor coupled to the sled;
a plurality of sleds, each of the plurality of sleds disposed within a respective one of the sled spaces and having a beacon sensor, each of the plurality of sleds comprising at least one physical resource;
a sled controller, the sled controller communicatively coupled to the beacon sensors of the plurality of sleds, the sled controller to:
receive information elements from the beacon sensors of the plurality of sleds, the information elements from the beacon sensors of the plurality of sleds to include indication of signals exchanged between the beacon and the beacon sensors; and
determine a location of a one of the sleds within the sled spaces of the rack; and
one or more beacons coupled to the rack, the one or more beacons to emit a signal to be received by one of the beacon sensors of the plurality of sleds to be used to determine a location of the corresponding sled within the rack, wherein the one or more beacons are further to send, to a remote management entity, an information element including an indication of the location of the sled within the rack and an indication of an operating condition of the at least one physical resource.

US Pat. No. 10,368,147

SERVICE AVAILABILITY MONITOR

Schweitzer Engineering La...

1. A service availability monitor comprising:
a monitoring subsystem configured to interface with a plurality of monitored services and to determine an availability of the plurality of monitored services provided at a service location at a plurality of times;
a logging subsystem configured to create a log representing the availability of the plurality of monitored services;
an alert subsystem configured to generate a first user notification of an interruption of the plurality of monitored services;
a redundant communication subsystem comprising:
a first communication interface in communication with a first communication channel, and
a second communication interface in communication with a second communication channel, each of the first communication interface and the second communication interface comprising a wired connection capable of propagating electronic signals,
wherein the redundant communication subsystem is configured to transmit the first user notification from the service location to a remote location using the first communication interface when the monitoring subsystem determines that the second communication channel is unavailable, and to transmit the first user notification from the service location to the remote location using the second communication interface when the monitoring subsystem determines that the first communication channel is unavailable, the service location being physically separated from the remote location;
a redundant power source, comprising:
a primary power subsystem configured to draw power from a primary power source to power the service availability monitor; and
a backup power subsystem configured to draw power from the primary power subsystem and to provide power to the service availability monitor when power is unavailable through the primary power subsystem.
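The redundancy rule in this claim pairs each communication interface with the other interface's channel: interface one transmits when channel two is down, and vice versa. A minimal sketch, assuming a default of the first interface when both channels are up (the claim does not state that case):

```python
def route_notification(channel1_available: bool, channel2_available: bool) -> str:
    """Select the transmit interface per the claimed redundancy rule.

    The claim specifies: use the first interface when the second channel
    is unavailable, and the second interface when the first channel is
    unavailable. Defaulting to the first interface when both channels
    are up is an assumption for illustration.
    """
    if channel1_available and not channel2_available:
        return "first_interface"
    if channel2_available and not channel1_available:
        return "second_interface"
    if channel1_available and channel2_available:
        return "first_interface"  # assumed default, not in the claim
    raise RuntimeError("no communication channel available")
```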

US Pat. No. 10,368,146

SYSTEMS AND METHODS FOR ENVIRONMENT SENSING

GENERAL ELECTRIC COMPANY,...

1. A sensor system, comprising:
a sensor node having a sensor and an environmental sensor, the sensor includes a sensing material configured to be in contact with an ambient environment, the environmental sensor configured to acquire one or more ambient parameters of the ambient environment proximate the sensor node; and
a remote system having a communication circuit and a controller circuit, the communication circuit is configured to be wirelessly communicatively coupled to the sensor node, the controller circuit electrically coupled to the communication circuit, the controller circuit configured to:
receive an impedance response of the sensing material;
receive the one or more ambient parameters; and
analyze the impedance response of the sensing material using the one or more ambient parameters and at frequencies that provide a linear response of the sensing material to an analyte of interest and at least partially reject effects of interferences.

US Pat. No. 10,368,145

ORIGINATION AND DESTINATION BASED ROUTING

Comcast Cable Communicati...

1. A system comprising:
a first computing device configured to be in communication with a network, the first computing device comprising:
one or more processors; and
memory storing instructions that, when executed by the one or more processors, cause the first computing device to:
determine, for a call session that has been routed via the network using origin-based information associated with an origin of the call session, whether a next destination for the call session is an origin-based destination;
after determining that the next destination for the call session is an origin-based destination, determine a database corresponding to the origin-based information, wherein the database maps a plurality of origins, comprising the origin of the call session, with a plurality of next destinations, comprising the next destination for the call session;
send a request associated with the database;
determine, based at least in part on a response to the request, the next destination for the call session; and
cause the call session to be routed to the next destination for the call session; and
a second computing device configured to cause a search of the database and to provide the response to the request, based at least in part on a result of the search, to the first computing device.

US Pat. No. 10,368,144

METHOD AND DEVICE FOR TRANSMITTING AND RECEIVING BROADCAST SIGNAL

LG ELECTRONICS INC., Seo...

1. A method for transmitting a broadcast signal by a broadcast transmitter, the method comprising:
generating video data that correspond to at least one of SD resolution, HD resolution, and UHD resolution;
generating Extensible Mark-up Language (XML) subtitle data of XML subtitles that is associated with a single resolution of the video data, the XML subtitle data including subtitle text and subtitle metadata, the subtitle metadata including base dynamic range information of the XML subtitles and supplementary dynamic range information of the XML subtitles for modifying a brightness of the XML subtitles when a resolution of the video data, that is displayed with the subtitle text, is different from the single resolution,
wherein the supplementary dynamic range information further includes contrast ratio information indicating a ratio of maximum luminance to minimum luminance;
multiplexing the video data and XML subtitle data; and
transmitting the generated broadcast signal including the multiplexed video data and XML subtitle data,
wherein the subtitle metadata further includes a base Electro-Optical Transfer Function (EOTF) element and a supplementary EOTF element for a dynamic range mapping of an original luminance value of the XML subtitles to a transferred luminance value, and
the subtitle metadata further includes base bitdepth information representing a first bitdepth and supplemental bitdepth information representing a second bitdepth which is different from the first bitdepth.

US Pat. No. 10,368,142

DELIVERY OF CONTENT AND PRIORITY MESSAGES

Comcast Cable Communicati...

1. A method comprising:
based on receiving, by a computing device, from a user device based on the user device having started to receive a high priority communication message, a first signal that indicates that content being sent to the user device is to be paused, determining, from a plurality of servers, a first server that is sending the content to the user device;
sending, to the first server, a second signal to cause the first server to, for the user device, pause sending of the content at a location in the content;
determining, based on an indication that the user device has completed processing of the high priority communication message, to resume sending of the content to the user device; and
based on determining to resume sending of the content to the user device, sending, by the computing device to a second server of the plurality of servers, a third signal to cause the second server to, for the user device, resume sending of the content to the user device from the location in the content.
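The pause/resume flow in this claim — find the sending server, record the pause location, then resume from that location via a possibly different server — can be sketched as a small coordinator. All names here are illustrative; the claim defines only the signals, not a data structure.

```python
class ContentDeliveryCoordinator:
    """Sketch of the claimed pause/resume flow (names hypothetical)."""

    def __init__(self, servers):
        self.servers = servers   # server id -> set of device ids being served
        self.paused = {}         # device id -> pause location in the content

    def find_sending_server(self, device_id):
        """Determine, from the plurality of servers, which one is
        currently sending content to the device."""
        for server_id, devices in self.servers.items():
            if device_id in devices:
                return server_id
        return None

    def pause(self, device_id, location):
        """Record the pause location and return the sending server,
        which would be signaled to pause at that location."""
        server_id = self.find_sending_server(device_id)
        self.paused[device_id] = location
        return server_id

    def resume(self, device_id, new_server_id):
        """Resume via a (possibly different) second server from the
        recorded location in the content."""
        location = self.paused.pop(device_id)
        self.servers[new_server_id].add(device_id)
        return new_server_id, location
```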

US Pat. No. 10,368,141

SYSTEM AND METHOD FOR ENGAGEMENT AND DISTRIBUTION OF MEDIA CONTENT

Dooreme Inc., Atlanta, G...

1. A system for distributing media content, the system comprising:
a processor; and
a memory coupled to the processor, wherein the memory stores executable instructions for causing the processor to
present a mall interface on a display of a user system, the mall interface comprising a plurality of storefronts and a search function,
detect a user selection of the search function,
display a customized search result storefront based on the user selection,
present a selectable icon on the display of the user system, wherein the selectable icon is configured to be selected by a user through a control device,
subsequent to a user selecting the selectable icon, present on the display of the user system an engagement ad interface, wherein the engagement ad interface displays an engagement ad,
monitor whether the user watched the full duration of the engagement ad, and
provide a user a smart control comprising a selectable icon displayed on a mobile device and the display of the user system, wherein initiating the smart control on the display automatically launches a mobile application on the mobile device, and wherein initiating the smart control on the mobile application on the mobile device automatically launches the mall interface on the display.

US Pat. No. 10,368,140

INFORMATION PROCESSING METHOD AND APPARATUS, TERMINAL AND STORAGE MEDIUM

TENCENT TECHNOLOGY (SHENZ...

1. An information processing method performed at a terminal having one or more processors and memory for storing programs to be executed by the one or more processors, the method comprising:
while playing a video on a display device of the terminal, obtaining a video pause operation from a user and pausing the video at a current frame according to the video pause operation;
receiving a first operation on selecting a specific location of the current frame from the user;
in response to the first operation:
selecting, within the current frame, an object occupying the specific location of the current frame corresponding to the first operation of the user;
obtaining, from the current frame played in the video, identification information of the user-selected object;
obtaining attribute information of the user-selected object according to the identification information of the user-selected object;
identifying, from the video, a preset video segment including a sequence of video frames containing the user-selected object and ending with the current frame; and
repeatedly replaying the video segment on the display device while displaying the attribute information of the user-selected object on top of the video segment until the first operation is terminated.

US Pat. No. 10,368,139

RECEIVED PATH DELAY MECHANISM

NXP B.V., Eindhoven (NL)...

1. A reception device comprising:
at least one receiver and a corresponding delay buffer configured to receive portions of data for at least one data stream;
the at least one delay buffer comprising at least part of a host device and configured to store the portions of data received by a respective receiver; and
a memory configured to maintain indexing information for the or each of the delay buffers, the indexing information providing at least an indication of time points in the content of each of the portions of data; and
a controller configured to request delayed data portions from one or more of the at least one delay buffer of the host device for decoding of the delayed data portions;
the reception device configured such that each delayed data portion received by the reception device from the host device in response to each request of the controller is received as a plurality of consecutively received sub-portions, each sub-portion comprising a subset of the content of the delayed data portion, the receipt of each sub-portion taking at least a predetermined transfer time and each delayed data portion configured such that all of its constituent sub-portions are required for decoding said delayed data portion;
wherein based on a request to switch from decoding delayed data portions from a first time point to decoding delayed data portions from a different, second time point, the request received while the controller is configured to receive the delayed data portion from said first time point for decoding;
the controller is configured to perform the following:
(a) identify one or more second-time-point delayed data portions to request from the host device based on the second time point and the indexing information;
(b) request the one or more identified second-time-point delayed data portions from the host device; and
(c) while one or both of requesting and receiving the one or more second-time-point delayed data portions, request one or more delayed data portions corresponding to the first time or at least corresponding to a time subsequent thereto for decoding for at least some of a total transfer time for receiving the one or more second-time-point delayed data portions, the total transfer time comprising the sum of the predetermined transfer times for each sub-portion of the one or more identified second-time-point delayed data portions.

US Pat. No. 10,368,138

SYSTEMS AND METHODS FOR MANAGING A STATUS CHANGE OF A MULTIMEDIA ASSET IN MULTIMEDIA DELIVERY SYSTEMS

Rovi Guides, Inc., San J...

1. A method for monitoring a status of a media asset, the method comprising:
receiving, from a data source, status data for the media asset;
retrieving, from a database, media asset metadata associated with the media asset;
retrieving, from the media asset metadata, a play flag associated with the media asset;
determining, using control circuitry, based on the play flag and the media asset metadata, that a user has begun viewing the media asset and that the user has not finished viewing the media asset;
based on the determining, retrieving, using the control circuitry, the status data that is indicative of whether the status of the media asset is scheduled to change;
determining, from the status data for the media asset, whether the status of the media asset is scheduled to change;
in response to determining that the status of the media asset is scheduled to change, generating for display, at an electronic display device and using the control circuitry, alerting information to the user of the upcoming change in status;
providing, via the electronic display device, selectable options for the user to view or record at least part of the media asset before the status changes; and
in response to receiving a selection of at least one of the selectable options:
receiving, via signal access circuitry that is adapted to receive a media source of the media asset, media data corresponding to the media asset; and
performing at least one of generating for display, via the electronic display device, the received media data and storing the received media data to a storage device.

US Pat. No. 10,368,137

SYSTEM FOR PRESENTING VIDEO INFORMATION AND METHOD THEREFOR

VUDU, INC., Sunnyvale, C...

1. A system comprising:
one or more processors; and
one or more non-transitory computer readable media storing computing instructions configured to run on the one or more processors and perform acts of:
initiating a playback of an active media file on a screen of an electronic device;
during the playback of the active media file on the screen of the electronic device, receiving an instruction to enter an expanded view mode comprising an upward swiping motion on an expander displayed on a touch-sensitive display when the expander is closed;
responsive to receiving the instruction to enter the expanded view mode, and during the playback of the active media file on the screen of the electronic device, opening the expander displayed on the screen during the playback of the active media file, wherein:
the expander is configured, when open, to display an information mode and a poster mode on the screen at different times during the playback of the active media file;
the information mode is displayed, in response to the expander being open, as a default mode; and
the poster mode is displayed, in response to the expander being open, when a user selects an option in a preferences menu to set the poster mode as the default mode;
displaying the information mode on the screen at the expander during the playback of the active media file;
displaying the poster mode on the screen at the expander during the playback of the active media file;
receiving an instruction to end the expanded view mode; and
responsive to receiving the instruction to end the expanded view mode, and during the playback of the active media file on the screen of the electronic device, closing the expander on the screen, wherein:
the poster mode comprises a display of one or more graphical images representing one or more media files; and
displaying the information mode on the screen during the playback of the active media file comprises displaying at least one of synopsis information of the active media file, cast information of the active media file, rating information of the active media file, genre information of the active media file, length information of the active media file, or a year of release of the active media file.

US Pat. No. 10,368,136

RESOURCE MANAGEMENT FOR VIDEO PLAYBACK AND CHAT

Amazon Technologies, Inc....

1. A computer implemented method, comprising:
receiving, by a resource controller of a viewer device, an indication that performance of a live streaming video playback of media content has experienced dropped frames and that a quality level of fragments of the media content requested from a media server has been downgraded;
reducing, by the resource controller, central processing unit (CPU) capacity allocated to a live chat concurrently provided with the live streaming video playback of the media content in response to the performance experiencing dropped frames; and
reducing, by the resource controller, bandwidth of a communication connection of the viewer device allocated to the live chat in response to the quality level being downgraded.
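The controller in this claim applies two independent reductions: chat CPU is cut in response to dropped frames, and chat bandwidth is cut in response to a quality downgrade. A minimal sketch, where the 50% reduction factor is an assumption (the claim does not quantify the reduction):

```python
def adjust_chat_resources(cpu_alloc: float, bandwidth_alloc: float,
                          dropped_frames: bool, quality_downgraded: bool,
                          reduction: float = 0.5):
    """Reduce the live chat's resource shares per the claimed triggers.

    - Dropped frames during playback -> reduce the chat's CPU capacity.
    - Fragment quality downgraded    -> reduce the chat's bandwidth.
    The reduction factor is an illustrative assumption.
    """
    if dropped_frames:
        cpu_alloc *= reduction
    if quality_downgraded:
        bandwidth_alloc *= reduction
    return cpu_alloc, bandwidth_alloc
```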

US Pat. No. 10,368,135

METHOD AND SYSTEM FOR IMPROVED INTERACTIVE TELEVISION PROCESSING

HSNi, LLC, St. Petersbur...

1. A system for transmitting information to a user of a set-top box, the system comprising:
a database configured to store user registration data and product information data relating to a plurality of products;
a set-top box communicatively coupled to a remote control device having a touch screen;
a server configured to:
broadcast the product information data of at least one of the plurality of products to a plurality of set-top boxes, including the set-top box, to be displayed on touch screens of a plurality of remote control devices communicatively coupled to the plurality of set-top boxes, respectively, without the product information data being designated for a specific user of one of the plurality of set-top boxes, and
transmit updated product information data that replaces at least a portion of the previously transmitted product information data of the at least one of the plurality of products to the plurality of set-top boxes to be displayed on the touch screens of the plurality of remote control devices, respectively; and
a processor configured to:
compare data relating to a transaction request from a user of the set-top box with user registration data of the user stored in the database to identify the user, and
generate personalized user transaction data and personalized product information for the identified user of the set-top box,
wherein the server is further configured to transmit programming instructions that include the personalized user transaction data and the personalized product information to the set-top box, such that the programming instructions configure the set-top box to control the remote control device communicatively coupled to the set-top box to display the personalized product information in a customized manner on the touch screen of the remote control device, with the customized manner of display being different than the display of the product information data that is broadcast to the plurality of set-top boxes.

US Pat. No. 10,368,134

LIVE CONTENT STREAMING SYSTEM AND METHOD

Placement Labs, LLC, Sou...

1. A dynamic live media streaming method comprising:
broadcasting, via a broadcast management system, a live media stream from at least one broadcasting device over at least one network to a plurality of content viewing devices, the broadcast management system comprising a computer processor, memory, storage device and at least one network communication device for communication between the at least one broadcasting device and the plurality of content viewing devices,
determining a jump zone based upon at least one predetermined jump zone criteria, the jump zone comprising a physical geographic region where a next broadcasting device must be located,
prior to stopping the live media stream from the at least one broadcasting device:
receiving a live vote from at least one of the plurality of content viewing devices for the live media stream, and
determining if additional broadcasting time is warranted based upon a number of positive votes received and attributed to the live media stream from the at least one broadcasting device,
stopping the live media stream from the at least one broadcasting device,
selecting the next broadcasting device in the jump zone, and
broadcasting a live media stream from the next broadcasting device.
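The method above hinges on two decisions: whether the current stream earns extra time from positive live votes, and which candidate device inside the geographic jump zone broadcasts next. A toy sketch, where the vote threshold, the square zone test, and first-eligible selection are all assumptions (the claim leaves the jump-zone criteria and selection rule open):

```python
from dataclasses import dataclass

@dataclass
class Broadcaster:
    device_id: str
    lat: float
    lon: float

def extend_broadcast(positive_votes: int, threshold: int) -> bool:
    """Grant additional broadcasting time when enough positive live
    votes are attributed to the current stream (threshold assumed)."""
    return positive_votes >= threshold

def in_jump_zone(b: Broadcaster, zone_center, zone_radius_deg: float) -> bool:
    """Toy membership test: a square region around a center point.
    The claim's 'predetermined jump zone criteria' are unspecified."""
    return (abs(b.lat - zone_center[0]) <= zone_radius_deg and
            abs(b.lon - zone_center[1]) <= zone_radius_deg)

def select_next_broadcaster(candidates, zone_center, zone_radius_deg):
    """Pick the first candidate located inside the jump zone."""
    for b in candidates:
        if in_jump_zone(b, zone_center, zone_radius_deg):
            return b
    return None
```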

US Pat. No. 10,368,133

MEDIA RECOMMENDATION SYSTEM AND METHOD

1. A system comprising:
a memory; and
at least one processor to:
transmit information associated with a plurality of talents to a client computing device;
receive a selection of at least one talent of the plurality of talents from the client computing device associated with a user profile;
determine a list of shows featuring the at least one talent, each show in the list of shows having one of an IMDb rating greater than or equal to a particular threshold and a Rotten Tomatoes rating greater than or equal to a particular threshold;
transmit the list of shows featuring the at least one talent to the client computing device;
receive feedback for each show of the list of shows from the client computing device; and
transmit a list of recommended shows available from subscribed media providers to the client computing device based on the feedback.
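The list-building step in this claim keeps a show when either rating bar is met: an IMDb rating at or above a threshold, or a Rotten Tomatoes rating at or above a threshold. A minimal sketch; the threshold values and dictionary keys are illustrative assumptions.

```python
def filter_shows(shows, imdb_threshold=7.0, rt_threshold=75):
    """Keep shows meeting either rating criterion from the claim.

    Thresholds and the 'imdb' / 'rotten_tomatoes' keys are assumptions;
    the claim only requires each to be 'a particular threshold'.
    """
    return [s for s in shows
            if s.get("imdb", 0) >= imdb_threshold
            or s.get("rotten_tomatoes", 0) >= rt_threshold]
```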

US Pat. No. 10,368,132

RECOMMENDATION SYSTEM TO ENHANCE VIDEO CONTENT RECOMMENDATION

Facebook, Inc., Menlo Pa...

1. An online system for generating content recommendations for a target user of the system, comprising:
a processor; and
a non-transitory computer readable medium configured to store instructions that, when executed by the processor, cause the processor to perform steps comprising:
maintaining, by the online system, a collection of publicly available videos;
generating a plurality of sets of video candidates selected from the collection of publicly available videos by:
accessing a plurality of recommendation functions that each apply different types of selection criteria to uniquely select and rank the video candidates for the set that corresponds to that recommendation function, the video candidates each having a ranking score for ranking relative to other video candidates in the set; and
receiving, from each recommendation function, the set of video candidates selected and ranked by the recommendation function, each set of video candidates representing video content that is likely to be of interest to the target user, the sets of video candidates selected from the collection of publicly available videos to supplement a display for the target user of other video content posted by the target user's connections in the online system;
filtering the video candidates from the sets from each of the recommendation functions to remove one or more video candidates that violate a video content policy of the online system;
performing a second ranking of the filtered video candidates as a combined group from the sets by:
extracting features from the filtered video candidates;
assigning weights to the features associated with the filtered video candidates, a weight of a feature generated by a ranking model trained on the features of the video candidates, and indicating a relative importance of the feature to the target user;
generating ranking scores for the filtered video candidates based on the weights of the features associated with the filtered video candidates; and
selecting a plurality of videos from the filtered video candidates as recommendations to the target user based on the ranking scores associated with the video candidates; and
providing for display to the target user the selected videos along with other video content posted by the target user's connections in the online system.
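The pipeline this claim describes — merge per-function candidate sets, filter policy violations, then re-rank the combined group with weighted features — can be sketched compactly. The feature functions, weights, and dictionary shape are placeholders; the claim's ranking model (which learns the weights) is reduced here to fixed weights for illustration.

```python
def recommend(candidate_sets, violates_policy, feature_fns, weights, k=3):
    """Sketch of the claimed two-stage pipeline (names hypothetical).

    1. Merge the sets from each recommendation function, de-duplicating
       candidates selected by more than one function.
    2. Filter out candidates violating the content policy.
    3. Score survivors as a combined group with weighted features and
       return the top-k as recommendations.
    """
    merged = {v["id"]: v for s in candidate_sets for v in s}.values()
    filtered = [v for v in merged if not violates_policy(v)]

    def score(v):
        # Weighted sum stands in for the trained ranking model.
        return sum(w * f(v) for f, w in zip(feature_fns, weights))

    return sorted(filtered, key=score, reverse=True)[:k]
```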

US Pat. No. 10,368,131

METHODS AND APPARATUS FOR PROVIDING AUDIO-VIDEO CONTENT RECOMMENDATIONS BASED ON PROMOTION FREQUENCY

SLING MEDIA LLC, Foster ...

1. A method for providing recommendations for audio/video content, the method comprising:
identifying highly-promoted sets of audio/video content, by:
obtaining, by a computing device, a plurality of promotion frequencies, each of the plurality of promotion frequencies comprising a rate of presentation of promotional advertisements for one respective television program via one respective television broadcast network;
identifying, by the computing device, a subset of the plurality of promotion frequencies indicative of the highly-promoted sets of audio/video content, by:
determining a typical frequency associated with the promotional advertisements presented by the broadcast network;
comparing the frequencies to the typical frequency; and
when a first one of the frequencies is greater than the typical frequency, determining that the first one of the frequencies indicates a highly-promoted set of audio/video content comprising a television program, wherein the subset includes the first one;
identifying potential highly-promoted viewing options of interest to a user, by:
comparing, by the computing device, the subset to viewing habits of a user to identify corresponding data between the television program and the viewing habits;
determining, by the computing device, recommendations for potential viewing by the user, based on the corresponding data, wherein the recommendations indicate the highly-promoted set of audio/video content comprising the television program; and
presenting the potential highly-promoted viewing options of interest to the user, by:
displaying the recommendations for potential viewing, via a display device communicatively coupled to the computing device.
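The identification step in this claim compares each program's promotion frequency against the network's "typical" frequency and flags those above it. A minimal sketch, where using the median of the observed rates as the typical frequency is an assumption (the claim leaves its computation open):

```python
from statistics import median

def find_highly_promoted(promotion_freqs):
    """Flag programs whose promotional-ad rate exceeds the network's
    typical rate, per the claimed comparison.

    promotion_freqs: mapping of program name -> ads shown per unit time.
    The median as the 'typical frequency' is an illustrative choice.
    """
    typical = median(promotion_freqs.values())
    return {prog for prog, f in promotion_freqs.items() if f > typical}
```

The flagged subset would then be intersected with the user's viewing habits to produce the recommendations.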

US Pat. No. 10,368,130

METHODS AND APPARATUS TO CORRECT ERRORS IN AUDIENCE MEASUREMENTS FOR MEDIA ACCESSED USING OVER THE TOP DEVICES

THE NIELSEN COMPANY (US),...

3. An apparatus comprising: a demographic corrector to:
identify first impression data received from a computer, the first impression data including demographic data of users, the computer producing a misattribution error in the first impression data, the misattribution error corresponding to a difference between reported demographics in the first impression data and actual demographics corresponding to the first impression data;
generate a model based on a difference between the first impression data and second data, the model to determine a demographic dependency between two demographic categories; and
generate corrected demographic data based on the demographic dependency by applying the model to the first impression data; and
a viewership assigner to correct the misattribution error produced by the computer by assigning viewership to an impression associated with the first impression data using the corrected demographic data, wherein at least one of the demographic corrector or the viewership assigner is a logic circuit.

US Pat. No. 10,368,129

METHOD OF PROCESSING VIDEO DATA, DEVICE, COMPUTER PROGRAM PRODUCT, AND DATA CONSTRUCT

1. A video data processing device comprising: a processor, a non-transitory computer readable medium communicatively connected to the processor, and at least one sensor device communicatively connected to at least one of the processor and the non-transitory computer readable medium, the at least one sensor device configured to collect information relating to motions and gestures of at least one object;
the video data processing device configured to identify undesirable image contents contained in first video data based on a result of motion and gesture recognition that is based on the information relating to motions and gestures obtained via the at least one sensor device, said undesirable image contents including inappropriate body expression and provide content information relating to any identified undesirable image contents;
the video data processing device configured to identify indicators in a situation or scene recorded in the first video data that increase the likelihood that undesirable image contents will be contained in said first video data in the future, based on recognized motions and gestures, and set an alert state for the at least one sensor device in which a scanning rate for the at least one sensor device is increased and/or a scanning resolution of the at least one sensor device is increased in response to detection of the indicators.

US Pat. No. 10,368,128

MEMORY ALLOCATION TYPE FOR MEDIA BUFFER

Microsoft Technology Lice...

1. A computer device, comprising: a memory to store data and instructions;
a processor in communication with the memory;
an operating system in communication with the memory and the processor, wherein the operating system is operable to:
receive a plurality of camera resource requests from a plurality of applications to use a camera resource;
determine a memory type to allocate to the plurality of applications for the camera resource in response to the plurality of camera resource requests and compatibility information of the camera resource;
determine a buffer and a buffer type to provide each of the plurality of applications in response to an access mode of the camera resource, wherein the buffer type comprises one or more of a shared type, a copy type, or a secure type; and
provide each of the plurality of applications access to a respective determined buffer.

US Pat. No. 10,368,127

METHODS AND APPARATUS TO IDENTIFY AND CREDIT MEDIA USING RATIOS OF MEDIA CHARACTERISTICS

The Nielsen Company (US),...

1. An apparatus to identify media, comprising: a delta calculator to:
determine a first ratio based on a first time interval and a second time interval of a monitored media signal; and
determine a second ratio based on the second time interval and a third time interval of the monitored media signal; and
a signature generator to:
generate a monitored media ratio signature based on the first and second ratios; and
initiate transmission of the monitored media ratio signature to a recipient that is to compare the monitored media ratio signature with a reference ratio signature to identify the media;
at least one of the delta calculator or the signature generator implemented by at least one of a processor or hardware.
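The interval-ratio signature claimed above has a simple core: ratios of successive time intervals in a media signal are invariant to timing scale, so a monitored signature can be matched against a reference even if playback timing shifts. A minimal sketch, assuming illustrative function names and a hypothetical matching tolerance (neither is from the patent):

```python
def ratio_signature(intervals):
    """Build a signature from ratios of consecutive time intervals.

    intervals: durations between features of the monitored media signal.
    Ratios are scale-invariant, so uniform timing changes cancel out.
    """
    return tuple(round(intervals[i] / intervals[i + 1], 3)
                 for i in range(len(intervals) - 1))

def matches(monitored, reference, tol=0.05):
    """Compare two ratio signatures element-wise within a tolerance."""
    return (len(monitored) == len(reference) and
            all(abs(m - r) <= tol for m, r in zip(monitored, reference)))

sig = ratio_signature([2.0, 1.0, 4.0])   # -> (2.0, 0.25)
ref = ratio_signature([4.0, 2.0, 8.0])   # same media, timing scaled by 2
assert matches(sig, ref)
```

Note that the second signal is the first with every interval doubled, yet the signatures agree, which is the point of using ratios rather than raw intervals.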

US Pat. No. 10,368,126

METHOD AND SYSTEM FOR DISPLAYING CONTENT OR CONFLICTS FROM MULTIPLE RECEIVING DEVICES ON A SECOND SCREEN DEVICE

The DIRECTV Group, Inc., ...

1. A method comprising: wirelessly requesting, at a second screen device, scheduled recording data from a set top box;
receiving, at the second screen device, scheduled recording data from the set top box at the second screen device, said scheduled recording data comprising a plurality of scheduled recording events;
displaying the scheduled recording events on a calendar screen display that simultaneously displays at least three consecutive days, each day having a plurality of timeslots, said scheduled recording events displayed in multiple timeslots for the at least three days;
determining conflicting scheduled recording events between at least two scheduled recording events based on the scheduled recording data;
displaying a screen indicator at the second screen device indicative of the conflict by displaying at least one of changing a color of a font of the conflicting scheduled recording events, underlining the conflicting scheduled recording events, placing a box around the conflicting scheduled recording events, and placing an indicator next to the conflicting scheduled recording events; and
selecting the screen indicator to initiate removing the conflict.
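The conflict determination in the claim above reduces to pairwise interval overlap between scheduled recording events. A minimal sketch, with event tuples and names that are illustrative rather than taken from the patent:

```python
# Hypothetical scheduled recording events as (start, end, title) tuples,
# times in minutes from an arbitrary origin.
def find_conflicts(events):
    """Return title pairs whose recording time ranges overlap."""
    conflicts = []
    for i in range(len(events)):
        for j in range(i + 1, len(events)):
            s1, e1, _ = events[i]
            s2, e2, _ = events[j]
            if s1 < e2 and s2 < e1:      # half-open intervals overlap
                conflicts.append((events[i][2], events[j][2]))
    return conflicts

events = [(0, 60, "News"), (30, 90, "Movie"), (90, 120, "Show")]
assert find_conflicts(events) == [("News", "Movie")]
```

Events that merely touch end-to-start (such as "Movie" ending at 90 and "Show" starting at 90) are not flagged, which matches the usual single-tuner back-to-back recording case.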

US Pat. No. 10,368,125

METHOD AND SYSTEM FOR EFFICIENT COMMUNICATION

Innovation Science LLC, ...

1. A mobile terminal with a device identifier for processing information through multiple communications comprising: a network interface configured to receive a multimedia signal through a wireless communication network;
a WiFi communication interface configured for communications through a WiFi network, the device identifier is associated with a network address corresponding to the WiFi network, wherein the mobile terminal is configured to connect to Internet via the WiFi network;
wherein the mobile terminal is further configured to be paired to another mobile terminal to communicate a wireless communication directed to a wireless network via the other mobile terminal;
a buffer;
a decoder;
an encoder; and
a high definition digital output interface,
wherein the mobile terminal is configured to perform a conversion of the multimedia signal, the multimedia signal comprises a compressed signal; wherein the compressed signal is a compressed high definition digital video signal;
wherein the conversion comprises decompressing the compressed signal;
wherein the decoder is configured to decompress the compressed signal to a decompressed signal;
wherein the encoder is configured to encode the decompressed signal to produce an encoded signal, the encoded signal comprising a decompressed high definition digital video signal;
wherein the high definition digital output interface is configured to transmit the encoded signal;
wherein the conversion comprises said decompressing, by the decoder, further followed by encoding, by the encoder, the decompressed signal produced by the decoder to produce the encoded signal for transmission through the high definition digital output interface; and
wherein the buffer is configured to accommodate a buffering and processing rate sufficient for said processing in support of the production of a corresponding multimedia content on a high definition digital display.

US Pat. No. 10,368,124

REAL-TIME AUDIENCE MEASUREMENT SYSTEM

TiVo Solutions Inc., San...

1. A method comprising: receiving, at a first server, an instant message from a client system comprising television viewer data, wherein the television viewer data includes:
a user client operational command input and an identity of media content, and
the television viewer data specifies the first server as recipient of the instant message, wherein the instant message is sent over an SSL connection that is maintained between the first server and the client system, and wherein the SSL connection is automatically reconnected if the connection is dropped;
analyzing data from the television viewer data;
generating output information using the analyzed data; and
transmitting the output information using the analyzed data to a second server.

US Pat. No. 10,368,123

INFORMATION PUSHING METHOD, TERMINAL AND SERVER

TENCENT TECHNOLOGY (SHENZ...

1. An information pushing method, comprising: acquiring, by a terminal, a key frame of a currently-played video;
acquiring, by the terminal, a characteristic value of the key frame according to picture information of the key frame, wherein the characteristic value of the key frame is an integer comprising a plurality of first data bits that are different bits of two hash values;
acquiring, by a server according to the characteristic value of the key frame, pushing information corresponding to the characteristic value, wherein the acquiring, according to the characteristic value of the key frame, pushing information corresponding to the characteristic value comprises:
determining, for each piece of to-be-pushed information in pre-stored multiple pieces of to-be-pushed information, whether a quantity of second data bits comprised in a characteristic value of the to-be-pushed information is less than a first preset value, wherein each of the second data bits has a different value from a corresponding first data bit comprised in the characteristic value of the key frame, and
if the quantity of the second data bits is less than the first preset value, determining the to-be-pushed information as the pushing information; or,
determining, for each piece of to-be-pushed information in the pre-stored multiple pieces of to-be-pushed information, whether a percentage accounted for by the second data bits in the characteristic value of the to-be-pushed information is less than a second preset value, and
if the percentage accounted for by the second data bits is less than the second preset value, determining the to-be-pushed information as the pushing information; and
displaying, by the terminal, the pushing information in a process of playing the currently-played video,
wherein the acquiring a characteristic value of the key frame according to picture information of the key frame comprises:
zooming out the key frame, to obtain a first processed picture;
performing color simplification processing on the first processed picture, to obtain a second processed picture;
calculating a grayscale value of each pixel point in the second processed picture;
calculating an average grayscale value of all pixel points in the second processed picture;
comparing the grayscale value of each pixel point with the average grayscale value, to obtain multiple comparison results; and
obtaining the characteristic value of the key frame according to the multiple comparison results, and
wherein the characteristic value of the to-be-pushed information is acquired in a same manner as the characteristic value of the key frame.
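The characteristic-value steps spelled out above (zoom out, simplify color, compute per-pixel grayscale, compare each pixel to the average, pack the comparison bits into an integer, then match by counting differing bits) follow the well-known average-hash technique. A minimal pure-Python sketch, assuming the picture has already been zoomed out and reduced to a flat list of grayscale values (function names and the threshold are illustrative):

```python
def average_hash(pixels):
    """Characteristic value: compare each grayscale pixel of the
    downscaled picture to the mean, packing the result bits into an int."""
    avg = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p >= avg else 0)
    return bits

def hamming(a, b):
    """Quantity of differing data bits between two characteristic values."""
    return bin(a ^ b).count("1")

key = average_hash([10, 200, 30, 220])    # bits 0,1,0,1 -> 0b0101
cand = average_hash([12, 198, 28, 225])   # visually similar picture
assert hamming(key, cand) < 2             # below a "first preset value"
```

Matching by Hamming distance below a preset value (or below a preset percentage of the total bit count) mirrors the two alternative tests in the claim.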

US Pat. No. 10,368,122

MEDIA SHARING AND COMMUNICATION SYSTEM

1. A media sharing and communication system, consisting of: a recording mechanism that records a desired portion of media upon activation by a first individual user who is not a content provider, the portion of media being less than a full media episode;
a friend request mechanism for sending and receiving friend requests between users to be approved to receive and share media and wherein the friend request mechanism suggests friends who have similar interests of the first individual user;
a first user transmitter/receiver included in a first user system that transmits the portion of media and a message generated by the first individual user regarding the portion of media to a second individual user who is not a content provider, the first user system including a first user interface having an input device and screen view that is generated by software stored on a memory device of the first user transmitter/receiver, the first user system including a user profile generator interface;
a confirmation mechanism that confirms that the second individual user is authorized to view the portion of media and a notification mechanism that notifies the first individual user if the second individual user is not authorized to receive the portion of media and notifies the second individual user that the portion of media cannot be received due to programming configuration subscribed to by the second individual user; and
a second user transmitter/receiver included in a second user system that receives the portion of media upon authorization of the second individual user; the second user system including a second user interface having an input device and screen view that is generated by software stored on a memory device of the second user transmitter/receiver, the second user system including a user profile generator interface.

US Pat. No. 10,368,121

SYSTEM AND METHOD FOR COLLECTING DATA

ROKU, INC., Los Gatos, C...

1. A method for collecting viewing data comprising: running, on a client device in an open development environment, a third-party channel application located in a first sandboxed virtual operating environment on a first layer of an operating system of a user device, wherein the third-party channel application comprises channel metadata not available from outside the third-party channel application;
receiving an application programming interface (API) call, from the third-party channel application, for a graphic rendering module located in a second sandboxed virtual operating environment on a second layer of the operating system, wherein the graphic rendering module is a non-playback module, wherein the second layer is an application layer below the first layer, and wherein the second sandboxed virtual operating environment is configured to prevent direct access from the third-party channel application located in the first sandboxed virtual operating environment on the first layer of the operating system of the user device;
performing passive data collection on the client device, wherein the passive data collection comprises intercepting, between the third-party channel application and the graphic rendering module, the channel metadata sent from the third-party channel application to the graphic rendering module;
determining identifying information of a content based on the intercepted channel metadata;
generating an entertainment profile for a user by associating the identifying information with a user profile of the user device; and
storing the determined identifying information of the content, the entertainment profile for the user, and a relation of the determined identifying information of the content to the entertainment profile for the user.

US Pat. No. 10,368,120

AVATAR INTEGRATED SHARED MEDIA EXPERIENCE

MICROSOFT TECHNOLOGY LICE...

1. A method for facilitating shared media consumption among two or more users associated with respective computing devices communicatively coupled via a network, the method comprising: receiving data indicative of two or more users associated with respective computing devices;
receiving a selection of a media content item to be concurrently rendered on the respective computing devices;
sending, to the respective computing devices of the two or more users, data of the media content item selected;
allowing an identified remote holder of the two or more users to send information pertaining to content playback of the media content item selected, the information comprising playback status, identification of the media content item, and a current time code associated with the media content item;
based on the information, allowing control of content playback on the respective computing devices of other users of the two or more users based at least in part on differences in the information pertaining to the content playback between the respective computing devices;
receiving a request from another user other than the identified remote holder requesting for remote holder status;
in response to receiving the request, allowing the identified remote holder to select to retain the remote holder status or transfer the remote holder status to the other user; and
when the identified remote holder selects to transfer the remote holder status, then facilitating the transfer of the remote holder status to the other user to allow the other user to obtain the control of the content playback.

US Pat. No. 10,368,119

METHOD, DEVICE AND COMPUTER PROGRAM PRODUCT FOR OUTPUTTING A TRANSPORT STREAM

Sony Corporation, Tokyo ...

1. A method of controlling video and/or audio stream playback, comprising: detecting a control input to a touch sensitive screen of a portable computing device; and
responsive to said detecting the control input to the touch sensitive screen, transitioning from outputting to the touch sensitive screen of the portable computing device a first stream of video and/or audio data provided by a first source to outputting to a second display device a second stream of video and/or audio data provided by a second source,
wherein content of the first stream of video and/or audio data is the same as content of the second stream of video and/or audio data, and
wherein the second source is different from the portable computing device.

US Pat. No. 10,368,118

SYSTEM AND APPARATUS FOR MANAGING VIDEO CONTENT RECORDINGS

1. A computing device, comprising: a processing system including a processor; and
a memory resource that stores executable instructions that, when executed by the processing system, facilitate performance of operations, comprising:
detecting a utilization of the memory resource that exceeds a threshold, wherein the threshold is less than a capacity of the memory resource;
obtaining a video recording schedule for the computing device;
determining a viewing threshold;
identifying a most recent viewing time for each video recording listed in the video recording schedule resulting in a group of most recent viewing times;
identifying a video recording from the video recording schedule that does not satisfy the viewing threshold based on a most recent viewing for the video recording resulting in an identified video recording;
identifying a mitigation option for the identified video recording to mitigate the utilization of the memory resource, wherein the mitigation option includes a change in the video recording schedule identifying candidate scheduled recordings based on a frequency of presentation of scheduled recordings;
generating a notice describing the utilization of the memory resource responsive to the detecting of the utilization of the memory resource exceeding the threshold, wherein the notice includes the mitigation option;
transmitting the notice over a wireless network to a portable communication device, wherein the notice is not provided to any device with a wired connection to the computing device; and
receiving a response message from the portable communication device that includes instructions to manage the memory resource of the computing device.

US Pat. No. 10,368,117

METHOD AND SYSTEM FOR ADDRESSABLE AND PROGRAM INDEPENDENT ADVERTISING DURING RECORDED PROGRAMS

PRIME RESEARCH ALLIANCE E...

1. A method of a subscriber video recording device inserting at least one targeted advertisement into a video program, the method comprising: receiving, at the subscriber video recording device, at least one targeted advertisement and storing the at least one targeted advertisement;
receiving, at the subscriber video recording device, at least one video program;
recording, at the subscriber video recording device, the at least one video program;
determining, at the subscriber video recording device, if the recorded video program is interrupted by one or more existing advertising avails;
if the recorded video program is not interrupted by one or more advertising avails, creating, at the subscriber video recording device, one or more customized avails within the at least one video program, the one or more customized avails each having a duration; and
retrieving the stored at least one targeted advertisement and inserting at least one advertisement into the one or more customized avails.

US Pat. No. 10,368,116

ROLL-OFF PARAMETER DETERMINING METHOD AND MODULE

MSTAR SEMICONDUCTOR, INC....

1. A roll-off parameter determining device, disposed at a receiving terminal, the receiving terminal comprising an analyzing module, the analyzing module analyzing a first frame and a second frame to identify first roll-off information in the first frame and second roll-off information in the second frame, the first frame being adjacent to the second frame, the roll-off parameter determining device comprising: a register unit, storing the first roll-off information;
a first determining unit, determining whether one of the first roll-off information and the second roll-off information comprises a first data type according to the first roll-off information received from the register unit and the second roll-off information received from the analyzing module to generate a first roll-off parameter indicator;
a second determining unit, determining whether at least one of the first roll-off information and the second roll-off information comprises a second data type according to the first roll-off information received from the register unit and the second roll-off information received from the analyzing module to generate a second roll-off parameter indicator; and
a look-up table (LUT) unit, looking up an LUT according to the first roll-off parameter indicator and the second roll-off parameter indicator to output a roll-off parameter.

US Pat. No. 10,368,115

TRANSMITTING METHOD, RECEIVING METHOD, TRANSMITTING DEVICE, AND RECEIVING DEVICE

PANASONIC INTELLECTUAL PR...

1. A transmitting method for transmitting a first stream related to content of an image or audio, the method comprising: transmitting the first stream, the first stream including:
timing update identification information indicating whether or not a correspondence relationship between a first reference clock and a second reference clock has been updated, the first reference clock being used to transmit and receive the first stream, and the second reference clock being used to transmit and receive a second stream related to another content to be reproduced in synchronization with the content related to the first stream;
a first time according to the first reference clock; and
a second time according to the second reference clock, the second time being associated with the first time based on the updated correspondence relationship,
wherein the timing update identification information is a flag indicating whether or not the correspondence relationship between the first reference clock and the second reference clock has been updated,
wherein the second stream is transmitted independently of the first stream,
wherein the first stream is adapted for being reproduced in synchronization with the second stream which is transmitted independently of the first stream, and
wherein the transmitting method further comprises storing the timing update identification information and the second time in a TEMI (Timeline and External Media Information) access unit of the first stream.

US Pat. No. 10,368,114

MEDIA CHANNEL CREATION BASED ON FREE-FORM MEDIA INPUT SEEDS

Pandora Media, LLC, Oakl...

1. A computer-implemented method for generating a media channel including a plurality of media items, comprising: storing a set of media items, each media item stored with a predetermined plurality of media item scores, each media item score for the media item describing a magnitude of the media item's relevance to a distinct ambiguous entity term from a set of ambiguous entity terms;
storing a plurality of predetermined entity scores, each entity score associated with a corresponding ambiguous entity term from the set of ambiguous entity terms and describing an importance of the corresponding ambiguous entity term in media input seeds;
receiving a request for a media channel from a client device of a user, the request including a textual free-form media input seed that comprises a plurality of ambiguous entity terms input by the user, wherein an ambiguous entity term of the plurality of ambiguous entity terms does not identify an entity within a set of valid entity types related to the set of media items, the valid set of entity types having meaning within a music context and the valid set of entity types including a musicological feature;
identifying, for each ambiguous entity term from the plurality of ambiguous entity terms input by the user, a plurality of media items that are correlated with the ambiguous entity term from the set of media items, the correlation of a media item to an ambiguous entity term based on the media item's predetermined media item score indicating a magnitude of the media item's relevance to the ambiguous entity term;
generating, for each ambiguous entity term, a media playlist to generate a plurality of media playlists, each media playlist including the identified plurality of media items that are correlated with the ambiguous entity term;
calculating ranking scores for the identified plurality of media items included in the plurality of media playlists, a ranking score for a media item from the identified plurality of media items based on a stored entity score for the ambiguous entity term that is associated with the media playlist that includes the media item and the media item's media item score indicating a magnitude of the media item's relevance to the ambiguous entity term;
ranking the identified plurality of media items based on the calculated ranking scores; and
combining the plurality of media playlists for the plurality of ambiguous entity terms included in the textual free-form media input seed into the media channel by selecting a subset of the identified plurality of media items to include in the media channel based on the ranking; and
providing the media channel to the client device.
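The scoring and ranking flow above (per-term item scores weighted by stored entity scores, ranked, then merged into one channel) can be sketched as follows; all scores, item names, and seed terms are hypothetical illustrations, not data from the patent:

```python
# Hypothetical stores: media item -> {entity term: relevance score},
# and entity term -> importance score in media input seeds.
item_scores = {
    "songA": {"mellow": 0.9, "rainy": 0.2},
    "songB": {"mellow": 0.4, "rainy": 0.8},
    "songC": {"mellow": 0.7, "rainy": 0.6},
}
entity_scores = {"mellow": 1.0, "rainy": 0.5}

def build_channel(seed_terms, top_n=2):
    """Rank each item by entity_score * item_score for each seed term,
    then merge the per-term playlists into a single deduplicated channel."""
    ranked = []
    for term in seed_terms:
        for item, scores in item_scores.items():
            if term in scores:
                ranked.append((entity_scores[term] * scores[term], item))
    ranked.sort(reverse=True)
    channel = []
    for _, item in ranked:           # dedupe while keeping rank order
        if item not in channel:
            channel.append(item)
    return channel[:top_n]

assert build_channel(["mellow", "rainy"]) == ["songA", "songC"]
```

The two-factor ranking score (stored entity importance times the item's per-term relevance) is the part the claim emphasizes; everything else here is ordinary top-N selection.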

US Pat. No. 10,368,112

TECHNOLOGIES FOR IMMERSIVE USER SENSORY EXPERIENCE SHARING

Intel Corporation, Santa...

1. A computing device for sensory experience sharing, the computing device comprising: a crowdsourcing aggregation module to receive sensor data from a plurality of experience computing devices, wherein the sensor data is indicative of a local sensory experience associated with each of the experience computing devices, and wherein the sensor data received from a first experience computing device of the plurality of experience computing devices is captured by the first experience computing device from an unmanned aerial vehicle;
an initial settings module to initialize one or more user preferences associated with a user of the distance computing device based on a search of one or more social media databases for user preferences of other users meeting one or more demographic attributes of the user of the distance computing device;
an experience analysis module to analyze the sensor data to generate combined sensory experience data, wherein the combined sensory experience data is indicative of the local sensory experiences associated with the experience computing devices; and
a distance module to (i) transmit the combined sensory experience data to a distance computing device, wherein the distance computing device is distant from the plurality of experience computing devices, (ii) receive user preferences associated with the user of the distance computing device from the distance computing device in response to transmission of the combined sensory experience data to the distance computing device, wherein the user preferences are based on biometric feedback data associated with the user of the distance computing device that is indicative of a physical response of the user of the distance computing device, wherein the physical response is indicative of a state of mind of the user of the distance computing device, and (iii) adjust the combined sensory experience data based on the user preferences.

US Pat. No. 10,368,111

DIGITAL TELEVISION CHANNEL TRENDING

1. A method, comprising: maintaining a first portion of a multimedia program in a multimedia cache of a network edge device;
receiving a multicast join request associated with:
a set-top box; and
the multimedia program;
sending a first portion of the multimedia program to the set-top box from the multimedia cache of the network edge device, wherein sending the first portion comprises sending the first portion to the set-top box at an accelerated rate, wherein the accelerated rate is greater than a normal playback rate for the multimedia program;
directing the set-top box to a multicast replicator for a second portion of the multimedia program;
receiving an indication of the set-top box receiving the second portion;
causing a viewership statistic server to increment a particular counter indicative of a number of set-top boxes tuned to the multimedia program, wherein causing the viewership statistic server to increment the particular counter includes:
detecting a handoff, wherein the handoff comprises a transition of a source of the multimedia program from the network edge device to the multicast replicator;
responsive to detecting the handoff from the network edge device to the multicast replicator, causing a viewership statistic server to increment a particular counter indicative of a number of set-top boxes tuned to the multimedia program;
recording time-stamped information indicative of the set-top box receiving the second portion; and
sending the time-stamped information to a viewership statistic server.

US Pat. No. 10,368,110

SMOOTH MEDIA DATA SWITCHING FOR MEDIA PLAYERS

VisualOn, Inc., San Jose...

1. A system for smooth media data switching, comprising: an interface to:
receive a potential change indication from a user of a potential change to a media player parameter of a current video stream the user is watching, wherein the potential change indication comprises zooming in on a particular video area of the current video stream, wherein the media player parameter comprises a viewing angle selection type; and
receive a change indication from the user to change the media player parameter of the current video stream; and
a processor to:
determine a plurality of predicted pre-buffer streams in response to the potential change indication from the user, wherein a plurality of viewing angle streams are related to the particular video area of the current video stream zoomed in on, wherein the plurality of predicted pre-buffer streams corresponds to a top N predicted potential changes to the media player parameter;
provide the plurality of predicted pre-buffer streams in addition to the current video stream to a user device associated with the user, wherein the plurality of predicted pre-buffer streams are provided to the user device for pre-buffering, wherein providing the plurality of predicted pre-buffer streams begins after the potential change indication from the user, wherein the plurality of predicted pre-buffer streams comprises a plurality of viewing angle streams; and
in response to the change indication to change the media player parameter, determine whether the change to the media player parameter of the current video stream corresponds to one of the plurality of predicted pre-buffer streams.

US Pat. No. 10,368,109

DYNAMIC CONTENT DELIVERY ROUTING AND RELATED METHODS AND SYSTEMS

DISH Technologies L.L.C.,...

1. A method of streaming media content over a network, the method comprising:transmitting one or more portions of the media content from a remote storage digital video recorder (RS-DVR) system to a client device using a first delivery route from a networking component to the client device via a first backbone provider network of a plurality of different backbone provider networks coupled to the networking component, wherein the networking component is coupled between the RS-DVR system and the plurality of different backbone provider networks and the one or more portions of the media content are marked as non-cacheable;
determining a performance metric associated with the transmitting of the one or more portions via the first backbone provider network;
identifying, by the RS-DVR system, an alternative backbone provider network of the plurality of different backbone provider networks when the first backbone provider network fails to achieve a desired level of performance based on the performance metric, wherein the alternative backbone provider network is different from the first backbone provider network; and
instructing, by the RS-DVR system, the networking component coupled between the RS-DVR system and the plurality of different backbone provider networks to transmit a subsequent portion of the media content from the RS-DVR system to the client device using a different delivery route from the networking component to the client device via the alternative backbone provider network instead of the first backbone provider network, wherein the subsequent portion of the media content is marked as non-cacheable.
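A minimal sketch of the claimed metric-driven route selection, assuming a single scalar performance metric where higher is better (names are illustrative, not DISH's API):

```python
def pick_backbone(metrics, current, desired_level):
    """Keep transmitting via the current backbone provider network while it
    achieves the desired level of performance; otherwise return the
    best-performing alternative backbone for subsequent portions."""
    if metrics[current] >= desired_level:
        return current
    alternatives = {k: v for k, v in metrics.items() if k != current}
    return max(alternatives, key=alternatives.get)
```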

US Pat. No. 10,368,108

DOWNSTREAM VIDEO COMPOSITION

ATI Technologies ULC, Ma...

1. A method of multi-layered video processing at a video source and a display device, said video source comprising a composition engine having a composition buffer, said method comprising:at said video source:
forming decoded video images by decoding a first input video bitstream comprising compressed video;
forming decoded overlay images by decoding a second input graphics bitstream received as an auxiliary bitstream in addition to the first input video bitstream and comprising graphics overlay data, said decoded overlay images associated with said decoded video images; and
forming a plurality of additional decoded overlay images from a plurality of additional input bitstreams comprising additional graphics overlay data;
compositing, via said composition engine, said additional decoded overlay images with said decoded overlay images formed from said second input graphics bitstream to form composited overlay images;
configuring said composition engine, capable of compositing images, to not composite said decoded overlay images with said decoded video images;
concurrently transmitting to said display device capable of compositing images, a first stream comprising said decoded video images and a second stream comprising said composited overlay images;
at said display device:
receiving said first and second streams;
selectively performing image processing on said received decoded video images, without having composited said overlay images with said received video images, to form enhanced video images;
compositing, pixels from each of said composited overlay images with respective pixels from each of said enhanced video images to form output images for display at said display device.
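The display-side composition step can be illustrated with a per-pixel alpha blend; treating the composited overlay as RGBA is an assumption for this sketch:

```python
def composite_at_display(enhanced, overlay):
    """Blend each composited-overlay pixel (r, g, b, a) over the
    corresponding enhanced video pixel (r, g, b) to form the output image."""
    out = []
    for (vr, vg, vb), (orr, og, ob, oa) in zip(enhanced, overlay):
        a = oa / 255.0  # overlay alpha in [0, 255]
        out.append((round(orr * a + vr * (1 - a)),
                    round(og * a + vg * (1 - a)),
                    round(ob * a + vb * (1 - a))))
    return out
```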

US Pat. No. 10,368,107

INTRA VIDEO CODING USING A DECOUPLED TREE STRUCTURE

QUALCOMM Incorporated, S...

1. A method of coding video data, the method comprising:forming a most probable mode (MPM) candidate list for a chroma block of the video data, at least in part by:
adding, to the MPM candidate list, one or more derived modes (DMs) associated with a luma block of the video data, the luma block corresponding to the chroma block, and a plurality of luma prediction modes that can be used for coding luminance components of the video data;
adding one or more linear model (LM) modes to the MPM candidate list;
determining whether the one or more LM modes comprise a first instance of a first LM mode and one or more additional instances of the first LM mode; and
omitting the one or more additional instances of the first LM mode from the MPM candidate list in response to a determination that the first LM mode was used to predict one or more neighboring chroma blocks of the chroma block;
selecting a mode from the MPM candidate list; and
coding the chroma block according to the mode selected from the MPM candidate list.
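The list construction above can be sketched as follows; the mode labels and the neighbor-usage set are hypothetical stand-ins for decoder state, not QUALCOMM's implementation:

```python
def build_chroma_mpm_list(derived_modes, lm_modes, neighbor_used_lm):
    """Form a chroma MPM candidate list: derived modes (DMs) from the
    corresponding luma block first, then LM modes, omitting additional
    instances of an LM mode that already predicted a neighboring chroma
    block."""
    mpm = []
    for dm in derived_modes:
        if dm not in mpm:
            mpm.append(dm)
    for lm in lm_modes:
        if lm in mpm and lm in neighbor_used_lm:
            continue  # additional instance of an already-used LM mode
        mpm.append(lm)
    return mpm
```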

US Pat. No. 10,368,106

METHOD AND DEVICE FOR OPTIMIZING ENCODING/DECODING OF COMPENSATION OFFSETS FOR A SET OF RECONSTRUCTED SAMPLES OF AN IMAGE

Canon Kabushiki Kaisha, ...

1. A method of encoding at least a part of an image comprising a plurality of samples, each sample comprising at least two components, the method comprising:encoding a first chroma component and a second chroma component of at least one sample of the image to provide at least one encoded sample;
determining a set of filtering parameters for performing sample adaptive offset loop filtering on the image part, the set of filtering parameters comprising:
a sample adaptive offset type parameter indicating whether edge-type, band-type or no sample adaptive offset loop filtering is used for the at least one sample, said sample adaptive offset type parameter being a common filtering parameter for filtering both the first and second chroma components; and
at least one further filtering parameter, which is a dedicated filtering parameter for filtering an individual one of the first and second chroma components.

US Pat. No. 10,368,105

METADATA DESCRIBING NOMINAL LIGHTING CONDITIONS OF A REFERENCE VIEWING ENVIRONMENT FOR VIDEO PLAYBACK

Microsoft Technology Lice...

1. A computing system comprising:a buffer configured to receive video;
a pre-processor configured to master content of the video by selectively adjusting at least some sample values of the video based on one or more of (a) characteristics of a reference display device, and (b) one or more nominal lighting conditions of a reference viewing environment in place when mastering the content of the video;
a metadata generator configured to generate metadata that describes the one or more nominal lighting conditions of the reference viewing environment in place when mastering the content of the video, wherein the metadata includes:
one or more parameters that specify a nominal level of ambient light in the reference viewing environment, the one or more parameters that specify the nominal level of ambient light including an indicator of light per unit of area in units of lux; and
one or more parameters that specify a nominal color characteristic of the ambient light in the reference viewing environment, the one or more parameters that specify the nominal color characteristic of ambient light including normalized x and y chromaticity coordinates in a multi-dimensional mapping of values in a color space; and
a buffer configured to store the metadata for output with encoded data for the video, wherein the metadata is organized for output as part of a video elementary bitstream for the video, the video elementary bitstream also including the encoded data for the video.

US Pat. No. 10,368,104

SYSTEMS AND METHODS FOR TRANSMISSION OF SYNCHRONIZED PHYSICAL AND VISIBLE IMAGES FOR THREE DIMENSIONAL DISPLAY

Rockwell Collins, Inc., ...

1. A method, comprising:obtaining physical data and visual data of at least one portion of an object, the physical data comprising vector quantized data in the form of hidden Markov model derived vectors;
encoding the physical data into a sequence of object frames, wherein each object frame represents a set of time-specific physical attributes of the at least one portion of the object;
encoding the visual data into a sequence of image frames, wherein each image frame represents a time-specific visual representation of the at least one portion of the object;
synchronizing and interlacing the sequence of object frames and the sequence of image frames to produce an interlaced data stream;
transmitting the interlaced data stream via a communication channel to a display device;
decoding the interlaced data stream by bypassing every object frame in the interlaced data stream to produce a visual data stream;
decoding the interlaced data by bypassing every image frame in the interlaced data stream to produce a physical data stream;
configuring the display device according to the decoded physical data to form a three-dimensional surface by deflecting each pin in an electro-mechanical pin field to a z-depth defined by the physical data; and
presenting the decoded visual data on the three-dimensional surface.
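The interlacing and type-filtered decoding steps can be sketched as below; the frame tags are illustrative, not Rockwell Collins' wire format:

```python
def interlace(object_frames, image_frames):
    """Synchronize and interlace object (physical) and image (visual)
    frames into a single tagged data stream."""
    stream = []
    for obj, img in zip(object_frames, image_frames):
        stream.append(("OBJ", obj))
        stream.append(("IMG", img))
    return stream

def deinterlace(stream):
    """Decode by bypassing the other frame type: one pass keeps image
    frames (visual stream), the other keeps object frames (physical)."""
    visual = [f for tag, f in stream if tag == "IMG"]
    physical = [f for tag, f in stream if tag == "OBJ"]
    return visual, physical
```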

US Pat. No. 10,368,103

METHOD AND APPARATUS FOR IMAGE ENCODING/DECODING

1. A method of decoding an image, comprising:generating a prediction block for a current block;
receiving information of a block size, wherein the information of the block size is determined by an encoder, and wherein whether a skip of a transform process is applicable is determined based on a comparison of the information of the block size with a current block size;
decoding information indicating whether the skip of the transform process is applied, in response to the comparison of the information of the block size with the current block size representing that the skip of the transform process is applicable;
determining whether to perform an inverse transform on the current block based on the decoded information indicating whether the skip of the transform process is applied;
performing the inverse transform on the current block to generate a residual block for the current block;
reconstructing the current block based on the prediction block and the residual block; and
applying filtering on the reconstructed current block,
wherein the prediction block is generated by performing intra prediction.

US Pat. No. 10,368,102

METHOD AND APPARATUS FOR IMAGE ENCODING/DECODING

1. A method of decoding an image, comprising:generating a prediction block for a current block;
receiving information of a block size, wherein the information of the block size is determined by an encoder, and wherein whether a skip of a transform process is applicable is determined based on a comparison of the information of the block size with a current block size;
decoding information indicating whether the skip of the transform process is applied, in response to the comparison of the information of the block size with the current block size representing that the skip of the transform process is applicable;
determining whether to perform an inverse transform on the current block based on the decoded information indicating whether the skip of the transform process is applied;
performing the inverse transform on the current block to generate a residual block for the current block; and
reconstructing the current block based on the prediction block and the residual block,
wherein the prediction block is generated by performing intra prediction.

US Pat. No. 10,368,101

METHOD AND APPARATUS FOR IMAGE ENCODING/DECODING

1. A method of decoding an image, comprising:generating a prediction block for a current block;
receiving information of a block size, wherein the information of the block size is determined by an encoder, and wherein whether a skip of a transform process is applicable is determined based on a comparison of the information of the block size with a current block size;
decoding information indicating whether the skip of the transform process is applied, in response to the comparison of the information of the block size with the current block size representing that the skip of the transform process is applicable;
determining whether to perform an inverse transform on the current block based on the decoded information indicating whether the skip of the transform process is applied;
performing the inverse transform on the current block to generate a residual block for the current block; and
reconstructing the current block based on the prediction block and the residual block.
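The size-gated transform-skip flow shared by the three claims above can be sketched as a toy decoder step; the direction of the size comparison and the function names are assumptions for illustration:

```python
def residual_block(signaled_size, block_size, read_skip_flag, coeffs,
                   inverse_transform):
    """Gate the transform-skip flag on a size comparison, then either
    bypass or perform the inverse transform to get the residual block."""
    skip_applicable = block_size <= signaled_size  # comparison per bitstream
    skip = read_skip_flag() if skip_applicable else False
    if skip:
        return list(coeffs)            # bypass: coefficients are the residual
    return inverse_transform(coeffs)   # otherwise apply the inverse transform
```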

US Pat. No. 10,368,099

COLOR REMAPPING INFORMATION SEI MESSAGE SIGNALING FOR DISPLAY ADAPTATION

Qualcomm Incorporated, S...

1. A method of processing decoded video data, the method comprising:determining, by a video decoding unit, a peak brightness value of a current display;
obtaining, by the video decoding unit and for a picture of video data, one or more colour remapping information (CRI) supplemental enhancement information (SEI) messages that each correspond to a respective peak brightness value of a set of peak brightness values, wherein each respective CRI SEI message of the CRI SEI messages includes a respective colour_remap_id syntax element that indicates the respective peak brightness value;
determining, for each respective CRI SEI message of the CRI SEI messages, the respective peak brightness value based on a value of the respective colour_remap_id syntax element included in the respective CRI SEI message;
selecting, by the video decoding unit and based on the peak brightness value of the current display, a CRI SEI message of the one or more CRI SEI messages;
colour remapping, by the video decoding unit and based on the selected CRI SEI message, samples of the picture of video data; and
outputting, by the video decoding unit and for display at the current display, the colour remapped samples of the picture of video data.

US Pat. No. 10,368,098

METHOD AND DEVICE FOR TRANSMITTING PREDICTION MODE OF DEPTH IMAGE FOR INTERLAYER VIDEO ENCODING AND DECODING

SAMSUNG ELECTRONICS CO., ...

1. An interlayer video decoding method comprising:obtaining prediction-mode information of a current block of a depth image from a bitstream;
generating a prediction block of the current block based on the prediction-mode information; and
decoding the depth image by using the prediction block,
wherein the obtaining of the prediction-mode information of the current block from the bitstream comprises:
receiving a first flag, a second flag, and a third flag, wherein the first flag indicates whether prediction of the current block by dividing the current block into two or more partitions according to a pattern is permitted, the second flag indicates whether the depth image permits blocks of the depth image to be predicted by dividing the blocks into two or more partitions by using a wedgelet, and the third flag indicates whether the depth image permits the blocks of the depth image to be predicted by dividing the blocks into two or more partitions by using a contour; and
receiving a fourth flag from the bitstream when predetermined conditions determined based on the first to third flags are satisfied, wherein the fourth flag represents information regarding a type of a method of dividing the current block into two or more partitions according to the pattern.

US Pat. No. 10,368,097

APPARATUS, A METHOD AND A COMPUTER PROGRAM PRODUCT FOR CODING AND DECODING CHROMA COMPONENTS OF TEXTURE PICTURES FOR SAMPLE PREDICTION OF DEPTH PICTURES

NOKIA TECHNOLOGIES OY, E...

1. A method comprising:obtaining a depth view component map;
decoding one or both chroma components of a coded texture picture into one or two decoded chroma sample arrays;
obtaining a reference sample array on the basis of said one or two decoded chroma sample arrays by modifying said one or two decoded chroma sample arrays, said modifying comprising:
segmenting said one or two decoded chroma sample arrays, wherein said segmenting comprises creating a histogram of sample values of said one or two decoded chroma sample arrays;
replacing the sample values in each segment by a representative value of said each segment wherein the representative value is derived from comparing segments of the histogram sample values of said one or two decoded chroma sample arrays with sample values of one or more regions of the depth map and using the sample values of the depth region spatially overlapping a majority of samples of a chroma sample array segment as the values of respective samples in each segment; and
forming a decoded depth view component on the basis of said reference sample array.

US Pat. No. 10,368,096

ADAPTIVE STREAMING SYSTEMS AND METHODS FOR PERFORMING TRICK PLAY

DIVX, LLC, San Diego, CA...

1. A playback device, comprising:a set of one or more processors; and
a non-volatile storage containing an application for causing the set of one or more processors to perform the steps of:
obtaining top level index information identifying a plurality of alternative streams of video, an audio stream, and at least one trick play stream that are each stored in a separate container file, where:
each video container file containing a given stream from the plurality of alternative streams of video comprises:
portions of the given video stream within the video container file, where the portions of the given video stream comprise an encoded group of pictures that commences with a picture encoded without reference to another picture in the given video stream; and
a video container index, where entries in the video container index indicate sizes of portions of the given video stream within the video container file;
each trick play container file containing a given trick play stream from the at least one trick play stream comprises:
frames of the given trick play stream, where each frame of the given trick play stream is a picture encoded without reference to another picture in the trick play stream; and
a trick play container index, where entries in the trick play container index comprise a timecode and a location of a frame in the given trick play stream;
requesting a video container index from a video container file containing a video stream from the plurality of alternative streams of video;
requesting at least one portion of the video stream from the plurality of alternative streams of video using at least one entry from the video container index;
decoding the at least one portion of the video stream from the plurality of alternative streams of video;
receiving at least one user instruction to perform a visual search of the media;
requesting a trick play container index from a trick play container file containing a trick play stream from the at least one trick play stream;
requesting at least one frame of video from the at least one trick play stream using at least one entry from the trick play container index; and
decoding and displaying the at least one frame of video from the at least one trick play stream.
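A minimal sketch of using the trick play container index during a visual search, assuming index entries are sorted by timecode:

```python
import bisect

def trick_frame_offset(index_entries, timecode):
    """index_entries: sorted (timecode, location) pairs from the trick play
    container index. Return the location of the frame at or just before
    the requested timecode."""
    times = [t for t, _ in index_entries]
    i = bisect.bisect_right(times, timecode) - 1
    if i < 0:
        raise ValueError("timecode precedes the first trick play frame")
    return index_entries[i][1]
```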

US Pat. No. 10,368,095

METHOD AND APPARATUS FOR INTRA MODE CODING

HFI Innovation Inc., Zhu...

1. A method for predictive Intra coding, the method comprising:determining a set of Intra prediction modes that is used for prediction unit (PU) blocks with a plurality of different block sizes comprising a 4×4 block size and at least two other block sizes;
applying predictive Intra coding to a first PU block of an image having a first block size that matches a block size in the plurality of different block sizes based on one or more neighboring PU blocks according to a first current Intra prediction mode selected from the set of Intra prediction modes; and
applying predictive Intra coding to a second PU block of the image having a second block size that is different from the first block size and matches a block size in the plurality of different block sizes according to a second current Intra prediction mode selected from the set of Intra prediction modes.

US Pat. No. 10,368,094

LUMA-BASED CHROMA INTRA-PREDICTION FOR VIDEO CODING

TEXAS INSTRUMENTS INCORPO...

1. A method comprising:filtering reconstructed neighboring samples of a reconstructed down sampled luma block of a digital video frame;
computing parameters α and β of a linear model using the filtered, reconstructed neighboring samples of the reconstructed down sampled luma block and reconstructed neighboring samples of a corresponding chroma block, wherein the linear model is PredC[x,y]=α·RecL′[x,y]+β, wherein x and y are sample coordinates, PredC is predicted chroma samples, and RecL′ is samples of the reconstructed down sampled luma block; and
computing samples of a predicted chroma block from corresponding samples of the reconstructed down sampled luma block using the linear model and the parameters.
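The usual least-squares derivation of the linear-model parameters over the neighboring samples can be sketched as follows (plain floats here; real coders use fixed-point arithmetic):

```python
def lm_parameters(luma_nbr, chroma_nbr):
    """Fit Pred_C = alpha * Rec_L' + beta over the filtered, reconstructed
    neighboring samples of the down-sampled luma block and the neighboring
    samples of the corresponding chroma block."""
    n = len(luma_nbr)
    sl, sc = sum(luma_nbr), sum(chroma_nbr)
    slc = sum(l * c for l, c in zip(luma_nbr, chroma_nbr))
    sll = sum(l * l for l in luma_nbr)
    denom = n * sll - sl * sl
    alpha = (n * slc - sl * sc) / denom if denom else 0.0
    beta = (sc - alpha * sl) / n
    return alpha, beta

def predict_chroma(rec_luma_block, alpha, beta):
    """Apply the linear model to the reconstructed down-sampled luma block."""
    return [alpha * l + beta for l in rec_luma_block]
```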

US Pat. No. 10,368,093

LINE-BASED COMPRESSION FOR DIGITAL IMAGE DATA

TEXAS INSTRUMENTS INCORPO...

1. A method of compressing digital image data comprising:computing a minimum absolute sample difference (MASD) for a pixel in a line of pixels based on neighboring pixels, wherein the neighboring pixels comprise a left neighboring pixel in the line of pixels, a top left neighboring pixel in a previous line of pixels, and a top neighboring pixel in the previous line of pixels, and wherein computing the MASD comprises computing an absolute sample difference (ASD) between the top left neighboring pixel and the top neighboring pixel, an ASD between the top left neighboring pixel and the left neighboring pixel, and an ASD between the top left neighboring pixel and an interpolated pixel value computed from the top neighboring pixel and the left neighboring pixel;
computing a pixel predictor and a pixel residual for the pixel based on the MASD; and
selectively encoding the pixel residual using one of an entropy code or run mode encoding.
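The MASD computation follows the claim directly; the mapping from the minimal ASD to a particular predictor is an illustrative assumption, not necessarily TI's rule:

```python
def predict_pixel(left, top_left, top):
    """Compute the three absolute sample differences (ASDs) named in the
    claim, take their minimum (the MASD), and pick a predictor."""
    interp = (top + left) // 2            # interpolated pixel value
    asd_top = abs(top_left - top)
    asd_left = abs(top_left - left)
    asd_interp = abs(top_left - interp)
    masd = min(asd_top, asd_left, asd_interp)
    if masd == asd_top:
        return left     # little change along the previous row: follow the row
    if masd == asd_left:
        return top      # little change down the column: follow the column
    return interp

def pixel_residual(pixel, left, top_left, top):
    """Residual to be entropy-coded or run-mode encoded."""
    return pixel - predict_pixel(left, top_left, top)
```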

US Pat. No. 10,368,092

ENCODER-SIDE DECISIONS FOR BLOCK FLIPPING AND SKIP MODE IN INTRA BLOCK COPY PREDICTION

Microsoft Technology Lice...

1. In a computing device with a video encoder or an image encoder, a method comprising:deciding whether a current block in a current picture is to be encoded using an intra block copy (“BC”) prediction in a skip mode, including:
performing a hash-based block matching for the current block;
determining that the hash-based block matching fails for the current block;
determining a reference region in the current picture, including selecting a predicted block vector (“BV”) value for the current block from among multiple available BV values in a neighborhood around the current block, wherein the predicted BV value indicates a displacement to the reference region in the current picture; and
determining that input sample values of the reference region identically match corresponding input sample values of the current block, a quantization parameter (“QP”) value for the current block being greater than or equal to a QP value for the reference region; and
encoding the current block using the intra BC prediction in the skip mode, the skip mode using the predicted BV value for the current block; and
outputting in a bitstream encoded data, wherein the bitstream lacks a BV differential value for the current block and lacks residual data for the current block.

US Pat. No. 10,368,091

BLOCK FLIPPING AND SKIP MODE IN INTRA BLOCK COPY PREDICTION

Microsoft Technology Lice...

1. A computing device comprising:one or more buffers configured to store a picture of screen capture content from a sequence of pictures of screen capture content; and
a video encoder configured to encode screen capture content, wherein the video encoder is configured to perform operations comprising:
determining an intra block copy (“BC”) prediction region for a current block in the picture based on a reference region in the picture, wherein the intra BC prediction region is flipped relative to the reference region, including:
determining the reference region; and
performing one of:
(a) flipping the reference region and assigning sample values at respective positions of the flipped reference region to sample values at respective positions of the intra BC prediction region;
(b) assigning sample values at respective positions of the reference region to the sample values at the respective positions of the intra BC prediction region, and flipping the intra BC prediction region; and
(c) assigning the sample values at the respective positions of the reference region to sample values at corresponding positions of the intra BC prediction region, wherein the corresponding positions account for the flipping of the intra BC prediction region relative to the reference region;
encoding the current block using the intra BC prediction region; and
outputting encoded data in a bitstream, the encoded data including an indication of how the intra BC prediction region is flipped relative to the reference region, wherein the indication of how the intra BC prediction region is flipped relative to the reference region is one or more syntax elements in the bitstream.

US Pat. No. 10,368,090

INTRA-PREDICTION METHOD, AND ENCODER AND DECODER USING SAME

LG Electronics Inc., Seo...

1. A method for intra-prediction, performed by a decoding apparatus, the method comprising:receiving prediction mode information;
deriving an intra prediction mode for a current block based on the prediction mode information; and
generating a predicted block which includes prediction samples by deriving a prediction sample in the current block based on the intra prediction mode,
wherein when the intra prediction mode is a vertical prediction mode having a vertical prediction direction:
the prediction sample in the current block is derived by using a first reference sample located along the vertical prediction direction with regard to a location of the prediction sample, and
the step of generating the predicted block further includes filtering on the prediction sample when the prediction sample is adjacent to a left boundary of the current block,
wherein the filtering on the prediction sample is performed based on a second reference sample located along a horizontal direction with regard to the location of the prediction sample,
wherein the second reference sample is adjacent to a left side of the prediction sample, and
wherein for the filtering, a filtering coefficient applied to a value of the prediction sample is larger than a filtering coefficient applied to a value of the second reference sample.
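A sketch of the vertical mode with left-boundary filtering; the 3:1 weighting is an illustrative choice satisfying the claim's condition that the prediction-sample coefficient be larger than the second-reference-sample coefficient:

```python
def vertical_intra_pred(top_ref, left_ref, width, height):
    """Vertical prediction copies the above-row reference along the
    vertical direction; samples adjacent to the left boundary are then
    filtered against the left (second) reference sample."""
    pred = [[top_ref[x] for x in range(width)] for _ in range(height)]
    for y in range(height):
        # second reference sample: adjacent to the left of the prediction
        pred[y][0] = (3 * pred[y][0] + left_ref[y] + 2) >> 2
    return pred
```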

US Pat. No. 10,368,089

VIDEO ENCODING METHOD AND APPARATUS, AND VIDEO DECODING METHOD AND APPARATUS

SAMSUNG ELECTRONICS CO., ...

1. A method of encoding a multilayer video, the method comprising:performing inter-layer prediction on a picture of each layer in the multilayer video;
determining a reference layer which the picture of each layer refers to based on a result of performing the inter-layer prediction; and
adding reference layer information of each layer to a parameter set including information commonly applied to layers in the multilayer video,
wherein the reference layer information is added to at least one of a video parameter set (VPS), a sequence parameter set (SPS), and a picture parameter set (PPS),
wherein the reference layer information includes a flag layer_dependency_present_flag indicating whether a current layer is a dependent layer including a picture predicted by referring to other picture included in other layer or is an independent layer without referring to the other picture included in the other layer, and
when the flag layer_dependency_present_flag indicates that the current layer is the dependent layer, the reference layer information further includes a flag direct_reference_flag[i][j] indicating whether the current layer i having an index i refers to a layer j having a different index j, where i and j are integers, and
when the flag layer_dependency_present_flag indicates that the current layer is the independent layer, the flag direct_reference_flag[i][j] for the current layer is not obtained from the bitstream.
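The conditional flag parsing can be sketched with a toy bit reader (the `read_flag` callable is a stand-in for real bitstream access):

```python
def parse_reference_layer_info(read_flag, num_layers):
    """Read layer_dependency_present_flag per layer; only for dependent
    layers read direct_reference_flag[i][j] for each other layer j."""
    direct_ref = [[False] * num_layers for _ in range(num_layers)]
    for i in range(num_layers):
        if read_flag():                      # layer_dependency_present_flag
            for j in range(num_layers):
                if j != i:
                    direct_ref[i][j] = read_flag()
        # independent layer: no direct_reference_flag in the bitstream
    return direct_ref
```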

US Pat. No. 10,368,088

REFERENCE PICTURE SIGNALING

Telefonaktiebolaget LM Er...

1. A method of decoding an encoded representation of a picture in a video stream of multiple pictures, the method comprising:retrieving, from the encoded representation of the picture, buffer description information defining a plurality of reference pictures;
determining, based on the buffer description information, a reference picture identifier that identifies a respective reference picture among the plurality of reference pictures, the respective reference picture comprising a decoding reference for the picture, wherein determining the reference picture identifier comprises:
(i) retrieving, based on the buffer description information, a delta identifier; and
(ii) calculating the reference picture identifier based on a picture identifier identifying the picture and the delta identifier; and
updating a decoded picture buffer based on the reference picture identifier, wherein updating the decoded picture buffer comprises marking, prior to decoding the picture, reference pictures that are present in the decoded picture buffer and that are not associated with the reference picture identifier determined based on the buffer description information as unused for reference.
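A sketch of the delta-based identifier calculation and the DPB marking step; modulo wraparound for short picture identifiers is an assumption:

```python
def reference_ids(pic_id, delta_ids, max_pic_id=256):
    """Calculate each reference picture identifier from the current
    picture's identifier and a signaled delta identifier."""
    return [(pic_id - d) % max_pic_id for d in delta_ids]

def mark_dpb(dpb_ids, ref_ids):
    """Before decoding, mark DPB pictures not named by the buffer
    description as unused for reference (True = still used)."""
    used = set(ref_ids)
    return {pid: pid in used for pid in dpb_ids}
```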

US Pat. No. 10,368,087

DYNAMIC RELOAD OF VIDEO ENCODER MOTION ESTIMATION SEARCH WINDOW UNDER PERFORMANCE/POWER CONSTRAINTS

ATI Technologies ULC, Ma...

1. A processing apparatus comprising:an encoder configured to:
encode current frames of a plurality of frames of video data using previously encoded reference frames of the plurality of frames of video data; and
perform a plurality of motion searches within a motion search window about each of a plurality of co-located portions of one or more of the previously encoded reference frames; and
a processor configured to:
prior to performing each of the plurality of motion searches:
determine a threshold number of search window reloads for the co-located portions; and
determine at which of a plurality of locations, each corresponding to one of the co-located portions of the one or more reference frames, to reload the motion search window according to the determined threshold number of search window reloads and predicted motions at each location of the co-located portions; and
perform the motion searches by causing the encoder to:
reload the motion search window at the determined locations of the one or more reference frames; and
for each of the remaining locations of the one or more reference frames, slide the motion search window in a first direction indicated by the location of the next co-located portion of the one or more reference frames.

US Pat. No. 10,368,086

IMAGE CODING/DECODING METHOD, DEVICE, AND SYSTEM

Huawei Technologies Co., ...

1. An image decoding method, comprising:performing, by a processor included in an image decoding device, singular vector decomposition on a prediction block corresponding to a to-be-decoded image block, to obtain eigenvector matrices U and V of the prediction block;
obtaining, by the processor, a transform coefficient;
decoding, by the processor, difference information of the eigenvector matrix U and difference information of the eigenvector matrix V, wherein the difference information of the eigenvector matrix U is difference information between an eigenvector matrix U of residual data and the eigenvector matrix U of the prediction block obtained after the singular vector decomposition, and wherein the difference information of the eigenvector matrix V is difference information between an eigenvector matrix V of the residual data and the eigenvector matrix V of the prediction block obtained after the singular vector decomposition;
acquiring, by the processor, the eigenvector matrices U and V of the residual data according to the eigenvector matrices U and V of the prediction block and the decoded difference information of the eigenvector matrix U and the decoded difference information of the eigenvector matrix V;
performing, by the processor, an inverse transformation on the transform coefficient based on the eigenvector matrices U and V of the residual data to obtain the residual data; and
obtaining an image block based on the residual data.

US Pat. No. 10,368,085

METHOD OF PERFORMING MOTION VECTOR PREDICTION, AND APPARATUS THEREOF

SUN PATENT TRUST, New Yo...

1. An encoding and decoding apparatus, comprising:an encoding apparatus for encoding a first current block of a first picture to generate a first encoded bitstream; and
a decoding apparatus for decoding a second current block of a second picture from a second encoded bitstream,
wherein the encoding apparatus includes:
an encoding unit; and
a first storage, the encoding unit is configured to execute the steps of:
deriving a first candidate for a first motion vector predictor to encode a first current motion vector of the first current block, from a first motion vector of a first block which is (i) a neighboring block that is stored in the first storage, is included in a first current picture including the first current block, and is adjacent to the first current block or (ii) a co-located block included in a picture different from the first current picture;
adding the derived first candidate to a first candidate list;
deriving at least one first motion vector predictor based on a candidate selected from the first candidate list; and
encoding the first current motion vector using the derived at least one first motion vector predictor, and encoding the first current block using the first current motion vector,
the deriving of the first candidate includes determining whether to derive the first candidate from the first motion vector, based on a type of a first current reference picture and a type of a first reference picture, the first current reference picture being referred to from the first current block using the first current motion vector, and the first reference picture being referred to from the first block using the first motion vector,
each of the type of the first current reference picture and the type of the first reference picture is one of a long term reference picture and a short term reference picture, and
in the determining of whether to derive the first candidate from the first motion vector, the first candidate is determined to be derived from the first motion vector when the type of the first current reference picture and the type of the first reference picture are the same,
the decoding apparatus includes:
a decoding unit; and
a second storage,
the decoding unit is configured to execute the steps of:
deriving a second candidate for a second motion vector predictor to decode a second current motion vector of the second current block, from a second motion vector of a second block which is (i) a neighboring block that is stored in the second storage, is included in a second current picture including the second current block, and is adjacent to the second current block or (ii) a co-located block included in a picture different from the second current picture;
adding the derived second candidate to a second candidate list;
deriving at least one second motion vector predictor based on a candidate selected from the second candidate list; and
decoding the second current motion vector from the derived at least one second motion vector predictor, and decoding the second current block using the decoded second current motion vector,
the deriving of the second candidate includes determining whether to derive the second candidate from the second motion vector, based on a type of a second current reference picture and a type of a second reference picture, the second current reference picture being referred to from the second current block using the second current motion vector, and the second reference picture being referred to from the second block using the second motion vector,
each of the type of the second current reference picture and the type of the second reference picture is one of a long term reference picture and a short term reference picture, and
in the determining of whether to derive the second candidate from the second motion vector, the second candidate is determined to be derived from the second motion vector when the type of the second current reference picture and the type of the second reference picture are the same.
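The eligibility rule at the heart of this claim (derive a candidate only when the two reference pictures are the same type) can be sketched in a few lines; the tuple representation of neighboring and co-located blocks is an illustrative assumption.

```python
def derive_mv_candidates(current_ref_type, neighbor_blocks):
    """neighbor_blocks: (motion_vector, ref_picture_type) pairs taken from
    the adjacent blocks and the co-located block. ref_picture_type is one
    of "long" or "short" (long term / short term reference picture)."""
    candidates = []
    for mv, ref_type in neighbor_blocks:
        # The claim's rule: the candidate is derived from the block's
        # motion vector only when the current reference picture and the
        # block's reference picture are the same type.
        if ref_type == current_ref_type:
            candidates.append(mv)
    return candidates

cands = derive_mv_candidates(
    "short", [((1, 2), "short"), ((3, 4), "long"), ((0, 0), "short")])
```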

US Pat. No. 10,368,084

VIDEO SIGNAL PROCESSING METHOD AND DEVICE

KT CORPORATION, Gyeonggi...

1. A method of decoding a video signal, the method comprising:generating a reference picture list based on a current picture reference flag for a current picture;
obtaining motion information about a current block in the current picture, the motion information including at least one of a motion vector and a reference picture index; and
restoring the current block using the motion information of the current block and the reference picture list relating to the current picture,
wherein the current picture reference flag indicates whether at least one block belonging to the current picture is predicted by referring to a pre-reconstructed block in the current picture, and
wherein the current picture is added to the reference picture list when the current picture reference flag indicates that at least one block belonging to the current picture is predicted by referring to a pre-reconstructed block in the current picture.
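The list-construction rule the claim recites is small enough to show directly; treating pictures as opaque labels and appending the current picture at the end are illustrative assumptions (the claim does not fix the list order).

```python
def generate_reference_picture_list(decoded_pictures, current_picture,
                                    current_picture_reference_flag):
    # Conventional temporal reference pictures first.
    ref_list = list(decoded_pictures)
    # Per the claim: the current picture itself joins the list when the
    # flag signals that blocks of the current picture may be predicted
    # from pre-reconstructed blocks in the same picture.
    if current_picture_reference_flag:
        ref_list.append(current_picture)
    return ref_list
```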

US Pat. No. 10,368,083

PICTURE ORDER COUNT BASED MOTION VECTOR PRUNING

QUALCOMM Incorporated, S...

1. A method of decoding video data, the method comprising:generating a list of motion vector prediction candidates for a prediction unit (PU) of video data, wherein a first motion vector prediction candidate and a second motion vector prediction candidate from the list of motion vector prediction candidates are bi-directional motion vector prediction candidates;
determining, for the first motion vector prediction candidate from the list of motion vector prediction candidates, a first picture order count (POC) value for a first reference picture identified by the first motion vector prediction candidate;
determining, for the second motion vector prediction candidate from the list of motion vector prediction candidates, a second POC value for a second reference picture identified by the second motion vector prediction candidate;
determining, for the first motion vector prediction candidate, a third POC value for a third reference picture identified by the first motion vector prediction candidate;
determining, for the second motion vector prediction candidate, a fourth POC value for a fourth reference picture identified by the second motion vector prediction candidate;
at least one of determining that a pair-wise equality condition is satisfied by the first motion vector prediction candidate and the second motion vector prediction candidate or determining that a cross-equality condition is satisfied by the first motion vector prediction candidate and the second motion vector prediction candidate,
wherein determining that the pair-wise equality condition is satisfied comprises:
determining that the first POC value is equal to the second POC value;
determining that a first motion vector of the first motion vector prediction candidate is equal to a first motion vector of the second motion vector prediction candidate;
determining that the third POC value is equal to the fourth POC value;
determining that a second motion vector of the first motion vector prediction candidate is equal to a second motion vector of the second motion vector prediction candidate; and
in response to determining that the first POC value is equal to the second POC value, the first motion vector of the first motion vector prediction candidate is equal to the first motion vector of the second motion vector prediction candidate, the third POC value is equal to the fourth POC value, and the second motion vector of the first motion vector prediction candidate is equal to the second motion vector of the second motion vector prediction candidate, determining that the pair-wise equality condition is satisfied, and
wherein determining that the cross-equality condition is satisfied comprises:
determining that the first POC value is equal to the second POC value;
determining that the first motion vector of the first motion vector prediction candidate is equal to the second motion vector of the second motion vector prediction candidate;
determining that the third POC value is equal to the fourth POC value;
determining that the second motion vector of the first motion vector prediction candidate is equal to the first motion vector of the second motion vector prediction candidate; and
in response to determining that the first POC value is equal to the second POC value, the first motion vector of the first motion vector prediction candidate is equal to the second motion vector of the second motion vector prediction candidate, the third POC value is equal to the fourth POC value, and the second motion vector of the first motion vector prediction candidate is equal to the first motion vector of the second motion vector prediction candidate, determining that the cross-equality condition is satisfied;
in response to determining that one of the pair-wise equality condition or the cross-equality condition is satisfied by the first motion vector prediction candidate and the second motion vector prediction candidate, pruning the first motion vector prediction candidate from the list of motion vector prediction candidates to create a pruned list; and
decoding the PU using the pruned list.
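The two equality conditions compare the same POC pairs but swap the motion-vector comparisons. A compact sketch, assuming each bi-directional candidate is a dict with motion vectors `mv0`/`mv1` and the POC values `poc0`/`poc1` of the reference pictures they identify (the representation is an assumption):

```python
def pair_wise_equal(a, b):
    """First POCs equal, second POCs equal, and motion vectors equal
    position-for-position."""
    return (a["poc0"] == b["poc0"] and a["mv0"] == b["mv0"] and
            a["poc1"] == b["poc1"] and a["mv1"] == b["mv1"])

def cross_equal(a, b):
    """Same POC tests, but the motion vectors are compared crosswise."""
    return (a["poc0"] == b["poc0"] and a["mv0"] == b["mv1"] and
            a["poc1"] == b["poc1"] and a["mv1"] == b["mv0"])

def prune(candidates):
    """Drop a candidate when a later one duplicates it under either
    condition (which member of the pair is removed is a sketch-level
    choice)."""
    return [a for i, a in enumerate(candidates)
            if not any(pair_wise_equal(a, b) or cross_equal(a, b)
                       for b in candidates[i + 1:])]
```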

US Pat. No. 10,368,082

IMAGE PROCESSING DEVICE AND IMAGE PROCESSING METHOD

SONY CORPORATION, Tokyo ...

1. An image processing device comprising:a control section configured to set, based on a first scaling list for a first layer, a second scaling list for a second layer decoded with reference to the first layer; and
an inverse quantization section configured to inversely quantize transform coefficient data of the second layer using the second scaling list set by the control section,
wherein the control section selects a setting technique for setting the second scaling list according to a setting technique flag decoded from an encoded stream, and
wherein the control section and the inverse quantization section are each implemented via at least one processor.

US Pat. No. 10,368,081

SYSTEM AND METHOD FOR ELECTRONIC DATA COMMUNICATION

Samsung Display Co., Ltd....

1. A method for transmitting video for a display panel between a transmitter in electronic communication with a receiver over a wireless communication channel, the method comprising:receiving, by a transmitter, a data signal from a data source;
receiving, by the transmitter, a return signal from a receiver;
encoding, by the transmitter based on the return signal, the data signal utilizing a plurality of encoder blocks to generate a layered encoded data stream, wherein a first encoder block encodes the data signal and each subsequent encoder block encodes a difference between an input of a preceding encoder block and an output of a quantizer of a preceding encoder block; and
transmitting, by the transmitter, the layered encoded data stream to the receiver for decoding and display on the display panel.
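The layered structure (each subsequent encoder block codes the quantization error of the block before it) can be sketched with scalar quantizers; the per-layer step sizes and the mean-error metric below are illustrative assumptions, not the patented encoder.

```python
def layered_encode(signal, steps):
    """Each layer quantizes its input; the next layer's input is the
    previous layer's input minus the previous quantizer's output, i.e.
    the quantization error, as the claim describes."""
    layers = []
    x = signal
    for step in steps:
        q = [round(v / step) * step for v in x]   # quantizer output
        layers.append(q)
        x = [a - b for a, b in zip(x, q)]         # residual fed to next layer
    return layers

def layered_decode(layers):
    """The receiver sums however many layers it received."""
    return [sum(vals) for vals in zip(*layers)]

layers = layered_encode([3.7, -1.2], [1.0, 0.25])
decoded = layered_decode(layers)
```

Each extra layer refines the reconstruction: the error after decoding all layers is bounded by half the finest quantization step.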

US Pat. No. 10,368,080

SELECTIVE UPSAMPLING OR REFRESH OF CHROMA SAMPLE VALUES

Microsoft Technology Lice...

1. A computer system comprising one or more processing units and memory, wherein the computer system implements a video processing tool configured to perform operations comprising:receiving a current picture in a first chroma sampling format that has a first chroma sampling rate, wherein the current picture includes one or more regions;
for each of the one or more regions, determining whether the region is stationary or non-stationary relative to a previous picture in display order;
outputting region change metadata that indicates whether the one or more regions, respectively, are stationary or non-stationary relative to the previous picture; and
converting the current picture to a second chroma sampling format that has a second chroma sampling rate lower than the first chroma sampling rate, including:
retaining chroma sample values of the current picture in the first chroma sampling format that are at selected positions among positions of the first chroma sampling format, wherein the selected positions vary according to a refresh pattern that facilitates recovery of stationary content at the first chroma sampling rate; and
discarding chroma sample values of the current picture in the first chroma sampling format that are at other, non-selected positions among the positions of the first chroma sampling format.
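A simple refresh pattern of the kind the claim describes can be shown on a chroma plane: keep one sample per 2×2 block, and rotate which position is kept from frame to frame. The 4-phase cycle below is an illustrative assumption.

```python
def subsample_chroma(chroma_plane, frame_index):
    """Halve the chroma sampling rate in both dimensions by retaining one
    of the four positions of each 2x2 block; the retained position cycles
    with the frame index, so over four frames of stationary content every
    full-rate position is transmitted once and the receiver can recover
    full-rate chroma."""
    dy, dx = [(0, 0), (0, 1), (1, 0), (1, 1)][frame_index % 4]
    # Retain the selected positions; all others are discarded.
    return [row[dx::2] for row in chroma_plane[dy::2]]
```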

US Pat. No. 10,368,079

METHOD AND APPARATUS FOR IMAGE COMPRESSION THAT EMPLOYS MULTIPLE INDEXED COLOR HISTORY BUFFERS

ATI Technologies ULC, Ma...

1. A method for encoding a source image that is segmented into a plurality of sub-images each having a sub-image pixel width, comprising:encoding, by an encoder, different horizontal pixel slices of the plurality of sub-images using a plurality of indexed color history (ICH) buffers associated with the source image to produce encoded pixel data, each of the plurality of indexed color history (ICH) buffers configured to correspond to a single horizontal pixel slice of each sub-image; and
outputting, by the encoder, the encoded pixel data for the different horizontal pixel slices of the plurality of sub-images.

US Pat. No. 10,368,078

EXTENSIONS OF MOTION-CONSTRAINED TILE SETS SEI MESSAGE FOR INTERACTIVITY

SONY CORPORATION, Tokyo ...

1. An apparatus, comprising:a memory configured to store instructions; and
a processor coupled with the memory, the processor configured to execute the instructions to:
receive a bitstream that includes a supplemental enhancement information message, wherein the bitstream comprises a plurality of first tiles of a picture, and
the supplemental enhancement information message indicates motion-constrained tile sets in the picture, and the supplemental enhancement information message includes flag information indicating that one of:
a first value of a sample in the plurality of first tiles in the motion-constrained tile sets is different from a second value of the sample, wherein the first value is obtained when a plurality of coding tree units of the picture are decoded, and the second value is obtained when coding tree units of the plurality of coding tree units that are excluded from the plurality of first tiles in the motion-constrained tile sets are not decoded, or
the first value of the sample in the plurality of first tiles in the motion-constrained tile sets is equal to the second value of the sample, wherein the second value is obtained when the coding tree units of the plurality of coding tree units that are excluded from the plurality of first tiles in the motion-constrained tile sets are not decoded;
decode the bitstream based on the supplemental enhancement information message; and
generate an image based on the decoded bitstream.

US Pat. No. 10,368,077

METHOD AND DEVICE FOR INTRA PREDICTION

LG Electronics Inc., Seo...

1. A video decoding method, comprising:receiving, by a decoding apparatus, index information indicating a candidate mode of an intra prediction mode of a current block;
constructing, by the decoding apparatus, a candidate mode list including a plurality of candidate modes for the current block;
determining, by the decoding apparatus, the intra prediction mode of the current block based on the candidate mode indicated by the index information from the plurality of candidate modes constituting the candidate mode list;
deriving, by the decoding apparatus, a predicted sample in the current block based on the determined intra prediction mode; and
generating, by the decoding apparatus, a reconstructed sample based on the derived predicted sample,
wherein the constructing the candidate mode list includes:
setting a firstly ordered candidate mode and a secondly ordered candidate mode, among the candidate modes of the candidate mode list, based on a first block adjacent to a left side of the current block and a second block adjacent to an upper side of the current block,
wherein the firstly ordered candidate mode is set equal to a planar mode and the secondly ordered candidate mode is set equal to a DC mode when both the first block and the second block are not available, and
wherein the firstly ordered candidate mode is set equal to the planar mode and the secondly ordered candidate mode is set equal to the DC mode when an intra prediction mode of the first block is the planar mode and the second block is unavailable.
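The two neighbor configurations the claim recites both yield the same leading pair. A minimal sketch, using `None` for an unavailable neighbor; the numeric mode values are an assumption.

```python
PLANAR, DC = 0, 1   # mode numbers are an assumption

def first_two_candidates(left_mode, above_mode):
    """Returns the firstly and secondly ordered candidate modes for the
    two cases recited in the claim."""
    if left_mode is None and above_mode is None:
        return (PLANAR, DC)          # both neighbor blocks unavailable
    if left_mode == PLANAR and above_mode is None:
        return (PLANAR, DC)          # left is Planar, above unavailable
    return None  # other configurations follow rules outside this claim
```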

US Pat. No. 10,368,076

METHODS AND APPARATUSES OF ENCODING/DECODING INTRA PREDICTION MODE USING CANDIDATE INTRA PREDICTION MODES

INTELLECTUAL DISCOVERY CO...

1. A decoding method, comprising:determining whether a first neighboring block located at a left side of a current block or a second neighboring block located at an upper side of the current block is available for deriving an intra prediction mode of the current block;
deriving a first intra prediction mode based on the determined availability for the first neighboring block or a second intra prediction mode based on the determined availability for the second neighboring block,
wherein the first intra prediction mode is derived as DC mode when the first neighboring block is not available and is derived as an intra prediction mode of the first neighboring block when the first neighboring block is available, and
wherein the second intra prediction mode is derived as DC mode when the second neighboring block is not available and is derived as an intra prediction mode of the second neighboring block when the second neighboring block is available;
determining whether the first intra prediction mode is identical to the second intra prediction mode;
obtaining an intra prediction mode of the current block based on a plurality of candidate intra prediction modes; and
obtaining prediction samples of the current block based on the intra prediction mode of the current block,
wherein when the first intra prediction mode is not identical to the second intra prediction mode, the method comprises:
deriving a first candidate intra prediction mode from the first intra prediction mode and a second candidate intra prediction mode from the second intra prediction mode;
setting a third candidate intra prediction mode equal to a planar mode when none of the first candidate intra prediction mode and the second candidate intra prediction mode is the planar mode;
setting the third candidate intra prediction mode equal to a DC mode when one of the first candidate intra prediction mode and the second candidate intra prediction mode is the planar mode and the other is not the DC mode; and
setting the third candidate intra prediction mode equal to a vertical mode when one of the first candidate intra prediction mode and the second candidate intra prediction mode is the planar mode and the other is the DC mode, and
wherein when the first intra prediction mode is identical to the second intra prediction mode and both the first intra prediction mode and the second intra prediction mode are the planar mode or the DC mode, the method comprises:
setting the first candidate intra prediction mode equal to the planar mode;
setting the second candidate intra prediction mode equal to the DC mode; and
setting the third candidate intra prediction mode equal to the vertical mode.
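The candidate-derivation cases this claim spells out resemble HEVC's most-probable-mode list and can be sketched directly; the HEVC-style mode numbers are an assumption, and identical angular neighbor modes (not recited here) are left out.

```python
PLANAR, DC, VERTICAL = 0, 1, 26    # HEVC-style mode numbers, an assumption

def mpm_candidates(left, above):
    """left/above: neighbor intra modes, already defaulted to DC where the
    neighbor is unavailable, as the claim prescribes."""
    if left != above:
        if PLANAR not in (left, above):
            third = PLANAR           # neither candidate is Planar
        elif DC not in (left, above):
            third = DC               # one is Planar, the other is not DC
        else:
            third = VERTICAL         # one is Planar, the other is DC
        return [left, above, third]
    if left in (PLANAR, DC):         # identical, and both Planar or DC
        return [PLANAR, DC, VERTICAL]
    return None  # identical angular modes: outside this claim
```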

US Pat. No. 10,368,075

CLIP GENERATION BASED ON MULTIPLE ENCODINGS OF A MEDIA STREAM

WOWZA MEDIA SYSTEMS, LLC,...

1. A method comprising:generating, at a server, a first encoded version of a media stream and a second encoded version of the media stream, wherein, for a portion of the second encoded version that includes at least two intracoded frames (i-frames), a corresponding portion of the first encoded version includes more than two i-frames;
receiving, at the server from a destination device, a request to generate a media clip of the media stream, wherein the request identifies a start point of the media clip;
generating the media clip at the server responsive to the request, the media clip based on a first sequence of frames of the first encoded version and a second sequence of frames of the second encoded version in response to the start point not corresponding to an i-frame of the second encoded version and an end frame corresponding to a stop point of the media clip not being in the first encoded version, wherein the first sequence begins at a first i-frame of the first encoded version corresponding to the start point and ends at a second i-frame of the first encoded version corresponding to a particular i-frame of the second encoded version, and wherein the second sequence begins at a third frame of the second encoded version following the particular i-frame of the second encoded version and ends at a fourth frame corresponding to the stop point of the media clip; and
sending, from the server to the destination device, the media clip or a link to the media clip.

US Pat. No. 10,368,074

OPPORTUNISTIC FRAME DROPPING FOR VARIABLE-FRAME-RATE ENCODING

Microsoft Technology Lice...

1. A computing system implemented at least in part with computer hardware, the computing system comprising:a video encoder configured to encode any non-dropped frames, among multiple frames of a video sequence, at a variable frame rate, thereby producing a bitstream;
a control frame buffer, outside the video encoder, configured to store a control frame, the control frame including sample values of a version of a previous frame from prior to encoding of the previous frame; and
a frame dropping module implemented with software compiled to execute on a general purpose computer or specialized computer hardware, positioned before the video encoder to, for each given frame among one or more of the multiple frames:
based at least in part on a comparison of at least some portion of the given frame to at least some portion of the control frame, detect whether there is significant change in the given frame relative to the control frame;
if significant change is detected, store the given frame in the control frame buffer, thereby replacing the control frame, and pass the given frame to the video encoder, such that the bitstream includes coded data for the given frame; and
if significant change is not detected, drop the given frame without replacing the control frame in the control frame buffer and without passing the given frame to the video encoder, such that the bitstream lacks any coded data for the given frame.
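The frame dropping module's control flow is easy to sketch; the mean-absolute-difference change detector and the threshold value are illustrative assumptions (the claim only requires some comparison against the stored control frame).

```python
def run_frame_dropper(frames, threshold):
    """frames: flat lists of sample values. Returns the frames passed to
    the encoder; dropped frames leave the stored control frame untouched
    and contribute no coded data to the bitstream."""
    control = None
    to_encoder = []
    for frame in frames:
        significant_change = control is None or (
            sum(abs(a - b) for a, b in zip(frame, control)) / len(frame)
            > threshold)
        if significant_change:
            control = frame           # replace the control frame
            to_encoder.append(frame)  # encoder codes this frame
        # else: drop the frame; control frame and bitstream unchanged
    return to_encoder
```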

US Pat. No. 10,368,073

MULTI-REGION SEARCH RANGE FOR BLOCK PREDICTION MODE FOR DISPLAY STREAM COMPRESSION (DSC)

Qualcomm Incorporated, S...

1. A method for coding a block of video data in simplified block prediction mode of a constant bitrate video coding scheme, the method comprising:determining a candidate block to be used to predict a current block in a current slice, the candidate block being within a range of pixel positions that each correspond to a reconstructed pixel in the current slice, the range of pixel positions comprising at least (i) a first region including one or more first pixel positions in a first line of pixels in the current slice, the first line of pixels including at least one pixel in the current block and spanning an entire width of the current slice, and (ii) a second region including one or more second pixel positions in a second line of pixels in the current slice, the second line of pixels not including any pixel in the current block but spanning the entire width of the current slice;
determining a cost associated with coding the current block based on each potential candidate block of a plurality of potential candidate blocks, the plurality of potential candidate blocks each corresponding to one of the first and second pixel positions in the first and second regions;
identifying one of the plurality of potential candidate blocks in the first and second regions having a lowest cost as the candidate block;
determining a prediction vector indicative of a pixel position of the candidate block within the range of pixel positions, the pixel position of the candidate block being in one of the first region or the second region; and
coding the current block in simplified block prediction mode at least in part via signaling the prediction vector.

US Pat. No. 10,368,072

ADVANCED ARITHMETIC CODER

QUALCOMM Incorporated, S...

1. A method of entropy coding video data, the method comprising:obtaining a pre-defined initialization value for a context of a plurality of contexts used in a context-adaptive entropy coding process to entropy code a value for a syntax element in a slice of the video data, wherein the pre-defined initialization value is stored with N-bit precision;
determining, based on the pre-defined initialization value, an initial probability state of the context for the slice of the video data, wherein a number of possible probability states for the context is greater than two raised to the power of N; and
entropy coding, based on the initial probability state of the context, a bin of the value for the syntax element.
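The claim's key quantity relationship (an N-bit initialization value seeding a state space larger than 2^N) can be illustrated with a CABAC-flavored mapping. The slope/offset split and QP dependence below are loosely modeled on CABAC initialization and are illustrative, not the patented mapping.

```python
def init_probability_state(init_value, slice_qp, num_states=2 ** 15):
    """init_value: an 8-bit (N = 8) pre-defined initialization value.
    The returned initial probability state lives in a space of 2**15
    states, far more than 2**8, as the claim requires."""
    slope = ((init_value >> 4) * 5) - 45       # upper nibble -> slope
    offset = ((init_value & 15) << 3) - 16     # lower nibble -> offset
    # Coarse 7-bit probability from the slice QP, clamped to valid range.
    prob7 = min(max(slope * (slice_qp - 16) // 8 + offset + 64, 1), 126)
    # Expand the coarse probability into the fine state space.
    return prob7 * num_states // 127
```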

US Pat. No. 10,368,071

ENCODING DATA ARRAYS

Arm Limited, Cambridge, ...

1. A method of encoding an array of data elements of a stream of arrays of data elements, in which the array of data elements is to be encoded as respective sets of luminance and chrominance data values, and is to be encoded as respective blocks of data elements making up the array of data elements, the method comprising:when encoding the chrominance data values for a source block of data elements of a data array that is to be encoded:
generating an array of chrominance difference values that represents the difference between the chrominance values of the source block of data elements of the array of data elements being encoded and the chrominance values for a reference block of data elements derived from the chrominance values of one or more arrays of data elements in the stream of arrays of data elements;
generating an array of chrominance value frequency domain coefficients for the array of chrominance difference values by applying a forward transformation process to the array of chrominance difference values;
generating an array of quantized chrominance value frequency domain coefficients by applying a quantization process to the array of chrominance value frequency domain coefficients;
determining whether the encoding of the quantized frequency domain coefficients for the luma data values for the block of the array of data elements being encoded is indicated as to be omitted; and
when it is determined that the encoding of the quantized frequency domain coefficients for the luma data for the block of data elements of the data array being encoded is indicated as to be omitted:
determining whether to also omit the encoding of the generated array of quantized chrominance value frequency domain coefficients for the block of data elements being encoded in the output encoded bit stream representing the array of data elements based on a property or properties of the determined quantized chrominance value frequency domain coefficients; and
when it is determined based on the property or properties of the determined quantized chrominance value frequency domain coefficients to omit the encoding of the quantized chrominance value frequency domain coefficients in the output encoded bit stream representing the array of data elements, not including data for the array of quantized chrominance value frequency domain coefficients in the output bit stream representing the encoded array of data elements.

US Pat. No. 10,368,070

IMAGE PROCESSING DEVICE AND IMAGE PROCESSING METHOD

Velos Media, LLC, Dallas...

1. An image processing device comprising:a buffer for receiving encoded image data, and
a processor to execute instructions that cause the processor to:
decode the encoded image data from the buffer to generate quantized transform coefficient data;
inversely quantize the quantized transform coefficient data using a 32×32 quantization matrix to generate predicted error data, the 32×32 quantization matrix includes a duplicate of at least one of two elements adjacent to each other from an 8×8 quantization matrix; and
combine the predicted error data with a predicted image to generate decoded image data.
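The relationship between the 32×32 and 8×8 quantization matrices (adjacent elements of the large matrix duplicating an element of the small one) amounts to nearest-neighbor replication, sketched below.

```python
def upsample_quant_matrix(m8, factor=4):
    """Expand an 8x8 quantization matrix to 32x32 (factor 4) by replicating
    each element into a factor-by-factor block, so elements adjacent to
    each other in the large matrix duplicate a single 8x8 element."""
    size = len(m8) * factor
    return [[m8[r // factor][c // factor] for c in range(size)]
            for r in range(size)]
```

Signaling only the 8×8 matrix and replicating at the decoder keeps the bitstream overhead of a 32×32 matrix down to 64 coefficients.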

US Pat. No. 10,368,068

ENCODER AND METHOD FOR VIDEO PROCESSING

Telefonaktiebolaget LM Er...

1. A method for encoding a bitstream representing a sequence of pictures of a video stream comprising:dividing the pictures into coding blocks of a first type, each coding block being either of a first block size or a second block size, wherein said second block size is smaller than said first block size, and wherein each coding block of the first type is associated with at least one coding block of a second type of either said first block size or said second block size, and wherein each coding block of the first type is also associated with at least one coding block of a third type of either said first block size or said second block size;
defining for a part of the picture a first combination of block sizes where the coding block of the first type, its at least one associated coding block of the second type and its at least one associated coding block of the third type are all of said first block size;
defining for said part of the picture a second combination of block sizes where the coding block of the first type and its at least one associated coding block of the second type are both of said first block size, and where all of said at least one coding block of the third type associated with the coding block of the first type are of said second block size;
comparing said first combination against said second combination and, based on said comparing, determining whether to select said first combination of block sizes for encoding the bitstream without further evaluation or select from between the second combination and a third combination of block sizes, wherein said determining comprises determining to select the first combination for encoding the bitstream without further evaluation in an event that said first combination is better than said second combination with respect to number of bits for encoding or coding error, or both, and otherwise determining to select between the second and third combinations of block sizes for encoding the bitstream.

US Pat. No. 10,368,067

METHOD AND APPARATUS FOR SELECTIVE FILTERING OF CUBIC-FACE FRAMES

MEDIATEK INC., Hsin-Chu ...

1. A method of processing cube face images, the method comprising:receiving sets of six cubic faces converted from spherical images in a 360-degree panoramic video sequence, wherein each set of six cubic faces corresponds to one spherical image projected onto a cube for rendering 360-degree virtual reality;
assembling each set of cubic faces into one assembled cubic frame according to a selected cubic face format;
determining one or more discontinuous boundaries within each assembled cubic frame; and
processing the assembled cubic frames according to information related to said one or more discontinuous boundaries, wherein said processing the assembled cubic frames comprises:
skipping filtering process at said one or more discontinuous boundaries within each assembled cubic frame when the filtering process is enabled.
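The selective-filtering step reduces to excluding the discontinuous boundaries from the set of edges the in-loop filter visits; representing edges as plain positions is an illustrative assumption.

```python
def edges_to_filter(block_edges, discontinuous_boundaries):
    """block_edges: positions where the enabled filtering process would
    normally run within the assembled cubic frame.
    discontinuous_boundaries: the cube-face boundaries determined within
    the frame. Per the claim, filtering is skipped at those boundaries
    even though the filtering process is enabled."""
    skip = set(discontinuous_boundaries)
    return [e for e in block_edges if e not in skip]
```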

US Pat. No. 10,368,066

METHODS AND SYSTEMS FOR IMAGE INTRA-PREDICTION MODE MANAGEMENT

Dolby Laboratories Licens...

1. An apparatus for decoding a current block of image, the apparatus comprising:a decoder comprising one or more processing devices, the decoder configured to:
select an intra prediction mode, and
predict pixel values of the current block using the selected intra prediction mode,
wherein to select the intra prediction mode, the decoder is configured to:
a) determine an estimated prediction mode based on prediction modes of a first block adjacent and above the current block and a second block adjacent and left of the current block,
b) receive a first information indicating whether the estimated prediction mode is to be selected as the intra prediction mode of the current block,
c) receive a second information indicating an actual best prediction mode to be selected as the intra prediction mode of the current block when the estimated prediction mode is different from the actual best prediction mode, and
d) select either the estimated prediction mode or the actual best prediction mode in a set of prediction modes as the intra prediction mode, based on the first and second information,
wherein, if both the first block and the second block are not available, the estimated prediction mode is determined to be DC prediction mode regardless of the prediction mode of the second block,
when the selected intra prediction mode is the DC prediction mode and the first block is not available, all pixels of the current block are predicted to have a value equal to (I+J+K+L+2) right shifted by two bits, and
wherein I, J, K, and L are pixel values in an adjacent block immediately to the left of the current block.
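The DC fallback in the last two limitations is plain fixed-point arithmetic; a minimal sketch, assuming a 4×4 block and that I, J, K, L are the four left-neighbour pixel values as the claim states:

```python
def dc_predict_left_only(I, J, K, L):
    """All pixels of the current block take the rounded average of the
    four left-neighbour pixels: (I + J + K + L + 2) right-shifted by 2."""
    dc = (I + J + K + L + 2) >> 2
    return [[dc] * 4 for _ in range(4)]
```

The `+ 2` rounds to nearest before the divide-by-four shift, matching the claim's "(I+J+K+L+2) right shifted by two bits".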

US Pat. No. 10,368,065

SKIP MACROBLOCK CODING

Microsoft Technology Lice...

1. One or more computer-readable media storing computer-executable instructions for causing a computing system, when programmed thereby, to perform operations, wherein the one or more computer-readable media are selected from the group consisting of volatile memory, non-volatile memory, magnetic disk, CD-ROM, and DVD, the operations comprising:encoding plural video pictures of a video sequence to produce encoded data, the plural video pictures including plural predicted macroblocks, wherein each block of the plural predicted macroblocks is predicted from no more than one reference video picture, including:
selecting a coding mode from among plural available coding modes;
processing one or more skipped macroblocks among the plural predicted macroblocks, wherein each of the one or more skipped macroblocks uses causally predicted motion for the skipped macroblock based upon motion of one or more other predicted macroblocks around the skipped macroblock, and wherein each of the one or more skipped macroblocks lacks residual information; and
encoding skipped macroblock information for the one or more skipped macroblocks for signaling at a layer in bitstream syntax, wherein the skipped macroblock information indicates skipped/not skipped status, and wherein the encoding includes encoding the skipped macroblock information according to the coding mode selected from among the plural available coding modes; and
outputting the encoded data in a bitstream.

US Pat. No. 10,368,063

OPTICAL TEST DEVICE FOR A VEHICLE CAMERA AND TESTING METHOD

MAGNA ELECTRONICS INC., ...

1. A method of testing a camera for vision system for a vehicle, said method comprising:providing a camera configured for mounting and use on a vehicle, said camera having a field of view, wherein said camera is operable at selected ones of a plurality of register settings;
providing a test pattern in the field of view of the camera;
capturing image data with said camera;
wherein capturing image data comprises capturing at least two frames of image data using different register settings having noise filtering at a respective one of at least two levels between a maximum noise filtering and a minimum noise filtering;
measuring the signal to noise ratio for each of the at least two frames of captured image data;
measuring a texture value for each of the at least two frames of captured image data; and
selecting a register setting that is a compromise between noise reduction and texture preservation.
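The claim leaves the "compromise" selection open; one plausible reading, sketched here with a hypothetical min-max normalisation, scores each register setting by normalised SNR plus normalised texture and keeps the best:

```python
def pick_register_setting(frames):
    """frames: list of (setting_id, snr_db, texture) tuples, one per
    captured frame. Returns the setting balancing noise reduction
    against texture preservation (hypothetical scoring rule)."""
    snrs = [f[1] for f in frames]
    texs = [f[2] for f in frames]

    def norm(v, vs):
        lo, hi = min(vs), max(vs)
        return 0.0 if hi == lo else (v - lo) / (hi - lo)

    # maximise the sum of the two normalised metrics
    return max(frames, key=lambda f: norm(f[1], snrs) + norm(f[2], texs))[0]
```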

US Pat. No. 10,368,062

PANORAMIC CAMERA SYSTEMS

Facebook, Inc., Menlo Pa...

1. A method comprising:receiving, at an image processing system comprising a long term storage and a memory, a request to generate depth maps for each frame of a sequence of frames, wherein each frame comprises image content from a plurality of cameras of an image capture system stored in the long term storage of the image processing system;
wherein a pipeline comprising a series of steps is used to generate a depth map from image content, each step of the pipeline generating a result component based on a set of input components, wherein the pipeline comprises:
an image content processing step generating processed image content for a frame based on input image content associated with the frame; and
a depth generation step generating a depth map for a frame based on input processed image content associated with the frame;
and wherein at least one step of the pipeline uses input components from a plurality of frames;
generating a first depth map for a current frame using the pipeline, wherein components used to generate the first depth map are marked in the memory;
determining a set of unmarked components stored in the memory and removing the set of unmarked components from memory;
determining a set of marked components stored in the memory and unmarking each component of the set of marked components; and
advancing the current frame to the next frame in the sequence of frames.
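The last three steps amount to a per-frame mark-and-sweep over cached pipeline components: everything used to build the current depth map is marked, unmarked leftovers are evicted, and marks are cleared before advancing. A minimal sketch (class and method names are illustrative, not from the claim):

```python
class ComponentCache:
    """Mark-and-sweep cache of pipeline components held in memory."""

    def __init__(self):
        self.store = {}       # component key -> component data
        self.marked = set()   # keys used while generating the current frame

    def put(self, key, value):
        self.store[key] = value
        self.marked.add(key)          # newly generated components are marked

    def get(self, key):
        if key in self.store:
            self.marked.add(key)      # reuse across frames also marks
            return self.store[key]
        return None

    def sweep(self):
        """Remove unmarked components, then unmark survivors for the next frame."""
        for key in [k for k in self.store if k not in self.marked]:
            del self.store[key]
        self.marked.clear()
```

Components reused by a later frame (e.g. processed image content shared by neighbouring frames) survive the sweep; anything untouched for one full frame is dropped, bounding memory use.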

US Pat. No. 10,368,061

METHOD AND APPARATUS FOR HOLOGRAPHIC IMAGE PROJECTION

BAE Systems plc, London ...

1. A holographic projector apparatus comprising an electromagnetic radiation source communicably coupled to a control system and a three dimensional image projector for outputting a plurality of image signals representative of a generated image, the control system being configured to cause electromagnetic radiation to be applied to a plurality of sets of selected three-dimensional portions of a gaseous volume so as to heat and/or ionise gas within said selected portions of the gaseous volume, thereby to generate each set of selected three-dimensional portions of said gaseous volume such that at least one set of said selected three-dimensional portions of said gaseous volume is selectively orientated relative to and arranged to intersect a path of at least one of said image signals, and wherein said selected portions of each set of selected three-dimensional portions are spatially located together in a substantially unbroken three-dimensional configuration and configured to generate an electromagnetic radiation path modifying element for modifying the path of a respective image signal incident thereon to direct said image signal to a selected location within a viewing region for viewing by at least one viewer.

US Pat. No. 10,368,060

HEAD MOUNTED DISPLAY

DELTA ELECTRONICS, INC., ...

1. A head mounted display, comprising:a first light source configured to emit a first light;
a second light source configured to emit a second light;
an image output module configured to receive the first light and the second light, and to respectively generate a first image light and a second image light with corresponding image information;
a light turning prism configured to vary a propagating direction of the first light from the first light source to the image output module and vary a propagating direction of the second light from the second light source to the image output module, wherein the light turning prism has a first light-redirecting surface and a second light-redirecting surface extending to a region between the first and second light sources, a distance between the first and second light-redirecting surfaces decreases as the first and second light-redirecting surface are further away from the image output module, the first light-redirecting surface is configured to redirect the propagating direction of the second light in a reflecting manner and to allow the second image light to pass therethrough, the second light-redirecting surface is configured to redirect the propagating direction of the first light in the reflecting manner and to allow the first image light to pass therethrough, where the first light-redirecting surface is more proximal to the first light source than the second light-redirecting surface, and the second light-redirecting surface is more proximal to the second light source than the first light-redirecting surface;
a first eyepiece module configured to make the second image light image to a first target position; and
a second eyepiece module configured to make the first image light image to a second target position;
wherein the first light source is disposed between the light turning prism and the first eyepiece module, and the second light source is disposed between the light turning prism and the second eyepiece module.

US Pat. No. 10,368,059

METHOD AND APPARATUS FOR INDIVIDUALIZED THREE DIMENSIONAL DISPLAY CALIBRATION

Atheer, Inc., Mountain V...

39. An apparatus comprising:a first three-dimensional (3D) display operable to output a virtual object at a first coordinate in three-dimensional (3D) space to a first eye of a viewer;
a second 3D display operable to output the virtual object at a second coordinate in the 3D space to a second eye of the viewer, wherein the second coordinate is different than the first coordinate;
a first sensor configured to measure the first coordinate of an end-effector interacting with the virtual object relative to the first eye when the virtual object is displayed at the first coordinate;
a second sensor configured to measure the second coordinate of the end-effector interacting with the virtual object relative to the second eye when the virtual object is displayed at the second coordinate;
a processor coupled to the first 3D display, the second 3D display, the first sensor, and the second sensor, wherein the processor is configured to:
determine that the end-effector is pointing to a third coordinate on the first 3D display that is different than the first coordinate where the virtual object is displayed;
determine that the end-effector is pointing to a fourth coordinate on the second 3D display that is different than the second coordinate where the virtual object is displayed;
determine a first offset value between the first coordinate where the virtual object is located and the third coordinate where the end-effector is pointing to on the first display, wherein the first offset value indicates a difference in the first coordinate and the third coordinate;
determine a second offset value between the second coordinate where the virtual object is located and the fourth coordinate where the end-effector is pointing to on the second display, wherein the second offset value indicates a difference in the second coordinate and the fourth coordinate;
determine a fifth coordinate in the 3D space, wherein:
the fifth coordinate is the first coordinate adjusted by the first offset value so that the viewer perceives the virtual object as being located at the first coordinate on the first 3D display;
the first 3D display is to display the virtual object at the fifth coordinate on the first 3D display; and
determine a sixth coordinate in the 3D space, wherein:
the sixth coordinate is the second coordinate adjusted by the second offset value so that the viewer perceives the virtual object as being located at the second coordinate on the second 3D display; and
the second 3D display is to display the virtual object at the sixth coordinate on the second 3D display.
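The per-eye offset and adjustment reduce to coordinate arithmetic: the offset is the gap between where the object was displayed and where the end-effector actually pointed, and the adjusted coordinate compensates that gap. A sketch for one eye (function name and tuple representation are assumptions):

```python
def calibrate_eye(display_coord, pointed_coord):
    """Return (offset, adjusted): the offset between the displayed
    coordinate and the pointed-to coordinate, and the coordinate at
    which the object should be displayed so the viewer perceives it
    at display_coord."""
    offset = tuple(d - p for d, p in zip(display_coord, pointed_coord))
    adjusted = tuple(d + o for d, o in zip(display_coord, offset))
    return offset, adjusted
```

Running this once per eye yields the claim's fifth and sixth coordinates from the first/third and second/fourth coordinate pairs respectively.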

US Pat. No. 10,368,058

METHOD AND APPARATUS FOR EXTENDING BINOCULAR CAMERA POSITIONING RANGE

Beijing Pico Technology C...

1. An apparatus for extending binocular camera positioning range, comprising:a rotatable base, a positioning module, a judging module, and a controlling module;
the rotatable base is disposed on the binocular camera and configured to drive a lens of the binocular camera to rotate;
the positioning module is configured to obtain an image of a target to be positioned, and calculate, in real time, spatial coordinates of the target to be positioned in a field of vision of the binocular camera according to the image, which is collected by the binocular camera at a sampling frequency;
the judging module is configured to, according to the spatial coordinates of the target to be positioned calculated in real time, determine whether the target to be positioned will go out of the field of vision of the binocular camera soon; and
the controlling module is configured to, when the judging module determines that the target to be positioned will go out of the field of vision of the binocular camera soon, control the rotatable base to drive the lens of the binocular camera to rotate, and adjust a direction of the lens of the binocular camera so that the field of vision of the binocular camera always covers the target to be positioned;
wherein the judging module uses the following two solutions in combination to determine whether the target to be positioned will go out of the field of vision of the binocular camera soon:
solution one: a certain area is pre-demarcated in the field of vision of the binocular camera, and when the binocular camera detects that the target to be positioned has gone out of the pre-demarcated area, it is judged as going out of the field of vision soon; and
solution two: a movement speed and a movement trajectory of the target to be positioned are obtained according to collected spatial coordinates of the target to be positioned, the movement state of the target to be positioned is predicted based on the movement speed and the movement trajectory, and if it is predicted that the target to be positioned will go out of the field of vision of the binocular camera in a short time period in that movement state, it is judged as going out of the field of vision soon.
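The two solutions combine a static inner margin with a motion prediction; a minimal 2D sketch, where the constant-velocity model, the `horizon` look-ahead, and the axis-aligned bounds are all assumptions beyond the claim text:

```python
def will_leave_fov(coords, fov, safe, horizon=2.0, dt=1.0):
    """coords: recent (x, y) samples, newest last.
    fov / safe: ((min_x, min_y), (max_x, max_y)) for the outer field of
    vision and the pre-demarcated inner area. True means the target is
    judged as going out of the field of vision soon."""
    def inside(p, box):
        return all(lo <= c <= hi for c, lo, hi in zip(p, box[0], box[1]))

    x = coords[-1]
    if not inside(x, safe):            # solution one: left the demarcated area
        return True
    if len(coords) >= 2:               # solution two: extrapolate the motion
        v = [(a - b) / dt for a, b in zip(coords[-1], coords[-2])]
        pred = [c + vi * horizon for c, vi in zip(x, v)]
        if not inside(pred, fov):
            return True
    return False
```

Either signal alone triggers the controlling module to rotate the lens toward the target.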

US Pat. No. 10,368,057

SYNCHRONIZING DATA STREAMS

Amazon Technologies, Inc....

1. A system comprising:a first camera configured to generate depth data of an environment based on acquired depth images;
a second camera configured to acquire color images of the environment;
a computing device coupled to the first camera and the second camera, the computing device comprising:
a pulse-width-modulation (PWM) unit coupled to the first camera and the second camera and configured to generate a pulse and an interrupt at a first time;
one or more processors to receive the interrupt;
memory, coupled to the one or more processors;
a first driver stored in the memory and executable on the one or more processors to receive a depth image acquired by the first camera, the first camera configured to acquire the first depth image in response to receiving the pulse from the PWM unit;
a second driver stored in the memory and executable on the one or more processors to receive a color image acquired by the second camera, the second camera acquiring the color image in response to receiving the pulse from the PWM unit;
a first timestamp queue for storing, in the memory, timestamps for association with depth images acquired by the first camera;
a second timestamp queue for storing, in the memory, timestamps for association with color images acquired by the second camera;
an interrupt service stored in the memory and executable on the one or more processors to:
receive a call from the one or more processors in response to the one or more processors receiving the interrupt;
store, in response to receiving the call: (i) a first timestamp corresponding to the first time in the first timestamp queue, and (ii) a second timestamp corresponding to the first time in the second timestamp queue;
an application stored in the memory and executable on the one or more processors to:
receive the depth image from the first driver;
read the first timestamp from the first timestamp queue;
add metadata that is based on the first timestamp to the depth image;
receive the color image from the second driver;
read the second timestamp from the second timestamp queue; and
add metadata that is based on the second timestamp to the color image.
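The paired timestamp queues guarantee that a depth image and a color image triggered by the same PWM pulse are tagged with the same time, even though the two drivers deliver frames independently. A minimal sketch (class and method names are hypothetical):

```python
from collections import deque

class TimestampSync:
    """On each PWM interrupt the service pushes the same timestamp into
    both queues; each driver later pairs its frame with the head of its
    own queue, in arrival order."""

    def __init__(self):
        self.depth_ts = deque()   # first timestamp queue (depth camera)
        self.color_ts = deque()   # second timestamp queue (color camera)

    def on_interrupt(self, t):
        self.depth_ts.append(t)
        self.color_ts.append(t)

    def tag_depth(self, frame):
        return {"image": frame, "timestamp": self.depth_ts.popleft()}

    def tag_color(self, frame):
        return {"image": frame, "timestamp": self.color_ts.popleft()}
```

Because both queues are fed atomically from the same interrupt, matching frames stay synchronized even if one driver lags the other by several frames.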

US Pat. No. 10,368,056

DEPTH DATA DETECTION AND MONITORING APPARATUS

SHANGHAI PERCIPIO TECHNOL...

1. A depth data detection apparatus, comprising:an infrared coded projection system for projecting a textured infrared beam to a space to be measured to form randomly distributed infrared textures on an object to be detected in the space to be measured;
two infrared image sensors for respectively imaging the space to be measured so as to form two infrared textured images, the two infrared image sensors have a predetermined relative spatial position relationship therebetween, so that depth data of infrared textures relative to the two infrared image sensors can be determined based on a position difference of texture segment images correspondingly formed in the two infrared texture images by the same texture segment in the infrared textures and the predetermined relative spatial position relationship,
the infrared coded projection system comprises:
at least two infrared light generators for generating infrared light respectively;
an optical system, wherein the infrared light generated by the infrared light generator forms the textured infrared beam after passing through the optical system;
a controller for controlling and switching the at least two infrared light generators such that the at least two infrared light generators alternately generate infrared light;
wherein the infrared coded projection system has multiple operating modes, in different operating modes, the controller switches different infrared light generators into operation, and in each operating mode, a different infrared light generator projects textured infrared beams at a different projection angle and/or from a different position to the space to be measured, in order to form randomly distributed infrared texture on the object to be detected in the space to be measured;
wherein for each operating mode, the two infrared image sensors are configured to image the space to be measured respectively, to form two infrared textured images;
wherein the depth data detection apparatus further comprises:
a processor that is configured to:
acquire two infrared texture images obtained by using the two infrared image sensors in different operating modes,
for each operating mode, determine depth data of the infrared textures relative to the two infrared image sensors in the operating mode, based on the predetermined relative spatial position relationship between the two infrared image sensors and the position difference of the texture segment images correspondingly formed in the two infrared texture images by the same texture segment in the infrared textures,
fuse the depth data determined in multiple operating modes to obtain new depth data as final depth data of the object to be detected.

US Pat. No. 10,368,055

ACTOR-MOUNTED MOTION CAPTURE CAMERA

Two Pic MC LLC, Burbank,...

1. An external three-dimensional camera system comprising a plurality of head-mounted cameras configured to capture images of at least a portion of an actor's face from at least two different angles, wherein at least two head-mounted cameras of the plurality of head-mounted cameras are independently adjustable in position with respect to one another.

US Pat. No. 10,368,054

QUANTITATIVE THREE-DIMENSIONAL IMAGING OF SURGICAL SCENES

Intuitive Surgical Operat...

1. A device comprising:an endoscope; an image sensor array comprising at least three imaging sensors having coplanar overlapping fields of view, disposed to image fields of view adjacent to the endoscope,
wherein each imaging sensor includes a pixel array that is separate from the pixel arrays of other imaging sensors; and a light source disposed to illuminate the field of view; wherein the endoscope includes an elongated portion having a first end portion and a second end portion opposite the first end portion; wherein the image sensor array is disposed displaced from the first end portion of the endoscope, closer to the second end portion of the endoscope; the device further including: a light pipe disposed to transmit an image from a field of view adjacent the first end portion of the endoscope to the image sensor array displaced from the first end portion, closer to the second end portion of the endoscope.

US Pat. No. 10,368,053

STRUCTURED LIGHT ACTIVE DEPTH SENSING SYSTEMS COMBINING MULTIPLE IMAGES TO COMPENSATE FOR DIFFERENCES IN REFLECTIVITY AND/OR ABSORPTION

QUALCOMM Incorporated, S...

1. A device adapted to compensate for differences in surface reflectivity in an active depth sensing system using structured light, the device comprising:a receiver sensor configured to capture a first code mask image and a second code mask image of a scene onto which a structured light pattern is projected using a light source, the structured light pattern encoding a plurality of codewords, the first code mask image captured using a first exposure time and the second code mask image captured using a second exposure time; and
a processing circuit configured to:
detect a number of pixels of an undecodable region of the first code mask image, the undecodable region corresponding to a portion of the first code mask image in which no recognized codeword is detected;
determine the second exposure time for the receiver sensor to capture the second code mask image, wherein the second exposure time is based on the number of pixels and an indication representing a threshold percentage of pixels;
extract first pixels corresponding to first decodable codewords from the first code mask image and second pixels corresponding to second decodable codewords from the second code mask image;
generate a third code mask image that includes the first extracted pixels and the second extracted pixels; and
ascertain depth information for the scene based on a difference between a first position of a codeword in the structured light pattern and a second position of the codeword detected in the third code mask image.
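The merge in the third and fourth limitations can be sketched per pixel: keep pixels whose codewords decoded in the first exposure, and fall back to the second exposure elsewhere. The `decodable` predicate is a simplification I am assuming for illustration; a real system decodes codeword windows, not single pixels:

```python
def merge_code_masks(mask_first, mask_second, decodable):
    """Build the third code mask image from two exposures: take each
    pixel from the first-exposure mask where it belongs to a recognized
    codeword, otherwise from the second-exposure mask."""
    return [
        [a if decodable(a) else b for a, b in zip(row_a, row_b)]
        for row_a, row_b in zip(mask_first, mask_second)
    ]
```

The composite mask then covers both highly reflective surfaces (captured at the short exposure) and absorptive ones (recovered at the longer exposure) before depth is triangulated from codeword displacement.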

US Pat. No. 10,368,052

DYNAMIC DISTRIBUTION OF THREE-DIMENSIONAL CONTENT

Comcast Cable Communicati...

1. A system comprising:a computing device;
a first device located remotely from the computing device; and
a second device located remotely from the computing device,
wherein the computing device is configured to:
send, to at least the first device and the second device, a first segment of a first portion of multi-dimensional multimedia content;
receive an indication that, for subsequent segments of the multi-dimensional multimedia content, the first portion of the multi-dimensional multimedia content and a second portion of the multi-dimensional multimedia content should be sent to the first device; and
send, based on the indication and to at least the first device and the second device, a subsequent segment of the first portion of the multi-dimensional multimedia content and a subsequent segment of the second portion of the multi-dimensional multimedia content;
wherein the first device is configured to:
receive the subsequent segment of the first portion of the multi-dimensional multimedia content and the subsequent segment of the second portion of the multi-dimensional multimedia content; and
cause output of a combination of the subsequent segment of the first portion of the multi-dimensional multimedia content and the subsequent segment of the second portion of the multi-dimensional multimedia content; and
wherein the second device is configured to:
receive the subsequent segment of the first portion of the multi-dimensional multimedia content and the subsequent segment of the second portion of the multi-dimensional multimedia content; and
cause output of the subsequent segment of the first portion of the multi-dimensional multimedia content.

US Pat. No. 10,368,051

3D-HEVC INTER-FRAME INFORMATION HIDING METHOD BASED ON VISUAL PERCEPTION

Ningbo University, Ningb...

1. A 3D-HEVC (Three Dimensional High Efficiency Video Coding) inter-frame information hiding method based on visual perception comprising steps of information embedding and information extraction, wherein:the step of information embedding comprises:
(1A) at an information embedding terminal, taking Sorg as an original stereo video, recording a left view color video of the Sorg as Lorg, recording a right view color video of the Sorg as Rorg, and taking W as secret information to be embedded, wherein: W is a binary number which contains nW bits, W=wnW wnW−1 … wi … w2 w1, a width of both a left view color image of the Lorg and a right view color image of the Rorg is M, a height thereof is N, both the M and the N are divisible by 64, a total frame number of both all left view color images of the Lorg and all right view color images of the Rorg is F, here, F≥1, nW is an integer and
wnW, wnW−1, …, wi, …, w2, w1 respectively represent a value of a (nW)th bit, a value of a (nW−1)th bit, …, a value of an ith bit, …, a value of a second bit and a value of a first bit, each of the wnW, wnW−1, …, wi, …, w2, w1 is 0 or 1, 1≤i≤nW;(1B) obtaining a stereo saliency image of each left view color image of the Lorg through a stereo image saliency model, recording a stereo saliency image of a jth left view color image of the Lorg as Lorg,ju, calculating an otsu threshold of the stereo saliency image of each left view color image of the Lorg, and recording the otsu threshold of the Lorg,ju as yjL, wherein 1≤j≤F,
also, obtaining a stereo saliency image of each right view color image of the Rorg through the stereo image saliency model, recording a stereo saliency image of a jth right view color image of the Rorg as Rorg,ju, calculating an otsu threshold of the stereo saliency image of each right view color image of the Rorg, and recording the otsu threshold of the Rorg,ju as yjR;
(1C) dividing the stereo saliency image of each left view color image of the Lorg into non-overlapped
image blocks each of which has a size of 64×64, recording a kth image block of the Lorg,ju as Borg,j,kL, calculating a mean value of pixel values of all pixels of each image block of the stereo saliency image of each left view color image of the Lorg, recording the mean value of the pixel values of all the pixels of the Borg,j,kL as qj,kL, determining whether each image block of the stereo saliency image of each left view color image of the Lorg is a salient block or a non-salient block according to the mean value of the pixel values of all the pixels of each image block of the stereo saliency image of each left view color image of the Lorg and the otsu threshold of the stereo saliency image of each left view color image of the Lorg, wherein: if the qj,kL is larger than or equal to the yjL, the Borg,j,kL is determined to be the salient block, if the qj,kL is smaller than the yjL, the Borg,j,kL is determined to be the non-salient block, here,
also, dividing the stereo saliency image of each right view color image of the Rorg into non-overlapped
image blocks each of which has a size of 64×64, recording a kth image block of the Rorg,ju as Borg,j,kR, calculating a mean value of pixel values of all pixels of each image block of the stereo saliency image of each right view color image of the Rorg, recording the mean value of the pixel values of all the pixels of the Borg,j,kR as qj,kR, determining whether each image block of the stereo saliency image of each right view color image of the Rorg is a salient block or a non-salient block according to the mean value of the pixel values of all the pixels of each image block of the stereo saliency image of each right view color image of the Rorg and the otsu threshold of the stereo saliency image of each right view color image of the Rorg, wherein: if the qj,kR is larger than or equal to the yjR, the Borg,j,kR is determined to be the salient block, if the qj,kR is smaller than the yjR, the Borg,j,kR is determined to be the non-salient block;(1D) generating a binary pseudorandom sequence which contains nW bits through logistic chaotic mapping, taking the binary pseudorandom sequence as a secret key and recording the secret key as E, here, E=enW enW−1 … ei … e2 e1, performing an XOR (exclusive OR) operation on a value of each bit of the W and a value of each corresponding bit of the E, obtaining an XOR result, taking the XOR result as encrypted information and recording the encrypted information as W′, here, W′=w′nW w′nW−1 … w′i … w′2 w′1, wherein: the enW, enW−1, …, ei, …, e2, e1 respectively represent a value of the (nW)th bit, a value of the (nW−1)th bit, …, a value of the ith bit, …, a value of the second bit and a value of the first bit of the E, each of the enW, enW−1, …, ei, …, e2, e1 is 0 or 1, w′nW, w′nW−1, …, w′i, …, w′2, w′1 respectively represent a value of the (nW)th bit, a value of the (nW−1)th bit, …, a value of the ith bit, …,
a value of the second bit and a value of the first bit of the W′, each of the w′nW, w′nW−1, …, w′i, …, w′2, w′1 is 0 or 1, w′i is an XOR value of the wi and the ei;
(1E) coding the Lorg and the Rorg frame by frame through a 3D-HEVC standard coding platform, defining a jth left view color image of the Lorg to be coded or a jth right view color image of the Rorg to be coded as a current frame and recording the current frame as Pj, wherein an initial value of the j is 1;
(1F) judging whether the Pj is a P-frame or a B-frame, wherein if it is, step (1G) is executed, if it is not, step (1I) is executed;
(1G) coding the Pj in coding-tree-unit, defining a kth coding-tree-unit to be coded of the Pj as a current coding block and recording the current coding block as Borg,j,k, wherein
here an initial value of the k is 1;(1H-a) reading coding quantization parameter of the Borg,j,k and recording the coding quantization parameter as QPorg,j,k, reading a value w?i? of a i?th bit of the W? and a value w?i?+1 of a (i?+1)th bit of the W?, transforming the w?i?+1 and the w?i? into decimal values and recording the decimal value as di?, here,
wherein an initial value of the i? is 1, 1?i??nW?1, and each of w?i?+1 and is 0 or 1;(1H-b) when the Pj is the jth left view color image of the Lorg, judging whether a remainder result of the QPorg,j,k to 4 is equal to the di?, wherein if the remainder result is not equal to the di?, when the Borg,j,kL is a salient block, the QPorg,j,k is downwardly modulated by the w?i? and the w?i?+1, so that coding quantization parameter embedded with secret information of the Borg,j,k is obtained and recorded as QPorg,j,k, and then step (1H-c) is executed; when the Borg,j,kL is a non-salient block, the QPorg,j,k is upwardly modulated by the w?i? and the w?i?+1, so that the coding quantization parameter embedded with secret information of the Borg,j,k is obtained and recorded as the QP?org,j,k and then the step (1H-c) is executed; if the remainder result is equal to the di?, the QPorg,j,k is directly recorded as the coding quantization parameter embedded with secret information of the Borg,j,k which is denoted as the QP?org,j,k, QP?org,j,k=QPorg,j,k, and then the step (1H-c) is executed, here, “=” is an assignment symbol in the QP?org,j,k=QPorg,j,k;
when the Pj is the jth right view color image of the Rorg, judging whether a remainder result of the QPorg,j,k to 4 is equal to di?, the wherein if the remainder result is not equal to the di?, when the Borg,j,kR is a salient block, the QPorg,j,k is downwardly modulated by the w?i? and the w?i?+1 so that coding quantization parameter embedded with secret information of the Borg,j,k is obtained and recorded as QP?org,j,k, and then the step (1H-c) is executed; when the Borg,j,kR is a non-salient block, the QPorg,j,k is upwardly modulated by the w?i? and the w?i?+1, so that the coding quantization parameter embedded with secret information of the Borg,j,k is obtained and recorded as the QP?org,j,k, and then the step (1H-c) is executed; if the remainder result is equal to the di?, the QPorg,j,k is directly recorded as the coding quantization parameter embedded with secret information of the Borg,j,k which is denoted as the QP?org,j,k, QP?org,j,k=QPorg,j,k and then the step (1H-c) is executed;
(1H-c) judging whether the QP?org,j,k is in a range of [0, 51], wherein if it is, step (1H-d) is executed; otherwise, when QP?org,j,k>51, the QPorg,j,k is downwardly modulated by the w?i? and the w?i?+1, the coding quantization parameter embedded with secret information QP?org,j,k of the Borg,j,k is obtained again, and then the step (1H-d) is executed; when QP?org,j,k<0, the QPorg,j,k is upwardly modulated by the w?i? and the w?i?+1, the coding quantization parameter embedded with secret information QP?org,j,k of the Borg,j,k is obtained again, and then the step (1H-d) is executed;
(1H-d) coding the Borg,j,k with the QP′org,j,k, completing a secret information embedding process of the Borg,j,k; after completing coding of the Borg,j,k, judging whether the Borg,j,k is a skip block, wherein if it is, step (1H-e) is directly executed, otherwise, i′=i′+2 is set and the step (1H-e) is executed, here, “=” is an assignment symbol in the i′=i′+2;
(1H-e) setting k=k+1, regarding a next coding-tree-unit to be coded of the Pj as a current coding block and recording the next coding-tree-unit to be coded as Borg,j,k, returning to the step (1H-a) and continuing till all coding-tree-units of the Pj are completely coded, then executing step (1I), wherein “=” is an assignment symbol in the k=k+1;
(1I) setting j=j+1, regarding a next left view color image to be coded of the Lorg or a next right view color image to be coded of the Rorg as a current frame and recording the current frame as Pj, returning to the step (1F) and continuing till all left view color images in the Lorg and all right view color images in the Rorg are completely coded, and obtaining video stream embedded with secret information, wherein “=” is an assignment symbol in the j=j+1; and
(1J) an information embedding terminal sending initial value information which generates the secret key E to an information extraction terminal;
the step of information extraction comprises:
(2A) defining the video stream embedded with secret information received at an information extraction terminal as a target video stream and recording the target video stream as str.bindec;
(2B) according to the initial value information sent from the information embedding terminal in step (1J), through the logistic chaotic mapping, the information extraction terminal generating another secret key E which is the same as the secret key E of the information embedding terminal;
(2C) parsing the str.bindec frame by frame, and defining a frame to be parsed in the str.bindec as a current frame;
(2D) judging whether the current frame is a P-frame or a B-frame, wherein if it is, step (2E) is executed, otherwise, step (2H) is executed;
(2E) parsing the current frame coding-tree-unit by coding-tree-unit, and defining a coding-tree-unit to be parsed in the current frame as a current parsing block;
(2F) judging whether the current parsing block is a skip block, wherein if it is, step (2G) is executed, otherwise, a coding quantization parameter embedded with secret information of the current parsing block is parsed and recorded as QP′dec, and then a remainder result of the QP′dec to 4 is calculated and recorded as d′dec, wherein the d′dec is 0, 1, 2 or 3, and then the decimal d′dec is transformed to a binary number, values of two bits extracted from the current parsing block are obtained, such that a secret information extraction process of the current parsing block is completed, and then the step (2G) is executed;
(2G) regarding a next coding-tree-unit to be parsed of the current frame as a current parsing block, and then returning to the step (2F) till all coding-tree-units of the current frame are completely processed, and then step (2H) is executed;
(2H) regarding a next frame to be parsed of the str.bindec as a current frame, and then returning to the step (2D) till all frames of the str.bindec are completely processed, such that secret information extraction is completed; and
(2I) defining extracted values of nW bits as encrypted information and recording the encrypted information as W′dec, here, W′dec=w′dec,nW w′dec,nW−1 . . . w′dec,i . . . w′dec,2 w′dec,1, and then performing an XOR (exclusive OR) operation on a value of each bit of the W′dec and a value of each corresponding bit of the E, obtaining an XOR result, taking the XOR result as decrypted secret information and recording the decrypted secret information as Wdec, here, Wdec=wdec,nW wdec,nW−1 . . . wdec,i . . . wdec,2 wdec,1, wherein: the w′dec,nW, w′dec,nW−1, . . . , w′dec,i, . . . , w′dec,2 and w′dec,1 respectively represent a value of the (nW)th bit, a value of the (nW−1)th bit, . . . , a value of the ith bit, . . . , a value of the second bit and a value of the first bit of the W′dec, and each of the w′dec,nW, w′dec,nW−1, . . . , w′dec,i, . . . , w′dec,2 and w′dec,1 is 0 or 1; the wdec,nW, wdec,nW−1, . . . , wdec,i, . . . , wdec,2 and wdec,1 respectively represent a value of the (nW)th bit, a value of the (nW−1)th bit, . . . , a value of the ith bit, . . . , a value of the second bit and a value of the first bit of the Wdec, and each of the wdec,nW, wdec,nW−1, . . . , wdec,i, . . . , wdec,2 and wdec,1 is 0 or 1.
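The mod-4 extraction of step (2F) and the XOR decryption of step (2I) can be sketched as follows. This is a minimal illustration only: the bit order within each two-bit pair is an assumption (the claim does not fix whether the high bit is written first), and the function names are invented for the sketch.

```python
def extract_bits(qp_values):
    """Step (2F) sketch: the remainder of each non-skip block's embedded
    quantization parameter QP'dec modulo 4 is the decimal d'dec (0..3),
    whose two-bit binary form yields the two hidden bits."""
    bits = []
    for qp in qp_values:
        d = qp % 4                          # d'dec in {0, 1, 2, 3}
        bits.extend([(d >> 1) & 1, d & 1])  # assumed high-bit-first order
    return bits


def xor_decrypt(encrypted_bits, key_bits):
    """Step (2I) sketch: XOR each extracted bit of W'dec with the
    corresponding bit of the secret key E to recover Wdec."""
    return [b ^ k for b, k in zip(encrypted_bits, key_bits)]
```

Because the embedder modulates QP by at most the two embedded bits, the decoder needs no side information beyond the chaotic key's initial value sent in step (1J).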

US Pat. No. 10,368,050

METHOD AND APPARATUS FOR DISTRIBUTION OF 3D TELEVISION PROGRAM MATERIALS

Google Technology Holding...

1. A method for distributing video program material, comprising:receiving a 3D video stream and metadata associated with the 3D video stream, wherein the metadata includes a 3D to 2D conversion option applicable to the video stream;
determining that a 3D to 2D conversion is to be performed based on the 3D to 2D conversion option;
in response to determining that the 3D to 2D conversion is to be performed, identifying a type of 3D to 2D conversion that is to be performed based on the 3D to 2D conversion option, wherein the type of 3D to 2D conversion is a value from a plurality of values that at least indicates an output resolution for a 2D video stream and a manner in which a left 3D view and a right 3D view of the 3D video stream are included within the 3D video stream;
determining that both the left 3D view and the right 3D view are encoded within each frame of the 3D video stream based on the value associated with the type of 3D to 2D conversion;
identifying the output resolution for the 2D video stream indicated by the type of 3D to 2D conversion; and
converting the 3D video stream to the 2D video stream with the output resolution indicated by the type of 3D to 2D conversion using either the left 3D view or the right 3D view.
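For the packing case this claim recites (both views encoded within each frame), one conversion type might look like the sketch below. The side-by-side layout and the duplicate-column upscaling are illustrative assumptions; the claim says only that the type value indicates the output resolution and the manner in which the two views are packed.

```python
def side_by_side_to_2d(frame, use_left=True):
    """Hedged sketch of one 3D-to-2D conversion type: both views are
    packed side by side in each frame, so a 2D stream is produced by
    keeping one half-width view and duplicating each column to restore
    the indicated output width."""
    w = len(frame[0])
    # Keep the selected half (left or right view).
    half = [row[:w // 2] if use_left else row[w // 2:] for row in frame]
    # Duplicate columns to reach the full output resolution.
    return [[px for px in row for _ in (0, 1)] for row in half]
```

A real converter would resample rather than duplicate pixels, but the structure — select one view, then rescale to the indicated resolution — matches the claim's steps.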

US Pat. No. 10,368,049

LIGHT FIELD DISPLAY CONTROL METHOD AND APPARATUS, AND LIGHT FIELD DISPLAY DEVICE

BEIJING ZHIGU RUI TUO TEC...

1. A light field display control method, comprising:determining at least one depth distribution sub-region of content according to a display depth of field (DoF) range of a light field display device and depth distribution information of the content, wherein each depth distribution sub-region of the at least one depth distribution sub-region is located outside the display DoF range; and
adjusting a focal length of a first lenslet according to at least the display DoF range and the depth distribution sub-region, wherein the first lenslet is a lenslet that is in a lenslet array of the light field display device and affects display of a first object, and the first object is a part, which is located in the depth distribution sub-region, of the content,
wherein, adjusting the focal length of the first lenslet includes:
determining, according to an expected focal length of the first lenslet, a phase difference that is formed after incident light passes through different parts of the first lenslet;
determining, according to a mapping relationship between phase differences and external fields, an external field corresponding to the phase difference; and
changing, by means of the external field, the phase difference that is formed after the incident light passes through the different parts of the first lenslet to adjust the focal length of the first lenslet.

US Pat. No. 10,368,047

SIX-DEGREE OF FREEDOM VIDEO PLAYBACK OF A SINGLE MONOSCOPIC 360-DEGREE VIDEO

ADONE INC., San Jose, CA...

1. A non-transitory computer storage medium storing computer-useable instructions that, when used by one or more computing devices, cause the one or more computing devices to perform operations for providing six-degree of freedom viewing of a monoscopic 360-degree video, the operations comprising:recover, from a monoscopic 360-degree video of a subject scene, a sparse three-dimensional geometric representation of the subject scene and a camera motion path;
generate a dense three-dimensional geometric representation of the subject scene based at least in part on the recovered sparse three-dimensional geometric representation of the subject scene and the recovered camera motion path; and
synthesize at least one novel viewpoint of the subject scene based at least in part on a portion of a provided frame of the monoscopic 360-degree video, at least a portion of the dense three-dimensional representation of the subject scene, and obtained motion data.

US Pat. No. 10,368,046

METHOD AND APPARATUS FOR GENERATING A THREE DIMENSIONAL IMAGE

KONINKLIJKE PHILIPS N.V.,...

1. An apparatus for generating an output three dimensional image, the apparatus comprising:a first image generating circuit arranged to generate an intermediate three dimensional image, the intermediate three dimensional image comprising a plurality of regions,
wherein the plurality of regions are spatial subdivisions of the intermediate three dimensional image,
wherein the first image generating circuit is arranged to generate a number of image blocks of pixel values for the plurality of regions,
wherein the number of image blocks is different for at least two regions of the plurality of regions, and
wherein each image block comprises pixel values for a group of pixels, the group of pixels corresponding to a view direction;
a second image generating circuit arranged to generate the output three dimensional image, the output three dimensional image comprising a number of view images from the intermediate three dimensional image, wherein each of the number of view images corresponds to a view direction; and
an adaptor arranged to adapt a number of image blocks with different viewing directions for at least a first region,
wherein the at least first region is one of the plurality of regions, and
wherein the adaptation is in response to a property of at least one of the intermediate three dimensional image and a representation of a three dimensional scene from which the first image generating circuit is arranged to generate the intermediate three dimensional image.

US Pat. No. 10,368,045

SYSTEM AND METHOD FOR PRESENTING VIRTUAL REALITY CONTENT TO A USER

Visionary VR, Inc., Los ...

1. A system for presenting content to a user, the system comprising:one or more sensors that generate output signals conveying information related to a view direction of the user, the view direction of the user corresponding to a physical direction toward which the gaze of the user is directed;
a display that presents the content to the user, wherein presentation of the content via the display visually simulates virtual objects superimposed within a real world view of the physical space determined by the view direction of the user via the display, wherein the view direction toward which the gaze of the user is directed corresponds to an orientation of the display, and wherein the content includes multiple fields that are viewable and fixed spatially with respect to the physical space and the positions of the fields in the virtual space are independent of the view direction of the user, the multiple fields including a first viewable field and a second viewable field; and
one or more physical computer processors configured by computer readable instructions to:
determine the view direction of the user based on the output signals;
identify a change in the view direction of the user from the first viewable field to the second viewable field based on the output signals;
cause a change in flow of the content being presented via the display responsive to identifying the change in the view direction of the user, wherein the change in flow of the content alters the rhythm, pace, and/or style of the content; and
cause the display to provide a sensory cue to the user responsive to identifying the change in the view direction of the user.

US Pat. No. 10,368,044

DISPLAYING DCI AND OTHER CONTENT ON AN ENHANCED DYNAMIC RANGE PROJECTOR

Dolby Laboratories Licens...

1. A multi-modulation projector display system, said display system comprising:a light source;
a controller;
a first modulator, said first modulator being illuminated by said light source and said first modulator comprising a plurality of analog mirrors to modulate light from the light source;
a second modulator, said second modulator being illuminated by light from said first modulator and capable of modulating light from said first modulator, and said second modulator comprising a plurality of mirrors; said controller further comprising:
a processor;
a memory, said memory associated with said processor and said memory further comprising processor-readable instructions, such that when said processor reads the processor-readable instructions, causes the processor to perform the following instructions:
receiving input image data, said image data comprising at least one highlight feature;
determining whether the input image data comprises DCI image data;
processing the input image data as EDR image data upon detecting EDR image data;
setting the luminance of the first modulator as a function of a ratio of DCI max luminance to max luminance of the EDR projector system upon detecting DCI image data;
performing dynamic range processing on the DCI image data by creating a blurred halftone image on the first modulator; and
rendering the dynamic range processed DCI image data from the first modulator on the second modulator.

US Pat. No. 10,368,043

PROJECTOR AND ILLUMINATION SYSTEM THEREOF

Coretronic Corporation, ...

1. A projector, comprising:an illumination system, comprising:
an excitation light source group, comprising at least one first light emitting element, wherein the first light emitting element is configured to provide a first beam;
a wavelength conversion element, having a reflective area and a wavelength conversion area, wherein the reflective area and the wavelength conversion area are configured to cut into a transmission path of the first beam by turns; and
a light combining element, disposed between the excitation light source group and the wavelength conversion element and having at least one first dichroic portion, at least one first reflective portion, and a first light combining surface facing the first light emitting element, wherein the first dichroic portion corresponds to a first quadrant of the first light combining surface and the first reflective portion corresponds to a third quadrant of the first light combining surface,
wherein the first beam is configured to penetrate through the first dichroic portion and to be transmitted to the wavelength conversion element,
wherein the reflective area is configured to reflect the first beam to the first reflective portion,
wherein the wavelength conversion area is configured to convert the first beam into an excited beam and reflect the excited beam to the light combining element,
wherein the first dichroic portion and the first reflective portion of the light combining element are configured to reflect the excited beam, and the first reflective portion of the light combining element is configured to reflect the first beam from the reflective area, so that the first beam and the excited beam constitute an illumination beam;
a light engine module, comprising a light valve, wherein the light valve is located on a transmission path of the illumination beam and is configured to convert the illumination beam into an image beam; and
a projection lens, located on a transmission path of the image beam, wherein the image beam becomes a projection beam after passing through the projection lens.

US Pat. No. 10,368,042

LIGHT SOURCE UNIT AND PROJECTION-TYPE DISPLAY

SONY CORPORATION, Tokyo ...

1. A light source device, comprising:at least one light source section of a plurality of light source sections that includes a first light source and a second light source that are configured to emit rays of a same color of a plurality of colors;
a light quantity detector configured to receive each of the emitted rays of the plurality of colors; and
a control section configured to:
control a light emission timing of each of the first light source and the second light source;
measure, at first different timings, a first light quantity of the rays emitted by one of the first light source or the second light source, based on a detection result of the light quantity detector; and
extinguish other of the first light source or the second light source for a first time period for the measurement,
wherein the first time period is less than a light emission period of the rays emitted by each of the first light source and the second light source.

US Pat. No. 10,368,041

IMAGING DEVICE, IMAGING SYSTEM, AND IMAGE PROCESSING METHOD

CANON KABUSHIKI KAISHA, ...

1. An imaging device comprising:a pixel unit in which at least one first pixel and a plurality of second pixels are arranged in a matrix, wherein the plurality of second pixels are arranged around the first pixel, and each of the plurality of second pixels is able to provide more brightness information than is provided by the first pixel;
a directional property determination unit that determines a direction of an intensity distribution based on differences among values of the plurality of second pixels;
a correlation value calculation unit that calculates a correlation value of the values of the plurality of second pixels; and
an interpolation processing unit that, when the correlation value is greater than a threshold that is based on a noise signal intensity in the values of the plurality of second pixels, interpolates a value of the first pixel based on the direction of the intensity distribution from the values of the plurality of second pixels and, when the correlation value is less than or equal to the threshold, interpolates the value of the first pixel from the values of the plurality of second pixels without depending on the direction of the intensity distribution.
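The threshold-gated directional interpolation can be sketched as follows. The 4-neighbor layout, the range-based correlation value, and the 3-sigma threshold are assumptions made for illustration, not the patent's exact definitions.

```python
def interpolate_first_pixel(up, down, left, right, noise_sigma):
    """Sketch of the claim's rule: interpolate the first pixel from the
    surrounding second pixels, using direction only when the signal
    clearly exceeds the noise floor."""
    vals = [up, down, left, right]
    correlation = max(vals) - min(vals)  # spread of second-pixel values
    threshold = 3.0 * noise_sigma        # threshold tied to noise intensity
    if correlation > threshold:
        # Interpolate along the direction of the intensity distribution:
        # a small vertical difference suggests averaging up/down, etc.
        if abs(up - down) < abs(left - right):
            return (up + down) / 2.0
        return (left + right) / 2.0
    # At or below threshold: direction-independent average.
    return sum(vals) / 4.0
```

Gating on the noise-based threshold prevents the interpolator from locking onto a spurious "edge" that is really just sensor noise in a flat region.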

US Pat. No. 10,368,040

DOORBELL CAMERA WITH BATTERY AT CHIME

GOOGLE LLC, Mountain Vie...

1. A doorbell camera system, comprising:a camera doorbell subsystem coupled to receive power from an alternating current (AC) power source, the camera doorbell subsystem comprising: a doorbell button, a camera module, a light emitting diode (LED), and a first processor; and
a chime subsystem coupled to receive power from the AC power source, the chime subsystem comprising: a current compensation network, a second processor, a battery, and chime driver circuitry operative to be coupled to a chime;
wherein during a doorbell button press event, the chime subsystem consumes a first level of current, and wherein during a standby mode in which there is no doorbell button press event, the chime subsystem consumes a second level of current, wherein the second level of current is greater than the first level of current.

US Pat. No. 10,368,039

VIDEO MONITORING SYSTEM

Stryker Corporation, Kal...

1. A bed system for a patient care facility comprising:a bed comprising a base, a plurality of wheels coupled to the base, a brake for braking the wheels, and a patient support surface supported on the base and configured to support a patient thereon;
a camera positioned within a room of the patient care facility and configured to capture images of a location within the room and output signals representative of the images;
a database containing shape information regarding a shape of the bed and identity information regarding identities of staff of the patient care facility; and
a computer device in communication with the camera, the database, and the bed, the computer device configured to use the signals and the shape information to determine when the bed is moved into the location, to use the signals and the identity information to identify a staff member accompanying the bed when the bed is moved into the location, to use the signals and the identity information to determine when the staff member departs from the room, and to automatically send a signal to the bed causing the bed to activate the brake on the bed if the staff member does not activate the brake.

US Pat. No. 10,368,038

MONITORING CAMERA

Sony Corporation, Tokyo ...

1. A monitoring camera for a surveillance system comprising:a horizontally movable unit for rotation through horizontal angles, the horizontally movable unit disposed on a base;
a vertically movable unit disposed in the horizontally movable unit for rotation through vertical angles, wherein the vertically movable unit includes:
an image capturing unit configured to capture an image of a subject therein, said image capturing unit having a zooming function,
a curved board with a plurality of light sources including a first light source configured to emit illuminating radiation and a second light source configured to emit illuminating radiation,
a motor configured to change a position of the curved board,
a lens unit configured to apply said illuminating radiation in a direction which is substantially identical to a direction in which said image capturing unit captures the image, and
an irradiation control unit configured to control the motor to variably set, while zooming, an irradiation angle of said illuminating radiation to irradiate an area,
wherein the irradiation angle is set based on the position of the curved board, and the position of the curved board is changed depending on a zoom ratio of the image capturing unit,
wherein the area as irradiated by the first and second light sources is larger than a captured area by the image capturing unit, and
wherein an optic axis of the first light source and an optic axis of the second light source are parallel to an optic axis of the image capturing unit; and
the monitoring camera further comprising a communication interface configured to transmit the captured image to a remote site.

US Pat. No. 10,368,037

PUBLIC SAFETY CAMERA MONITORING SYSTEM AND METHOD

Purdue Research Foundatio...

10. A system for determining a travel path, comprising:a network of at least one camera;
a communication hub coupled to the network of at least one camera;
at least one electronic communication device;
a data processing system coupled to the communication hub, the data processing system comprising one or more processors configured to:
(a) establish an interface with a 3rd-party mapping system via the electronic communication device,
(b) receive a start point and an end point by a user on the interface for a preselected zone,
(c) generate input data for the 3rd-party mapping system based on the start and end points,
(d) provide the input data to the 3rd-party mapping system,
(e) receive output data from the 3rd-party mapping system associated with a path from the start point to the end point,
(f) identify waypoints in the output data,
(g) identify a camera from a predetermined list of cameras of the preselected zone closest to a line between each of the two consecutive waypoints,
(h) determine a center of a viewing angle of the identified camera from a list of predetermined viewing angles for each of the cameras in the list of cameras of the preselected zone,
(i) calculate a path from the start point through each of the viewing angle centers to the end point,
(j) set the view angle center between each of the two consecutive waypoints as a new start point and iterating steps (c) through (i) until the end point is one of the two consecutive waypoints, at which iteration the incremental path is calculated from a viewing angle center representing the last pair of consecutive waypoints to the end point, and
(k) display the calculated path on the electronic communication device.
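The iteration in steps (c) through (j) can be sketched as a loop that repeatedly snaps each routed leg to the nearest camera's viewing-angle center. `route_fn` and `nearest_camera_fn` are hypothetical stand-ins for the 3rd-party mapping system and the preselected-zone camera list, respectively.

```python
def camera_guided_path(start, end, route_fn, nearest_camera_fn):
    """Hedged sketch of steps (c)-(j): route from the current start toward
    the end, replace each leg with the viewing-angle center of the camera
    closest to that leg, and advance until the route reaches the end.
    route_fn(a, b) -> list of waypoints from a to b;
    nearest_camera_fn(a, b) -> viewing-angle center of the camera closest
    to the segment between consecutive waypoints a and b."""
    path = [start]
    current = start
    while current != end:
        waypoints = route_fn(current, end)
        a, b = waypoints[0], waypoints[1]
        if b == end:
            # Final leg: connect the last viewing-angle center to the end.
            path.append(end)
            break
        center = nearest_camera_fn(a, b)
        path.append(center)
        current = center  # step (j): center becomes the new start point
    return path
```

The effect is a route biased to pass through camera coverage, so a traveler stays observable along the computed path.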

US Pat. No. 10,368,036

PAIR OF PARKING AREA SENSING CAMERAS, A PARKING AREA SENSING METHOD AND A PARKING AREA SENSING SYSTEM

VIVOTEK INC., New Taipei...

1. A pair of parking area sensing cameras, comprising:a first parking area sensing camera configured to monitor a first parking area and generate a parking area status of the first parking area;
a second parking area sensing camera configured to monitor a second parking area different from the first parking area and generate a parking area status of the second parking area; and
wherein the first parking area sensing camera is configured to display the parking area status of the second parking area, wherein the parking area status of the second parking area is available to park;
wherein the first parking area sensing camera is connected to the second parking area sensing camera through an Ethernet connection, and the first parking area sensing camera is configured to receive the parking area status of the second parking area directly from the second parking area sensing camera;
wherein the first parking area sensing camera is mounted above and closer to the second parking area than to the first parking area, and the second parking area sensing camera is mounted above and closer to the first parking area than to the second parking area;
wherein the first parking area and the second parking area are separate from each other by a driving lane in a parking lot;
wherein the second parking area sensing camera is configured to receive the parking area status of the first parking area, and the second parking area sensing camera is configured to display the parking area status of the first parking area on an LED light; and
wherein the first parking area sensing camera is configured to display the parking area status of the second parking area on an LED light.

US Pat. No. 10,368,035

MONITORING SYSTEM, MONITORING METHOD, AND MONITORING PROGRAM

NEC CORPORATION, Minato-...

1. A monitoring system comprising:a video acquirer that acquires a video;
a detector that detects entering of a target object into a blind spot generated by a shielding object in the video and appearance of the target object from the blind spot; and
a notifier that makes a notification that an abnormality occurred behind the shielding object if the target object does not appear from the blind spot even after a first time elapses since entering of the target object into the blind spot.
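The notifier's timing rule reduces to a simple watchdog: arm a timer when the object enters the blind spot, disarm it on reappearance, and fire if the first time elapses first. A minimal sketch, with injected timestamps and illustrative names:

```python
class BlindSpotMonitor:
    """Sketch of the claim's logic: flag an abnormality if a tracked
    object enters the blind spot behind a shielding object and fails to
    reappear within `first_time` time units."""

    def __init__(self, first_time):
        self.first_time = first_time
        self.entered_at = None  # None means no object is in the blind spot

    def on_enter_blind_spot(self, t):
        self.entered_at = t

    def on_appear(self, t):
        self.entered_at = None  # object reappeared; cancel the watchdog

    def check(self, now):
        """Return True (notify) if the object is overdue."""
        return (self.entered_at is not None
                and now - self.entered_at > self.first_time)
```

Passing timestamps explicitly (rather than reading a clock inside the class) keeps the rule deterministic and easy to test against recorded video.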

US Pat. No. 10,368,034

METHODS AND SYSTEMS FOR CONNECTING CALLER AND CALLEE CLIENT DEVICES

FACEBOOK, INC., Menlo Pa...

1. A method, comprising:generating a request to engage in a video communication between a caller client device and a callee client device;
identifying a time window during which the caller remains available, the time window designated by the caller client device;
transmitting the request to engage in the video communication, the request remaining open during the time window and configured to allow the callee client device to accept the call at any time during the time window to transition directly into a synchronous video call;
collecting handshake information for connecting the caller client device with the callee client device while the time window remains open and before the request is accepted;
sharing at least one of audio or video from the caller client device to the callee client device during the time window;
receiving an acceptance of the request during the time window; and
connecting the caller client device and the callee client device in direct response to the acceptance while the time window remains open.

US Pat. No. 10,368,033

DISPLAY DEVICE AND VIDEO COMMUNICATION TERMINAL

BOE TECHNOLOGY GROUP CO.,...

1. A display device, comprising:a transparent display panel, configured to display an image; and
a camera, configured to acquire an image of an object in front of the transparent display panel;
wherein the transparent display panel comprises a plurality of subpixel units arranged in an array form, and each of the subpixel units comprises a colorless transparent region and a light-emitting region which emits light at the front side of the transparent display panel;
an area of the colorless transparent region is larger than an area of the light-emitting region; and
a second position adjustment component configured to move the camera automatically, wherein the second position adjustment component comprises:
a first position detection unit, configured to acquire a position of a face of the object;
a second position detection unit, configured to acquire a position of a face of a person which is displayed on the transparent display panel; and
a position control unit, configured to move the camera, to make the face of the object, the face of the person which is displayed on the transparent display panel and the camera to be at an identical straight line.

US Pat. No. 10,368,032

EYE CONTACT ENABLING DEVICE FOR VIDEO CONFERENCING

1. A method for conducting a video conference comprising:providing a first computing system associated with a local user, wherein the first computing system comprises a processor, a display, and an image capture device embedded in the display, wherein the image capture device is surrounded by pixels of the display, and wherein the first computing system is in communication with a second computing system associated with a distant user during the video conference;
the processor receiving a first image of the distant user involved in the video conference;
the processor determining a position on the first image associated with a focal point, on the first image, of the local user involved in the video conference;
the processor further locating a portion of the first image, near the focal point, that can obscure the image capture device embedded in the display;
the processor positioning a user interface including the first image on the display so that the position of the focal point displayed on the display is in physical proximity to the image capture device embedded in the display and within the portion of the first image that obscures the image capture device, and wherein the first image is displayed on the display;
with the local user's gaze on the focal point and into the embedded image capture device, the image capture device capturing a second image of the local user; and
the first computing system sending the second image to the second computing system associated with the distant user, wherein a gaze of the local user appears to be at the distant user in the second image.

US Pat. No. 10,368,031

SYSTEMS AND METHODS TO CONTROL JUDDER VISIBILITY

Dolby Laboratories Licens...

1. A method to control judder visibility, the method comprising:providing, by a computer, at least two input frames comprising a first frame and a second frame;
estimating, by a computer, an interpolation map based on the at least two input frames, thereby obtaining an estimated interpolation map, wherein the estimated interpolation map specifies a temporal interpolation position for at least one pixel of the at least two input frames;
interpolating, by a computer, at least one additional frame, based on the at least two input frames and the estimated interpolation map, thereby obtaining at least one interpolated frame, wherein at least one pixel of the at least one interpolated frame corresponds to at least one pixel at the time specified by the interpolation map of that at least one pixel; and
identifying, by a computer, for the at least one pixel of the at least two input frames, an amount of judder, wherein the estimating, by a computer, an interpolation map is based also on the amount of judder, and wherein identifying an amount of judder comprises:
generating a luminance change map and a contrast change map using the first frame and the second frame by calculating temporal differences between the first frame and the second frame for luminance and contrast; and
generating a judder map using the luminance change map and the contrast change map, wherein the judder map comprises judder information for the first frame.
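The two change maps and their combination can be sketched as below. Frames are taken as 2-D lists of luminance values, "contrast" is approximated by a local 3x3 range, and the product used to combine the maps is an illustrative choice; the claim specifies only that the judder map is generated from the luminance and contrast change maps.

```python
def judder_maps(frame1, frame2):
    """Sketch: per-pixel temporal differences of luminance and of local
    contrast between two frames, combined into a judder map."""
    h, w = len(frame1), len(frame1[0])

    # Luminance change map: temporal difference of pixel values.
    luminance_change = [[abs(frame2[y][x] - frame1[y][x]) for x in range(w)]
                        for y in range(h)]

    def local_contrast(f, y, x):
        # Local contrast approximated as the range of a 3x3 neighborhood.
        vals = [f[j][i] for j in range(max(0, y - 1), min(h, y + 2))
                        for i in range(max(0, x - 1), min(w, x + 2))]
        return max(vals) - min(vals)

    # Contrast change map: temporal difference of local contrast.
    contrast_change = [[abs(local_contrast(frame2, y, x) -
                            local_contrast(frame1, y, x))
                        for x in range(w)] for y in range(h)]

    # Judder map: high only where both luminance and contrast change.
    return [[luminance_change[y][x] * contrast_change[y][x]
             for x in range(w)] for y in range(h)]
```

A uniform brightness shift (luminance changes, contrast does not) therefore yields zero judder, which matches the intuition that judder is visible motion, not a global fade.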

US Pat. No. 10,368,030

OPTIMIZED HISTOGRAM COMPUTATION FOR ADAPTIVE BACK LIGHT ALGORITHM DURING CAMERA PREVIEW AND VIDEO PLAYBACK

QUALCOMM Incorporated, S...

1. A method for image processing at a device, comprising:capturing, at a sensor of the device, an image frame including frame composition data;
generating histogram metadata for the image frame;
encoding the histogram metadata as supplemental enhancement information (SEI) for the image frame;
receiving, at a display post-processing module of the device, the image frame and the histogram metadata;
computing, by the display post-processing module of the device, a target display setting for the image frame based at least in part on the histogram metadata; and
outputting the image frame to a display based at least in part on the computed display setting.
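The histogram-metadata path of this claim can be sketched as two small steps: generate a per-frame histogram (the metadata that would travel in the SEI), then let a display post-processing stage derive a target setting from it. The percentile rule and the backlight floor below are illustrative assumptions; the claim does not specify how the target setting is computed.

```python
import numpy as np

def histogram_metadata(frame, bins=256):
    """Per-frame luma histogram, standing in for the SEI-carried metadata."""
    hist, _ = np.histogram(frame, bins=bins, range=(0, 256))
    return hist

def target_backlight(hist, percentile=0.99):
    """Hypothetical adaptive-backlight rule: scale to the brightness level
    below which `percentile` of the pixels fall."""
    cdf = np.cumsum(hist) / hist.sum()
    level = int(np.searchsorted(cdf, percentile))
    return max(level / 255.0, 0.05)  # floor avoids a fully dark display
```

A mostly dark frame thus yields a low backlight target without the display stage ever re-scanning the pixels, which is the point of carrying the histogram as metadata.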

US Pat. No. 10,368,029

ELECTRONIC DEVICE AND METHOD FOR PROJECTION OF DISPLAYED CONTENT

BlackBerry Limited, Wate...

1. A method for an electronic device, the method comprising:receiving one of a first and second projection-mode triggers, each projection-mode trigger having associated first and second projection-mode display parameters,
wherein the first projection mode trigger comprises sensing by a first sensor of the electronic device that the electronic device is placed in a heads-up-display (HUD) cradle and the display parameters associated with the first projection-mode trigger are configured for projection onto a reflective surface of the HUD display cradle, and
wherein the second projection mode trigger comprises sensing by a second sensor of the electronic device that the electronic device is moved from an initial position in which the electronic device is on a surface with the display of the electronic device facing downwards to a tilted position in which the electronic device is tilted up from the surface and the display parameters associated with the second projection-mode trigger are configured for projection onto the surface in the tilted position;
receiving, from an application, content information for display on a display of the electronic device;
in response to receiving the first projection mode trigger:
initiating a first projection mode at the electronic device in which the content information is altered based on the first display parameters associated with the first projection-mode trigger to generate first altered content information configured for projection onto the reflective surface of the HUD display cradle, and
displaying the first altered content information on the display of the electronic device; and
in response to receiving the second projection mode trigger:
initiating a second projection mode at the electronic device in which the content information is altered based on the second display parameters associated with the second projection-mode trigger to generate second altered content information configured for projection onto the surface in the tilted position, and
displaying the second altered content information on the display of the electronic device.

US Pat. No. 10,368,028

DETECTION OF AN ANALOG CONNECTION IN A VIDEO DECODER

STMicroelectronics (Alps)...

1. A video decoder comprising:a processing circuit configured to supply a composite digital video signal to a first analog output path and to provide a component digital video signal to a second analog output path;
wherein the second analog output path comprises first, second, and third analog output sub-paths, each of the first, second, and third analog output sub-paths comprising:
a digital to analog converter configured to receive a respective portion of the component digital video signal and to output a respective analog video signal;
an analog amplifier configured to receive and amplify the analog video signal;
an impedance matching circuit coupled to an output of the analog amplifier and configured to match impedance of a corresponding input terminal of a display device configured to display the output analog video signal;
a circuit configured to compare a voltage based on the amplified analog video signal to a reference signal and to generate a load connection detect signal based thereupon, the load connection detect signal indicating whether the video decoder is coupled to the display device; and
a voltage divider coupled between a node downstream of the impedance matching circuit and the circuit;
wherein the reference signal is a voltage of the analog video signal as output by the digital to analog converter; and
wherein the circuit is configured to generate the load connection detect signal as indicating that the video decoder is coupled to the display device where a voltage at a center tap of the voltage divider is less than the voltage of the reference signal, and to generate the load connection detect signal as indicating that the video decoder is not coupled to the display device where the voltage at the center tap of the voltage divider is more than the voltage of the reference signal.
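The detection rule of this claim reduces to a simple comparison: the divider tap sits below the DAC-output reference only when a display load pulls the divided voltage down. A minimal model, with resistor values chosen purely for illustration (the claim names no component values):

```python
def divider_tap(v_amp, r_top, r_bottom, r_load=None):
    """Voltage at the divider center tap; r_load models the display input
    impedance in parallel with r_bottom (None = no display attached)."""
    r_low = r_bottom if r_load is None else (r_bottom * r_load) / (r_bottom + r_load)
    return v_amp * r_low / (r_top + r_low)

def load_connected(tap_v, ref_v):
    # Per the claim: tap voltage below the reference -> display connected,
    # tap voltage above the reference -> no display connected.
    return tap_v < ref_v
```

With a 75-ohm load across the lower divider leg the tap drops well below the unloaded value, so comparing against the DAC output voltage separates the two cases.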

US Pat. No. 10,368,027

IMAGING APPARATUS INCLUDING UNIT PIXEL, COUNTER ELECTRODE, PHOTOELECTRIC CONVERSION LAYER, AND VOLTAGE SUPPLY CIRCUIT

PANASONIC INTELLECTUAL PR...

1. An imaging apparatus comprising:a pixel electrode;
a counter electrode facing the pixel electrode; and
a photoelectric conversion layer disposed between the pixel electrode and the counter electrode, the photoelectric conversion layer converting light incident on the photoelectric conversion layer into charge carriers; wherein:
the photoelectric conversion layer exhibits a first wavelength sensitivity characteristic in a wavelength range when a first voltage is applied between the pixel electrode and the counter electrode,
the photoelectric conversion layer exhibits a second wavelength sensitivity characteristic in the wavelength range when a second voltage which is different from the first voltage is applied between the pixel electrode and the counter electrode, the second wavelength sensitivity characteristic being different from the first wavelength sensitivity characteristic, and
a third voltage that is different from each of the first voltage and the second voltage is applied between the pixel electrode and the counter electrode during at least a part of a period in which a voltage applied between the pixel electrode and the counter electrode is changed from the first voltage to the second voltage or from the second voltage to the first voltage.

US Pat. No. 10,368,025

IMAGING ELEMENT, IMAGING APPARATUS, ITS CONTROL METHOD, AND CONTROL PROGRAM

CANON KABUSHIKI KAISHA, ...

1. An imaging element comprising:a pixel portion in which pixels, each for photoelectrically converting an optical image of an object and generating a pixel signal, are arranged in a matrix form;
a first converter that converts at least pixel signals of a first pixel group in the pixel portion into first digital signals;
a second converter that converts pixel signals of a second pixel group that is different from the first pixel group in the pixel portion into second digital signals;
a control information generator that generates control information of a photographing operation of the object by using the first digital signals;
an image data output portion that outputs the second digital signals as image data outside of the imaging element; and
a control information output portion that outputs the control information independently from the image data outside of the imaging element,
wherein a frame rate for outputting the image data by the image data output portion is different from a frame rate for outputting the control information by the control information output portion.

US Pat. No. 10,368,024

SOLID-STATE IMAGE SENSOR CAPABLE OF RESTRICTING DIGITAL SIGNAL PROCESSING OPERATION DURING TIME SENSITIVE AND HEAVY LOAD PERIODS, METHOD OF CONTROLLING THE SAME, ELECTRONIC DEVICE, AND STORAGE MEDIUM

Canon Kabushiki Kaisha, ...

1. A stacked-type solid-state image sensor including a first semiconductor layer in which an imaging pixel portion including a plurality of pixels arranged in a matrix and a driving circuit to drive the plurality of pixels included in the imaging pixel portion are arranged, and a second semiconductor layer in which an A/D converter configured to convert an analog signal output from each pixel of the imaging pixel portion into a digital signal and a digital signal processing circuit configured to process the digital signal are arranged, the image sensor comprising:a clock generating circuit that generates clock signals and supplies the clock signals to the driving circuit and the digital signal processing circuit, and
a restriction circuit that restricts the digital signal processing operation of the digital signal processing circuit which is controlled based on the clock signals generated by the clock generating circuit, wherein the restriction circuit restricts the digital signal processing operation of the digital signal processing circuit during a selection period of a vertical signal line of the imaging pixel portion or during the A/D converter converting an analog signal output from each pixel of the imaging pixel portion into a digital signal; and
wherein the restriction circuit restricts the digital signal processing operation of the digital signal processing circuit by applying a masking signal, masking clock edges of the clock signals supplied to the digital signal processing circuit during a predetermined period.

US Pat. No. 10,368,023

IMAGE SENSOR SUPPORTING VARIOUS OPERATING MODES AND OPERATING METHOD THEREOF

Samsung Electronics Co., ...

1. An image sensor, comprising:an active pixel sensor array comprising first to fourth pixel units sequentially arranged in a column, wherein each of the first to fourth pixel units includes a plurality of pixels which share a same floating diffusion region with each other, a first pixel group including the first and second pixel units is connected to a first column line, and a second pixel group including the third and fourth pixel units is connected to a second column line; and
a correlated double sampling circuit including first and second correlated double samplers configured to convert a first sense voltage sensed from a selected pixel of the first pixel group and a second sense voltage sensed from a selected pixel of the second pixel group into first and second correlated double sampling signals, respectively,
wherein the first sense voltage is converted into the first correlated double sampling signal by one of the first and second correlated double samplers, and
the second sense voltage is converted into the second correlated double sampling signal by the other of the first and second correlated double samplers.

US Pat. No. 10,368,022

MONOLITHICALLY INTEGRATED RGB PIXEL ARRAY AND Z PIXEL ARRAY

Google LLC

1. A method comprising:mounting, on a single semiconductor chip, an image sensor that includes a visible light pixel array for receiving visible light, and an infrared light pixel array for receiving infrared light;
forming a pixelated aperture layer over the image sensor, where the pixelated aperture layer includes apertures for pixelizing light over the visible light pixel array and the infrared light pixel array, wherein the apertures over the visible light pixel array are smaller than the apertures over the infrared light pixel array;
forming a filter layer that includes a colored filter array over the visible light pixel array;
forming a lens layer that includes microlenses over each pixel of the visible light pixel array and the infrared light pixel array; and
positioning a visible light optical system and an infrared light optical system over the single integrated chip such that a distance between a center of the visible light pixel array and a center of the infrared light pixel array is less than a distance between an optical axis associated with the visible light optical system and an optical axis associated with the infrared light optical system,
wherein the visible light optical system includes a first set of lenses that is configured to receive light reflected off an object and pass through the visible light and the infrared light optical system includes a second, different set of lenses that is configured to receive light reflected off the object and pass through the infrared light.

US Pat. No. 10,368,021

SYSTEMS AND METHODS FOR DERIVATIVE SENSING USING FILTERING PIXELS

MEMS Start, LLC, Arcadia...

1. An apparatus, comprising:an array of filtering pixels, each filtering pixel comprising:
a photodiode;
a filter circuit;
a read out field-effect transistor (FET);
a read bus for reading an output of each filtering pixel of the array of filtering pixels; and
each filtering pixel further comprising an additional read out FET and an additional read bus configured to output a signal proportional to the intensity of light falling on the photodiode.

US Pat. No. 10,368,019

SOLID-STATE IMAGING DEVICE, METHOD FOR DRIVING SOLID-STATE IMAGING DEVICE, AND ELECTRONIC APPARATUS

Brillnics Japan Inc., To...

1. A solid-state imaging device comprising:a pixel portion in which pixels, each pixel including a photoelectric conversion reading part and a signal holding part, are arranged,
a readout portion for reading pixel signals from the pixel portion, and
a first signal line and a second signal line to which held signals of the signal holding part are output, wherein
a pixel signal read out from a pixel is a pixel signal including at least a readout signal and a readout reset signal,
the photoelectric conversion reading part of the pixel includes at least
an output node,
a photoelectric conversion element which stores a charge generated by photoelectric conversion in a storage period,
a transfer element capable of transferring the charge stored in the photoelectric conversion element in a transfer period,
a floating diffusion to which a charge stored in the photoelectric conversion element is transferred through the transfer element,
a first source-follower element which converts the charge of the floating diffusion to a voltage signal corresponding to the charge amount and outputs the converted signal to the output node, and
a reset element which resets the floating diffusion to a predetermined potential in a reset period, and
the signal holding part includes
an input node,
a first signal holding capacitor capable of holding a readout signal output from the output node of the photoelectric conversion reading part of the pixel and input to the input node,
a second signal holding capacitor capable of holding a readout reset signal output from the output node of the photoelectric conversion reading part of the pixel and input to the input node,
a first switch element which selectively connects the first signal holding capacitor with the output node of the photoelectric conversion reading part,
a second switch element which selectively connects the second signal holding capacitor with the output node of the photoelectric conversion reading part,
a first output part including a second source-follower element which outputs a signal held in the first signal holding capacitor in accordance with a held voltage and selectively outputting the converted signal to the first signal line, and
a second output part including a third source-follower element which outputs a signal held in the second signal holding capacitor in accordance with a held voltage and selectively outputting the converted signal to the second signal line,
wherein a drain side of the first source-follower element of the photoelectric conversion reading part can be selectively connected to a power supply potential or a reference potential.

US Pat. No. 10,368,017

IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND COMPUTER-READABLE RECORDING MEDIUM

OLYMPUS CORPORATION, Tok...

1. An image processing apparatus for correcting blinking defect noise contained in image data generated by an image sensor, the image sensor comprising: a plurality of pixels arranged two-dimensionally and configured to receive light from outside to generate a signal according to an amount of the received light; and a plurality of reading circuits configured to read the signal as a pixel value, the image processing apparatus comprising:an image processor that comprises hardware, the image processor being configured to:
acquire the image data and noise information including one of positional information on a reading circuit in which blinking defect noise caused by the reading circuit occurs and positional information on each of the pixels;
set the image data acquired by the image processor as correction target image data;
calculate a movement amount of a subject based on the correction target image data and reference image data, the reference image data being based on image data acquired at a time different from the acquisition of the correction target image data;
estimate a random noise amount around a pixel of interest of the correction target image data, wherein the estimation comprises:
acquire a reference pixel of the reference image data corresponding to one of the pixel of interest and a neighboring pixel of the pixel of interest, based on the movement amount;
calculate, based on the random noise amount, a representative value that indicates an expected pixel value in which blinking defect noise does not occur; and
correct the pixel value of the pixel of interest based on the representative value.
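The final correction step of this claim can be sketched as a noise-gated replacement: the pixel of interest is pulled to the representative (expected defect-free) value only when its deviation exceeds what the estimated random noise could explain. The sigma threshold `k` below is a hypothetical parameter; the claim does not fix the decision rule.

```python
def correct_blinking_defect(pixel, representative, noise_sigma, k=2.0):
    """Replace the pixel of interest with the representative value when it
    deviates by more than k random-noise sigmas; otherwise keep it."""
    if abs(pixel - representative) > k * noise_sigma:
        return representative
    return pixel
```

Gating on the random-noise estimate keeps ordinary sensor noise untouched while suppressing the much larger excursions of blinking defect noise.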

US Pat. No. 10,368,016

PHOTOELECTRIC CONVERSION DEVICE AND IMAGING SYSTEM

CANON KABUSHIKI KAISHA, ...

1. A photoelectric conversion device comprising a semiconductor substrate and a pixel,wherein the pixel includes:
a first electrode portion;
a second electrode portion located between the first electrode portion and the semiconductor substrate;
a photoelectric conversion layer located between the first electrode portion and the second electrode portion and configured to generate signal electric charge; and
a voltage supply portion configured to set a reverse bias state and a forward bias state of the photoelectric conversion layer by supplying a plurality of voltages having respective different values to at least one of the first electrode portion and the second electrode portion,
wherein the signal electric charge accumulated in the second electrode portion is reset by setting the photoelectric conversion layer to the forward bias state,
wherein the voltage supply portion supplies a first voltage to one of the first electrode portion and the second electrode portion in order to set the reverse bias state such that electric charge having a first polarity is injected from the photoelectric conversion layer into the second electrode portion, the electric charge having the first polarity being the signal electric charge, and
wherein the voltage supply portion supplies a second voltage to the one of the first electrode portion and the second electrode portion in order to set the forward bias state such that electric charge having a second polarity opposite to the first polarity is injected from the photoelectric conversion layer into the second electrode portion.

US Pat. No. 10,368,015

APPARATUS AND METHOD FOR COMBINING IMAGES

Samsung Electronics Co., ...

1. An image composition apparatus, the apparatus comprising:at least one image sensor configured to acquire incident light and generate a first image signal with color information of a visible band of an optical spectrum and a second image signal comprising a wider band than the first image signal and including black-and-white components of the visible band of the optical spectrum; and
at least one processor configured to control to:
divide the first image signal into a color signal and a brightness signal,
combine the divided brightness signal of the first image signal with the second image signal, from the at least one image sensor, to generate a combined brightness signal including the wider band of the second image signal, and
compose the combined brightness signal including the wider band with the color signal of the first image signal to generate a color image,
wherein the second image signal, which is combined with the divided brightness signal of the first image signal, is a same image signal as the second image signal generated by the at least one image sensor.

US Pat. No. 10,368,014

DUAL-APERTURE RANGING SYSTEM

PIXART IMAGING INC., Hsi...

1. A ranging system, comprising:a first aperture stop comprising a sheet of IR-cut material and having a first diaphragm aperture;
a second aperture stop comprising a sheet of opaque material and having a second diaphragm aperture, wherein the second diaphragm aperture is larger than the first diaphragm aperture, and the sheet of opaque material of the second aperture stop overlaps a part of the sheet of IR-cut material of the first aperture stop; and
a pixel array
composed of IR pixels and green pixels without having any red pixel, or composed of IR pixels and blue pixels without having any red pixel, and
configured to receive light sequentially passing through the first aperture stop and the second aperture stop.

US Pat. No. 10,368,010

IMAGE PROCESSING SYSTEM, IMAGE PROCESSING METHOD, AND PROGRAM

NEC CORPORATION, Tokyo (...

1. An image processing system comprising:a non-transitory storage device to store instructions; and
one or more processors configured by the instructions to:
accept input of video images captured by a plurality of video cameras;
cause at least one video image among the inputted video images to be displayed by a display device;
register one or more persons appearing in the video image displayed by the display device; and
display a window overlapped on a video image in which a person appears, the window moving in accordance with a movement of the person in the video image, and the window being displayed near the person in the video image such that the person in the video image is not within the window before or after the moving of the window, wherein the window comprises one or more person images which are respectively associable to the person on the video image and which are selected from the one or more registered persons.

US Pat. No. 10,368,008

IMAGING APPARATUS AND CONTROL METHOD WHEREIN AUTO BRACKET PARAMETERS AND IMAGE PROCESSES APPLIED ARE DETERMINED FROM IMAGE ANALYSIS

Canon Kabushiki Kaisha, ...

1. An imaging apparatus comprising:an imaging unit that includes an imaging optical system and an imaging device;
a system control circuit that determines a photographing scene based on a result of analysis on an image captured in advance by the imaging unit, selects one type of auto bracketing from a plurality of types of auto bracketing based on the determined photographing scene, causes the imaging unit to perform the one type of auto bracketing, and selects a predetermined number of modification processes from among a plurality of modification processes; and
an image processing circuit that performs the predetermined number of modification processes on a plurality of images generated by the one type of auto bracketing to generate output images,
wherein the system control circuit changes at least one of an upper limit, a lower limit, and a center value of a variance range of a value of at least one photographing parameter to be changed during the one type of auto bracketing based on a category instructed by a user, and
wherein the system control circuit selects the predetermined number of modification processes based on results of analysis on the plurality of images generated by the one type of auto bracketing and the category.

US Pat. No. 10,368,007

CONTROL APPARATUS, HEAD-MOUNTED DISPLAY, CONTROL SYSTEM, CONTROL METHOD, AND PROGRAM

Sony Interactive Entertai...

1. A control apparatus comprising:a posture specifying unit that specifies a posture of a head-mounted display including a light-emitting unit that emits light at luminance according to drive current, the posture specified based on a posture specifying image taken by a camera, the posture specifying image including an image of the light-emitting unit;
a luminance information specifying unit that specifies information indicating the luminance of the light-emitting unit based on a luminance specifying image taken by the camera that takes the posture specifying image, the luminance specifying image including an image of the light-emitting unit; and
a drive current control unit that controls the drive current of the light-emitting unit based on the specified information indicating the luminance of the light-emitting unit.

US Pat. No. 10,368,006

INFORMATION COMMUNICATION METHOD

PANASONIC INTELLECTUAL PR...

1. A method, comprising:obtaining a destination of a user of a terminal device;
setting an exposure time of an image sensor included in the terminal device so that, in an image obtained by capturing a subject by the image sensor, a bright line corresponding to each of a plurality of exposure lines included in the image sensor appears according to a change in luminance of the subject;
obtaining a bright line image including a plurality of bright lines, by capturing the subject that changes in luminance by the image sensor with the set exposure time;
obtaining identification information of the subject, by demodulating data specified by a pattern of the plurality of bright lines included in the obtained bright line image;
obtaining a position of the terminal device that is specified by the identification information; and
displaying an arrow, which indicates a direction from the position of the terminal device toward the destination, on a map on a display of the terminal device.
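Once the terminal position is resolved from the demodulated identifier, the displayed arrow direction is just the bearing from that position to the destination. A minimal sketch using the standard initial great-circle bearing formula (the claim does not prescribe a particular formula):

```python
import math

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial bearing from the terminal position (lat1, lon1) to the
    destination (lat2, lon2), in degrees clockwise from north."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return math.degrees(math.atan2(y, x)) % 360.0
```

Rendering then rotates the arrow glyph on the map by this bearing (minus the device heading, if the map is heading-up).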

US Pat. No. 10,368,005

INFORMATION COMMUNICATION METHOD

PANASONIC INTELLECTUAL PR...

1. A method, comprising:setting an exposure time of an image sensor included in a terminal device so that, in an image obtained by capturing a subject by the image sensor, a bright line corresponding to each of a plurality of exposure lines included in the image sensor appears according to a change in luminance of the subject;
obtaining a bright line image, including a plurality of bright lines, by capturing the subject that changes in luminance by the image sensor with the set exposure time;
obtaining identification information of the subject, by demodulating data specified by a pattern of the plurality of bright lines included in the obtained bright line image;
obtaining an angle of light of the subject, the light of the subject entering into the image sensor;
calculating a distance between the terminal device and the subject using the angle;
obtaining a position of the subject that is specified by the identification information; and
calculating a position of the terminal device using the distance and the position of the subject.
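The geometry behind the last two steps can be sketched under one added assumption: the subject (e.g. a ceiling light) is at a known height above the image sensor, so the elevation angle of its incoming light gives the horizontal distance, and an azimuth angle places the terminal relative to the subject's known position. The height and azimuth inputs are illustrative; the claim states only that the distance is computed from the angle.

```python
import math

def terminal_position(subject_xy, subject_height, elev_deg, azim_deg):
    """Terminal ground position from a light source at known position
    subject_xy and known height above the sensor (hypothetical setup)."""
    # Elevation angle of the incoming light -> horizontal distance.
    horiz = subject_height / math.tan(math.radians(elev_deg))
    # Azimuth of the incoming light -> offset direction from the subject.
    dx = horiz * math.sin(math.radians(azim_deg))
    dy = horiz * math.cos(math.radians(azim_deg))
    return (subject_xy[0] - dx, subject_xy[1] - dy)
```

At a 45-degree elevation the horizontal distance equals the height, so the terminal sits one height-length away from the point directly under the light.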

US Pat. No. 10,368,004

LIGHT SOURCE CONTROL METHOD AND CAMERA APPARATUS THEREOF

VIVOTEK INC., New Taipei...

1. A light source control method applied to a camera apparatus, the camera apparatus comprising a device body, a plurality of image capturers, and a plurality of light sources, the plurality of image capturers being movably disposed on the device body, the plurality of light sources being disposed around the device body, the light source control method comprising:each image capturer capturing an uncompensated image toward a target region respectively when each light source is turned off;
turning on the plurality of light sources by turns;
each image capturer capturing a practical image toward the target region when each light source is turned on by turns; and
comparing image reference values of the practical images captured by the plurality of image capturers when each light source is turned on by turns with image reference values of the corresponding uncompensated images respectively, for controlling turning on or off of each light source respectively.
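The per-light decision in this method compares, for each image capturer, the practical image taken with that light on against the corresponding uncompensated image taken with all lights off. A minimal sketch, using mean brightness as the image reference value and a fixed improvement margin (both hypothetical; the claim leaves the reference value and decision threshold open):

```python
def control_lights(uncompensated, practical, margin=5.0):
    """uncompensated: reference value per capturer with all lights off.
    practical: per light, the reference value per capturer with that light on.
    Returns, per light, whether to keep it turned on."""
    keep_on = []
    for per_capturer in practical:
        # Keep a light on only if it raises some capturer's reference
        # value by more than the margin over its uncompensated baseline.
        improved = any(p - u > margin
                       for p, u in zip(per_capturer, uncompensated))
        keep_on.append(improved)
    return keep_on
```

Lights that do not measurably brighten any capturer's view of the target region are switched off, which is the power-saving intent of turning the lights on by turns.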

US Pat. No. 10,368,003

IMAGING DEVICE AND TIME-LAPSE IMAGING METHOD

FUJIFILM Corporation, To...

1. An imaging device, comprising:an imaging unit including an imaging lens and an imaging element;
a pan and tilt mechanism that rotates the imaging unit in a horizontal direction and a vertical direction relative to a device body;
a wireless communication unit configured to:
output a live-view image captured by the imaging unit to a display unit;
receive an instruction input for operating the pan and tilt mechanism through a manual operation;
receive an instruction input for specifying camerawork performed using the live-view image displayed on the display unit and the wireless communication unit, the instruction input for specifying camerawork specifying camerawork in time-lapse imaging in which a plurality of still images are captured at certain imaging intervals; and
receive an instruction input for start of the time-lapse imaging;
a control unit that controls at least the pan and tilt mechanism and controls the imaging unit to perform the time-lapse imaging based on the instruction input for specifying the camerawork when the control unit receives the instruction input for specifying the camerawork and then receives the instruction input for start of the time-lapse imaging; and
angle-detection units, each of the angle-detection units including a sensor, that detect pan and tilt angles of the imaging unit, respectively,
wherein the wireless communication unit receives the pan and tilt angles detected by the angle-detection units at a time of setting an imaging direction of each of the plurality of still images, as the instruction input for specifying the camerawork, when two or more images of the plurality of still images of which the respective imaging directions are different are set,
wherein the two or more images of the plurality of still images include a start image and an end image of the time-lapse imaging,
wherein the wireless communication unit receives a number of the plurality of still images or a playback time of the plurality of still images, and an imaging period of the time-lapse imaging as an additional instruction input for specifying the camerawork, and
wherein the control unit calculates an imaging interval of the plurality of still images and a change in pan and tilt angles between each of the plurality of still images based on respective pan and tilt angles of the start image and the end image of the time-lapse imaging, the number of the plurality of still images or the playback time of the plurality of still images, and the imaging period of the time-lapse imaging, and controls the pan and tilt mechanism and the imaging unit based on the imaging interval and the change in the pan and tilt angles between each of the plurality of still images.
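The control unit's calculation in the last clause is straightforward arithmetic over the received camerawork inputs: with N still images spanning the imaging period, there are N-1 intervals, and the pan/tilt sweep from the start image to the end image is divided evenly across them. A sketch (linear interpolation is an assumption; the claim only says the interval and angle change are calculated from these inputs):

```python
def timelapse_plan(start_pt, end_pt, n_images, period_s):
    """start_pt/end_pt: (pan, tilt) angles of the start and end images.
    Returns the imaging interval and the per-image (pan, tilt) step."""
    steps = n_images - 1          # N images -> N-1 intervals
    interval = period_s / steps
    d_pan = (end_pt[0] - start_pt[0]) / steps
    d_tilt = (end_pt[1] - start_pt[1]) / steps
    return interval, (d_pan, d_tilt)
```

When the user supplies a playback time instead of an image count, N would first be recovered as playback time times the playback frame rate, then fed into the same calculation.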

US Pat. No. 10,368,001

IMAGE SENSOR CONTROLLING GYROSCOPE SENSOR AND IMAGING DEVICE INCLUDING SAME

Samsung Electronics Co., ...

7. An image sensor module comprising:a gyroscope sensor; and
an image sensor configured to generate a flag signal that is activated to enable the gyroscope sensor and de-activated to disable the gyroscope sensor,
wherein the image sensor comprises
a pixel array including pixels arranged in rows extending from a first row to a last row and providing pixel signals,
a timing controller configured to control generation of exposure time for the pixels, and to generate the flag signal indicating a start and a stop of the exposure time,
wherein the exposure time comprises sequential generation of a first exposure time for first pixels disposed in the first row of the pixel array through a last exposure time for last pixels disposed in the last row of the pixel array, and
a first connection pin configured to transfer the flag signal to the gyroscope sensor, and
wherein the timing controller is configured to activate the flag signal in response to a first exposure time control signal supplied to the first pixels of the first row during the first exposure time, and to de-activate the flag signal in response to a last exposure time control signal supplied to the last pixels disposed in the last row during the last exposure time, and
the gyroscope sensor is enabled by the activated flag signal and is disabled in response to the de-activated flag signal.

US Pat. No. 10,368,000

DISTANCE MEASUREMENT DEVICE, DISTANCE MEASUREMENT METHOD, AND DISTANCE MEASUREMENT PROGRAM

FUJIFILM CORPORATION, To...

1. A distance measurement device comprising:an imaging optical system which forms a subject image indicating a subject;
an image sensor which captures the subject image formed by the imaging optical system;
a light emitter which emits directional light as light having directivity along an optical axis direction of the imaging optical system;
a light receiver which receives reflected light of the directional light from the subject;
a derivation unit which performs a distance measurement to derive a distance to the subject based on a timing at which the directional light is emitted by the light emitter and a timing at which the reflected light is received by the light receiver;
a shake correction unit which performs shake correction as correction of shake of the subject image caused by variation of the optical axis of the imaging optical system; and
a controller which performs control such that:
in a case of performing a distance measurement operation relating to the distance measurement and synchronously performing an imaging operation for a still image by the image sensor, the shake correction unit does not perform the shake correction, or performs the shake correction with a correction amount smaller than a normal correction amount determined in advance, and
in a case of performing the imaging operation without performing the distance measurement operation, the shake correction unit performs the shake correction with the normal correction amount.
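
The controller's two cases reduce to a single conditional; the sketch below is illustrative (the function name and the zero default for the reduced amount are assumptions, since the claim only requires "no correction or a smaller-than-normal amount"):

```python
def shake_correction_amount(ranging_active, imaging_active,
                            normal_amount, reduced_amount=0.0):
    """Sketch of the claimed control: when the distance measurement operation
    and the still-image capture run synchronously, shake correction is
    suppressed (or reduced below normal); otherwise the normal correction
    amount applies."""
    if ranging_active and imaging_active:
        return reduced_amount  # no correction, or smaller than normal
    return normal_amount
```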

US Pat. No. 10,367,999

TECHNIQUES TO SELECTIVELY CAPTURE VISUAL MEDIA USING A SINGLE INTERFACE ELEMENT

FACEBOOK, INC., Menlo Pa...

1. A computer-implemented method, comprising:receiving a haptic engagement signal;
configuring a visual media capture hardware device in a photo capture mode in direct response to receiving the haptic engagement signal, the photo capture mode capturing an input as a photo;
capturing a photograph using the visual media capture hardware device in the photo capture mode; and
configuring the visual media capture hardware device in a video capture mode in direct response to capturing the photograph, the video capture mode capturing an input as a video and being distinct from the photo capture mode.
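
The claimed single-element flow can be sketched as a tiny controller; class and method names here are illustrative, not from the patent:

```python
class CaptureController:
    """Sketch of the claimed flow: a haptic engagement configures the device
    in photo capture mode, a photograph is captured, and capturing the
    photograph itself directly triggers reconfiguration into video mode."""

    def __init__(self):
        self.mode = None
        self.captured = []

    def on_haptic_engagement(self):
        # Direct response to the haptic engagement signal.
        self.mode = "photo"

    def capture(self):
        self.captured.append(self.mode)
        if self.mode == "photo":
            # Direct response to capturing the photograph: switch to the
            # distinct video capture mode.
            self.mode = "video"
```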

US Pat. No. 10,367,998

METHOD FOR CONTROLLING IMAGING DEVICE, AND IMAGING DEVICE

Panasonic Intellectual Pr...

1. A method for controlling an imaging device that allows switching of an operation mode between a first mode to perform imaging in a first imaging wavelength band and a second mode to perform imaging in a second imaging wavelength band different from the first imaging wavelength band, the method comprising:when the operation mode is the first mode,
determining whether ambient light includes near-infrared light based on information obtained in the first mode and information obtained in the second mode,
maintaining the first mode when the ambient light includes near-infrared light, and
switching the operation mode to the second mode when the ambient light does not include near-infrared light; and
when the operation mode is the second mode,
determining whether ambient light includes near-infrared light based on information obtained in the first mode and information obtained in the second mode,
switching the operation mode to the first mode when the ambient light includes near-infrared light, and
maintaining the second mode when the ambient light does not include near-infrared light.
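
The maintain/switch branches above form a simple state machine; this sketch uses assumed mode labels ("first", "second") to show that in both states the near-infrared determination selects the next mode:

```python
def next_mode(current_mode, ambient_has_nir):
    """Sketch of the claimed mode switching: in either operation mode the
    device determines whether ambient light includes near-infrared light;
    NIR present selects (or maintains) the first mode, NIR absent selects
    (or maintains) the second."""
    if current_mode == "first":
        # Maintain the first mode on NIR, otherwise switch to the second.
        return "first" if ambient_has_nir else "second"
    # In the second mode: switch to the first on NIR, otherwise maintain.
    return "first" if ambient_has_nir else "second"
```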

US Pat. No. 10,367,997

ENRICHED DIGITAL PHOTOGRAPHS

Synamedia Limited, Middl...

1. A method comprising:start capturing, by a capture device, audio data in response to a first user interaction with the capture device;
start capturing, by the capture device in response to a second user interaction, video data during the capturing of the audio data, the second user interaction happening after the first user interaction;
capturing, by the capture device in response to a third user interaction with the capture device, a digital photograph during the capturing of the audio data and the capturing of the video data, the third user interaction happening after the second user interaction;
stop capturing, by the capture device, the video data in response to a fourth user interaction with the capture device during the capturing of the audio data;
stop capturing, by the capture device, the audio data in response to a fifth user interaction with the capture device; and
providing, subsequent to the stop capturing of the audio data and the video data, a raw data file comprising a full resolution picture of the digital photograph in an image format and raw data corresponding to the audio data and the video data.

US Pat. No. 10,367,996

CALIBRATING PANORAMIC IMAGING SYSTEM IN MULTIPLE DIMENSIONS

IEC Infrared Systems, LLC...

1. An apparatus employed in a panoramic imaging system, the apparatus comprising:one or more processors configured to:
produce image correction data for a lens associated with the panoramic imaging system or a sensor associated with the lens and the panoramic imaging system, where the image correction data is based on an error identified in an individual image acquired by the lens or sensor, where the individual image was acquired with a plurality of pre-determined operating parameters;
produce strip correction data for the lens or sensor, where the strip correction data is based on an error identified in a strip of images pieced together from a plurality of individual images acquired by the lens or sensor;
produce panoramic image correction data for the lens or sensor, where the panoramic image correction data is based on an error identified in a panoramic image pieced together from two or more strips of images pieced together from the plurality of individual images acquired by the lens or sensor; and
store, in a memory, the image correction data, the strip correction data, the panoramic image correction data, or a combined correction value computed from the image correction data, the strip correction data, and the panoramic image correction data, and data that relates the pre-determined operating parameters to the image correction data, the strip correction data, the panoramic image correction data, or the combined correction value.
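
The claim leaves the combination rule unspecified, so the sketch below is purely illustrative: a weighted sum is one plausible way to compute a combined correction value from the three per-scale corrections (the function name and the weighting scheme are assumptions, not from the patent):

```python
def combined_correction(image_corr, strip_corr, pano_corr,
                        weights=(1.0, 1.0, 1.0)):
    """Illustrative combination of the three claimed correction values
    (per-image, per-strip, per-panorama). The patent does not specify the
    combination; a weighted sum is an assumed example."""
    wi, ws, wp = weights
    return wi * image_corr + ws * strip_corr + wp * pano_corr
```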

US Pat. No. 10,367,995

IMAGING APPARATUS HAVING CONTROL CIRCUIT THAT CORRESPONDS A REGION OF FIRST IMAGE DATA WITH SECOND IMAGE DATA

OLYMPUS CORPORATION, Tok...

1. An imaging apparatus comprising:a communication circuit that receives first image data transmitted from another imaging apparatus;
an imaging circuit that acquires second image data including a region corresponding to a region of the first image data;
a control circuit that selects a first region corresponding to an entire region of the first image data from the second image data, or selects a second region corresponding to a region of a part of the first image data in response to a touch operation, the control circuit causing the communication circuit to transmit, to a server, the first region, the second region, or information obtained by analyzing the first region or the second region, and to receive guide information from the server; and
a reproducing circuit that associates the first image data with the guide information to reproduce the first image data and the guide information.

US Pat. No. 10,367,994

SETTING DEVICE AND CAMERA

FUJIFILM Corporation, To...

1. A setting device comprising:an operation dial that includes a click mechanism and is endlessly rotatable in a normal rotation direction and a reverse rotation direction;
a display section that is provided on an upper surface of the operation dial;
a rotation detection unit that detects a rotation of the operation dial;
a set value switching unit that switches set values in order between a minimum value that can be set and a maximum value that can be set according to the rotation of the operation dial; and
a display control unit that controls a display on the display section according to the switching of the set values performed by the set value switching unit,
wherein the display control unit displays a currently selected set value at a set value-display position,
the display control unit displays the maximum value that can be set at a maximum value-display position that is fixed on a downstream side of the set value-display position in the normal rotation direction in a case in which a number of the set values that are selectable between the currently selected set value and the maximum value that can be set is larger than a number of previous set value-display positions,
the display control unit displays the minimum value that can be set at a minimum value-display position that is fixed on an upstream side of the set value-display position in the normal rotation direction in a case in which a number of the set values that are selectable between the currently selected set value and the minimum value that can be set is larger than a number of next set value-display positions,
in a case in which the number of the set values that are selectable between the currently selected set value and the maximum value that can be set is equal to or smaller than the number of previous set value-display positions, a display position of the maximum value that can be set moves between the maximum value-display position and the set value-display position of the operation dial, according to the number of the set values that is equal to or smaller than the number of previous set value-display positions, and
in a case in which the number of the set values that are selectable between the currently selected set value and the minimum value that can be set is equal to or smaller than the number of next set value-display positions, a display position of the minimum value that can be set moves between the minimum value-display position and the set value-display position of the operation dial, according to the number of the set values that is equal to or smaller than the number of next set value-display positions.

US Pat. No. 10,367,993

CHANGING OF LIST VIEWS ON MOBILE DEVICE

MICROSOFT TECHNOLOGY LICE...

1. A mobile computing device, comprising:a touch-sensitive display;
a camera configured to acquire an image;
a processor; and
memory comprising code executable by the processor to:
display on the touch-sensitive display an image of a field of view of the camera at a first zoom setting;
detect a dynamic multi-touch gesture input over the touch-sensitive display;
in response to detecting the dynamic multi-touch gesture input, display on the touch-sensitive display an image of a field of view of the camera at a second zoom setting;
detect a swipe gesture over the touch-sensitive display; and
in response to detecting the swipe gesture, acquire the image of the field of view of the camera at the second zoom setting displayed on the touch-sensitive display.

US Pat. No. 10,367,991

FOCUS ADJUSTMENT DEVICE AND CONTROL METHOD OF FOCUS ADJUSTMENT DEVICE

Olympus Corporation, Tok...

1. A focus adjustment device including an image sensor which receives a light flux passing through an imaging lens including a focus lens, performs imaging, and then generates an image signal, the focus adjustment device performing a focus adjustment in which the focus lens is moved on the basis of the image signal, the focus adjustment device comprising:a focus detection region setting circuit which sets focus detection regions inside a region to be imaged by the image sensor, at least two of the focus detection regions having at least parts in common and being different in size from one another;
a direction determination circuit which detects, regarding each of the focus detection regions, contrast of the image signal in the focus detection region, and determines a movement direction of the focus lens to be in focus on the basis of a change of the contrast caused by the movement of the focus lens; and
a control circuit which causes the image sensor to repeat an imaging operation to generate consecutive frames of image data, and at the same time, performs, on the basis of the movement direction, one of a first focus adjustment operation to move the focus lens while minutely vibrating the focus lens to perform the focus adjustment, and a second focus adjustment operation to perform the focus adjustment while moving the focus lens in one direction,
wherein, while performing the first focus adjustment operation, the control circuit repeatedly determines whether or not the movement directions determined by the direction determination circuit for the respective focus detection regions are different from one another, and responsive to determining, for at least a predetermined number of consecutive frames of image data, that the movement directions for the respective focus detection regions are different from one another, the control circuit inhibits a switchover from the first focus adjustment operation to the second focus adjustment operation.
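
The switchover-inhibit condition in the wherein clause can be sketched as a per-frame update; the function and variable names are assumptions, and the direction labels are illustrative:

```python
def update_switchover_inhibit(disagree_history, region_directions, n_required):
    """Sketch of the claimed inhibit logic: each frame, record whether the
    movement directions determined for the focus detection regions differ
    from one another; once they have differed for at least n_required
    consecutive frames, inhibit the switchover from the first (wobbling)
    focus adjustment operation to the second (one-direction scan)."""
    disagree = len(set(region_directions)) > 1
    disagree_history.append(disagree)
    recent = disagree_history[-n_required:]
    return len(recent) == n_required and all(recent)
```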

US Pat. No. 10,367,990

PHOTOGRAPHING APPARATUS, PHOTOGRAPHING METHOD AND RECORDING MEDIUM ON WHICH PHOTOGRAPHING/DISPLAY PROGRAM IS RECORDED

OLYMPUS CORPORATION, Tok...

1. A photographing apparatus comprising:an image pickup device picking up an optical image of an object through an image pickup optical system to obtain a picked-up image, the image pickup device being provided with divided pixels for receiving light on respective optical paths from the object divided in left and right directions;
a recording medium recording the picked-up image;
a display displaying the picked-up image; and
a processor comprising a focus judging section judging a state of focus of the image pickup optical system using a phase difference on an image pickup surface between image signals based on optical images respectively entering the divided pixels, the focus judging section judges whether or not the phase difference of each area of the picked-up image to be recorded immediately before photographing is equal to or has not increased in comparison with the phase difference of the recorded picked-up image in a same area, to determine a candidate area for an enlarged display.

US Pat. No. 10,367,989

IMAGING DEVICE AND FOCUSING CONTROL METHOD

FUJIFILM Corporation, To...

1. An imaging device comprising:an imaging element that includes a first signal detection pixel that detects a signal based on one beam among a pair of beams that passes through different portions in a pupil region of an imaging optical system including a focus lens, and a second signal detection pixel that detects a signal based on the other beam among the pair of beams, and images a subject through the imaging optical system; and
at least one processor configured to
calculate a defocus amount using a detection signal of the first signal detection pixel and a detection signal of the second signal detection pixel and drive the focus lens according to the defocus amount; and
detect whether a movement is present in a subject image captured by the imaging element,
wherein the processor further detects a size of the movement,
wherein the processor calculates the defocus amount according to an auto-focus execution instruction, drives, in a case where the defocus amount exceeds a threshold value, the focus lens according to the defocus amount, in a case where the movement is present in the subject image, the processor resets the threshold value to become larger as the size of the movement becomes larger, and then perform calculating the defocus amount using the detection signal of the first signal detection pixel and the detection signal of the second signal detection pixel and driving the focus lens according to the defocus amount again to complete auto-focusing based on a comparison of the defocus amount with the enlarged threshold value, and drives, in a case where the defocus amount calculated according to the auto-focus execution instruction is equal to or smaller than the threshold value, the focus lens according to the defocus amount to complete auto-focusing.
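
The threshold-enlargement behavior in the wherein clause can be sketched as a single AF step; the linear scaling of the threshold with movement size is an assumption (the claim only requires the threshold to become larger as the movement becomes larger), and all names are illustrative:

```python
def af_step(defocus, threshold, movement_size=None, gain=1.0):
    """Sketch of the claimed AF control: if the defocus amount is within the
    threshold, the lens is driven and auto-focusing completes; when subject
    movement is detected, the threshold is first enlarged in proportion to
    the movement size (assumed linear scaling), and the comparison uses the
    enlarged threshold. Returns (action, effective_threshold)."""
    if movement_size is not None:
        # Reset the threshold to become larger as the movement becomes larger.
        threshold = threshold * (1.0 + gain * movement_size)
    if abs(defocus) <= threshold:
        return "complete", threshold   # drive per defocus, AF completes
    return "drive_lens", threshold     # drive per defocus, re-evaluate
```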