US Pat. No. 10,341,782

ULTRASONIC RECEIVER WITH COATED PIEZOELECTRIC LAYER

QUALCOMM Incorporated, S...

1. A method for fabricating an ultrasonic receiver configured to detect ultrasonic energy received at a first surface of the ultrasonic receiver, the ultrasonic receiver including an array of pixel circuits disposed on a substrate, the method comprising:coating a solution containing a polymer onto a first side of the array of pixel circuits, each pixel circuit in the array including at least one thin film transistor (TFT) element and a pixel input electrode;
subsequent to coating the solution, crystallizing the polymer to form a crystallized polymer layer; and
executing a poling process that includes applying an electric field across the crystallized polymer layer to form a piezoelectric layer that is in electrical contact with the pixel input electrodes.

US Pat. No. 10,341,781

DOUBLE RING RADIATOR AND DUAL CO-AX DRIVER

TYMPHANY HK LIMITED, Wan...

1. A loudspeaker comprising:a driver, the driver comprising:
an inner magnet disposed within a housing of the loudspeaker;
an outer magnet disposed within the housing radially outward of the inner magnet;
an inner voice coil having a first diameter;
an outer voice coil having a second diameter, the second diameter being larger than the first diameter;
wherein poles of the inner and outer magnets are oppositely disposed;
wherein the inner magnet is configured to contribute only to a magnetic circuit of the inner voice coil; wherein the outer magnet is configured to contribute to the magnetic circuit of the inner voice coil and to a magnetic circuit of the outer voice coil;
the loudspeaker further comprising:
a yoke having a generally U-shaped cross-section disposed at a rear of the driver;
a frame secured to the yoke;
a diaphragm connected to the frame and extending across a front of the driver; and
a phase plug disposed centrally at the front of the driver and protruding therefrom,
wherein the diaphragm is connected to the frame at a periphery of the loudspeaker and is also connected centrally to the phase plug.

US Pat. No. 10,341,779

MINIATURE SOUNDING DEVICE

AAC TECHNOLOGIES PTE. LTD...

1. A miniature sounding device, comprising:a fixing system; and
a vibrating system comprising a diaphragm, a voice coil which is arranged underneath the diaphragm and is configured to drive the diaphragm to vibrate and sound, and a flexible circuit board arranged at an external side of the voice coil, the voice coil comprising a first end surface which is connected with the diaphragm and a second end surface which is opposite to the first end surface, the flexible circuit board and the voice coil connected to a horizontal plane of the diaphragm;
wherein, a first surface adjacent to the diaphragm of the flexible circuit board is level with the first end surface of the voice coil.

US Pat. No. 10,341,776

ELECTRONIC DEVICE

LG ELECTRONICS INC., Seo...

1. An electronic device comprising:a body;
a first support structured to move around an outer surface of the body, wherein the first support extends from the body along a line in a first direction;
a second support fixed to the body, wherein the second support extends from the body along a line in a second direction different from the first direction;
an ear loop extended from the first support to the second support, such that the ear loop forms a closed loop with the first support, the second support, and the body, wherein a shape of the ear loop changes according to a movement of the first support; and
a tilting unit installed inside the body,
wherein the tilting unit includes:
a damper-seated part being a plate and elongated along a region in which the first support is moved, wherein the damper-seated part includes ridges and valleys;
a damper controller elongated from a central axis of the body to the damper-seated part; and
a damper member fixed to the damper controller between the damper controller and the damper-seated part to contact the ridges or the valleys of the damper-seated part and coupled with the first support, and
wherein the damper member rotates in a clockwise direction or a counter-clockwise direction with respect to the central axis of the body.

US Pat. No. 10,341,775

APPARATUS, METHOD AND COMPUTER PROGRAM FOR RENDERING A SPATIAL AUDIO OUTPUT SIGNAL

Nokia Technologies Oy, E...

1. A method comprising:using a first microphone and a second microphone to detect ambient noise where the first microphone is positioned at a first position within a headset and the second microphone is positioned at a second position within the headset;
comparing the ambient noise detected by the first microphone to the ambient noise detected by the second microphone to determine locations of the microphones;
using the determined locations of the microphones to enable a spatial audio output signal to be rendered by the headset;
using the locations of the microphones to determine whether or not the headset is being worn; and
in response to determining that the headset is not being worn, pausing the spatial audio output signal rendered by the headset.
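A minimal sketch of inferring "worn / not worn" from the ambient-noise comparison between the two headset microphones and pausing rendering accordingly; the level-difference heuristic, threshold, and function names are assumptions for illustration, not the claimed implementation:

    import numpy as np

    def rms_db(x):
        return 20.0 * np.log10(np.sqrt(np.mean(np.square(x))) + 1e-12)

    def headset_is_worn(ambient_mic1, ambient_mic2, min_shadow_db=6.0):
        """Compare ambient noise picked up at the two headset microphones.

        Illustrative heuristic: when worn, the ear-facing microphone is
        acoustically shadowed by the head, so a sustained level difference
        between the microphones is read as 'worn'; near-equal levels are
        read as 'not worn' (e.g. headset lying on a table)."""
        return abs(rms_db(ambient_mic1) - rms_db(ambient_mic2)) >= min_shadow_db

    def render_frame(binaural_frame, worn):
        """Pause the spatial audio output when the headset is not being worn."""
        return binaural_frame if worn else None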

US Pat. No. 10,341,774

VIDEO AUDIO SYSTEM

FUJIFILM Corporation, To...

1. A video audio system comprising:an electroacoustic conversion unit which includes an electroacoustic conversion film having a macromolecular composite piezoelectric body formed by dispersing piezoelectric body particles in a viscoelastic matrix formed of a macromolecular material that is viscoelastic at normal temperature and thin film electrodes respectively laminated on both surfaces of the macromolecular composite piezoelectric body, curves and supports the electroacoustic conversion film, and uses at least a part of the electroacoustic conversion film as a vibration region; and
a display device which is a screen or a video display device to which videos are projected,
wherein the vibration region has a square shape,
a surface of the electroacoustic conversion unit on the electroacoustic conversion film side has a square shape,
at least one of the electroacoustic conversion units is disposed on a rear surface opposite to a surface of the display device on which videos are displayed, and a plurality of vibration regions is arranged on the entire rear surface of the display device,
the video audio system is provided with any of a constitution in which one electroacoustic conversion unit having the plurality of vibration regions is included, a constitution in which a plurality of the electroacoustic conversion units each having one vibration region is included or a constitution in which a plurality of the electroacoustic conversion units each having the plurality of vibration regions is included,
location information of the vibration regions is included in sound data that are input to the electroacoustic conversion units, and
a proportion of a total area of the plurality of vibration regions in an area of a region in the display device on which videos are displayed is 80% or more.

US Pat. No. 10,341,772

AUDIO STICK FOR CONTROLLING WIRELESS SPEAKERS

Dolby Laboratories Licens...

1. An apparatus for interfacing between a digital audio source device and two or more wireless speakers, comprising:a connector that connects to the digital audio source device, and that receives power from the digital audio source device, wherein the apparatus is powered by the received power;
an input digital audio interface that receives an input digital audio signal from the digital audio source device via a wired signal path;
a processor that receives the input digital audio signal from the input digital audio interface, and that generates an output digital audio signal comprising a surround audio signal; and
a wireless transceiver including an output audio transmitter that receives the output digital audio signal from the processor, and that transmits two or more calibration tones and the output digital audio signal to the two or more wireless speakers via a wireless signal path, wherein each calibration tone is unique for one of the wireless speakers,
wherein the wireless transceiver receives audio data from a first wireless speaker via the wireless signal path, and wherein the audio data has been recorded by the first wireless speaker from one of the calibration tones rendered by a second wireless speaker.
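A sketch of assigning a unique calibration tone per wireless speaker and estimating the delay of one speaker's tone within another speaker's recording; the tone frequencies and the cross-correlation estimator are assumptions, not the patented scheme:

    import numpy as np

    FS = 48_000

    def calibration_tone(speaker_index, dur_s=0.5):
        """Unique tone per speaker: a distinct frequency per index (assumption)."""
        t = np.arange(int(dur_s * FS)) / FS
        return np.sin(2 * np.pi * (500.0 + 250.0 * speaker_index) * t)

    def estimate_delay_samples(recorded, reference):
        """Delay of the reference tone inside audio recorded by another speaker,
        found via cross-correlation; usable for level/latency alignment."""
        corr = np.correlate(recorded, reference, mode="valid")
        return int(np.argmax(corr))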

US Pat. No. 10,341,771

MAIN SPEAKER, SUB SPEAKER AND SYSTEM INCLUDING THE SAME

LG ELECTRONICS INC., Seo...

1. A system, comprising:a main speaker configured to:
receive a first audio signal from a first source device, and
output first audio associated with the first audio signal; and
at least one sub speaker configured to:
communicate with the first source device wirelessly or by wire, and
selectively operate in one of a first mode or a second mode,
wherein the at least one sub speaker, when operating in the first mode, is further configured to output second audio associated with the first audio signal received directly from the first source device,
wherein the at least one sub speaker, when operating in the second mode, is further configured to output audio associated with a second audio signal received from a second source device,
wherein the first mode and the second mode are identified by respective mode LEDs,
wherein the first audio signal encodes data representing a plurality of sounds,
wherein, when the at least one sub speaker is operating in the first mode and a particular user input is received:
one of the main speaker or the at least one sub speaker extracts a particular sound, of the plurality of sounds, from the first audio signal and outputs only the extracted particular sound, and
another one of the main speaker or the at least one sub speaker outputs remaining sounds, of the plurality of sounds, from the first audio signal and does not output the extracted particular sound, and
wherein, when the at least one sub speaker is operating in the first mode and the particular user input is not received, the first audio outputted by the main speaker and the second audio outputted by the at least one sub speaker include each of the plurality of sounds encoded in the first audio signal.

US Pat. No. 10,341,770

ENCODED AUDIO METADATA-BASED LOUDNESS EQUALIZATION AND DYNAMIC EQUALIZATION DURING DRC

Apple Inc., Cupertino, C...

1. A method for loudness equalization in a playback system, comprising:receiving audio content, and metadata for the audio content, wherein the metadata includes a plurality of dynamic range control (DRC) gain values that have been computed for the audio content;
deriving a playback level from a user volume setting for the playback system;
comparing the playback level with an assigned mixing level that is assigned to the audio content;
applying an inverse DRC characteristic to the plurality of DRC gain values received in the metadata to compute a plurality of instantaneous loudness values for the audio content, wherein the inverse DRC characteristic is an inverse of a DRC characteristic that was applied to the audio content at an encoding side to produce the DRC gain values; and
computing a plurality of parameters that define an equalization filter by which the received audio content is filtered before driving a speaker in the playback system, wherein the parameters are computed based on 1) the plurality of instantaneous loudness values computed using the inverse DRC characteristic, and 2) the comparing of the playback level with the mixing level.
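A numerical sketch of inverting a DRC characteristic to recover instantaneous loudness and then deriving an equalization parameter from that loudness together with the playback-level/mixing-level comparison; the hard-knee compressor model and the boost rule below are assumptions, not the claimed characteristic:

    import numpy as np

    def inverse_drc(gain_db, ratio, threshold_db):
        """Recover instantaneous loudness from the transmitted DRC gains.

        Illustrative only: assumes the encoder used a hard-knee compressor,
        gain_db = (1/ratio - 1) * (loudness_db - threshold_db) above threshold,
        so loudness_db = threshold_db + gain_db / (1/ratio - 1)."""
        return threshold_db + gain_db / (1.0 / ratio - 1.0)

    def shelf_boost_db(loudness_db, playback_level_db, mixing_level_db):
        """Toy equalization parameter: boost more when playback is far below
        the assigned mixing level and the content is momentarily quiet
        (equal-loudness-contour style compensation)."""
        level_gap = max(mixing_level_db - playback_level_db, 0.0)
        quietness = np.clip(mixing_level_db - loudness_db, 0.0, None)
        return 0.15 * level_gap + 0.05 * quietness

    gains = np.array([-2.0, -6.0, -1.0])              # DRC gain values from metadata
    loudness = inverse_drc(gains, ratio=3.0, threshold_db=-20.0)
    print(shelf_boost_db(loudness, playback_level_db=-40.0, mixing_level_db=-23.0))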

US Pat. No. 10,341,769

SOUND AMPLIFICATION SYSTEM INTEGRATED WITH BACK CAVITY PRESSURE SENSING AND AUDIO PLAYER

ZILLTEK TECHNOLOGY (SHANG...

1. A sound amplification system integrated with back cavity pressure sensing, comprising:a loudspeaker body, comprising: a vibrating diaphragm, a voice coil connected with the vibrating diaphragm, and a back cavity, wherein the voice coil is connected to a driving unit, at least one microphone is disposed in the back cavity, and the microphone senses an internal pressure of the back cavity;
a computing unit, making an estimation of a displacement of the vibrating diaphragm based on the internal pressure; and
a processing unit, determining the displacement of the vibrating diaphragm when compared with a reference displacement, when it is determined that the displacement exceeds a set threshold, a pre-compensation signal is generated and provided to the driving unit;
wherein the internal pressure and the displacement of the vibrating diaphragm are closely linked and shown as the following formula:
p(s)=X(s)*S/Ca;
wherein p(s) is the Laplace transform of the internal pressure of the back cavity;
X(s) is the Laplace transform of the equivalent displacement of the vibrating diaphragm;
s is a Laplace independent variable;
S is an equivalent area of the vibrating diaphragm; and
Ca is a mechanical compliance of the back cavity;
wherein an equivalent circuit of the loudspeaker comprises a primary circuit, and the primary circuit comprises force Fcoil generated by a coil, the coil and the vibrating diaphragm velocity v in the circuit, an equivalent inductance of the loudspeaker mass Mms, an equivalent resistance of the loudspeaker resistance Rms, and an equivalent capacitance of the loudspeaker mechanical compliance Cms; and
wherein primary equivalent force of a converter F′=pS, wherein F′ is the primary equivalent force of the converter, p is the internal pressure of the back cavity, and S is the equivalent area of the vibrating diaphragm; and
the equivalent circuit of the loudspeaker further comprises a secondary circuit, and the secondary circuit comprises the equivalent capacitance of the loudspeaker mechanical compliance Ca, and a secondary body velocity U=vS in which U is secondary body velocity, v is the coil and the vibrating diaphragm velocity, and S is the equivalent area of the vibrating diaphragm; and wherein a ratio between the primary circuit and the secondary circuit is S:1.
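Restated in conventional notation (same symbols as defined in the claim):
\[
p(s) = \frac{S}{C_a}\,X(s), \qquad F' = p\,S, \qquad U = v\,S,
\]
with the equivalent circuit acting as a transformer of turns ratio S:1 between the primary (coil/diaphragm) side and the secondary (back-cavity) side.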

US Pat. No. 10,341,768

SPEAKER ADAPTATION WITH VOLTAGE-TO-EXCURSION CONVERSION

Cirrus Logic, Inc., Aust...

1. A method, comprising:receiving a current and a voltage for a transducer;
converting the voltage to a converted displacement value using a voltage-to-displacement adaptive filter;
determining an error signal based on the current, the voltage, and the converted displacement value; and
updating the voltage-to-displacement adaptive filter using the error signal.
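A minimal normalized-LMS sketch of a voltage-to-displacement adaptive filter; the way the error term is formed here (a coarse current/voltage-based reference displacement) is an assumption for illustration, not the claimed derivation:

    import numpy as np

    class VoltageToDisplacementFilter:
        """FIR filter mapping drive voltage to predicted diaphragm excursion,
        adapted with a normalized-LMS update."""

        def __init__(self, taps=32, mu=0.05):
            self.w = np.zeros(taps)
            self.x = np.zeros(taps)      # recent voltage samples
            self.mu = mu

        def step(self, voltage, current):
            self.x = np.roll(self.x, 1)
            self.x[0] = voltage
            predicted = float(self.w @ self.x)       # converted displacement value

            # Hypothetical reference: displacement inferred from the measured
            # current and voltage via a crude impedance model (illustrative only).
            reference = 0.8 * voltage - 0.5 * current
            error = reference - predicted            # error from V, I and prediction

            self.w += self.mu * error * self.x / (self.x @ self.x + 1e-9)
            return predicted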

US Pat. No. 10,341,767

SPEAKER PROTECTION EXCURSION OVERSIGHT

Cirrus Logic, Inc., Aust...

1. A method, comprising:modifying an input audio signal by an excursion limiter based on a first excursion prediction to obtain an excursion-limited audio signal for reproduction at a transducer;
determining a second excursion prediction based on at least one speaker monitor signal;
determining a third excursion prediction based on the excursion-limited audio signal; and
adjusting the modifying by the excursion limiter of the input audio signal based on the second excursion prediction, wherein the step of adjusting the modifying of the input audio signal comprises comparing the second excursion prediction and the third excursion prediction.
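A sketch of the oversight step: compare the monitor-based excursion prediction with the prediction computed from the excursion-limited audio, and adjust the limiter; the specific adjustment rule (scaling a threshold) is an assumption:

    def adjust_limiter_threshold(threshold_mm,
                                 monitored_excursion_mm,
                                 predicted_excursion_mm,
                                 step=0.05):
        """Compare the monitor-signal prediction against the model prediction
        from the limited signal and nudge the limiter threshold.

        If the speaker excurses more than the model predicted, the model is
        optimistic, so tighten the limiter; if less, relax it slightly."""
        mismatch = monitored_excursion_mm - predicted_excursion_mm
        if mismatch > 0:
            threshold_mm *= (1.0 - step)
        elif mismatch < 0:
            threshold_mm *= (1.0 + step)
        return threshold_mm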

US Pat. No. 10,341,766

MICROPHONE APPARATUS AND HEADSET

1. A microphone apparatus configured to provide an output audio signal (SF) in dependence on voice sound (V) received from a user of the microphone apparatus, the microphone apparatus comprising:a first microphone unit configured to provide a first input audio signal (X) in dependence on sound received at a first sound inlet;
a second microphone unit configured to provide a second input audio signal (Y) in dependence on sound received at a second sound inlet spatially separated from the first sound inlet;
a linear main filter (F) with a main transfer function (HF) configured to provide a main filtered audio signal (FY) in dependence on the second input audio signal (Y);
a linear main mixer (BF) configured to provide the output audio signal (SF) as a beamformed signal in dependence on the first input audio signal (X) and the main filtered audio signal (FY); and
a main filter controller (CF) configured to control the main transfer function (HF) to increase the relative amount of voice sound (V) in the output audio signal (SF),
characterized in that the microphone apparatus further comprises:
a linear suppression filter (Z) with a suppression transfer function (HZ) configured to provide a suppression filtered signal (ZY) in dependence on the second input audio signal (Y);
a linear suppression mixer (BZ) configured to provide a suppression beamformer signal (SZ) as a beamformed signal in dependence on the first input audio signal (X) and the suppression filtered signal (ZY);
a suppression filter controller (CZ) configured to control the suppression transfer function (HZ) to minimize the suppression beamformer signal (SZ);
a linear candidate filter (W) with a candidate transfer function (HW) configured to provide a candidate filtered signal (WY) in dependence on the second input audio signal (Y);
a linear candidate mixer (BW) configured to provide a candidate beamformer signal (SW) as a beamformed signal in dependence on the first input audio signal (X) and the candidate filtered signal (WY);
a candidate filter controller (CW) configured to control the candidate transfer function (HW) to be congruent with the complex conjugate of the suppression transfer function (HZ); and
a candidate voice detector (AW) configured to use a voice measure function (A) to determine a candidate voice activity measure (VW) of voice sound (V) in the candidate beamformer signal (SW), and in that the main filter controller (CF) further is configured to control the main transfer function (HF) to converge towards being congruent with the candidate transfer function (HW) in dependence on the candidate voice activity measure (VW).

US Pat. No. 10,341,765

SYSTEM AND METHOD FOR PROCESSING SOUND BEAMS

InSoundz Ltd., Raanana (...

1. A sound processing system, comprising:a sound sensing unit including a plurality of microphones, wherein each microphone is configured to capture non-manipulated sound signals, wherein at least a portion of the non-manipulated sound signals is stored in a database;
a beam synthesizer including a plurality of first modules, each first module corresponding to one of the plurality of microphones, wherein each first module is configured to filter the non-manipulated sound signals captured by the corresponding microphone to generate filtered sound signals;
a sound analyzer communicatively connected to the sound sensing unit to receive the captured non-manipulated sound signals and to the beam synthesizer, wherein the sound analyzer is configured to generate a manipulated sound beam based on the filtered sound signals; and
a switch, wherein the switch is configured to provide sound signals to the sound analyzer from at least one of: the sound sensing unit, and the database.

US Pat. No. 10,341,764

STRUCTURES FOR DYNAMICALLY TUNED AUDIO IN A MEDIA DEVICE

1. A method, comprising:receiving acoustic data associated with an acoustic output;
determining, using the acoustic data, a target low frequency response associated with an audio device, the audio device comprising a hybrid radiator formed using a smart fluid;
determining a value associated with a property of the smart fluid, the value being determined based on the target low frequency response associated with the audio device;
calculating, using a dynamic tuning application implementing a dynamic tuning algorithm, a magnitude of an external stimulus associated with the value, wherein the magnitude of the external stimulus is calculated, based on the dynamic tuning algorithm, to modify the property of the smart fluid to achieve the target low frequency response at the hybrid radiator; and
sending a control signal to a source, the control signal configured to cause the source to apply the external stimulus of the magnitude, the external stimulus including an electric current, an application of the external stimulus configured to change the property of the smart fluid.

US Pat. No. 10,341,763

PASSIVE RADIATOR ASSEMBLY

HARMAN INTERNATIONAL INDU...

1. A passive radiator assembly for a loudspeaker system comprising:a pair of passive radiators including a first passive radiator and a second passive radiator; and
a frame having a first opening, a second opening and a third opening, wherein the first opening and the second opening are located on parallel sides of the frame, respectively, wherein the first passive radiator and the second passive radiator are mounted into the first opening and the second opening of the parallel sides of the frame, respectively, so as to oppose each other at a predetermined distance,
wherein the first passive radiator has a first maximum excitation amplitude and the second passive radiator has a second maximum excitation amplitude, and wherein the predetermined distance between the mounted first passive radiator and the mounted second passive radiator is larger than a sum of the first maximum excitation amplitude and the second maximum excitation amplitude.

US Pat. No. 10,341,762

DYNAMIC GENERATION AND DISTRIBUTION OF MULTI-CHANNEL AUDIO FROM THE PERSPECTIVE OF A SPECIFIC SUBJECT OF INTEREST

SONY CORPORATION, Tokyo ...

1. A system for packaging and distribution of media content, comprising:a memory configured to store location information of a plurality of subjects located in a defined area,
wherein audio from each subject of the plurality of subjects is captured with at least one audio-capture device of a plurality of audio-capture devices; and
circuitry configured to:
assign a weight to each subject of the plurality of subjects based on at least one of social media trends associated with each subject of the plurality of subjects or historical performance information of each subject of the plurality of subjects;
select a subject-of-interest from the plurality of subjects in the defined area based on the weight assigned to each subject of the plurality of subjects;
select a set of audio-capture devices from the plurality of audio-capture devices, based on a location of the selected subject-of-interest;
receive a set of audio streams from the selected set of audio-capture devices;
generate a multi-channel audio based on the received set of audio streams, the location of the subject-of-interest, and a set of locations of a set of subjects equipped with the selected set of audio-capture devices; and
communicate the generated multi-channel audio to a consumer device,
wherein an acoustic environment that surrounds the subject-of-interest in the defined area is reproduced as a surround sound environment at the consumer device from a perspective of the subject-of-interest, based on an output of the multi-channel audio by the consumer device.

US Pat. No. 10,341,761

ACOUSTIC WAVEGUIDE FOR AUDIO SPEAKER

Tymphany HK Limited, Wan...

1. An audio system, comprising:a pair of loudspeakers disposed in a common acoustic cavity; and
an acoustic waveguide disposed in the common acoustic cavity between the pair of loudspeakers,
wherein the audio system includes only one acoustic waveguide,
wherein the acoustic waveguide comprises:
a first main surface facing the woofer; and
a second main surface facing the tweeter,
wherein the first main surface and the second main surface are smoothly connected to each other, and
wherein the first main surface is convex facing the woofer and the second main surface is bell-shaped facing the tweeter.

US Pat. No. 10,341,760

ELECTRONIC EAR PROTECTION DEVICES

IP Holdings, Inc., Grand...

1. An electronic ear protection device, comprising:a pair of ear cups, each having a speaker therein;
a headband positioned above the pair of ear cups; and
a pair of arms that connect the pair of ear cups to opposite ends of the headband; and
an electronic sound control module attached to the headband, the module comprising:
one or more microphones configured to detect an ambient sound and generate a corresponding ambient sound signal,
one or more inputs operable to control a volume of the ambient sound signal delivered to the speakers in the ear cups via one or more cables that extend from the electronic sound control module to the speakers, and
circuitry operable to process the ambient sound signal and to communicate the processed ambient sound signal to the speakers in the ear cups via the one or more cables,
wherein a user can actuate the one or more inputs of the electronic sound control module attached to the headband to thereby control the volume of ambient sound provided to the user through the speakers in the ear cups.

US Pat. No. 10,341,759

SYSTEM AND METHOD OF WIND AND NOISE REDUCTION FOR A HEADPHONE

Apple Inc., Cupertino, C...

1. A method of noise reduction for a headphone comprising:receiving an acoustic signal from an external microphone positioned outside a housing of an earcup of the headphone;
receiving an acoustic signal from an internal microphone positioned inside the housing of the earcup;
processing a downlink signal to generate an estimate of a speaker signal that is to be output by a speaker of the headphone;
removing the estimate of the speaker signal from the acoustic signal from the internal microphone to generate a corrected internal microphone signal;
spectrally mixing the corrected internal microphone signal with the acoustic signal from the external microphone to generate a mixed signal, wherein
a lower frequency portion of the mixed signal includes a corresponding lower frequency portion of the corrected internal microphone signal, and
a higher frequency portion of the mixed signal includes a corresponding higher frequency portion of the acoustic signal from the external microphone,
processing the corrected internal microphone signal to generate an anti-noise signal; and
adding the anti-noise signal to the downlink signal to generate the speaker signal to be output by the speaker.
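A sketch of the spectral mixing step described in the claim, implemented here as a hard FFT crossover; the crossover frequency and frame handling are assumptions:

    import numpy as np

    def spectral_mix(corrected_internal, external, sample_rate, crossover_hz=1000.0):
        """Combine the echo-corrected internal-mic frame (low band, sheltered
        from wind) with the external-mic frame (high band) in the frequency
        domain, then return the time-domain mixed frame."""
        n = len(corrected_internal)
        freqs = np.fft.rfftfreq(n, d=1.0 / sample_rate)
        spec_int = np.fft.rfft(corrected_internal)
        spec_ext = np.fft.rfft(external)
        mixed = np.where(freqs < crossover_hz, spec_int, spec_ext)
        return np.fft.irfft(mixed, n)

    # Upstream and downstream steps from the claim, in outline:
    #   corrected_internal = internal_mic_frame - estimated_speaker_frame
    #   anti_noise         = anc_filter(corrected_internal)
    #   speaker_out        = downlink_frame + anti_noise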

US Pat. No. 10,341,758

WIRELESS AUDIO SYSTEM AND METHOD FOR WIRELESSLY COMMUNICATING AUDIO INFORMATION USING THE SAME

BESTECHNIC (SHANGHAI) CO....

1. A wireless audio system, comprising:a first wireless headphone configured to:
receive, from an audio source, audio information using a first type of short-range wireless communication;
in response to successfully receiving the audio information from the audio source, generate an error correcting code based on the audio information; and
transmit an error correcting message including the error correcting code without transmitting the audio information; and
a second wireless headphone configured to:
receive, only from the audio source, the audio information using the first type of short-range wireless communication;
receive, from the first wireless headphone, the error correcting message including the error correcting code; and
in response to successfully receiving the audio information from the audio source based on the error correcting code, transmit an acknowledgement (ACK) message to the audio source,
wherein one of the first and second wireless headphones works in a snoop mode to communicate with the audio source based on communication parameters of another one of the first and second wireless headphones.
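The claim does not specify the code beyond "an error correcting code based on the audio information"; a toy sketch (per-block CRC32s plus one XOR parity block, both choices assumptions) of how the second headphone could verify and repair its locally received copy without the audio being retransmitted:

    import zlib
    from functools import reduce

    BLOCK = 64

    def split(audio):
        return [audio[i:i + BLOCK].ljust(BLOCK, b"\0") for i in range(0, len(audio), BLOCK)]

    def make_ecc(audio):
        """Primary earbud: build an error-correcting message from the audio it
        received, without including the audio itself."""
        blocks = split(audio)
        parity = reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)
        return {"crcs": [zlib.crc32(b) for b in blocks], "parity": parity}

    def repair(audio, ecc):
        """Secondary earbud: verify its own copy against the ECC message and
        repair at most one corrupted block; return None if unrecoverable."""
        blocks = split(audio)
        bad = [i for i, b in enumerate(blocks) if zlib.crc32(b) != ecc["crcs"][i]]
        if not bad:
            return audio                  # already good -> send ACK to source
        if len(bad) > 1:
            return None
        others = [b for i, b in enumerate(blocks) if i != bad[0]]
        blocks[bad[0]] = reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)),
                                others + [ecc["parity"]])
        return b"".join(blocks)[:len(audio)]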

US Pat. No. 10,341,757

EARPHONE

Panasonic Intellectual Pr...

1. An earphone comprising: an audio transmitter configured to transmit sound;
a housing having an internal space for containing the audio transmitter;
a sound passage pipe having a tubular shape and configured to be inserted into an external auditory canal to guide sound produced at the audio transmitter into the external auditory canal;
a radiator configured to radiate light into the external auditory canal; and
a light receiver disposed in the internal space of the housing and configured to convert the light into a signal, the light having been reflected off the external auditory canal and passed through an internal space of the sound passage pipe,
wherein the housing, the sound passage pipe, and the radiator are disposed in this order, and wherein the radiator is located outside the housing;
the earphone further comprising a reflector in the internal space of the housing, the reflector reflecting the light that has passed through the internal space of the sound passage pipe toward the light receiver.

US Pat. No. 10,341,756

PURE WIRELESS EARPHONES USING OPTIMAL MONOPOLE ANTENNAE

Fujikon Industrial Co., L...

1. A pair of pure wireless earphones using optimal monopole antennae, comprising in-ear type earphone housings and RF signal generation devices disposed in the in-ear type earphone housings, the in-ear type earphone housings being formed by buckling top housings and bottom housings in pairs and being internally provided with accommodating cavities, the bottoms of the bottom housings extending downwards and being sound outlets communicated with the accommodating cavities, the bottom housings being internally provided with loudspeakers communicated with the sound outlets and being sleeved with ear pads for plugging external acoustic foramina at the bottoms, characterized in that positioning stages matched with the shape of auricular concha cavities are at the tops of the bottom housings;
the outer bottom faces of the positioning stages fit the surfaces of the auricular concha cavities in pairs;
the outer walls of the positioning stages are in contact with tragi;
the RF signal generation devices are located in the accommodating cavities;
each RF signal generation device consists of an antenna, a main PCB and a battery;
each main PCB comprises a Bluetooth chipset;
each antenna is used for establishing RF communication links with an audio source and a secondary earpiece;
each battery, each antenna and each loudspeaker are electrically connected with the corresponding main PCB; and
a ball is drawn with each antenna as a center point and the outer wall of the corresponding in-ear type earphone housing closest to the antenna as a radius to form a space as an antenna holding area in which the antenna is located, and the radius of the ball is greater than 4 mm;
wherein the main PCBs are provided with vertically disposed metal grounding layers, which evenly surround the edges of the main PCBs and are used for ensuring the even distribution of radio-frequency radiation currents.

US Pat. No. 10,341,755

WEARABLE SOUND EQUIPMENT

LG ELECTRONICS INC., Seo...

1. A wearable sound equipment comprising:a main body shaped to be wearable on a user's body part;
an earbud comprising an audio output unit;
a board located in the main body and comprising a ground;
a cable having a first end coupled to the main body and a second end coupled to the earbud, the cable comprising an audio line and a ground line;
an audio chipset coupled to the audio line and configured to transmit a sound signal to the audio output unit;
an antenna radiator located in the main body;
a feeder connecting the board and one end of the antenna radiator and configured to supply electric power to the antenna radiator; and
a ground connection located between the ground and the ground line, the ground connection connecting a first end of the ground line and the ground of the board,
wherein connection points of both the feeder and the ground connection are located at one end portion of the board that is away from a middle portion of the board such that the feeder and the ground connection are positioned near one another, and
wherein an electric field is stronger at the one end portion of the board than the middle portion of the board.

US Pat. No. 10,341,754

MICROPHONE BOOM STRUCTURE

Qingdao Goertek Technolog...

1. A microphone boom structure, comprising a first end boom segment, a second end boom segment, and a plurality of intermediate boom segments sequentially disposed between the first end boom segment and the second end boom segment;an end of the first end boom segment is hinged with an end of the adjacent intermediate boom segment, an end of the second end boom segment is hinged with an end of the adjacent intermediate boom segment, and the ends of two adjacent intermediate boom segments are hinged;
an adjusting structure is further provided between the end of the first end boom segment and the end of the adjacent intermediate boom segment, the end of the second end boom segment and the end of the adjacent intermediate boom segment, and the ends of two adjacent intermediate boom segments, respectively; and
the adjusting structure comprises an axial hole provided in a boom segment, an adjusting bar inserted into the axial hole, and an elastic compression member disposed between the axial hole and the adjusting bar.

US Pat. No. 10,341,753

EARRING-TYPE MICROPHONE

1. An earring-type microphone, which is hung on a user's ear so as to be used, the earring-type microphone comprising:two fixing members respectively fixed to both ears of a user;
a cable configured to be an elastic member and connecting the two fixing members such that a part of the cable comes into close contact with the user's neck;
speakers respectively coupled to both ends of the cable; and
a microphone part installed at one side of the cable for inputting voice,
wherein when the user applies force to the microphone part at the time of inputting voice through the microphone part, the cable is stretched such that the microphone part moves from the user's neck toward the area around the user's mouth, and if the force applied to the microphone part is removed, the microphone part comes into close contact with the area around the user's mouth by means of the elastic force of the cable.

US Pat. No. 10,341,752

REMOVABLE CARTRIDGE FOR INSERTION INTO PORTABLE COMMUNICATIONS DEVICE

MOTOROLA SOLUTIONS, Chic...

1. A portable communications device comprising:a housing;
a speaker cone disposed within the housing;
a frame having first and second surfaces, wherein the first surface is planar and the second surface is non-planar and spaced from the speaker cone;
an open gap extending between the first and second surfaces of the frame; and
a membrane coupled to the first surface of the frame, wherein a portion of the membrane is suspended over the open gap;
wherein the housing includes a battery compartment, and wherein the battery compartment defines an opening for receipt of the frame and membrane.

US Pat. No. 10,341,751

ELECTRONIC DEVICE HAVING EXPANDABLE SPEAKER BOX

Intel Corporation, Santa...

1. An electronic device, comprising:a base having:
at least one speaker box located at least partially within the base, wherein the at least one speaker box is at least partially comprised of a bi-metal material, wherein the bi-metal material includes a first metal with a first coefficient of thermal expansion and a second metal with a second coefficient of thermal expansion, the first metal and the second metal being in contact, and wherein the bi-metal material is to bend in response to an application of heat causing the first metal to expand by a first amount based on the first coefficient of thermal expansion and the second metal to expand by a second amount based on the second coefficient of thermal expansion, the first amount being different from the second amount; and
at least one speaker located within the at least one speaker box; and
a lid attached to the base, wherein the at least one speaker box is to be expanded when the lid is open and the at least one speaker box is to be compressed when the lid is closed, wherein at least one wall of the at least one speaker box is displaced away from a surface of the base when the at least one speaker box is expanded, and wherein a first portion of the at least one wall is displaced further from the surface than a second portion of the at least one wall is displaced from the surface when the at least one speaker box is expanded.

US Pat. No. 10,341,750

PRESSURE EQUALIZATION AUDIO SPEAKER DESIGN

LOGITECH EUROPE S.A., La...

1. An audio speaker, comprising:a sealed speaker enclosure having one or more enclosure walls that at least partially define an internal region;
a speaker assembly mounted on one of the one or more enclosure walls; and
a liquid-tight, gas permeable, porous element that is disposed between a port formed through a first wall of the one or more enclosure walls and the internal region, wherein
the liquid-tight, gas permeable, porous element is sealably mounted to the first wall,
the liquid-tight, gas permeable, porous element is hydrophobic, and
the liquid-tight, gas permeable, porous element is configured to allow a generation of acoustic pressures in the internal region by the speaker assembly at acoustic frequencies greater than a first frequency and relieve acoustic pressures generated in the internal region by allowing air to pass through the port, when the relieved acoustic pressures are created at frequencies less than the first frequency.

US Pat. No. 10,341,749

METHOD OF CONTROLLING DIGITAL PHOTOGRAPHING APPARATUS AND DIGITAL PHOTOGRAPHING APPARATUS USING THE SAME

Samsung Electronics Co., ...

1. An electronic device, comprising:a microphone unit to generate an audio electrical signal; and
one or more processors configured to:
generate audio data from the audio electrical signal;
create an audio file; and
store the generated audio data in the audio file by:
if a first input is detected, starting recording the audio data;
if a second input is detected while the audio data is being recorded, pausing the recording of the audio data;
if a third input is detected while the recording of the audio data is paused, continuing the recording of the audio data;
if a fourth input is detected while the audio data is being recorded by the first input or the third input, completing the audio file; and
if the fourth input is detected while the recording of the audio data is paused by the second input, completing the audio file;
wherein the fourth input corresponds to a single button press.
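The claim reduces to a four-input recording state machine (start, pause, resume, complete). A compact sketch; the input numbering follows the claim, everything else is illustrative:

    from enum import Enum, auto

    class State(Enum):
        IDLE = auto()
        RECORDING = auto()
        PAUSED = auto()
        DONE = auto()

    class AudioRecorder:
        """First input starts recording, second pauses, third resumes, fourth
        (a single button press) completes the audio file from either state."""

        def __init__(self):
            self.state = State.IDLE
            self.chunks = []

        def on_input(self, which, audio_chunk=b""):
            if which == 1 and self.state is State.IDLE:
                self.state = State.RECORDING
            elif which == 2 and self.state is State.RECORDING:
                self.state = State.PAUSED
            elif which == 3 and self.state is State.PAUSED:
                self.state = State.RECORDING
            elif which == 4 and self.state in (State.RECORDING, State.PAUSED):
                self.state = State.DONE          # complete the audio file
            if self.state is State.RECORDING and audio_chunk:
                self.chunks.append(audio_chunk)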

US Pat. No. 10,341,748

PACKET-OPTICAL IN-BAND TELEMETRY (POINT) FRAMEWORK

Infinera Corporation, Su...

1. A device configured for packet-optical in-band telemetry in a packet-optical network, the device comprising:a processor configured to:
receive a packet including at least a header and a payload at a packet layer;
read intent information from the header, wherein the intent information indicates a type of telemetry data;
translate the intent information from the packet layer to generate a device-specific action in an optical layer, wherein the device-specific action provides the type of telemetry data;
execute the device-specific action in the optical layer to generate a response corresponding to the intent;
associate the response with the intent; and
encode the response in the packet layer for downstream data forwarding.
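A dispatch-table sketch of the translate-execute-encode loop; the intent names, header fields, and device-state layout below are all hypothetical:

    # Hypothetical mapping from packet-layer intent types to optical-layer actions.
    INTENT_TO_ACTION = {
        "osnr": lambda dev: dev["amplifier"]["osnr_db"],
        "pre_fec_ber": lambda dev: dev["transceiver"]["pre_fec_ber"],
        "launch_power": lambda dev: dev["transceiver"]["tx_power_dbm"],
    }

    def handle_point_packet(packet, device_state):
        """Read intent from the header, run the device-specific optical-layer
        action, associate the response with the intent, and encode it back
        into the packet layer for downstream forwarding."""
        intent = packet["header"]["telemetry_intent"]      # type of telemetry data
        action = INTENT_TO_ACTION[intent]                   # translate intent -> action
        response = {"intent": intent, "value": action(device_state)}
        packet.setdefault("telemetry_responses", []).append(response)
        return packet

    device = {"amplifier": {"osnr_db": 21.4},
              "transceiver": {"pre_fec_ber": 2e-4, "tx_power_dbm": 1.0}}
    pkt = {"header": {"telemetry_intent": "osnr"}, "payload": b"..."}
    print(handle_point_packet(pkt, device)["telemetry_responses"])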

US Pat. No. 10,341,747

SYSTEM AND METHOD FOR WAVELENGTH CONVERSION AND SWITCHING

Futurewei Technologies, I...

1. A path computation element (PCE) for an optical network, the PCE comprising:a receiver configured to:
receive, from a first network node in the optical network, first information comprising a first connectivity matrix and per port wavelength restrictions of the first network node, wherein the first connectivity matrix represents a connection configuration between first ingress ports and first egress ports for the first network node; and
receive, from a second network node in the optical network, second information comprising a second connectivity matrix and per port wavelength restrictions of the second network node, wherein the second connectivity matrix represents a connection configuration between second ingress ports and second egress ports for the second network node; and
a processor coupled to the receiver and configured to compute a lightpath through the optical network using the first information and the second information.

US Pat. No. 10,341,746

SENSOR SYSTEM WITH A SENSOR DATA BUFFER

ROBERT BOSCH GMBH, Stutt...

1. A device comprising:a sensor data buffer, wherein:
the device is configured to:
obtain sensor data from a plurality of sensors of the device;
organize the obtained sensor data into sensor-data frames that are stored in the sensor data buffer and are each respectively associated with a respective one of a plurality of time periods by, for each of the plurality of time periods, grouping together all of the sensor data obtained for the respective time period into a single respective one of the sensor-data frames, so that (a) sizes of different ones of the plurality of sensor-data frames differ depending on respective numbers of the plurality of sensors from which sensor data had been obtained for the respective time periods with which the respective sensor-data frames are associated and (b) for each of the plurality of sensors, all data recorded by the respective sensor at different times are separated into different ones of the sensor-data frames that each corresponds to a respective one of the different times;
subsequent to the organization of the obtained sensor data into the sensor-data frames stored in the sensor data buffer of the device, retrieve relevant portions of the sensor data of the sensor-data frames; and
execute an application on the device using the retrieved relevant portions of the sensor data;
each of the sensor-data frames includes a header that identifies all of the sensors whose data is included in the respective sensor-data frame and a sensor data area that includes the sensor data of the respective sensor-data frame; and
the retrieval is performed based on the headers.
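A small sketch of the per-period framing and header-based retrieval; the tuple layout and field names are assumptions:

    from collections import defaultdict

    def build_frames(samples):
        """Group (time_period, sensor_id, value) samples into per-period frames.

        Each frame carries a header listing exactly the sensors that produced
        data in that period, so frame sizes vary with sensor activity."""
        by_period = defaultdict(dict)
        for period, sensor, value in samples:
            by_period[period].setdefault(sensor, []).append(value)
        return [{"period": p, "header": sorted(data), "data": data}
                for p, data in sorted(by_period.items())]

    def retrieve(frames, wanted_sensors):
        """Retrieve only the relevant portions by scanning frame headers."""
        return [{s: f["data"][s] for s in wanted_sensors if s in f["header"]}
                for f in frames
                if any(s in f["header"] for s in wanted_sensors)]

    frames = build_frames([(0, "accel", (0.1, 0.0, 9.8)),
                           (0, "gyro", (0.01, 0.0, 0.0)),
                           (1, "accel", (0.2, 0.0, 9.7))])
    print(retrieve(frames, {"gyro"}))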

US Pat. No. 10,341,745

METHODS AND SYSTEMS FOR PROVIDING CONTENT

Comcast Cable Communicati...

1. A method comprising:receiving, by a network device, a first content item that comprises a plurality of fragment identifiers;
determining, based on a device identifier associated with a first device, that the first device is configured to communicate via Quadrature Amplitude Modulation (QAM);
transmitting, to the first device via QAM, the first content item;
determining, based on a request from a second device and based on a device identifier associated with the second device, that the second device is configured to communicate via Internet Protocol (IP); and
transmitting, to the second device via IP, at least one second content item, of a plurality of second content items related to the first content item, and metadata associated with the at least one second content item, wherein the metadata and at least one fragment identifier of the plurality of fragment identifiers facilitate synchronization of the first content item and the at least one second content item.

US Pat. No. 10,341,744

SYSTEM AND METHOD FOR CONTROLLING RELATED VIDEO CONTENT BASED ON DOMAIN SPECIFIC LANGUAGE MODELS

NBCUniversal Media, LLC, ...

1. A system for controlling related video content, the system comprising:a memory configured to store information;
one or more processors configured to:
obtain first audio information of first video content from the memory;
identify a first primary plurality of time codes indicating lengths of time between occurrences of a first keyword within the first audio information;
obtain second audio information of second video content from the memory;
identify a first secondary plurality of time codes indicating lengths of time between occurrences of the first keyword within the second audio information;
compare patterns of occurrences of the first keyword in the first video content and the first keyword in the second video content based on the first primary plurality of time codes and the first secondary plurality of time codes; and
generate information indicating whether the first video content and the second video content are related based on the compared patterns; and
output the generated information indicating whether the first video content and the second video content are related, wherein the outputted generated information enables a selection between the first video content or the second video content for storage or display.
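One way to realize the claimed comparison of occurrence patterns is to histogram the inter-occurrence gaps of the keyword in each video and score their similarity; the histogram bins and cosine metric are assumptions, not the patented model:

    import numpy as np

    def occurrence_gaps(timestamps_s):
        """Lengths of time between successive occurrences of a keyword."""
        t = np.sort(np.asarray(timestamps_s, dtype=float))
        return np.diff(t)

    def gap_pattern_similarity(ts_a, ts_b, bins=np.arange(0, 600, 30)):
        """Compare keyword-recurrence patterns of two videos via histograms of
        their inter-occurrence gaps; 1.0 means an identical rhythm of mentions."""
        h_a, _ = np.histogram(occurrence_gaps(ts_a), bins=bins, density=True)
        h_b, _ = np.histogram(occurrence_gaps(ts_b), bins=bins, density=True)
        denom = np.linalg.norm(h_a) * np.linalg.norm(h_b)
        return float(h_a @ h_b / denom) if denom else 0.0

    related = gap_pattern_similarity([10, 95, 180, 260], [12, 100, 170, 255]) > 0.8
    print(related)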

US Pat. No. 10,341,743

BANDWIDTH EFFICIENT MULTIPLE USER PANORAMIC VIDEO STREAM DELIVERY SYSTEM AND METHOD

Altia Systems, Inc., Cup...

1. A computer-implemented method for transmitting video from a camera to a plurality of video receivers, comprising:receiving video from the camera corresponding to a scene being imaged;
transmitting the video as a plurality of video streams, each to one of the plurality of video receivers;
receiving feedback information from each of the plurality of video receivers, wherein said feedback information comprises information defining a first region of the video currently being viewed on the video receiver, and wherein the feedback information further comprises information based on at least one of a zooming operation or a panning operation performed at each of the plurality of video receivers; and
performing an optimization operation to optimize each of the plurality of video streams being transmitted for a particular video receiver based on the feedback information, wherein said optimization operation comprises pre-rendering a second region of video before said second region is requested for viewing on the video receiver, wherein said second region is proximally related to said first region, and wherein said optimization operation comprises optimizing a portion currently being viewed at each of the plurality of video receivers selectively by dividing each frame of the portion currently being viewed into a plurality of blocks and transforming each block into a frequency domain based on a quantization parameter, wherein the quantization parameter varies for each of the plurality of blocks.
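A crude sketch of the per-block frequency-domain step with a viewport-dependent quantization parameter; the 8x8 block size, DCT choice, and QP schedule are assumptions:

    import numpy as np
    from scipy.fft import dctn, idctn

    def encode_block(block, qp):
        """Transform one block to the frequency domain and quantize it with a
        block-specific quantization parameter (larger qp = coarser)."""
        return np.round(dctn(block, norm="ortho") / qp)

    def decode_block(coeffs, qp):
        return idctn(coeffs * qp, norm="ortho")

    def qp_for_block(bx, by, viewport_center, base_qp=4.0):
        # Hypothetical schedule: blocks far from the viewer's region of interest
        # get a larger qp (fewer bits), nearby blocks keep more detail.
        dist = np.hypot(bx - viewport_center[0], by - viewport_center[1])
        return base_qp * (1.0 + 0.5 * dist)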

US Pat. No. 10,341,742

SYSTEMS AND METHODS FOR ALERTING A USER TO MISSED CONTENT IN PREVIOUSLY ACCESSED MEDIA

Rovi Guides, Inc., San J...

1. A method for automatically alerting a user that a segment in a media asset that was previously missed by the user will be missed again by notifying the user to maintain attention to the segment, the method comprising:generating for display to a user, during a first time period, the media asset;
detecting that the user is disregarding the segment of the media asset during the first time period;
in response to detecting that the user is disregarding the segment of the media asset during the first time period, storing, in a profile associated with the user, a position within the media asset corresponding to the segment of the media asset disregarded by the user and an indication that the user disregarded the segment of the media asset during the first time period;
generating for display to the user, during a second time period subsequent to the first time period, the media asset beginning at a time in the media asset that precedes the segment;
determining, based on the profile of the user, whether the user has previously accessed the media asset; and
in response to determining that the user has previously accessed the media asset:
retrieving, from the profile, the indication that the user disregarded the segment of the media asset and the position within the media asset corresponding to the segment of the media asset;
determining, during the second time period, whether a play position of the media asset corresponds to the position; and
in response to determining that the play position of the media asset corresponds to the position:
determining whether the user is disregarding the media asset during the second time period; and
in response to determining that the user is disregarding the media asset during the second time period, outputting an alert to the user, based on the indication, that the segment was previously missed by the user.

US Pat. No. 10,341,741

METHODS, SYSTEMS, AND MEDIA FOR PRESENTING SUGGESTIONS OF MEDIA CONTENT

Google LLC, Mountain Vie...

1. A method for presenting suggestions of media content, the method comprising:determining, using a communication network, a location of a user device;
determining, using a hardware processor, probabilities of a media content item having a plurality of defined presentation start times and being played back at a plurality of predicted future locations of the user device based on a presence of a detected nearby display device that is suitable for presenting the media content item and that is associated with the user device over the communication network;
determining, using the hardware processor, a time at which a user interface that suggests that the media content item be played back at one of the plurality of defined presentation start times based on the determined probabilities is to be presented, wherein the time corresponds to when the user device is located at one of the plurality of predicted future locations; and
causing, using the hardware processor, the user interface that suggests viewing the media content item on the detected nearby display device to be presented at the determined time on the user device.

US Pat. No. 10,341,740

METHOD AND SYSTEM FOR RECORDING RECOMMENDED CONTENT WITHIN A USER DEVICE

The DIRECTV Group, Inc., ...

1. A method comprising:receiving an external recommendation list at a user device through a network, said external recommendation list comprising a marketing list comprising a plurality of marketing list entries and a related program list comprising a plurality of related program entries that are related to the plurality of marketing list entries, each marketing list entry comprising a related score corresponding to a strength of similarity to a respective related program entry of the related program list;
generating a most viewed programming list at the user device;
comparing the external recommendation list and the most viewed programming list using the related score to identify matches within the marketing list entries and the related program list based on a probability of correlation;
generating a recommended recording list in the user device using the matches from comparing the external recommendation list and the most viewed programming list;
storing at least one content from within the marketing list entries and the related program list of the recommended recording list at the user device to form stored content;
displaying a list of stored content by the user device;
selecting a stored content selection from the list of stored content; and
displaying the stored content at a display associated with the user device.

US Pat. No. 10,341,739

METHODS AND SYSTEMS FOR RECOMMENDING PROVIDERS OF MEDIA CONTENT TO USERS VIEWING OVER-THE-TOP CONTENT BASED ON QUALITY OF SERVICE

Rovi Guides, Inc., San J...

1. A method of recommending providers of media content to users viewing over-the-top content, the method comprising:receiving, at a server, a request from a user device, over a communications network, to access media assets from a cloud-based aggregator, wherein the media assets are available from a plurality of over-the-top content providers;
determining, at the server, a first content provider of the plurality of over-the-top content providers that provides a media asset;
determining, at the server, a second content provider of the plurality of over-the-top content providers that provides the media asset;
querying, at the server, the user device or the first content provider for a first checksum value;
determining, at the server, a first network service transmission characteristic based on the first checksum value;
determining, at the server, a first quality of service for the user device to receive the media asset from the first content provider based on the first network service transmission characteristic;
querying, at the server, the user device or the second content provider for a second checksum value;
determining, at the server, a second network service transmission characteristic based on the second checksum value;
determining, at the server, a second quality of service for the user device to receive the media asset from the second content provider based on the second network service transmission characteristic;
retrieving, at the server, a threshold quality of service based on a threshold network service characteristic;
comparing, at the server, both the first quality of service and the second quality of service to the threshold quality of service wherein each network service transmission characteristic is chosen from the group consisting of: error rate, bit rate, throughput lag, transmission delay, jitter, minimum bandwidth, and maximum delay;
determining, at the server, that the first quality of service equals or exceeds the threshold quality of service and that the second quality of service does not equal or exceed the threshold quality of service; and
in response to determining that the first quality of service equals or exceeds the threshold quality of service and that the second quality of service does not equal or exceed the threshold quality of service, generating for display a first media listing for the media asset from the first content provider and not generating for display a second media listing for the media asset from the second content provider.
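Reduced to its decision logic, the method scores each provider's quality of service from a measured transmission characteristic and lists only providers meeting the threshold; the mapping from error rate to a QoS score below is an assumption:

    def quality_of_service(error_rate):
        """Toy QoS score derived from one network service transmission
        characteristic (here, error rate); higher is better."""
        return 1.0 - min(error_rate, 1.0)

    def listings_to_display(providers, threshold_qos):
        """Return media listings only for providers whose QoS meets the threshold."""
        return [name for name, error_rate in providers.items()
                if quality_of_service(error_rate) >= threshold_qos]

    print(listings_to_display({"provider_a": 0.01, "provider_b": 0.2}, threshold_qos=0.95))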

US Pat. No. 10,341,738

SILO MANAGER

Flextronics AP, LLC, San...

1. A method of managing a plurality of media sources on an intelligent Television (TV), the method comprising:detecting, by a silo manager executing on a processor of the intelligent TV, a request to access a selected media source of the plurality of media sources, each media source of the plurality of media sources being represented as a silo of a plurality of silos displayed on an interface of the intelligent TV in a strip or grid;
detecting, by the silo manager, that a media source of the plurality of media sources other than the selected media source is already active on the intelligent TV;
transitioning, by the silo manager, from the silo representing the media source other than the selected media source to the silo representing the selected media source using a sliding effect on the strip or grid;
enabling, by the silo manager, content associated with the selected media source;
displaying, by the silo manager, the enabled content associated with the selected media source using a zoom effect, wherein the zoom effect comprises shrinking content of the media source other than the selected media source into the silo representing the media source of the plurality of media sources other than the selected media source in the strip or grid and expanding the silo representing the selected media source from the strip or grid to full screen;
receiving, by a panel manager executing on the processor of the intelligent TV, a user input that requests activation of a media center panel while displaying content, the media center panel comprising a translucent user interface displayed in at least a portion of the television display over the enabled content associated with the selected media source, wherein the media center panel has two or more types displayed in the user interface based on a type of the user input, wherein the two or more media center panel types comprise at least one media center panel associated with a top level of a hierarchy of media center panels and one or more media center panels associated with a level of the hierarchy of media center panels below the top level of the hierarchy of the media center panels, and wherein the one or more media center panels associated with a level of the hierarchy of media center panels below the top level of the hierarchy of the media center panels comprises one or more of a volume panel displaying information about an audio volume control or other settings for volume, a settings panel displaying information about settable characteristics of the intelligent TV, or a notification panel displaying information about video on demand displays, favorites, or currently provided programs;
determining, by the panel manager, the type of media center panel requested based on the type of the user input; and
displaying, by the panel manager, on the television display, the determined media center panel type.

US Pat. No. 10,341,736

MULTIPLE HOUSEHOLD MANAGEMENT INTERFACE

Sonos, Inc., Santa Barba...

1. A method comprising:displaying, via a controller interface on a graphical display of a computing device, (i) representations of multiple households and (ii) a first control, the first control selectable to select among the multiple households, wherein each household comprises one or more respective playback devices;
receiving, via the first control of the controller interface, input data to select a first household from among the multiple households;
displaying, via the controller interface on the graphical display, (i) a representation of the first household and (ii) a second control, the second control selectable to select among multiple playlists of audio content;
receiving, via the second control of the controller interface, input data to select a first playlist from among the multiple playlists;
based on receiving the input data to select the first playlist from among the multiple playlists, sending, via a network interface of the computing device to a cloud server system, one or more instructions that cause the first household to play back the first playlist on one or more playback devices of the first household;
updating the controller interface on the graphical display to display a representation of the first household playing back audio content;
receiving, via the first control of the controller interface, input data to select a second household from among the multiple households;
displaying, via the controller interface on the graphical display, (i) a representation of the second household and (ii) the second control;
receiving, via the second control of the controller interface, input data to select a second playlist from among the multiple playlists;
based on receiving the input data to select the second playlist from among the multiple playlists, sending, via the network interface of the computing device to the cloud server system, one or more instructions that cause the second household to play back the second playlist on one or more playback devices of the second household; and
updating the controller interface on the graphical display to display (i) a representation of the second household playing back audio content and (ii) the representation of the first household playing back audio content.
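
A rough Python sketch, under assumed names, of the household/playlist selection loop described above: the controller never streams audio itself, it only instructs a cloud service which household should play which playlist, and then reflects every playing household in its display.

from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class Controller:
    households: List[str]                    # e.g. ["home", "office"]
    playlists: List[str]                     # e.g. ["jazz", "focus"]
    now_playing: Dict[str, str] = field(default_factory=dict)

    def select_and_play(self, household: str, playlist: str) -> dict:
        """Build the instruction sent to the cloud server and update UI state."""
        assert household in self.households and playlist in self.playlists
        instruction = {"household": household, "playlist": playlist, "action": "play"}
        # send_to_cloud(instruction) would go over the device's network interface
        self.now_playing[household] = playlist   # UI shows every playing household
        return instruction


ctrl = Controller(["home", "office"], ["jazz", "focus"])
ctrl.select_and_play("home", "jazz")
ctrl.select_and_play("office", "focus")
print(ctrl.now_playing)   # both households shown as playing back audio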

US Pat. No. 10,341,735

SYSTEMS AND METHODS FOR SHARING CONTENT SERVICE PROVIDER SUBSCRIPTIONS

Rovi Guides, Inc., San J...

1. A method for sharing access to a subscription service that provides a user selected media asset, the method comprising:generating, for display, a graphical user interface comprising (1) a plurality of media asset identifiers for media assets that one or more friends of a user have viewed within a threshold time period and (2) a representation of how many of the one or more friends viewed a same one of the media assets;
receiving a selection by the user of a media asset identifier of the plurality of media asset identifiers;
accessing a first list of subscription services with which the user has an account registered;
searching content available from each subscription service in the first list of subscription services to determine whether the content includes a media asset associated with the selected media asset identifier;
in response to determining that the content available from each subscription service in the first list of subscription services does not include the media asset, accessing a second list of subscription services with which a friend, with whom the user is connected by way of a social network platform, has an account registered;
searching content available from each subscription service in the second list of subscription services to determine whether the content includes the media asset;
in response to determining that the content available from a given subscription service in the second list of subscription services includes the media asset, generating for display to the user a selectable option to query the friend for access to the given subscription service;
in response to receiving a selection from the user of the selectable option to query the friend for access, receiving, from the friend, access credentials corresponding to the account of the friend with the given subscription service to authorize the user to access the media asset through the account with the given subscription service; and
generating for display, to the user, the media asset based on receiving the access credentials from the friend.
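
The two-pass search in this claim (own subscriptions first, then a friend's subscriptions reached through the social graph) can be sketched in a few lines of Python. Names and data shapes below are assumptions for illustration.

from typing import Dict, Optional, Set


def find_provider(asset: str,
                  user_services: Dict[str, Set[str]],
                  friend_services: Dict[str, Set[str]]) -> Optional[dict]:
    # Pass 1: services the user already has an account registered with.
    for service, catalog in user_services.items():
        if asset in catalog:
            return {"service": service, "needs_friend_access": False}
    # Pass 2: services registered to the friend; the user would have to query
    # the friend for access credentials before playback.
    for service, catalog in friend_services.items():
        if asset in catalog:
            return {"service": service, "needs_friend_access": True}
    return None


print(find_provider("MovieX",
                    user_services={"svcA": {"ShowY"}},
                    friend_services={"svcB": {"MovieX"}}))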

US Pat. No. 10,341,734

METHOD AND SYSTEM FOR PRESENTING ADDITIONAL CONTENT AT A MEDIA SYSTEM

Gracenote, Inc., Emeryvi...

1. A method comprising:receiving, by a media system, a first sequence of media content;
receiving, by the media system, a subset of reference fingerprints selected from a plurality of reference fingerprints based on the reference fingerprints of the subset of reference fingerprints being associated with a subset of channels of a plurality of channels that the media system is used to watch more frequently than other channels of the plurality of channels;
generating, by the media system, a comparison fingerprint using first media content within the first sequence of media content;
determining, by the media system, that the comparison fingerprint does not match any reference fingerprints of the subset of reference fingerprints;
sending, by the media system, to a server system, a request for additional media content that includes the comparison fingerprint, wherein, based on the determining, the media system includes the comparison fingerprint in the request for comparison with additional reference fingerprints at the server system;
receiving, by the media system, a response to the request that includes information enabling the media system to replace or supplement the first media content with second media content, wherein the information comprises data indicative of an insertion point for the second media content;
determining, by the media system, a frame within the first sequence of media content at which to present the second media content based on the data indicative of the insertion point; and
providing, by the media system, for display a second sequence of media content that includes a portion of the first sequence of media content that occurs prior to the frame and includes the second media content.
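
A simplified Python sketch of the client-side fingerprint flow above: match a comparison fingerprint against a small subset of reference fingerprints for the most frequently watched channels, and on a miss include the fingerprint in the request sent to the server system. The helper names and the equality test stand in for a real perceptual match and are assumptions.

from typing import Iterable, Optional


def match_locally(comparison_fp: bytes, subset: Iterable[bytes]) -> bool:
    # Real systems use a perceptual distance; exact equality keeps the sketch short.
    return any(comparison_fp == ref for ref in subset)


def build_request(comparison_fp: bytes, subset: Iterable[bytes]) -> Optional[dict]:
    """Return the request to the server system, or None if handled locally."""
    if match_locally(comparison_fp, subset):
        return None                        # local hit: no server round trip needed
    return {"type": "additional-content", "fingerprint": comparison_fp.hex()}


subset = [b"\x01\x02", b"\x03\x04"]        # references for the most-watched channels
print(build_request(b"\x09\x09", subset))  # miss -> fingerprint forwarded to server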

US Pat. No. 10,341,733

COMPANION DEVICE

SHARP KABUSHIKI KAISHA, ...

1. A method for a companion device to receive current service information from a primary device comprising:(a) said companion device requesting said current service information from said primary device;
(b) said requesting by said companion device comprising input parameters including:
(i) a companion device ID;
(ii) a companion device application ID; and
(iii) a companion device application version;
(c) said requesting by said companion device comprising current information requested including:
(i) a request for current available non real-time audiovisual content for a current show; and
(ii) a request for timeline location information within said current show;
(d) said companion device receiving from said primary device in response to said requesting said current service information a current service information response;
(e) said receiving said current service information response including:
(i) a primary device ID;
(ii) said current available non real-time audiovisual content for said current show; and
(iii) said timeline location information within said current show.
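
The request and response enumerated in items (b) through (e) map naturally onto two small records. The Python dataclasses below are an assumed, illustrative shape for those messages, not the patent's wire format.

from dataclasses import dataclass
from typing import List


@dataclass
class CurrentServiceRequest:
    companion_device_id: str
    companion_app_id: str
    companion_app_version: str
    want_nrt_content: bool = True       # available non-real-time A/V for the show
    want_timeline_location: bool = True


@dataclass
class CurrentServiceResponse:
    primary_device_id: str
    nrt_content: List[str]              # currently available non-real-time items
    timeline_location: float            # seconds into the current show


req = CurrentServiceRequest("cd-42", "sports-app", "1.3.0")
resp = CurrentServiceResponse("tv-main", ["highlight.mp4"], 1834.5)
print(req, resp, sep="\n")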

US Pat. No. 10,341,732

VIDEO QUALITY OPTIMIZATION BASED ON DISPLAY CAPABILITIES

CenturyLink Intellectual ...

1. A method, comprising:receiving, with a computing system, a first user input from a user indicating a user request for a first media content;
autonomously determining, with the computing system, one or more first characteristics of a plurality of characteristics of a first playback device, wherein the one or more first characteristics comprise dynamic range;
sending, with the computing system and to a media content source over a network, a first request for the first media content, the first request comprising information regarding one or more first presentation characteristics that are based at least in part on the determined one or more first characteristics of the first playback device;
receiving, with the computing system, a first version of the first media content, the first version of the first media content having the one or more first presentation characteristics;
relaying, with the computing system, the received first version of the first media content to the first playback device for presentation to the user of the first version of the first media content;
detecting, with the computing system, that the first playback device has been disconnected prior to presentation to the user of an entirety of the first media content and detecting a second playback device that is different from the first playback device has been connected; and
based on a determination that the first playback device has been disconnected prior to presentation to the user of an entirety of the first media content and based on a determination that the second playback device that is different from the first playback device has been connected, autonomously determining, with the computing system, one or more second characteristics of a plurality of characteristics of the second playback device and relaying, with the computing system, a second version of the first media content having one or more second presentation characteristics associated with the second playback device for presentation to the user.
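
A hedged Python sketch of the capability-driven request flow in this claim: the relaying computing system determines the connected display's characteristics (here only dynamic range) and asks the content source for a matching version; when a different playback device is connected mid-playback, it repeats the step for the new device. All names and the capability table are illustrative assumptions.

from typing import Dict

DEVICE_CAPS: Dict[str, Dict[str, str]] = {
    "living-room-tv": {"dynamic_range": "HDR10"},
    "bedroom-tv": {"dynamic_range": "SDR"},
}


def build_content_request(title: str, device: str) -> dict:
    caps = DEVICE_CAPS[device]            # autonomously determined in practice
    return {"title": title, "presentation": {"dynamic_range": caps["dynamic_range"]}}


# Initial playback on the first device...
print(build_content_request("movie-1", "living-room-tv"))
# ...then the first device disconnects before the content finishes and a second,
# different device is detected, so a second version is requested and relayed.
print(build_content_request("movie-1", "bedroom-tv"))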

US Pat. No. 10,341,731

VIEW-SELECTION FEEDBACK FOR A VISUAL EXPERIENCE

Google LLC, Mountain Vie...

1. A computer-implemented method comprising:presenting a visual experience on a display of a mobile media-consumption device, the visual experience comprising:
a live-action or computer-animated film; and
a story having story views and context views, the story views presenting at least a portion of an event of an authored series of events, the context views presenting context of the visual experience, the context comprising additional detail about at least one of the story views or an element of the visual experience separate from the authored series of events;
receiving view selections during the presentation of the visual experience, the view selections non-passively made by a user during the presentation of the visual experience, the view selections received through one or more orientation sensors of the mobile media-consumption device, the view selections configured to view a part, but not all, of the visual experience, the part of the visual experience comprising at least a portion of one of the story views or a portion of one of the context views from among the story views and the context views of the story of the visual experience;
viewing, based on the received view selections, the part of the visual experience on the display;
determining, during the presentation of the part of the visual experience and based on the view selections, an alteration to a current or future story view or a current or future context view of the visual experience;
providing the alteration, during the presentation of the part of the visual experience, effective to alter the current or future story view or the current or future context view of the visual experience;
determining, based on the received view selections and the viewed part of the visual experience, one or more interest elements of the visual experience, the determined one or more interest elements of the visual experience corresponding to an interest level of the user;
aggregating the determined one or more interest elements of the visual experience into feedback;
transmitting the feedback of the visual experience;
receiving another visual experience, the other visual experience adjusted to correspond to the transmitted feedback; and
presenting the other visual experience on the display of the mobile media-consumption device.

US Pat. No. 10,341,730

VIDEO-ON-DEMAND CONTENT DELIVERY SYSTEM FOR PROVIDING VIDEO-ON-DEMAND SERVICES TO TV SERVICE SUBSCRIBERS

Broadband iTV, Inc., Hon...

1. A video-on-demand content delivery system for providing video-on-demand services to a plurality of TV service subscribers via a closed system, the system comprising:(a) a video-on-demand application server, comprising a first set of one or more computers and a first set of computer-readable memory operatively connected to the first set of one or more computers of the video-on-demand application server, that receives, from a Web-based content management system, video content in a video format used by the video-on-demand content delivery system and receives, from the Web-based content management server, associated video-on-demand application-readable metadata usable in a video-on-demand content menu;
wherein the received video content was uploaded to the Web-based content management system by a content provider device associated with a video content provider via an open online network in a digital video format, along with associated metadata including title information, category information, and subcategory information designated by the video content provider, to specify a respective hierarchical location of a respective title of the video content within a video-on-demand content menu using the respective hierarchically-arranged category information and subcategory information associated with the respective title;
(b) a video server, comprising a second set of one or more computers and a second set of computer-readable memory operatively connected to the one or more computers of the video server, wherein the video server is associated with the video-on-demand application server, and wherein the video server stores the received video content and supplies the video content, upon request, for transmission to a respective set top box operatively connected to respective TV equipment of a respective TV service subscriber of the plurality of TV service subscribers;
(c) a tracking system, comprising a third set of one or more computers and a third set of computer-readable memory operatively connected to the third set of one or more computers of the tracking system, wherein the tracking system tracks data indicative of selections, by the respective TV service subscriber, for viewing of the video content;
(d) a profiling system, comprising a fourth set of one or more computers and a fourth set of computer-readable memory operatively connected to the fourth set of one or more computers of the profiling system, wherein the profiling system is operatively connected to the video-on-demand application server and to the video server, wherein the profiling system is configured to:
(1) consolidate TV service subscriber-related data to profile the respective TV service subscriber;
(2) provide the consolidated TV service subscriber-related data to a targeting system to reformat a user interface for presenting the video-on-demand services to the respective TV service subscriber; and
(3) segregate and maintain as private a portion of TV service subscriber-related data collected specifically for the video content provider; and
(e) the targeting system, comprising a fifth set of one or more computers and a fifth set of computer-readable memory operatively connected to the fifth set of one or more computers of the targeting system, wherein the targeting system is operatively connected to the profiling system, the video-on-demand application server and the video server, wherein the targeting system is configured to receive, by the respective TV service subscriber, the data indicative of selections that is tracked by the tracking system for viewing of the video content, to obtain from the profiling system the consolidated TV service subscriber-related data, and to reformat the user interface to be displayed to the respective TV service subscriber based at least on the consolidated TV service subscriber-related data and the data indicative of selections, by the respective TV service subscriber, for viewing of the video content; wherein the video-on-demand application server is programmed to perform the steps of:
(i) providing the respective set top box operatively connected to the respective TV equipment of the respective TV service subscriber with access to the video-on-demand content menu for navigating through titles, including the respective title of the received video content, by category information and subcategory information in order to locate a particular one of the titles whose associated video content is desired for viewing on the respective TV equipment, wherein the video-on-demand content menu lists the titles using a same hierarchical structure of category information and subcategory information as was designated by the video content provider in the uploaded metadata for the video content and wherein the video-on-demand content menu is formatted by the targeting system; and
(ii) in response to the respective TV service subscriber selecting, via a TV control unit in communication with the respective set top box, the respective title associated with the video content from the hierarchically-arranged category information and subcategory information of the video-on-demand content menu, and the respective set top box transmitting an electronic request for the video content associated with the selected title, retrieving the selected video content from the video server, and transmitting the selected video content to the respective set top box for display on the respective TV equipment of the respective TV service subscriber.

US Pat. No. 10,341,729

METHODS AND SYSTEMS FOR RECOMMENDING MEDIA CONTENT RELATED TO A RECENTLY COMPLETED ACTIVITY

Rovi Guides, Inc., San J...

1. A method for recommending media content, the method comprising:receiving, using control circuitry, a first activity datum, related to a first activity being performed by a given user, the first activity being unrelated to media consumption;
determining, based on the first activity, a second activity datum, related to a second activity from among a plurality of possible activities that occur after the first activity is complete, by:
retrieving, from a database, the plurality of possible activities that occur after the first activity is complete; and
determining the second activity datum from the plurality of possible activities;
cross-referencing the first activity datum with a database listing media assets accessed by users after performing the first activity to identify a media asset associated with the second activity datum;
determining whether the identified media asset is available to the given user following the first activity; and
in response to determining that the identified media asset is available to the given user, generating a recommendation of the identified media asset for display on a display device.
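
The cross-referencing step above, which maps a just-completed non-media activity to assets other users accessed afterwards and filters by availability, can be sketched as follows. The lookup table and helper names are toy assumptions.

from typing import Dict, List, Optional

AFTER_ACTIVITY_ASSETS: Dict[str, List[str]] = {
    "morning-run": ["stretching-video", "sports-news"],
    "cooking": ["sitcom-episode"],
}


def recommend(first_activity: str, available_to_user: set) -> Optional[str]:
    for asset in AFTER_ACTIVITY_ASSETS.get(first_activity, []):
        if asset in available_to_user:
            return asset           # generated as a recommendation for display
    return None


print(recommend("morning-run", {"sports-news", "sitcom-episode"}))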

US Pat. No. 10,341,728

MEDIA SYSTEMS FOR TEMPORALLY AND CONTEXTUALLY RELEVANT RECOMMENDATIONS

SLING MEDIA L.L.C., Fost...

1. A media system comprising:a client device coupled to a network;
a database maintaining usage information associated with a user of the client device, the usage information pertaining to one or more preceding viewings of media content by the user; and
a server coupled to the client device via the network and to the database to:
identify a current viewing context; and
in response to the user selecting a filter graphical user interface element presented within a graphical user interface display on the client device:
determine one or more previously viewed media programs associated with the current viewing context based on the usage information;
identify a plurality of recently available instances of media programs that best match information associated with the one or more previously-viewed media programs from among a plurality of available media programs from one or more content sources coupled to the network that originated after a preceding viewing session for the user;
provide a filtered graphical user interface display on the client device including only the plurality of recently available instances of media programs that best match the information associated with the one or more previously viewed media programs, wherein:
the filtered graphical user interface display is populated with a first number of the plurality of recently available instances of media programs having a preferred program type of a plurality of program types and a second number of the plurality of recently available instances of media programs having a different program type of the plurality of program types; and
the first number relative to the second number corresponds to a relative distribution of program type preferences for the user across the plurality of program types; and
initiate presentation, on the client device, of audiovisual content of a selected media program of the plurality of recently available instances of media programs included on the filtered graphical user interface display that is selected by a user of the client device.
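
The proportional fill described in the claim (the count of each program type in the filtered view mirrors the user's relative preference for that type) is easy to see in a short Python sketch; function and variable names are assumptions.

from typing import Dict, List


def fill_by_preference(candidates: Dict[str, List[str]],
                       preferences: Dict[str, float],
                       slots: int) -> List[str]:
    total = sum(preferences.values())
    result = []
    for ptype, weight in preferences.items():
        count = round(slots * weight / total)
        result.extend(candidates.get(ptype, [])[:count])
    return result[:slots]


candidates = {"movie": ["m1", "m2", "m3"], "series": ["s1", "s2"]}
prefs = {"movie": 0.75, "series": 0.25}          # 3:1 preference ratio
print(fill_by_preference(candidates, prefs, 4))  # three movies, one series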

US Pat. No. 10,341,727

INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND INFORMATION PROCESSING PROGRAM

Rakuten, Inc., Setaguya-...

1. An information processing apparatus comprising:at least one memory configured to store computer program code; and
at least one processor configured to access said at least one memory, read said computer program code, and operate as instructed by said computer program code, said computer program code including:
content identification information obtaining code configured to cause at least one of said at least one processor to obtain content identification information for identifying a content to be presented with respect to a user identifier C;
log retrieval code configured to cause at least one of said at least one processor to retrieve, from a storage that stores operation logs each of which is associated with content identification information for identifying a presented content and includes both a time at which an operation to control presentation of the presented content was performed during presentation of the presented content and details of the operation, operation logs corresponding to the obtained content identification information;
generating code configured to cause at least one of said at least one processor to generate control information for controlling how to present the content indicated by the obtained content identification information, in accordance with a tendency of operation changes that is identified based on the retrieved operation logs, the control information including details of control and a timing of the control, wherein
based on a frequency of appearance of each of a plurality of patterns included in the operation changes, the generating code causes at least one of said at least one processor to select at least one of the plurality of patterns and generate the control information in accordance with the selected at least one pattern, wherein a first pattern of the plurality of patterns includes a first control at a first position and wherein the first control is associated with a first plurality of user identifiers, and wherein the first pattern of the plurality of patterns further includes a second control at a second position associated with a second plurality of user identifiers, and
providing code configured to cause at least one of said at least one processor to provide the generated control information, wherein:
the operation logs include an operation log associated with a user identifier A and an operation log associated with a user identifier B, but do not include an operation log associated with the user identifier C,
the user identifier C is different than the user identifier A, and
the user identifier C is different than the user identifier B.
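
A toy Python sketch of the pattern-frequency idea in this claim: from operation logs left on the same content by other users (A and B, but not C), count how often each operation-change pattern appears and derive control information, including its timing, from the most frequent pattern. The data shapes and the 10-second time bucketing are assumptions for illustration.

from collections import Counter
from typing import List, Tuple

# Each log entry: (time_offset_seconds, operation)
logs_a = [(30, "skip"), (120, "volume-up")]
logs_b = [(32, "skip"), (118, "volume-up")]


def to_pattern(log: List[Tuple[int, str]]) -> Tuple[Tuple[int, str], ...]:
    # Bucket times to the nearest 10 s so similar behaviour maps to one pattern.
    return tuple((round(t, -1), op) for t, op in log)


counts = Counter(to_pattern(log) for log in (logs_a, logs_b))
pattern, _ = counts.most_common(1)[0]
control_info = [{"timing": t, "control": op} for t, op in pattern]
print(control_info)   # e.g. a control at ~30 s and another at ~120 s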

US Pat. No. 10,341,726

VIDEO SENDER AND VIDEO RECEIVER

Toshiba Visual Solutions ...

1. An electronic apparatus comprising:a storage configured to store information including categories of video data in association with which one of a color signal and a frame rate takes precedence over the other;
a receiver configured to receive content information including a category of the video data output by another apparatus;
a controller configured to select one of the color signal and the frame rate of the video data to be received, having a higher priority than the other on the basis of the received content information and the stored information in the storage; and
a transmitter configured to transmit specific information regarding the one of the color signal and the frame rate selected by the controller to the other apparatus via a digital interface compliant with an HDMI standard, the digital interface comprising a channel 0, a channel 1, and a channel 2 of the HDMI standard, the specific information differing from extended display identification data (EDID) of the HDMI standard, wherein
the receiver is configured to receive the video data with 4:2:0 format from the other apparatus via the digital interface when the frame rate has a higher priority than the color signal, the video data with the 4:2:0 format comprising four Y components and two C components for four pixels, by receiving the two C components via only the channel 0 at a same clock period and receiving the four Y components via only both the channel 1 and the channel 2 at the same clock period.

US Pat. No. 10,341,725

METHODS AND SYSTEMS FOR DETERMINING USER ENGAGEMENT BASED ON USER INTERACTIONS DURING DIFFERENT TIME INTERVALS

Rovi Guides, Inc., San J...

1. A method for determining a level of user engagement based on user interactions, the method comprising:receiving a first media asset for consumption;
selecting a first time interval based on a first start time and a first end time;
retrieving a first record of a first plurality of user inputs received during the first time interval, wherein the first record indicates an input type for each of the first plurality of user inputs;
selecting a second time interval, prior to the first time interval, by shifting the first start time and the first end time to determine a second start time and a second end time for the second time interval;
retrieving a second record of a second plurality of user inputs received during the second time interval, wherein the second record indicates an input type for each of the second plurality of user inputs;
determining a first frequency of each input type of the first plurality of user inputs during the first time interval;
determining a second frequency of each input type of the second plurality of user inputs during the second time interval;
generating a first metric that describes the first frequency of each input type of the first plurality of user inputs during the first time interval;
generating a second metric that describes the second frequency of each input type of the second plurality of user inputs during the second time interval;
determining that the first media asset is consumed during the first time interval and a second media asset is consumed during the second time interval;
tagging the first media asset with the first metric and the second media asset with the second metric;
determining the level of user engagement for the first media asset based on the tagging by:
comparing the first frequency of each input type in the first plurality of user inputs with a corresponding second frequency of each input type in the second plurality of user inputs;
calculating respective percent differences between the first frequency of each input type in the first plurality of user inputs and the corresponding second frequency in the second plurality of user inputs; and
determining the level of user engagement for the first media asset based on the respective percent differences;
selecting a portion of media to insert into the first media asset based on the level of user engagement for the first media asset; and
inserting the selected portion of media into the first media asset.
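
The comparison above reduces to per-input-type frequencies for two time intervals, their percent differences, and an engagement level derived from those differences. Below is a compact Python sketch; the specific percent-difference formula and the final aggregation into a single score are illustrative assumptions.

from collections import Counter
from typing import Dict, List


def frequencies(inputs: List[str], seconds: float) -> Dict[str, float]:
    counts = Counter(inputs)
    return {k: v / seconds for k, v in counts.items()}


def percent_differences(first: Dict[str, float], second: Dict[str, float]) -> Dict[str, float]:
    diffs = {}
    for key in set(first) | set(second):
        a, b = first.get(key, 0.0), second.get(key, 0.0)
        base = (a + b) / 2 or 1.0
        diffs[key] = 100.0 * (a - b) / base
    return diffs


first = frequencies(["pause", "rewind", "rewind"], 600)    # current interval
second = frequencies(["pause"], 600)                        # prior interval
diffs = percent_differences(first, second)
engagement = sum(diffs.values()) / len(diffs)               # crude aggregate level
print(diffs, engagement)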

US Pat. No. 10,341,724

VIEWER IDENTIFICATION BASED ON WIRELESS DEVICE PROXIMITY

ARRIS Enterprises LLC, S...

1. A method comprising:measuring the strength of a signal between a wireless client device and one or more receivers, wherein the wireless client device is associated with a first user;
based upon the measured strength of the signal between the wireless client device and the one or more receivers, determining a position of the wireless client device relative to an access point, wherein the position of the wireless client device relative to the access point comprises a distance between the wireless client device and the access point and a direction of the wireless client device with respect to the access point;
determining that the position of the wireless client device relative to the access point is associated with a display device, wherein the association between the position of the wireless client device relative to the access point and the display device is determined based upon an identification of the association within a log entry, wherein the log entry is created in response to a command that is received from the wireless client device, the command comprising an identification of the display device, and wherein the creation of the log entry comprises:
measuring a strength of a signal received from the wireless client device; and
logging an association between the measured strength of the signal received from the wireless device and the display device;
identifying the first user as a potential viewer of the display device.

US Pat. No. 10,341,723

IDENTIFICATION AND INSTANTIATION OF COMMUNITY DRIVEN CONTENT

SONY INTERACTIVE ENTERTAI...

1. An apparatus for providing community driven content, the apparatus comprising:at least one sensor that captures sensor data, wherein the at least one sensor includes a camera that captures an image of an environment;
a network interface;
a memory; and
a processor connected to the sensor, the network interface, and the memory, wherein the processor executes instructions to:
identify a number of users that are present in the environment based on the captured image, wherein the number of users includes at least a first user,
determine an identity of the first user based on the captured image,
record usage history for engagement with one or more types of content by the first user over a period of time,
predict one or more user preferences for the first user based on the usage history recorded for the first user,
determine that a second user is available for interaction with the first user when the sensor data indicates that the second user is within a predetermined vicinity of the first user, wherein the sensor data further indicates an identity of the second user,
determine user preferences for the second user based on the second user identity,
determine a suggested action for continued engagement with the one or more types of content based on the user preferences for the first user and the user preferences of the second user determined to be available,
determine a disengagement time when the first user is predicted to disengage from active participation with the one or more types of content based on the recorded usage history, and
output the suggested action at a predetermined time to at least one connected display device or audio device, wherein the predetermined time is at or before the disengagement time.

US Pat. No. 10,341,721

METHOD AND SYSTEM FOR PROCESSING MULTI-MEDIA CONTENT

MIMIK TECHNOLOGY INC., V...

1. A system for processing of multi-media content for a user device independent of the user device location, the system comprising:a first serving node located at a home, said first serving node configured to receive the multi-media content from a content provider via a network and configured to deliver said multi-media content to a plurality of user devices registered with said first serving node at the home served by said first serving node;
each of said plurality of user devices associated with at least one user selected from a plurality of users registered at the home, at least one of said plurality of user devices associated with more than one of said plurality of users; and
each of said plurality of users associated with a record in a database accessible by said first serving node, said database including content characterization and preferences of each of said plurality of users, said content characterization and preferences of each of said plurality of users depending on a user device each of said plurality of users is using and a type of network that each of said plurality of users is using to access said multi-media content,
wherein said first serving node is configured to uniquely distinguish each of said plurality of users by:
verification of a unique id that is associated with said user device, wherein the unique id is assigned to said user device at a time of registration with said first serving node at the home; and
determination of the type of network through an interface at which a request for said multi-media content was received,
wherein when the multi-media content is received by said first serving node, said multi-media content is reformatted by said first serving node, based on said content characterization and preferences of one of said plurality of users for display on one of said plurality of user devices selected by the one of said plurality of users, and content transmission bit rate of said multi-media content is adjusted based on a speed of said one of said plurality of user devices, wherein said one of said plurality of user devices is selected by said one of said plurality of users,
wherein said first serving node is configured to multiplex the multi-media content directly to the plurality of user devices, and
wherein said one of said plurality of user devices is a mobile phone, and said first serving node transmits a location of said mobile phone to a server within the network, and when said mobile phone is closer to a second serving node at a second home than the home of said first serving node, and said second serving node is connected to the network, said server is configured to instruct said second serving node to deliver the multi-media content to said mobile phone.

US Pat. No. 10,341,720

FAST-START STREAMING AND BUFFERING OF STREAMING CONTENT FOR PERSONAL MEDIA PLAYER

SLING MEDIA LLC, Foster ...

1. An automated process executable by a media player device to play a media stream received via a network, the process comprising:receiving the media stream by the media player device from a remotely-located program source in communication with the media player via the network;
storing the received media stream in a buffer of the media player device;
subsequently retrieving the received media stream from the buffer for playback of the received media stream by the media player device;
while the media stream is being provided for playback, the media player device receiving a user command to change the content of the media stream; and
responsive to the media player device receiving the user command, the media player device flushing the buffer prior to receiving the changed content in the media stream from the remotely-located program source via the network.
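
A bare-bones Python sketch of the buffering behaviour in this claim: received stream chunks are buffered ahead of playback, and a content-change command flushes the buffer so stale chunks from the old content are never played. Class and method names are assumptions.

from collections import deque
from typing import Optional


class PlayerBuffer:
    def __init__(self):
        self.chunks = deque()

    def on_stream_chunk(self, chunk: bytes) -> None:
        self.chunks.append(chunk)                 # store received media stream

    def next_for_playback(self) -> Optional[bytes]:
        return self.chunks.popleft() if self.chunks else None

    def on_change_content(self) -> None:
        self.chunks.clear()                       # flush before new content arrives


buf = PlayerBuffer()
buf.on_stream_chunk(b"old-1")
buf.on_change_content()                           # user changed the stream content
buf.on_stream_chunk(b"new-1")
print(buf.next_for_playback())                    # b"new-1", never b"old-1"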

US Pat. No. 10,341,719

ENTRY ADAPTER FOR COMMUNICATING EXTERNAL SIGNALS TO AN INTERNAL NETWORK AND COMMUNICATING CLIENT SIGNALS IN THE CLIENT NETWORK

PPC BROADBAND, INC., Eas...

1. An entry adapter for allowing external signals to be conducted between an external network and an internal client network, allowing client signals to be conducted in the internal client network, and blocking the client signals from being conducted from the internal client network to the external network, the entry adapter comprising:a first port configured to allow external signals to be received by the entry adapter;
a second port configured to be connected to a subscriber device, so as to allow the external signals to be conducted to the subscriber device;
a plurality of third ports each configured to allow the client signals to be conducted to the internal client network, wherein the external signals comprise cable television (CATV) signals having a frequency between 5 MHz and 45 MHz or between 54 MHz and 1002 MHz, and the client signals comprise Multimedia over Coaxial Alliance (MoCA) signals having a frequency between 1125 MHz and 1525 MHz;
a first splitter configured to receive the external signals from the first port;
a frequency band rejection device configured to receive the external signals from the first splitter and to allow the external signals to proceed from the first splitter to the plurality of third ports, and to block the client signals from proceeding from the plurality of third ports, through the frequency band rejection device, and to the first port;
a second frequency band rejection device electrically connected to the first port, wherein the second frequency band rejection device is configured to block the client signals from being communicated to the external network;
a second splitter configured to receive the external signals from the frequency band rejection device, wherein the second splitter is configured to split the external signals from the frequency band rejection device and distribute the external signals received from the frequency band rejection device to the plurality of third ports; and
a passive signal path extending from the first splitter to the second port, wherein the passive signal path does not extend through any splitters between the first splitter and the second port, such that a strength of the external signal is not diminished between the first port and the second port to an extent that would interfere with communication between the external network and the subscriber device, and wherein the passive signal path does not include any powered signal conditioning components; and
an active signal path extending from the first splitter to the frequency band rejection device, wherein the active signal path includes one or more powered signal conditioning components.

US Pat. No. 10,341,718

PASSIVE MULTI-PORT ENTRY ADAPTER AND METHOD FOR PRESERVING DOWNSTREAM CATV SIGNAL STRENGTH WITHIN IN-HOME NETWORK

PPC BROADBAND, INC., Eas...

1. A passive cable television (CATV)/Multimedia over Coaxial Alliance (MoCA) signal distribution apparatus, comprising:an input port for receiving CATV input signals;
a radiofrequency (RF) output port for outputting CATV signals;
a Gateway port for connection to a Gateway device, for bidirectionally communicating CATV and MoCA signals;
a plurality of MoCA signal ports for connection to a plurality of MoCA devices, respectively, each of the plurality of MoCA signal ports for bidirectionally receiving and outputting MoCA signals;
a first splitter connected to the input port, and configured to provide first and second CATV output signals, the second CATV output signal being transmitted to the RF output port;
a diplex filter including a lowpass filter section configured to receive the first CATV output signal from the first splitter, and a highpass filter section configured to bidirectionally receive MoCA signals, while isolating the MoCA signals from both the first CATV output signal and the input port, thereby preventing the MoCA signals and the first CATV output signal from being combined at the input port; and
a second splitter including a first bidirectional MoCA signal line connected to the highpass filter section of the diplex filter, and a plurality of other bidirectional signal lines connected individually to the plurality of MoCA signal ports, respectively,
wherein the apparatus permits a plurality of MoCA clients to independently communicate the MoCA signals with the Gateway device.

US Pat. No. 10,341,717

SYSTEMS AND METHODS FOR FACILITATING ACCESS TO CONTENT ASSOCIATED WITH A MEDIA CONTENT SESSION BASED ON A LOCATION OF A USER

Verizon Patent and Licens...

1. A method comprising:determining, by a content delivery system, that a user profile of a user is logged in to a first access device during a media content session associated with the first access device in which the first access device presents media content to the user, the first access device provided at a location at a user premises;
detecting, by the content delivery system, that the user moves outside a vicinity of the first access device while the user profile is logged in to the first access device during the media content session;
identifying, by the content delivery system in response to the user moving outside the vicinity of the first access device, a second access device associated with the user and that is within a vicinity of the user;
detecting, by the content delivery system and while the user is outside the vicinity of the first access device, an occurrence of an event that occurs while the media content is presented by the first access device during the media content session, the event represented by a video highlight of a sporting event;
automatically providing, by the content delivery system in response to the identifying of the second access device and the detecting of the occurrence of the event, session management content corresponding to the media content session associated with the first access device for presentation by the second access device while the user is outside the vicinity of the first access device, the session management content comprising a notification that includes information regarding the event and an option for the user to experience the video highlight of the sporting event; and
preventing, by the content delivery system while the user profile of the user is logged in to the first access device and the user is outside the vicinity of the first access device and while the media content is presented by the first access device at the location at the user premises, the notification including the option for the user to experience the video highlight of the sporting event from being presented by way of the first access device to an additional user that is within the vicinity of the first access device and that is presented with the media content by way of the first access device while the user profile of the user is logged in to the first access device.

US Pat. No. 10,341,716

LIVE INTERACTION SYSTEM, INFORMATION SENDING METHOD, INFORMATION RECEIVING METHOD AND APPARATUS

TENCENT TECHNOLOGY (SHENZ...

1. An information sending method, applied in an interaction platform installed in a server, the method comprising:receiving from a first client of at least one client, an interaction instruction for instructing the interaction platform to add an interaction prop into a live stream;
intercepting an image frame from the current live stream, after receiving the interaction instruction;
generating instant feedback information according to the image frame and the interaction prop indicated by the interaction instruction; and
sending the instant feedback information to the first client which sends the interaction instruction;
receiving a prop acquisition instruction from the first client, the prop acquisition instruction containing a type and number of the interaction prop;
transferring a number of resources corresponding to the type and number of the interaction prop from an account corresponding to the first client to an account corresponding to the interaction platform, after receiving the prop acquisition instruction;
detecting whether the number of the resources transferred from the account corresponding to the first client is consistent with the type and number of the interaction prop in the prop acquisition request, and if yes, adding the corresponding type and number of interaction prop into an interaction prop library corresponding to the first client and
sending an exchange success indication to the first client, the exchange success indication containing the type and number of the interaction prop.
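
The prop-acquisition steps at the end of the claim (transfer resources from the client's account, verify the transferred amount is consistent with the requested type and number, then credit the client's prop library and report success) are summarised in the Python sketch below; the price table, in-memory accounts, and return values are assumptions.

PRICES = {"rocket": 50, "flower": 1}


def acquire_prop(client_balance: int, prop_type: str, number: int):
    cost = PRICES[prop_type] * number
    if client_balance < cost:
        return client_balance, None, "exchange-failed"
    client_balance -= cost                          # transfer to platform account
    transferred_ok = True                           # consistency check with request
    library_entry = {"type": prop_type, "number": number} if transferred_ok else None
    return client_balance, library_entry, "exchange-success"


print(acquire_prop(120, "rocket", 2))               # (20, {...}, 'exchange-success')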

US Pat. No. 10,341,715

EXTENSIONS TO TRIGGER PARAMETERS TABLE FOR INTERACTIVE TELEVISION

Saturn Licensing LLC, Ne...

1. A method of a reception apparatus for processing triggers, comprising:receiving, by a receiver of the reception apparatus, content and a first trigger included in closed caption data of the content, the first trigger identifying a location of a table;
outputting the received content for display;
extracting the location of the table from the closed caption data of the content;
acquiring, by circuitry of the reception apparatus, the table from the extracted location, the table including a plurality of actions to be performed for an application which is configured to be executed in synchronization with the content;
storing the table in a memory of the reception apparatus;
after storing the table in the memory of the reception apparatus, receiving a second trigger included in the closed caption data of the content, the second trigger including a reference to an entry in the stored table indicating at least one of the plurality of actions included in the stored table; and
performing, by the circuitry, the one of the plurality of actions.

US Pat. No. 10,341,714

SYNCHRONIZATION OF MULTIPLE AUDIO ASSETS AND VIDEO DATA

TIME WARNER CABLE ENTERPR...

1. A method comprising the steps of:obtaining, at a personal media device in a premises, from a content source device in said premises, a secondary audio asset; and
providing, from said personal media device to said content source device, data indicative of a time delay causing said content source device to delay display of digital video data and playback of a primary soundtrack synchronizing a playback of said at least one secondary audio asset by said personal media device with said display of said digital video data and said playback of said primary soundtrack.

US Pat. No. 10,341,713

METHODS AND SYSTEMS FOR PROVIDING CONTENT

Comcast Cable Communicati...

1. A method comprising:receiving, by a content management device and from a user device, a first media control request while a first content item is being outputted for display, wherein the first media control request comprises an identifier that identifies the first content item;
determining, by the content management device, a time point of the first content item associated with the first media control request;
determining, by the content management device, a second content item based on:
the time point of the first content item associated with the first media control request,
the first content item, and
a requested viewing speed associated with the first media control request; and
causing, by the content management device, simultaneous display of the first content item and the second content item by the user device.

US Pat. No. 10,341,712

SYSTEMS AND METHODS FOR AUDIO TRACK SELECTION IN VIDEO EDITING

GoPro, Inc., San Mateo, ...

1. A system that automatically edits video clips to synchronize accompaniment by different musical tracks, the system comprising:one or more non-transitory storage media storing video content and first instructions defining a first edit of the video content, the first instructions indicating specific portions of the video content included in the first edit of the video content and an order of the specific portions of the video content within the first edit of the video content, the specific portions of the video content including a first portion and a second portion, the second portion following the first portion in the first edit of the video content, wherein:
the first edit of the video content includes one or more occurrences of video events, the individual occurrences of the video events corresponding to different moments within the first edit of the video content;
the first portion of the video content includes a first video event occurring at a first moment within the first edit of the video content;
the second portion of the video content includes a second video event occurring at a second moment within the first edit of the video content; and
the first edit of the video content is synchronized with a musical track, the musical track providing an accompaniment for the first edit of the video content, the musical track characterized by audio event markers including a first audio event marker occurring at a third moment within the musical track and a second audio event marker occurring at a fourth moment within the musical track, the fourth moment occurring later in the musical track than the third moment, the individual audio event markers corresponding to different moments within the musical track,
the first edit of the video content is synchronized with the musical track such that the first moment corresponding to the first video event is aligned to the third moment corresponding to the first audio event marker and the second moment corresponding to the second video event is aligned to the fourth moment corresponding to the second audio event marker; and
a boundary between the first portion of the video content and the second portion of the video content in the first edit of the video content is located at a given audio event of the musical track at or near a mid-point between the first video event and the second video event; and
one or more physical processors configured by machine readable instructions to:
determine a change to the musical track; and
determine second instructions defining a second edit of the video content that is synchronized with the changed musical track so that one or more moments within the second edit of the video content corresponding to one or more of the occurrences of the video events are aligned with one or more moments within the changed musical track corresponding to one or more of the audio event markers, wherein determination of the second instructions includes:
identifying the mid-point between the first video event and the second video event;
identifying an audio event of the changed musical track that is nearest to the mid-point; and
shifting the boundary between the first portion of the video content and the second portion of the video content in the second edit of the video content to be located at the audio event of the changed musical track that is nearest to the mid-point.
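
The re-synchronisation step at the end of the claim is numeric: when the musical track changes, the boundary between the two video portions is moved to the audio event of the changed track nearest the mid-point between the two video events. A short Python sketch with assumed names and second-based timestamps:

from typing import List


def shifted_boundary(first_event: float, second_event: float,
                     changed_track_audio_events: List[float]) -> float:
    mid_point = (first_event + second_event) / 2.0
    return min(changed_track_audio_events, key=lambda t: abs(t - mid_point))


# Video events at 10 s and 20 s; the changed track has beats at 12, 14.8 and 19 s.
print(shifted_boundary(10.0, 20.0, [12.0, 14.8, 19.0]))   # boundary moves to 14.8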

US Pat. No. 10,341,711

REMOTE CONTROLLER DEVICE WITH ELECTRONIC PROGRAMMING GUIDE AND VIDEO DISPLAY

Saturn Licensing LLC, Ne...

1. A remote controller device operable to control a television, the remote controller device comprising:a remote controller display screen, wherein the remote controller display screen is a touch screen; and
a processor in communication with the remote controller display screen and configured to:
concurrently display on the remote controller display screen a first item of video content in real time, wherein an area of the remote controller display screen consisting of the display of the first item of video content is a video content display area of the remote controller display screen, metadata of the first item of video content, and available programming content information of other items of video content that can be displayed on the television and are different from the first item of video content, wherein the first item of video content is displayed while the television is concurrently displaying in real time the first item of video content,
receive a first user command for preliminarily selecting a second item of video content that is different from the first item of video content being displayed on the television, the first user command comprising the user touching only a second item area of the remote controller display screen, wherein the second item area is different from the video content display area and corresponds to the second item of video content,
receive the second item of video content in response to the first user command, in response to the first user command and while the television continues to display the first item of video content, replace the first item of video content displayed on the remote controller display screen with the second item of video content, wherein the second item of video content is displayed concurrently with the available programming content of the video content that can be displayed on the television and the metadata of the first item of video content,
after replacing the first item of video content displayed on the video content display area with the second item of video content in response to the first user command selecting the second item area, receive a second user command for finally selecting the second item of video content, the second user command comprising touching by the user of only the second item of video content of the video content display area of the remote controller display screen while the second item of video content is displayed on the video content display area of the remote controller display screen, and
in response to the second user command transmit to the television a selection command that causes the television to display the second item of video content on the television.

US Pat. No. 10,341,710

PROGRAM RECORDING METHOD AND DEVICE, AND SET TOP BOX

ZTE Corporation, Shenzhe...

1. A program recording method, comprising:acquiring, by a set top box, first Electronic Program Guide (EPG) information of a Digital Video Broadcast (DVB) and second EPG information of an Over-The-Top (OTT) service;
selecting a program to be recorded from a program list integrated with the first EPG information and the second EPG information, and determining a program type of the program to be recorded; and
recording the program to be recorded according to the determined program type,
wherein recording the program to be recorded according to the determined program type comprises:
when it is determined that the program to be recorded pertains to a DVB television program, recording the program to be recorded by using a Local Personal Video Recorder (LPVR) manner;
when it is determined that the program to be recorded pertains to an OTT network video television program, recording the program to be recorded by using a Network Personal Video Recorder (NPVR) manner; and
when it is determined that the program to be recorded pertains to a DVB television program and an OTT network video television program simultaneously, recording the program to be recorded by using an NPVR manner.
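
The recording decision in the claim is a simple three-way rule: DVB-only programs are recorded locally (LPVR), while OTT programs and programs carried on both services are recorded in the network (NPVR). A direct Python sketch, with assumed function and label names:

def recording_manner(is_dvb: bool, is_ott: bool) -> str:
    if is_dvb and is_ott:
        return "NPVR"          # available on both: network recording is used
    if is_ott:
        return "NPVR"
    if is_dvb:
        return "LPVR"
    raise ValueError("program type could not be determined")


print(recording_manner(is_dvb=True, is_ott=False))   # LPVR
print(recording_manner(is_dvb=True, is_ott=True))    # NPVR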

US Pat. No. 10,341,709

ELECTRONIC DISPLAY SYSTEMS CONNECTED TO VEHICLES AND VEHICLE-BASED SYSTEMS

Allstate Insurance Compan...

1. An electronic display system comprising:one or more processors;
a network interface configured to transmit content to one or more digital roadside displays or other digital displays; and
at least one memory storing computer-readable instructions that, when executed by the one or more processors, cause the electronic display system to:
determine that a first vehicle is in proximity of or on-route to the one or more digital roadside displays;
receive driving pattern data for the first vehicle including driving behaviors and driving performance metrics;
determine first digital content for the one or more digital roadside displays or other digital displays, based on the received driving pattern data for the first vehicle;
determine a beginning time and an ending time for displaying the first digital content; and
transmit the first digital content and the beginning time and ending time via the network interface to the one or more digital roadside displays or other digital displays.

US Pat. No. 10,341,708

METHODS AND APPARATUS THAT FACILITATE CONTROLLING MULTIPLE DEVICES

Time Warner Cable Enterpr...

1. A method of using an alarm system control panel located at a customer premise, the method comprising:detecting, at said alarm system control panel, user selection of a first user selectable control option, wherein said first user selectable control option is displayed on a display screen of said alarm system control panel, said first user selectable control option being one of an alarm activation control option or an alarm de-activation control option;
requesting, in response to said detecting user selection of said first user selectable control option, video recording information corresponding to a recording device; and
presenting at said alarm system control panel, following receipt of said video recording information, a recording device control option to a user of said alarm system control panel.

US Pat. No. 10,341,707

METHOD AND SYSTEM FOR USING A SECOND SCREEN DEVICE FOR INTERACTING WITH A SET TOP BOX TO ENHANCE A USER EXPERIENCE

The DIRECTV Group, Inc., ...

1. A method comprising:saving a screen image signal of a display of a second screen device;
communicating the screen image signal to a set top box;
displaying the screen image signal on the display associated with the set top box; and
wherein displaying the screen image signal comprises displaying the screen image signal through a hypertext transfer protocol engine of the set top box.

US Pat. No. 10,341,706

DIGITAL OVERLAY OFFERS ON CONNECTED MEDIA DEVICES

The Nielsen Company (US),...

1. A method comprising:receiving, at data processing hardware:
a broadcast commercial from a broadcast stream to be streamed to a client media device; and
available offers from offer providers of an offer distribution network;
identifying, by automatic content recognition of the data processing hardware, commercial metadata related to the broadcast commercial;
determining, by the data processing hardware, whether any of the available offers comprise a respective direct offer, the respective direct offer having offer metadata matching commercial metadata of the broadcast commercial, the commercial metadata defining a brand or an entity associated with the broadcast commercial;
when any of the available offers comprise the respective direct offer, automatically delivering, by the data processing hardware, the respective direct offer to a display of the client media device as an overlay or a pop-up advertisement during streaming of the broadcast commercial; and
when the available offers fail to comprise the respective direct offer:
determining, by the data processing hardware, that at least one available offer is an indirect offer, the indirect offer having offer metadata indicating a commercial relationship with the brand or the entity associated with the broadcast commercial; and
automatically delivering, by the data processing hardware, the indirect offer to the display of the client media device as the overlay or the pop-up advertisement during streaming of the broadcast commercial.
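
A minimal sketch of the direct-versus-indirect offer selection described above, assuming a hypothetical Offer record; the matching of offer metadata to the commercial's brand is reduced here to string and set comparisons:

```python
from dataclasses import dataclass, field

@dataclass
class Offer:
    brand: str                                          # brand/entity named in the offer metadata
    related_brands: set = field(default_factory=set)    # brands with a known commercial relationship
    payload: str = ""

def pick_overlay_offer(commercial_brand: str, offers: list):
    """Return a direct offer matching the commercial's brand, else an
    indirect offer whose metadata indicates a commercial relationship
    with that brand, else None (illustrative sketch)."""
    for offer in offers:            # direct offers take priority
        if offer.brand == commercial_brand:
            return offer
    for offer in offers:            # fall back to an indirect offer
        if commercial_brand in offer.related_brands:
            return offer
    return None
```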

US Pat. No. 10,341,705

DIGITAL OVERLAY OFFERS ON CONNECTED MEDIA DEVICES

The Nielsen Company (US),...

1. A communication device comprising:a memory storing instructions and a plurality of offers in a virtual wallet;
private area networking (PAN) circuitry;
a display device; and
a processor coupled to the display device and configured to execute the instructions to:
pair the PAN circuitry with a television configured to enable interaction by the communication device with offers delivered in conjunction with commercials streamed to the television;
detect a first offer delivered over an offer distribution network to the communication device and in conjunction with a first broadcast commercial, the first offer including a selectable indicia, wherein an offer computing system of the offer distribution network is configured to use automatic content recognition to identify commercial metadata from the first broadcast commercial and responsively deliver the first offer to the communication device; and
deliver the first offer to the virtual wallet in response to detecting selection of the selectable indicia;
associate a tracker of the offer computing system with the first offer in the virtual wallet, the tracker managed by a third party and configured to identify redemption of the first offer at a vendor point-of-sale (POS) system and a location of the redemption of the first offer;
detect a selection of the first offer within the virtual wallet;
communicate with the vendor POS system configured to redeem the first offer in response to detecting the selection of the first offer;
update the tracker based on the communication with the vendor POS system and the location of the redemption of the first offer; and
detect a targeted offer based on the updated tracker at a time when the television receives a second broadcast commercial, the targeted offer corresponding to broadcast content of the second broadcast commercial.

US Pat. No. 10,341,704

MULTICAST-BASED CONTENT TRANSMITTING SYSTEM AND METHOD, AND DEVICE AND METHOD FOR ESTIMATING HIGH-SPEED MOVEMENT

SK PLANET CO., LTD., Seo...

1. A user terminal comprising:a memory;
a processor configured to execute instructions stored in the memory and to:
receive an N×M multicast stream channel list having the N×M multicast stream channels configured of N multicast stream channels having transmission start times arranged at time intervals of T and M multicast stream channels of different transmission rates configured in each of the N multicast stream channels;
transmit a channel selection signal for selecting a multicast stream channel from the N×M multicast stream channel list;
receive contents through a multicast stream channel corresponding to the channel selection signal;
select an available multicast stream channel from the N×M multicast stream channel list;
confirm loss of packets in the transmitted contents;
compare a number of lost packets of the transmitted contents with a reference value; and
restore the lost packets using a Forward Error Correction method.
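
A rough sketch of how a terminal might pick a channel from the claimed N×M layout (N staggered start times at interval T, M rate variants per start time); all parameter names are assumptions, and packet-loss confirmation and FEC recovery are omitted:

```python
def select_channel(now: float, interval_t: float, n: int,
                   rates: list, bandwidth: float) -> tuple:
    """Return (stream_index, rate): the stream whose next start point is
    soonest after 'now', and the highest rate variant that fits the
    available bandwidth (illustrative sketch)."""
    # Stream i starts at offsets i*T, i*T + N*T, i*T + 2*N*T, ...
    waits = [(i * interval_t - now) % (n * interval_t) for i in range(n)]
    stream = min(range(n), key=lambda i: waits[i])
    usable = [r for r in rates if r <= bandwidth]
    rate = max(usable) if usable else min(rates)
    return stream, rate

# Example: 4 staggered streams, 2 s apart, three rate variants.
print(select_channel(now=3.5, interval_t=2.0, n=4,
                     rates=[500, 1500, 3000], bandwidth=2000))
```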

US Pat. No. 10,341,703

INTEGRATED AUDIENCE INTERACTION MEASUREMENTS FOR VIDEOS

HITACHI, LTD., Tokyo (JP...

1. A method, comprising:determining, for a video program, viewing patterns of a plurality of viewers of the video program;
for a threshold of the plurality of viewers having changing viewing patterns for the video program during broadcast of the video program:
determining if the threshold of the plurality of viewers having the changing viewing patterns are only associated with a local station broadcasting to a local geographical region;
responsive to the threshold of the plurality of viewers having the changing viewing patterns being only associated with a local geographical region:
transmitting, to a station managing the local geographical region, instructions regarding at least one of video program outage and bit rate throughput issue for the video program.

US Pat. No. 10,341,702

METHOD AND SYSTEM FOR PROVIDING DIFFERENT CATEGORIES OF PROGRAMMING DATA TO A USER DEVICE FROM HEAD END SYSTEMS

The DIRECTV Group, Inc., ...

1. A method comprising:receiving data for linear content comprising first metadata comprising a linear content type category, channel data, schedule data, program data, a content category and a first thumbnail image from a traffic and scheduling system and second metadata for a non-linear content type comprising a second thumbnail image and different than the linear content type from a content management system at a listing service module, said non-linear content being content available on demand at a request of a user and said linear content broadcasted at a predetermined time to a plurality of users;
receiving billing data from a first module for the content having the linear content type and the non-linear content type at the listing service module;
combining the first metadata and at least a first portion of the billing data in the listing service module to form first combined programming data and combining the second metadata and at least a second portion of the billing data to form second combined programming data;
communicating the first combined programming data for linear content to a program guide web service;
communicating the second combined programming data for non-linear content to a non-linear program guide web service;
distributing the first combined programming data to a linear cache from the program guide web service through a distribution application based on the linear content type;
distributing the second combined programming data to a non-linear cache from the non-linear program guide web service through the distribution application based on the non-linear content type, said non-linear cache separate from the linear cache and is separately accessible from a user device;
storing the first combined programming data in the linear cache accessible from a user device and storing the second combined programming data in the non-linear cache;
accessing the linear cache from the user device;
thereafter, displaying, on a display associated with the user device, a linear content display screen comprising the first thumbnail image and in response thereto communicating the first combined programming data from the linear cache;
accessing the non-linear cache from the user device; and
thereafter, displaying, on a display associated with the user device, a non-linear content display screen comprising the second thumbnail image and in response thereto communicating the second combined programming data from the non-linear cache.

US Pat. No. 10,341,701

CLUSTERING AND ADJUDICATION TO DETERMINE A RECOMMENDATION OF MULTIMEDIA CONTENT

Edge2020 LLC, Herndon, V...

1. An apparatus, comprising:a system controller to retrieve data from at least one database, the data including information associated with at least one of subscribers, multimedia content, and subscriber interaction with customer premises equipment, and transmit, to a customer premises equipment of a subscriber, a recommendation of multimedia content; and
a system processor to formulate an input dataset from the retrieved data, perform nonlinear manifold clustering on the input dataset to formulate a multi-dimensional subscriber cluster and a multi-dimensional multimedia content cluster having similarities between elements therein, determine a nonlinear manifold metric distance between multi-dimensional vector elements of the formulated multi-dimensional subscriber cluster and the multi-dimensional multimedia content cluster, and determine the recommendation of multimedia content based on the nonlinear manifold metric distance between multi-dimensional vector elements of the formulated multi-dimensional subscriber cluster and the multi-dimensional multimedia content cluster crossing a threshold.

US Pat. No. 10,341,700

DYNAMIC BINDING FOR USE IN CONTENT DISTRIBUTION

Level 3 Communications, L...

1. A system comprising:an electronic dynamic binding system comprising:
at least one or more processors coupled to one or more memory systems;
a traffic monitor stored on the one or more memory systems and executable by the at least one or more processors, wherein the traffic monitor is configured to monitor network traffic associated with content from a content provider mapped to a number of content servers in a content delivery network serving the content;
a metric determination module stored on the one or more memory systems and executable by the at least one or more processors, wherein the metric determination module is configured to compute at least one metric associated with the network traffic, wherein the at least one metric comprises content popularity;
a threshold adjustment module stored on the one or more memory systems and executable by the at least one or more processors, wherein the threshold adjustment module is configured to:
(i) adjust the number of content servers mapped to the content provider; and
(ii) provide hysteresis when adjusting the number of content servers;
a binding map stored on the one or more memory systems, wherein the binding map identifies:
(i) the content servers mapped to the content provider; and
(ii) a maximum set of content servers that can be bound to the content provider; and
a remapping module stored on the one or more memory systems and executable by the at least one or more processors, wherein the remapping module is configured to:
when the at least one metric associated with the network traffic is greater than a threshold:
(i) remap other content servers of the content delivery network to the content provider based on the at least one metric associated with the network traffic; and
(ii) gradually allow content requests to be received by the other content servers.
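
One way to picture the claimed threshold adjustment with hysteresis is as two separate popularity thresholds, one for growing and one for shrinking the binding, so the mapped server set does not oscillate around a single cut-off. A sketch under assumed threshold values and names:

```python
class BindingMap:
    """Tracks the content servers bound to one content provider
    (illustrative sketch of threshold adjustment with hysteresis)."""

    def __init__(self, bound: list, all_servers: list, max_servers: int):
        self.bound = list(bound)            # servers currently mapped
        self.all_servers = list(all_servers)
        self.max_servers = max_servers      # maximum set that can be bound

    def adjust(self, popularity: float,
               grow_at: float = 0.8, shrink_at: float = 0.5) -> None:
        # grow_at > shrink_at forms the hysteresis band: popularity must fall
        # well below the growth threshold before servers are released.
        if popularity > grow_at and len(self.bound) < self.max_servers:
            spare = [s for s in self.all_servers if s not in self.bound]
            if spare:
                self.bound.append(spare[0])     # remap one more server
        elif popularity < shrink_at and len(self.bound) > 1:
            self.bound.pop()                    # release one server
```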

US Pat. No. 10,341,699

SYSTEM FOR ADDRESSING ON-DEMAND TV PROGRAM CONTENT ON TV SERVICES PLATFORM OF A DIGITAL TV SERVICES PROVIDER

Broadband iTV, Inc., Hon...

1. A method for receiving, via a closed system, video content to be viewed on a subscriber device, having a tuner, associated with a subscriber of a video-on-demand system using a hierarchically arranged interactive electronic program guide, comprising:(a) transmitting, from the subscriber device to a television service platform, a request by the subscriber to log in to the video-on-demand system;
(b) generating, by the subscriber device in response to a first request by the subscriber after logging in to the video-on-demand system, the interactive electronic program guide to be presented to the subscriber as a templatized video-on-demand display on a display for the subscriber device to access video-on-demand programs previously stored on a video server associated with a television service provider in a digital video format as part of the video-on-demand system, wherein the subscriber device has access to a plurality of different display templates for use with the interactive electronic program guide, and wherein the interactive electronic program guide enables the subscriber using the subscriber device to navigate in a drill-down manner through titles by category information in order to locate a first of the titles whose associated video content is desired for viewing on the subscriber device using the same category information in metadata associated with the video content,
wherein the navigating through the titles in a drill-down manner comprises navigating from a first level of a hierarchical structure of the interactive electronic program guide to a second level of the hierarchical structure of the interactive electronic program guide to locate a first title;
(c) tracking, at the subscriber device, navigation data related to a navigation path taken by the subscriber in navigating through the interactive electronic program guide in the drill-down manner to select the video-on-demand programs for viewing, including the first title and the category information associated with the first title;
(d) providing, by the subscriber device to a profiling system, the tracked navigation data for the subscriber for preparing subscriber profile data, wherein the subscriber profile data is to be provided to a targeting system to generate feedback data as to subscriber preferences based at least on the subscriber profile data; and
(e) generating, by the subscriber device in response to a second request by the subscriber, an updated interactive electronic program guide to be displayed to the subscriber on the templatized video-on-demand display on the display of the subscriber device, wherein the updated interactive electronic program guide is prepared based on the feedback data from the targeting system, and wherein the generation of the updated interactive electronic program guide comprises obtaining, via an application program interface of the television service provider, the titles and the category information associated with the titles for the video-on-demand programs to populate the updated interactive electronic program guide;
wherein the templatized video-on-demand display has been generated in a plurality of layers, comprising:
(a) a first layer comprising a background screen to provide at least one of a basic color, logo, or graphical theme to display;
(b) a second layer comprising a particular display template from the plurality of different display templates layered on the background screen, wherein the particular display template comprises one or more reserved areas that are reserved for displaying content provided by a different layer of the plurality of layers; and
(c) a third layer comprising reserved area content generated using the received video content, the associated metadata, and an associated plurality of images to be displayed in the one or more reserved areas in the particular display template as at least one of text, an image, a navigation link, and a button;
wherein a first template of the plurality of different display templates is used as the particular display template for the templatized video-on-demand display for displaying the first level of the hierarchical structure of the interactive electronic program guide and wherein a second template of the plurality of different display templates is used as the particular display template for the templatized video-on-demand display for displaying the second level of the hierarchical structure of the interactive electronic program guide.

US Pat. No. 10,341,698

SYSTEMS AND METHODS FOR DISTRIBUTING CONTENT USING A COMMON SET OF ENCRYPTION KEYS

DIVX, LLC, San Diego, CA...

1. A content distribution system, comprising:at least one content distribution server; and
a source encoder comprising a processor and a memory containing an encoding application, wherein the encoding application configures the processor to encode source content as a plurality of alternative streams of protected video each having a different bitrate by performing the steps of:
identifying a plurality of sections for the source content;
identifying a common set of keys for encrypting corresponding portions of the source content across a plurality of different encodings;
for each particular section of the plurality of sections:
encoding the particular section to produce a plurality of encodings of the particular section for each of the plurality of alternative streams of protected video, wherein the plurality of encodings of the particular section comprises encodings at a plurality of different bitrates;
partially encrypting a portion of at least one encoded frame from the plurality of encodings of the particular section using a particular key of the common set of keys so that each partially encrypted frame contains encrypted portions and unencrypted portions of data;
storing encryption information that identifies the partially encrypted portion of the at least one encoded frame; and
storing the encrypted plurality of encodings of the particular section in a set of one or more container files on a set of servers that form part of a content distribution system;
storing a container index that provides references to a location for each section of the plurality of sections in at least one of the set of container files; and
storing a reference to the common set of keys on the at least one content distribution server.
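
The key point of the claim is that the same key set is reused for corresponding sections of every bitrate variant, and that only part of a frame is encrypted. The sketch below substitutes a keyed keystream for a real cipher and invents the bookkeeping; offsets, sizes, and names are assumptions:

```python
import os
from hashlib import blake2b

def partially_encrypt(frame: bytes, key: bytes, start: int, length: int):
    """XOR-encrypt only frame[start:start+length]; return the partially
    encrypted frame plus the encryption information identifying the
    encrypted portion (illustrative sketch, not a production cipher)."""
    stream = blake2b(start.to_bytes(4, "big"), key=key,
                     digest_size=length).digest()
    body = bytearray(frame)
    for i in range(length):
        body[start + i] ^= stream[i]
    return bytes(body), {"offset": start, "length": length}

# One common key per section, reused across all alternative bitrates, so a
# player can switch streams mid-playback without fetching new keys.
common_keys = [os.urandom(32) for _ in range(4)]
encodings = {rate: [os.urandom(64) for _ in range(4)]   # stand-in encoded sections
             for rate in (500, 1500, 3000)}
for section, key in enumerate(common_keys):
    for rate, frames in encodings.items():
        frames[section], info = partially_encrypt(frames[section], key,
                                                  start=8, length=16)
```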

US Pat. No. 10,341,697

METHOD AND SYSTEM FOR REMOTELY CONTROLLING CONSUMER ELECTRONIC DEVICES

Gracenote, Inc., Emeryvi...

1. A method comprising:accessing, at a media system, a sequence of media content;
accessing, at the media system, replacement media content, wherein the replacement media content is selected based on at least a portion of the sequence of media content; and
causing presentation of a displayed sequence of media content that includes at least a portion of the sequence of media content and at least a portion of the replacement media content, the presentation of the displayed sequence of media content including:
interrupting the at least the portion of the replacement media content by causing presentation of a further sequence of media content in response to a first request received during the presentation of the at least the portion of the replacement media content, the further sequence of media content being presented without presenting the replacement media content;
if a second request is received within a predetermined time of the first request, resuming presentation of the replacement media content in response to the second request; and
if the second request is received outside the predetermined time of the first request, presenting the sequence of media content responsive to the second request,
wherein the predetermined time is related to a duration of the replacement media content.

US Pat. No. 10,341,696

SYSTEM AND METHOD FOR SEAMLESS SWITCHING THROUGH BUFFERING

Visible World, LLC., Phi...

1. A method comprising:receiving a first data stream comprising at least a first segment, wherein the first segment comprises a first starting point and a first end point;
receiving a second data stream comprising at least a second segment, wherein the second segment comprises a second starting point and a second end point;
encoding the first segment for transmission at a first data rate;
encoding the second segment;
determining a switch gap size, wherein the switch gap size comprises at least a predetermined amount of time needed to switch from transmitting the first segment of the first data stream to transmitting the second segment of the second data stream;
determining a second data rate such that a difference between a first transmit time of the first segment at the first data rate and a second transmit time of the first segment at the second data rate approximates the switch gap size; and
multiplexing the encoded first segment for transmission at the second data rate and the encoded second segment such that the second starting point of the encoded second segment is synchronized with the first end point of the encoded first segment.
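
The second data rate follows from the transmit times: for a first segment of S bits, T1 = S/R1 and T2 = S/R2, and the claim asks for T1 - T2 to approximate the switch gap G, which gives R2 = S / (S/R1 - G). A small sketch, assuming the gap is shorter than the original transmit time:

```python
def second_data_rate(segment_bits: float, first_rate: float,
                     switch_gap: float) -> float:
    """Rate at which the first segment finishes 'switch_gap' seconds
    earlier than at first_rate, i.e. S/R1 - S/R2 = G (illustrative)."""
    first_time = segment_bits / first_rate
    if switch_gap >= first_time:
        raise ValueError("switch gap cannot exceed the original transmit time")
    return segment_bits / (first_time - switch_gap)

# A 10 Mb segment at 2 Mb/s takes 5 s; a 0.5 s gap needs roughly 2.22 Mb/s.
print(second_data_rate(10_000_000, 2_000_000, 0.5))
```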

US Pat. No. 10,341,695

MEDIA MANAGEMENT BASED ON DERIVED QUANTITATIVE DATA OF QUALITY

1. A method comprising:identifying, via a processor, a set of video files associated with a request, where each video file in the set of video files is an instance of a same video stored in preparation for user viewing;
identifying a respective segment within each video file of the set of video files, to yield a respective identified segment of each video file;
rating a signal quality of the respective identified segment of each video file according to a number of compression artifacts found in the respective identified segment of each video file, to yield a respective rated video segment;
concatenating a composite version of the same video using the respective rated video segment from multiple instances of the set of video files; and
returning, in response to the request, the composite version.
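
A compact sketch of the concatenation step: each stored instance of the video contributes its best-rated copy of each segment, where a segment's rating is the negated count of compression artifacts. The tuple representation and artifact counts below are assumptions for illustration:

```python
def composite_video(instances: list) -> list:
    """Each instance is a list of (segment_data, artifact_count) pairs;
    for every segment position, keep the copy with the fewest artifacts
    (illustrative sketch)."""
    composite = []
    for i in range(len(instances[0])):
        data, _ = min((inst[i] for inst in instances), key=lambda s: s[1])
        composite.append(data)
    return composite

# Two stored instances of the same three-segment video.
a = [("a0", 4), ("a1", 0), ("a2", 7)]
b = [("b0", 1), ("b1", 3), ("b2", 2)]
assert composite_video([a, b]) == ["b0", "a1", "b2"]
```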

US Pat. No. 10,341,694

DATA PROCESSING METHOD AND LIVE BROADCASTING METHOD AND DEVICE

ALIBABA GROUP HOLDING LIM...

1. A data processing method, comprising:converting audio and video data into broadcast data in a predetermined format, and performing speech recognition on audio data in the audio and video data;
in response to text information obtained from the speech recognition comprising a preset keyword, converting the text information to a corresponding operation instruction according to a preset corresponding relationship between the keyword and the operation instruction, and sending the operation instruction to a network device; and
in response to the text information obtained from the speech recognition not comprising the preset keyword, adding text information obtained from speech recognition into the broadcast data.

US Pat. No. 10,341,693

PRE-EMPTIVE CONTENT CACHING IN MOBILE NETWORKS

International Business Ma...

1. A method comprising:determining a current location and a current velocity of a mobile communications device;
determining a rate at which a user of the mobile communications device is accessing a data stream that has been received from a current wireless transceiver at the current location, wherein the current wireless transceiver transmits the data stream to the mobile communications device at the current location;
generating, based on a determined current location and the current velocity of the mobile communications device, a prediction for a next wireless transceiver to be accessed by the mobile communications device at a next location;
pre-caching a portion of the data stream at the predicted next wireless transceiver, wherein the portion of the data stream to be pre-cached is at least partially based on the rate at which the data stream is being accessed by the user of the mobile communications device at the current location;
determining that the mobile communications device has moved to the next location;
responsive to the determination that the mobile communications device has moved to the next location, streaming the pre-cached portion of the data stream from the predicted next wireless transceiver to the mobile communications device;
detecting, by one or more processors, repeated user disruptions of a playback of the data stream; and
adjusting, by the one or more processors, the pre-caching of the portion of the data stream at the predicted next wireless transceiver based on the repeated user disruptions of the playback of the data stream.
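
A sketch of the two quantities the pre-caching decision turns on: which transceiver the device will reach next (from position and velocity), and how much of the stream to stage there (from the user's current access rate). Planar coordinates, the look-ahead window, and all names are assumptions:

```python
import math

def predict_next_cell(position: tuple, velocity: tuple, towers: dict) -> str:
    """Project the device one interval ahead and return the nearest
    transceiver (illustrative sketch with planar coordinates)."""
    ahead = (position[0] + velocity[0], position[1] + velocity[1])
    return min(towers, key=lambda name: math.dist(ahead, towers[name]))

def precache_bytes(access_rate_bps: float) -> int:
    """Stage roughly what the user will consume in a short window right
    after the handover (10 s window assumed)."""
    return int(access_rate_bps / 8 * 10.0)

towers = {"cell_a": (0.0, 0.0), "cell_b": (1000.0, 0.0)}
nxt = predict_next_cell(position=(400.0, 0.0), velocity=(120.0, 0.0),
                        towers=towers)
print(nxt, precache_bytes(access_rate_bps=2_000_000))   # cell_b 2500000
```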

US Pat. No. 10,341,692

LIVE STREAMING-TV CONTENT, ACQUISITION, TRANSFORMATION, ENCRYPTION, AND DISTRIBUTION SYSTEM, AND METHOD FOR ITS USE

1. A virtual set-top-box emulation system for wirelessly delivering audio and video to one or more users comprising:a video/audio receiver;
an encoder process/encoder machine operationally associated with the video/audio receiver;
an UPLOADer Process/machine operationally associated with the encoder process/encoder machine;
a Content Delivery Network operationally associated with the UPLOADer Process/machine; and
one or more client/subscriber machines operationally associated with the Content Delivery Network;
wherein each said client/subscriber machine is an internet-connected device;
one or more databases operationally associated with each internet connected device(s);
a secure token component operationally associated with the databases and the internet connected device(s);
wherein a client/server relationship is established individually between each internet connected device and the Content Delivery Network through a browser session and a video player on each internet-connected device, and the client/server relationship utilizes a real-time, direct, point-to-point encrypted connection using Socket-IO to produce a WS-Security protocol wherein Transport Control Protocol (TCP) provides the encapsulation required for Transport Layer Security (TLS), which in turn provides encapsulation for WebSocket Security (WSS);
wherein each component involved in the client/server relationship is authenticated;
wherein a user selects a program to view on an internet connected device and the internet connected device requests the program from a server which parses a storage archive for the program;
wherein the program is encrypted by a cryptographically generated key on a per-session, per-user basis;
a DVR reader operationally associated with the server and the storage archive then transmits the program through the Content Delivery Network to the internet connected device allowing the user to view the requested content;
wherein the DVR works asynchronously and in the background, for all users, allowing each user to record one or more selected programs with specific start and stop points.

US Pat. No. 10,341,691

INHERITANCE IN SAMPLE ARRAY MULTITREE SUBDIVISION

GE VIDEO COMPRESSION, LLC...

1. A decoder for reconstructing an array of information samples encoded in a data stream and representing video information, the decoder comprising:an extractor configured for:
extracting, from the data stream, inheritance information associated with an inheritance coding block of the array of information samples, the inheritance information indicating as to whether inheritance is used, wherein the inheritance coding block corresponds to a first hierarchy level of a sequence of hierarchy levels and is composed of a set of coding sub-blocks, each of which corresponds to a second hierarchy level of the sequence of hierarchy levels, the first hierarchy level being indicated with a lower value than that of the second hierarchy level,
extracting, from the data stream if the inheritance is used with respect to the inheritance coding block, an inheritance subset associated with the inheritance coding block, the inheritance subset including at least one syntax element of a predetermined syntax element type, and
extracting, from the data stream, respective residual information associated with each of the set of coding sub-blocks; and
a predictor configured for:
copying the inheritance subset including the at least one syntax element into a set of syntax elements representing coding parameters used in an inter coding process corresponding to each of the set of coding sub-blocks,
determining, for each of the set of coding sub-blocks, a coding parameter used in the inter coding process associated with the corresponding coding sub-block based on the at least one syntax element, and
predicting a respective prediction signal for each of the set of coding sub-blocks based on the coding parameter determined for the coding sub-block,
wherein each of the set of coding sub-blocks is reconstructed based on the respective prediction signal and the respective residual information.

US Pat. No. 10,341,690

INHERITANCE IN SAMPLE ARRAY MULTITREE SUBDIVISION

GE VIDEO COMPRESSION, LLC...

1. A decoder for reconstructing an array of spatially sampled video information encoded in a data stream, the decoder comprising:an extractor configured to:
extract, from the data stream, multi-tree structure information associated with the array and an inheritance syntax element, wherein
the multi-tree structure information specifies a primary subdivision associated with prediction coding of a video array and a subordinate subdivision associated with transform coding of the video array, and
the inheritance syntax element indicates whether inheritance is used, and if inheritance is used, an inheritance region of the prediction coding which includes a set of leaf regions of the transform coding obtained by sub-dividing the inheritance region via the sub-ordinate sub-division,
extract, from the data stream, a first intra-prediction mode syntax element and a second intra-prediction mode syntax element, wherein a type of the second intra-prediction mode syntax element depends on the first intra-prediction mode syntax element and the second intra-prediction mode syntax element represents an intra-prediction coding parameter used in an intra mode of the prediction coding associated with the inheritance region, and
copy the intra-prediction coding parameter associated with the inheritance region into a subset of coding parameters for each of the set of leaf regions of the transform coding;
a residual reconstructor configured to:
decode a respective residual signal for each of the set of leaf regions of the transform coding; and
a predictor configured to:
calculate a respective intra prediction signal for each of the set of leaf regions according to the intra mode of the prediction coding using the intra-prediction coding parameter copied from the inheritance region, and a reconstructed reference signal of already reconstructed neighboring leaf regions of the multi-tree structure,
wherein each of the set of leaf regions within the inheritance region is reconstructed by combining the respective intra prediction signal and the respective residual signal.

US Pat. No. 10,341,689

WEIGHTED RUNLENGTH ENCODING

Moddable Tech, Inc.

1. A non-transitory computer readable medium comprising an encoded bitstream that, when decoded by a computer system, produces an image for display, the encoded bitstream comprising any one or more of the following:a) a skip command stored into a single nybble, the skip command indicating how many pixels other than solid pixels are inserted into a decoded bitstream, wherein there are at least a minimum number of transparent pixels and no more than a maximum number of transparent pixels;
b) a solid command stored into a single nybble, the solid command indicating how many solid pixels should be inserted into the decoded bitstream, wherein there are at least a minimum number of solid pixels and no more than a maximum number of solid pixels; and
c) a quote command stored into a single nybble, the quote command indicating how many quoted pixels should be inserted into the decoded bitstream, wherein there are no more than a maximum number of quoted pixels; and
wherein the skip, solid or quote commands each comprises a command portion and a count portion in the single nybble, the count portion indicating how many pixels the respective command encodes.
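
The essence of the encoding is that one nybble carries both a command and a run count. The bit layout below (two command bits, two count bits, a minimum run of 1) is an assumed example only, not the bit assignment claimed by the patent:

```python
SKIP, SOLID, QUOTE = 0, 1, 2      # 2-bit command codes (assumed)
MIN_RUN = 1                       # assumed minimum run length

def encode_nybble(command: int, count: int) -> int:
    """Pack a command and its pixel count into a single nybble:
    top two bits = command, bottom two bits = count - MIN_RUN."""
    if not MIN_RUN <= count <= MIN_RUN + 3:
        raise ValueError("count out of range for a 2-bit count field")
    return (command << 2) | (count - MIN_RUN)

def decode_nybble(nybble: int) -> tuple:
    return (nybble >> 2) & 0b11, (nybble & 0b11) + MIN_RUN

assert decode_nybble(encode_nybble(SOLID, 3)) == (SOLID, 3)
```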

US Pat. No. 10,341,688

USE OF FRAME CACHING TO IMPROVE PACKET LOSS RECOVERY

Microsoft Technology Lice...

1. A method performed with a video decoder, the method comprising:receiving a sequence of frames comprising intra-coded key frames and predicted frames, the predicted frames being encoded with reference to a respective reference frame, at least one of the frames being marked as a cached frame;
retaining the cached frame in a frame cache at the decoder;
sending a signal indicating the cached frame;
detecting loss for a frame sent to the decoder, the lost frame being encoded with reference to a first reference frame different than the cached frame;
responsive to the detecting loss, receiving a new frame, the new frame being encoded with reference to the cached frame;
decoding the new frame with reference to the cached frame; and
producing a reconstructed frame sequence based on the decoded new frame.
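
In rough terms the decoder keeps one long-term cached frame alongside its normal short-term references; when loss is detected for a frame that depended on a now-untrusted reference, recovery arrives as a new frame predicted from the cached frame rather than as a full key frame. A structural sketch with assumed message fields:

```python
class LossRecoveryDecoder:
    """Decoder-side sketch of packet-loss recovery via a cached frame;
    the dict-based 'frames' and their field names are assumptions."""

    def __init__(self):
        self.frame_cache = None      # long-term cached reference frame
        self.last_decoded = None     # ordinary short-term reference

    def receive(self, frame: dict):
        if frame.get("lost"):
            # Ask the encoder for a frame predicted from the cached frame
            # instead of waiting for a full intra-coded key frame.
            return {"request_recovery_from": "cached"}
        if frame.get("mark_cached"):
            self.frame_cache = frame          # retain per the encoder's signal
        ref = self.frame_cache if frame.get("ref") == "cached" else self.last_decoded
        self.last_decoded = {"id": frame["id"],
                             "based_on": ref["id"] if ref else None}
        return {"ack": frame["id"]}
```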

US Pat. No. 10,341,687

ASSIGNING VIDEOS TO SINGLE-STREAM AND MULTI-STREAM DECODERS

Google LLC, Mountain Vie...

1. A method comprising:assigning, by a processing device, a first plurality of videos to a plurality of hardware decoders having a first configuration, wherein the hardware decoders in the first configuration are to decode the first plurality of videos for concurrent presentation on a display of a user device, wherein each of the hardware decoders in the first configuration is configured in a mode selected from a plurality of modes comprising a single-stream mode and a multi-stream mode;
receiving an indication that a second plurality of videos are to be presented on the display of the user device;
determining a second configuration of the hardware decoders based on an estimated delay penalty for the second configuration, wherein the estimated delay penalty is indicative of a time delay of decoding the second plurality of videos at the hardware decoders configured in the second configuration; and
assigning, by the processing device, the second plurality of videos to the hardware decoders for decoding according to the second configuration.

US Pat. No. 10,341,686

METHOD FOR DYNAMICALLY ADAPTING THE ENCODING OF AN AUDIO AND/OR VIDEO STREAM TRANSMITTED TO A DEVICE

Harmonic, Inc., San Jose...

1. A method for dynamically adapting the lossy encoding of an audio and/or video stream transmitted by a first device to a remote device, comprising:the remote device (1) receiving, from said first device, said audio and/or video stream, called the incoming stream, (2) decoding the incoming stream received from the first device, and (3) transmitting an outgoing stream from the remote device to the first device,
wherein said outgoing stream comprises at least one indicator translating a power saving rate desired by said remote device, said at least one indicator corresponding to a reduction rate in decoding operations for the incoming stream into the remote device or in processor P processing cycles; and
the first device (1) receiving the outgoing stream from the remote device, (2) extracting said at least one indicator from the outgoing stream, and (3) adapting the encoding of the incoming stream, prior to transmission to the remote device, according to said at least one extracted indicator.

US Pat. No. 10,341,685

CONDITIONALLY PARSED EXTENSION SYNTAX FOR HEVC EXTENSION PROCESSING

ARRIS Enterprises LLC, S...

10. An apparatus for decoding a plurality of pictures, each picture processed at least in part according to a picture parameter set, the apparatus, comprising:a processor;
a memory, communicatively coupled to the processor, the memory storing a plurality of instructions comprising instructions for:receiving a bitstream comprising the plurality of pictures and a picture parameter set;
parsing the picture parameter set to determine for a picture in the plurality of pictures whether a pps_extension_present_flag signaling flag specifies presence of syntax structure pps_extension_Xbits at a picture level for the picture,
wherein the pps_extension_present_flag and pps_extension_Xbits signaling flags, when present, are adaptable per picture in the plurality of pictures, and
wherein the pps_extension_Xbits signaling flag is represented in the picture parameter set by multiple bits, where X=the number of said bits;
parsing the pps_extension_Xbits syntax structure to determine if any pps_extension_data_flag syntax structures are present in the picture parameter set;
wherein pps_extension_Xbits shall be equal to 0 for bitstreams conforming to High Efficiency Video Coding (HEVC) profiles, and
wherein pps_extension_4bits not equal to 0 causes pps_extension_data_flag syntax structures in a picture parameter set NAL unit to be ignored during decoding.

US Pat. No. 10,341,684

HIGH DEFINITION SURVEILLANCE IMAGE STORAGE OPTIMIZATION APPARATUS AND METHODS OF RETENTION TRIGGERING

Eagle Eye Networks, Inc.,...

1. A method for operation of a high definition video surveillance storage optimization apparatus comprises:receiving a high definition video stream;
transforming the video stream into segments with retention meta data headers;
receiving extrinsic events and sensor measurements;
receiving retention policies and metric thresholds;
determining retention metrics and flags;
receiving a purging directive;
evaluating calendar and policy constraints on purging;
masking video file segments protected by retention meta data from purging; and
removing pointers to video file segments available to purging.

US Pat. No. 10,341,683

APPARATUS AND METHOD TO REDUCE AN AMOUNT OF COORDINATE DATA REPRESENTING AN OBJECT TAKEN BY AN IMAGING DEVICE IN A THREE DIMENSIONAL SPACE

FUJITSU LIMITED, Kawasak...

1. An information processing apparatus comprising:a memory; and
a processor coupled to the memory and configured to:
obtain a second straight line by mapping a first straight line that passes a projection center of a target image taken by a first imaging device in a three dimensional space and a point representing an object in a projection plane of the first imaging device, onto each of a plurality of reference images respectively taken by a plurality of second imaging devices, and generate a reference line-segment representing an existing range of the object on the second straight line for each of the plurality of reference images;
transform, for each of the plurality of reference line-segments respectively generated on the plurality of reference images, a coordinate value of a first endpoint of the reference line-segment into a difference between a coordinate value of the first end point and a coordinate value of a second endpoint of the reference line-segment;
store the coordinate value of the second endpoint and the difference in the memory;
restore the coordinate value of the first endpoint from the coordinate value of the second endpoint and the difference stored in the memory; and
map the coordinate value of the second endpoint and the restored coordinate value of the first endpoint, onto a depth-direction line that is perpendicular to the projection plane of the first imaging device, and determine overlap of a plurality of line-segments on the depth-direction line whose endpoints are mapped from each of the plurality of reference line-segments on the plurality of reference images.

US Pat. No. 10,341,682

METHODS AND DEVICES FOR PANORAMIC VIDEO CODING AND DECODING BASED ON MULTI-MODE BOUNDARY FILL

Peking University Shenzhe...

1. A method for encoding panoramic videos based on multi-mode boundary fill, comprising:dividing a current image into a plurality of image blocks;
obtaining a predicted image block of a current image block using inter-frame prediction, wherein the inter-frame prediction comprises a step of boundary filling, comprising:
when a reference sample of a pixel in the current image block is outside a boundary of a corresponding reference image, adaptively selecting a boundary fill method comprising a cyclic fill method for filling a horizontal image boundary or a vertical image boundary, according to coordinates of the reference sample to obtain a sample value of the reference sample, wherein the cyclic fill method includes:
identifying a reference sample having a first coordinate within the reference image and a second coordinate outside of the reference image, wherein the reference image has a first dimension along direction of the first coordinate and a second dimension along direction of the second coordinate; and
assigning a sample value of a sample point within the reference image to the reference sample, wherein the sample point has a first coordinate the same as the reference sample and a second coordinate determined by a modulo operation of the second coordinate of the reference sample based on the second dimension of the reference image;
subtracting the current image block from the predicted image block to obtain a residual block; and
transforming, quantizing, and entropy encoding the residual block to obtain an encoded stream, wherein the boundary fill method selected in the step of boundary filling is written in a sequence header or an image header of the encoded stream.
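
The cyclic fill itself is a modulo of the out-of-range coordinate by the picture dimension, exploiting the wrap-around continuity of a panorama. A minimal sketch (clamping on the other axis and the row-major list-of-lists image are assumptions):

```python
def cyclic_fill_sample(image: list, x: int, y: int):
    """Fetch a reference sample with cyclic fill on the horizontal
    boundary: x wraps modulo the picture width, y is clamped."""
    height, width = len(image), len(image[0])
    x = x % width                      # cyclic (wrap-around) fill
    y = min(max(y, 0), height - 1)     # conventional edge padding
    return image[y][x]

# A sample one column past the right edge wraps to column 0.
img = [[10, 20, 30], [40, 50, 60]]
assert cyclic_fill_sample(img, x=3, y=0) == 10
```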

US Pat. No. 10,341,681

3D-VIDEO CODEC SUPPORTING INTER-COMPONENT PREDICTION

Fraunhofer-Gesellschaft z...

1. 3D video decoder comprisinga video decoding core configured to decode a sequence of layers of a video from a data stream using intra-layer prediction, each layer representing depth or texture of a respective one of a plurality of views, the video decoding core supporting, for layers representing texture, inter-view texture prediction from layers representing texture of a different view and depth-to-texture prediction from layers representing depth,
an inter-component prediction switch configured to
read a first parameter set from the data stream, the first parameter set relating to a temporal portion of the video relating to a timestamp of a current picture, and derive therefrom, for a current picture of a current layer which represents texture, a texture reference layer set of layers representing texture,
read a second parameter set from the data stream, the second parameter set relating to the current picture or a portion of the current picture, and derive therefrom, for coding units within the current picture or the portion of the current picture, a selected texture reference layer set of layers representing texture from the texture reference layer set,
if an intersection of
a potentially available set of layers representing depth of views the texture of which is represented by any of the selected texture reference layer set, on the one hand and
a depth reference layer set of layers representing depth determined by the first parameter set, on the other hand
equals the potentially available set, then read a flag from the data stream, the flag relating to the current picture or the portion of the current picture and indicating whether the depth-to-texture prediction is enabled or disabled for the coding units within the current picture or the portion of the current picture, and
if the intersection is unequal to the potentially available set, then infer the flag relating to the coding units within the current picture or the portion of the current picture as indicating that the depth-to-texture prediction is disabled for the coding units,
wherein the video decoding core is configured to be responsive to the flag in order to, depending on the flag, apply or not apply depth-to-texture prediction for a current coding unit among the coding units within the current picture or the portion of the current picture.

US Pat. No. 10,341,680

DATA ENCODING AND DECODING APPARATUS, METHOD AND STORAGE MEDIUM

Sony Corporation, Tokyo ...

1. A video decoder comprising:circuitry configured to:
receive as input data both lossy-encoded compressed video data and lossless-encoded compressed video data,
control the operation of a processor dependent upon whether input data is lossy-encoded compressed video data or lossless-encoded compressed video data,
perform decoding of the lossy-encoded compressed video data, and
perform decoding of the lossless-encoded compressed video data by interpreting a value of a flag in the input data which is associated with controlling residual differential pulse code modulation decoder circuitry differently for decoding of the lossless-encoded compressed video data than for decoding of the lossy-encoded compressed video data.

US Pat. No. 10,341,679

ENCODING SYSTEM USING MOTION ESTIMATION AND ENCODING METHOD USING MOTION ESTIMATION

SK PLANET CO., LTD., Seo...

1. An encoding method using motion estimation with an encoding apparatus, the encoding method comprising:determining an image unit in a frame for processing a plurality of image blocks included in the image unit independently or in parallel;
obtaining information on candidate motion vectors relating to a first image block which is one of the plurality of image blocks included in the image unit;
determining a motion vector relating to the first image block based on the information on candidate motion vectors;
generating a prediction signal relating to the first image block by performing an inter prediction based on the determined motion vector; and
encoding a residual signal relating to the first image block by performing a quantization on the residual signal, the residual signal being a difference between an original signal relating to the first image block and the prediction signal,
wherein the image unit comprises the plurality of image blocks, the information on candidate motion vectors for the first image block being generated without using motion information of other image blocks included in the image unit,
wherein the information on candidate motion vectors for the first image block is generated by using motion information of at least a second image block included in another image unit,
wherein both the image unit and the other image unit are at different locations within the same time frame,
wherein both the first image block and the second image block are encoded by inter prediction, and
wherein when the candidate motion vectors for the first image block include a fixed value, candidate motion vectors of other image blocks included in the image unit also include an identical value as the fixed value.

US Pat. No. 10,341,678

SYSTEMS AND METHODS FOR PLAYER INPUT MOTION COMPENSATION BY ANTICIPATING MOTION VECTORS AND/OR CACHING REPETITIVE MOTION VECTORS

ZeniMax Media Inc., Rock...

1. A computer-implemented method for caching motion vectors comprising:transmitting a previously generated motion vector library from a server to a client, wherein the motion vector library is configured to be stored at the client;
transmitting an instruction to the client to monitor for input data from a user;
transmitting an instruction to the client to calculate a motion estimate from the input data;
transmitting an instruction to the client to update the stored motion vector library based on the input data, wherein the client is configured to apply the stored motion vector library to initiate motion in a graphic interface prior to receiving actual motion vector data from the server; and
transmitting an instruction to apply one or more scaling factors to the motion vector library, wherein the scaling factor is calculated based on the general equation:

US Pat. No. 10,341,677

METHOD AND APPARATUS FOR PROCESSING VIDEO SIGNALS USING INTER-VIEW INTER-PREDICTION

LG ELECTRONICS INC., Seo...

1. A method for decoding a video signal by a decoding apparatus, the method comprising:deriving, by the decoding apparatus, an inter-view motion vector of a current texture block by searching inter-view motion vector candidates in an order of an inter-view motion vector of a temporal neighboring block of the current texture block, an inter-view motion vector of a spatial neighboring block of the current texture block, and a disparity vector derived by the decoding apparatus using depth data of a depth block; and
performing, by the decoding apparatus, inter-view inter-prediction for the current texture block using the derived inter-view motion vector of the current texture block,
wherein deriving the inter-view motion vector of the current texture block includes:
determining, by the decoding apparatus, whether the temporal neighboring block of the current texture block is coded using inter-view inter-prediction, wherein the inter-view motion vector of the temporal neighboring block is derived as the inter-view motion vector of the current texture block when the temporal neighboring block is coded using inter-view inter-prediction,
determining, by the decoding apparatus, whether the spatial neighboring block of the current texture block is coded using inter-view inter-prediction, wherein the inter-view motion vector of the spatial neighboring block is derived as the inter-view motion vector of the current texture block when the temporal neighboring block is not coded using inter-view inter-prediction and the spatial neighboring block is coded using inter-view inter-prediction, and
deriving, by the decoding apparatus, the disparity vector as the inter-view motion vector of the current texture block when the temporal neighboring block of the current texture block and the spatial neighboring block of the current texture block are not coded using inter-view inter-prediction.
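
The derivation is a strict priority fallback: the temporal neighbour's inter-view motion vector if it has one, otherwise the spatial neighbour's, otherwise a disparity vector converted from the depth block. A sketch in which the depth-to-disparity conversion constant and the dict representation are assumptions:

```python
def derive_interview_mv(temporal_nb: dict, spatial_nb: dict, depth_block: list):
    """Search order from the claim: temporal neighbour, spatial
    neighbour, then a depth-derived disparity vector (illustrative)."""
    if temporal_nb and temporal_nb.get("interview_mv") is not None:
        return temporal_nb["interview_mv"]
    if spatial_nb and spatial_nb.get("interview_mv") is not None:
        return spatial_nb["interview_mv"]
    # Fallback: a horizontal disparity proportional to the maximum depth
    # value in the co-located depth block (scale factor assumed).
    disparity = max(max(row) for row in depth_block) * 0.05
    return (disparity, 0.0)

mv = derive_interview_mv(temporal_nb={"interview_mv": None},
                         spatial_nb={"interview_mv": (3.0, 0.0)},
                         depth_block=[[80, 90], [100, 95]])
assert mv == (3.0, 0.0)
```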

US Pat. No. 10,341,676

VIDEO ENCODING APPARATUS AND A VIDEO DECODING APPARATUS

KABUSHIKI KAISHA TOSHIBA,...

1. A video decoding apparatus, comprising:processing circuitry configured to:
acquire available blocks of blocks having motion vectors from decoded blocks adjacent to a to-be-decoded block and a number of the available blocks, wherein acquiring the available blocks comprises searching for available block candidates from decoded blocks neighboring the to-be-decoded block and having motion vectors, determining a block size of motion compensated prediction of the available block candidates, determining whether the available block candidates are in a prediction mode of a unidirection prediction or a bidirectional prediction, and extracting available blocks from the available block candidates based on the determination of the block size and the determination of the prediction mode;
select one selection code table from a plurality of code tables depending on the number of the available blocks and decode selection information for specifying one selection block using the one selection code table;
select the one selection block from the available blocks; and
subject the to-be-decoded block to motion compensated prediction coding using a motion vector of the one selection block as a motion vector of the to-be-decoded block.

US Pat. No. 10,341,675

VIDEO ENCODING METHOD AND VIDEO ENCODER SYSTEM

AXIS AB, Lund (SE)

1. A method of encoding digital video data corresponding to a sequence of input video frames, wherein said input video frames are encoded into a sequence of output video frames by a video encoding apparatus, the method comprising:encoding a first input video frame of the sequence of input video frames, in a first encoder instance of the video encoding apparatus, using intra-frame encoding to produce a first intra-frame, the first encoder including an output that discards intra-frames including the first intra-frame, instead of transmitting the intra-frames to a buffer for subsequent storage or display,
decoding said first intra-frame to produce a first decoded frame by a decoder of the video encoding apparatus, the decoder providing decoded intra-frames to a second encoder instance,
encoding said first decoded frame in the second encoder instance to produce a first output video frame, wherein the second encoder instance outputs a video stream using intra mode and inter mode prediction, wherein the video stream includes the first output video frame.

US Pat. No. 10,341,674

METHOD AND DEVICE FOR DISTRIBUTING LOAD ACCORDING TO CHARACTERISTIC OF FRAME

SAMSUNG ELECTRONICS CO., ...

1. A method for distributing a load, the method comprising:identifying characteristics of each of frames included in a received bit stream; and
distributing loads of a plurality of cores based on the characteristics of each of the frames whenever the frames are decoded,
wherein the identifying comprises identifying a reference relationship between the frames included in the bit stream, and
wherein the reference relationship is distance information indicating a relative distance between a first frame to be encoded and a second frame, which is a reference frame, in a sequential picture order of the frames included in the received bit stream.

US Pat. No. 10,341,673

APPARATUSES, METHODS, AND CONTENT DISTRIBUTION SYSTEM FOR TRANSCODING BITSTREAMS USING FIRST AND SECOND TRANSCODERS

INTEGRATED DEVICE TECHNOL...

1. An apparatus comprising:an interconnect configured to (i) provide encoded video data from an encoder to a decoder and (ii) receive an input bitstream including the encoded video data from the encoder, the interconnect comprising:
a communication link;
a first transcoder configured to (i) detect a type of lossless coding methodology used to generate the input bitstream, (ii) generate intermediate video data by (a) transcoding the encoded video data using a second lossless coding methodology responsive to detecting that the input bitstream is generated using a first lossless coding methodology, (b) copying the encoded video data responsive to detecting that the input bitstream is generated using the second lossless coding methodology and (c) copying the encoded video data responsive to detecting that the input bitstream is generated using a third coding methodology and (iii) transmit the intermediate video data in the communication link, wherein the second lossless coding methodology is different than the third coding methodology; and
a second transcoder located proximate to the decoder and in a different facility remote from the first transcoder, and configured to (i) receive the intermediate video data from the communication link, (ii) receive a signal from the decoder, (iii) detect a type of lossless coding methodology used to generate the intermediate video data and (iv) generate output video data by (a) transcoding the intermediate video data to recreate the input bitstream with the first lossless coding methodology responsive to the signal indicating that the decoder is capable of decoding with the first lossless coding methodology, (b) copying the intermediate video data with the second lossless coding methodology responsive to the signal indicating that the decoder is capable of decoding with the second lossless coding methodology and (c) skipping the transcoding of the intermediate video data responsive to determining that both (1) the intermediate video data is generated using the third coding methodology and (2) the signal indicates that the decoder is capable of decoding with the third coding methodology.

US Pat. No. 10,341,672

METHOD AND SYSTEM FOR MEDIA SYNCHRONIZATION

KOREA ADVANCED INSTITUTE ...

1. A method for media synchronization, comprising:collecting stream source information;
generating network delay information between stream sources by performing a delay test between the stream sources;
setting synchronization information of a stream source corresponding to a specific channel based on the collected stream source information and the network delay information;
measuring network delay with at least one user terminal to receive the stream source;
updating the synchronization information based on the measured network delay; and
performing time synchronization with the at least one user terminal based on a time clock of the at least one user terminal, comprising:
requesting the time clock of a corresponding terminal to each of a plurality of user terminals when the plurality of user terminals requests to provide the stream source;
receiving the time clock of a corresponding terminal from each of the plurality of user terminals in response to the requesting of the time clock; and
performing time synchronization between the plurality of user terminals based on the received time clock, comprising:
generating a common time stamp based on the time clock of the corresponding terminal and identifier information of the corresponding terminal; and
providing a stream inserted with the generated common time stamp to each of the plurality of user terminals.

US Pat. No. 10,341,671

METHOD AND SYSTEM FOR IMAGE COMPRESSION

XEROX Corporation, Norwa...

1. An image compression method comprising:compressing an input image with a first compression method to generate a first compressed image;
compressing the same input image with a second compression method to generate a second compressed image;
comparing the generated first compressed image and the generated second compressed image and, based on the comparison, generating:
a first residual layer comprising first pixels corresponding to foreground pixels that are present in only the second of the first and second compressed images, and
a second residual layer comprising second pixels corresponding to foreground pixels that are present in only the first of the first and second compressed images;
identifying connected components, where present, in the at least one of the first and second residual layers, each connected component comprising a group of first or second pixels in the respective first or second residual layer that, when mapped to the second compressed image, is connected, in at least one of first and second directions, to foreground pixels in the second compressed image;
generating an output compressed image comprising at least one of:
for a connected component identified in the first residual layer, removing at least one corresponding foreground pixel from the second compressed image, and
for a connected component identified in the second residual layer, adding at least one corresponding foreground pixel to the second compressed image.
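
As a rough illustration of the residual-layer comparison above, the following sketch treats the two compressed images as boolean foreground masks; the mask representation and the residual_layers() name are assumptions introduced here, not part of the claimed method.

import numpy as np

def residual_layers(first, second):
    """first, second: boolean arrays where True marks a foreground pixel in the
    respective compressed image (an assumed representation)."""
    first_layer = second & ~first    # foreground present only in the second image
    second_layer = first & ~second   # foreground present only in the first image
    return first_layer, second_layer

if __name__ == "__main__":
    a = np.array([[1, 0], [1, 1]], dtype=bool)
    b = np.array([[1, 1], [0, 1]], dtype=bool)
    f, s = residual_layers(a, b)
    print(f.astype(int))  # pixels present only in the second compressed image
    print(s.astype(int))  # pixels present only in the first compressed image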

US Pat. No. 10,341,670

VIDEO ENCODER BIT RATE STABILIZATION

AMAZON TECHNOLOGIES, INC....

1. A method of adjusting a bit rate of a portion of a video stream, the method comprising:determining a first frame of the video stream to be encoded and sent over a network to a recipient computing device;
determining a first quantization value of an encoder, wherein the first quantization value was used to encode a previous frame of the video stream, prior to the first frame;
determining a first estimated compressed frame size of the first frame when encoded with the first quantization value;
determining that the first estimated compressed frame size is less than a target frame size tolerance band, wherein the target frame size tolerance band represents a range of frame sizes suitable to maintain a target bit rate of the video stream;
determining a second quantization value, wherein the second quantization value is less than the first quantization value;
determining a second estimated compressed frame size of the first frame when encoded with the second quantization value;
determining that the second estimated compressed frame size is within the target frame size tolerance band;
generating a compressed first frame by encoding the first frame of the video stream with the second quantization value; and
sending the compressed first frame over the network to the recipient computing device.
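
The quantization-adjustment loop above can be sketched as follows, assuming a hypothetical estimate_size(qp) callable in place of the unspecified frame-size estimator; the only property the sketch relies on is that a smaller quantization value yields a larger compressed frame.

def choose_quantization(first_qp, estimate_size, band_low, band_high, min_qp=1, max_qp=51):
    """Adjust the quantization value until the estimated compressed frame size
    falls inside the target tolerance band [band_low, band_high].
    estimate_size(qp) is an assumed callable returning a size estimate in bytes."""
    qp, size = first_qp, estimate_size(first_qp)
    while size < band_low and qp > min_qp:
        qp -= 1                       # smaller quantization value -> larger frame
        size = estimate_size(qp)
    while size > band_high and qp < max_qp:
        qp += 1                       # larger quantization value -> smaller frame
        size = estimate_size(qp)
    return qp, size

if __name__ == "__main__":
    # Toy rate model for demonstration only.
    model = lambda qp: int(40000 / qp)
    print(choose_quantization(first_qp=40, estimate_size=model,
                              band_low=1800, band_high=2400))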

US Pat. No. 10,341,669

TEMPORALLY ENCODING A STATIC SPATIAL IMAGE

Intel Corporation, Santa...

1. A system for temporally encoding static spatial images, the system comprising:electronic circuitry; and
a memory including instructions that, when executed by the electronic circuitry, cause the electronic circuitry to:
obtain a static spatial image, the static spatial image defining pixel values over an area;
select a scan path, the scan path defining:
a path across the area of the static spatial image; and
a duration path, the duration path being a non-linear function that defines progression of the scan path in time;
scan a window in accordance with the scan path on the static spatial image to produce changes in a portion of the window over time; and
record the changes in the portion of the window with respective times of the changes.
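
A toy version of the scan-path idea above: a window slides across a static image while a non-linear (here quadratic) duration function controls how the scan progresses in time, and changes within the window are recorded with their times. The window size, path, and timing function are illustrative assumptions.

import numpy as np

def temporal_scan(image, window=8, steps=32, total_time=1.0):
    """Slide a window left-to-right across a static image and record, for each
    step, the time from a non-linear duration function and the mean intensity
    change inside the window."""
    h, w = image.shape
    xs = np.linspace(0, w - window, steps).astype(int)
    times = total_time * (np.arange(steps) / (steps - 1)) ** 2  # non-linear progression in time
    events, previous = [], None
    for t, x in zip(times, xs):
        patch = image[:, x:x + window]
        if previous is not None and not np.array_equal(patch, previous):
            events.append((float(t), float(np.mean(patch) - np.mean(previous))))
        previous = patch
    return events

if __name__ == "__main__":
    img = np.tile(np.arange(64), (64, 1)).astype(float)
    print(temporal_scan(img)[:3])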

US Pat. No. 10,341,668

CODING OF SIGNIFICANCE MAPS AND TRANSFORM COEFFICIENT BLOCKS

GE VIDEO COMPRESSION, LLC...

1. An apparatus for decoding a transform coefficient block encoded in a data stream, comprising:a decoder configured to extract, from the data stream, syntax elements via context-based entropy decoding, wherein each of the syntax elements indicates whether a significant transform coefficient is present at a corresponding position within the transform coefficient block, and extract information indicating values of the significant transform coefficients within the transform coefficient block; and
an associator configured to associate each of the syntax elements with the corresponding position within the transform coefficient block in a scan order,
wherein the decoder is configured to use, for context-based entropy decoding of at least one syntax element of the syntax elements, a context which is selected for the at least one syntax element based on a size of the transform coefficient block, a position of the at least one syntax element within the transform coefficient block, and information regarding prior syntax elements previously extracted from a neighborhood of the position of the at least one syntax element, wherein contexts for different syntax elements are selected based on different combinations of the size of the transform coefficient block, the position of the respective syntax element, and the information regarding the respective prior syntax element previously extracted.

US Pat. No. 10,341,667

ADAPTIVE PARTITION CODING

GE VIDEO COMPRESSION, LLC...

1. A non-transitory computer-readable medium for storing data associated with a video, comprising:a data stream stored in the non-transitory computer-readable medium, the data stream comprising encoded information of a reference block of a texture picture of the video, wherein the reference block is co-located to a block of a depth map of the video, wherein the block of the depth map is decoded using a plurality of operations including:
reconstructing the reference block of the texture picture based on the encoded information from the data stream;
determining a texture threshold based on sample values of the reconstructed reference block of the texture picture;
segmenting the reference block of the texture picture by thresholding the texture picture within the reference block using the texture threshold to acquire a bi-segmentation of the reference block into first and second portions of the reference block;
spatially transferring the bi-segmentation of the reference block of the texture picture onto the block of the depth map so as to acquire first and second portions of the block of the depth map; and
decoding the block of the depth map in units of the first and second portions.
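
The thresholding and transfer steps above can be sketched with numpy, assuming the mean of the reconstructed texture block serves as the texture threshold and that two decoded representative values fill the depth segments; both assumptions go beyond what the claim itself fixes.

import numpy as np

def decode_depth_block(texture_block, seg_means):
    """texture_block: reconstructed co-located texture samples.
    seg_means: assumed decoded representative depth values, one per segment."""
    threshold = texture_block.mean()                  # texture threshold from the reconstructed samples
    segmentation = texture_block >= threshold         # bi-segmentation of the reference block
    depth_block = np.where(segmentation, seg_means[1], seg_means[0])  # transfer onto the depth block
    return segmentation, depth_block

if __name__ == "__main__":
    tex = np.array([[10, 12, 200, 210],
                    [11, 13, 205, 220],
                    [ 9, 14, 198, 215],
                    [12, 10, 202, 208]], dtype=float)
    seg, depth = decode_depth_block(tex, seg_means=(30.0, 90.0))
    print(seg.astype(int))
    print(depth)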

US Pat. No. 10,341,666

METHOD AND DEVICE FOR SHARING A CANDIDATE LIST

Electronics and Telecommu...

1. A method for encoding a video signal comprising:deriving, based on a position and a size of a coding block, at least one merging candidate relating to a first prediction block, the coding block comprising the first prediction block and a second prediction block when a size of a block to which parallel merge processing is applicable is equal to or greater than a first size and a size of the coding block is equal to a second size;
generating a first merging candidate list for the first prediction block based on the merging candidate; and
encoding motion information of the first prediction block based on the generated first merge candidate list,
wherein the first merging candidate list for the first prediction block is equivalent to a second merging candidate list for the second prediction block included in the coding block with the first prediction block.

US Pat. No. 10,341,665

METHOD OF PROVIDING RANDOM ACCESS FOR VIDEO DATA BASED ON RANDOM ACCESSIBLE P-FRAME

INNODEP Co., LTD., Seoul...

1. A method of providing random access for video data based on random accessible P-frame, wherein the video data includes a series of frame data, the method comprising:generating at least one I-frame by performing intraframe encoding on specific frame data out of the series of frame data;
generating a series of P-frames by performing interframe encoding on the remaining frame data of the series of frame data with reference to each corresponding previous frame data;
identifying a P-frame out of the series of P-frames as a random accessible P-frame of the video data;
identifying the closest preceding I-frame for the random accessible P-frame;
identifying a reference frame data by a frame data corresponding to a predetermined spacing step from the random accessible P-frame;
generating a random access reference frame for the random accessible P-frame by performing interframe encoding on the reference frame data with reference to the closest preceding I-frame; and
inserting the random access reference frame in user defined fields of header area of video data packets, wherein the video data packets are prepared for transmitting the I-frame and the series of P-frames.

US Pat. No. 10,341,664

CONFIGURABLE INTRA CODING PERFORMANCE ENHANCEMENTS

Intel Corporation, Santa...

1. A computer-implemented method for video coding comprising:determining, for a current block of video data, processing performance costs for a plurality of intra modes, wherein the processing performance costs are based on one or more reference blocks associated with the plurality of intra modes and a processing order of the one or more reference blocks with respect to the current block;
selecting an intra coding mode for the current block based at least in part on the processing performance costs for the plurality of intra modes; and
encoding the current block into a bitstream based at least in part on the selected intra coding mode.

US Pat. No. 10,341,663

COEFFICIENT CODING HARMONIZATION IN HEVC

SONY CORPORATION, Tokyo ...

1. A decoding device, comprising:circuitry configured to:
apply, as a condition that diagonal scan is applied to a first transform block of a plurality of transform blocks and a second transform block of the plurality of transform blocks, the diagonal scan to the plurality of transform blocks of a plurality of variable block sizes,
wherein the first transform block is of a first block size of the plurality of variable block sizes and the second transform block is of a second block size of the plurality of variable block sizes, and
wherein 4×4 sub-blocks of both the first transform block and the second transform block are diagonally scanned, and the diagonal scan is applied inside each of the 4×4 sub-blocks; and
apply same multi-level significance map decoding to the first transform block of the first block size and the second transform block of the second block size.

US Pat. No. 10,341,662

METHODS AND SYSTEMS FOR ENTROPY CODER INITIALIZATION

Velos Media, LLC, Dallas...

1. A method for decoding a video bitstream comprising: decoding, in a slice header associated with a picture, a first syntax element with an integer value indicating a number of a plurality of entropy slices defining a first slice, wherein each of the entropy slices contains a plurality of largest coding units (LCUs); decoding a second syntax element in the slice header indicating an offset with an index i, wherein the index i has a range from 0 to the integer value of the first syntax element minus 1 and the offset indicates, in a unit of bytes, a distance between (i) one of the plurality of the entropy slices in the first slice in the video bitstream and (ii) an entropy slice preceding the one of the plurality of the entropy slices in the video bitstream; decoding a third syntax element in the slice header indicating a slice type of the first slice;
in circumstances where the third syntax element indicates the slice type of the first slice is a B slice, decoding a flag in the slice header indicating an initialization method of a Context-Adaptive Binary Arithmetic Coding (CABAC) context;
in circumstances where the decoded flag indicates a first value, initializing the CABAC context using a first initialization method at the first LCU of each of the plurality of entropy slices in the B slice;
in circumstances where the decoded flag indicates a second value, initializing the CABAC context using a second initialization method at the first LCU of each of the plurality of entropy slices in the B slice; and
initializing the CABAC context of a P slice using at least one of the first initialization method and the second initialization method.
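
The initialization choice above is essentially a small decision rule; the sketch below only models which of two assumed initialization methods applies at the first LCU of each entropy slice, and the placeholder method names and the P-slice default are assumptions introduced here.

def pick_init_method(slice_type, cabac_init_flag):
    """Return which assumed initialization method applies at the first LCU of
    each entropy slice, following the flag semantics described in the claim."""
    if slice_type == "B":
        return "method_1" if cabac_init_flag == 0 else "method_2"
    if slice_type == "P":
        # The claim only requires that a P slice use at least one of the two
        # methods; choosing method_1 here is an assumption.
        return "method_1"
    return "method_1"

if __name__ == "__main__":
    print(pick_init_method("B", 1))   # method_2
    print(pick_init_method("P", 0))   # method_1 (assumed default)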

US Pat. No. 10,341,661

METHOD AND DEVICE FOR ENCODING/DECODING IMAGES

Electronics and Telecommu...

1. A method of encoding a video using an encoding apparatus, the method comprising:obtaining, using the encoding apparatus, transform coefficients of a current block by performing inverse-quantization on quantized transform coefficients of the current block;
obtaining, using the encoding apparatus, a residual sample of the current block by performing an inverse-transform on the transform coefficients of the current block based on a transform type of the current block;
obtaining, using the encoding apparatus, a prediction sample of the current block;
reconstructing, using the encoding apparatus, a reconstructed sample of the current block using the residual sample and the prediction sample; and
encoding using the encoding apparatus, a first information indicating whether the residual sample for the current block is present and a second information indicating whether the inverse-transform is performed on the residual sample of the current block,
wherein the transform type is determined to be a Discrete Cosine Transform (DCT) or a Discrete Sine Transform (DST),
wherein in response to a size of the current block not being equal to 4×4, the transform type is determined to be the DCT, and
wherein the transform type is determined independently of an intra prediction mode of the current block.
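
The transform-type rule above reduces to a size test; in the sketch below the 4×4 case is mapped to the DST as one common convention, which is an assumption here, since the claim only pins the non-4×4 case to the DCT and requires the choice to be independent of the intra prediction mode.

def transform_type(block_width, block_height, is_luma=True):
    """Size-based rule: any block other than 4x4 uses the DCT. Mapping the 4x4
    luma case to the DST is an assumed convention for illustration; the claim
    leaves that case open (but mode-independent)."""
    if (block_width, block_height) != (4, 4):
        return "DCT"
    return "DST" if is_luma else "DCT"

if __name__ == "__main__":
    print(transform_type(8, 8))   # DCT
    print(transform_type(4, 4))   # DST (assumed convention)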

US Pat. No. 10,341,660

VIDEO COMPRESSION APPARATUS AND VIDEO PLAYBACK APPARATUS

Kabushiki Kaisha Toshiba,...

1. A video compression apparatus comprising:a first compressor configured to compress a first video in accordance with a first target bit rate to generate a first bitstream;
a second compressor configured to set regions in a second video and compress the regions in accordance with a second target bit rate larger than the first target bit rate so as to enable each region to be independently decoded, to generate a second bitstream;
a partitioner configured to partition the second bitstream according to the set regions to obtain a partitioned second bitstream; and
a communicator configured to receive region information indicating a specific region that corresponds to one or more regions and select and transmit a bitstream corresponding to the specific region from the partitioned second bitstream, wherein the region information is generated so that the specific region is selected in descending order of priority in each region, a first priority in a first region is higher than a second priority in a second region, and a first distance from the first region to one or more request regions requested by a user is smaller than a second distance from the second region to the request regions.

US Pat. No. 10,341,659

SYSTEMS AND METHODS OF SWITCHING INTERPOLATION FILTERS

QUALCOMM Incorporated, S...

14. An apparatus, comprising:a memory configured to store video data; and
a processor configured to:
obtain the video data;
determine, for a coding unit, a subset of interpolation filters from a set of interpolation filters, the subset of interpolation filters including a plurality of interpolation filters, wherein the subset of interpolation filters is determined based on information, in the video data, associated with the coding unit;
encode the coding unit, wherein encoding the coding unit includes selecting an interpolation filter for motion estimation and motion compensation, and wherein the interpolation filter is selected from the subset of interpolation filters; and
generate an encoded video bitstream, wherein the encoded video bitstream includes the encoded coding unit.

US Pat. No. 10,341,658

MOTION, CODING, AND APPLICATION AWARE TEMPORAL AND SPATIAL FILTERING FOR VIDEO PRE-PROCESSING

Intel Corporation, Santa...

1. A computer-implemented method for video coding comprising:applying adaptive temporal and spatial filtering to pixel values of video frames of input video to generate pre-processed video, wherein the adaptive temporal and spatial filtering comprises, for an individual pixel value of a block of pixels of an individual video frame of the input video:
spatial-only filtering the individual pixel value when the block is a motion block; and
blending spatial and temporal filtering of the individual pixel value when the block is a non-motion block by determining a spatial filtering output value and a temporal filtering output value for the individual pixel and generating a weighted average of the spatial and temporal filtering output values, wherein determining the temporal filtering output value comprises determining a previous pixel filtering weight for the individual pixel value and generating a weighted average of a previous pixel value from a second video frame and the individual pixel value based on the previous pixel filtering weight, wherein the previous pixel value is co-located with the individual pixel value, and wherein the previous pixel filtering weight is determined using a monotonic increasing function of a quantization parameter, a global noise level, and a visual index corresponding to the individual pixel;
encoding the pre-processed video to generate a video bitstream; and
storing the video bitstream.
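
A per-pixel sketch of the blending described above, with a placeholder monotonic weight function of the quantization parameter, noise level, and visual index; the exact weight shape, the spatial filter output, and the blend factor are assumptions for illustration only.

import numpy as np

def previous_pixel_weight(qp, noise_level, visual_index):
    # Monotonic increasing in each argument, squashed into [0, 1); the exact
    # shape is an assumption for illustration.
    return 1.0 - np.exp(-(0.02 * qp + 0.5 * noise_level + 0.3 * visual_index))

def filter_pixel(curr, prev, spatial_est, is_motion_block, qp, noise, vis, blend=0.5):
    """curr/prev: co-located pixel values; spatial_est: spatially filtered value."""
    if is_motion_block:
        return spatial_est                             # spatial-only filtering path
    w = previous_pixel_weight(qp, noise, vis)
    temporal_est = w * prev + (1.0 - w) * curr         # temporal filtering output
    return blend * spatial_est + (1.0 - blend) * temporal_est

if __name__ == "__main__":
    print(filter_pixel(curr=120.0, prev=118.0, spatial_est=119.0,
                       is_motion_block=False, qp=30, noise=0.2, vis=0.1))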

US Pat. No. 10,341,657

SYSTEM AND METHOD FOR MITIGATING MOTION ARTIFACTS IN A MEDIA STREAMING NETWORK

Telefonaktiebolaget LM Er...

1. A media processing method operative at a network node, the method comprising:separating a video component and an audio component from an incoming source media input;
determining static object grid (SOG) coordinate information for still areas identified in the video component;
encoding the video component at different bitrates to generate a plurality of adaptive bitrate (ABR) representations of the video component;
scaling the SOG coordinate information with respect to each of the bitrate representations of the video component;
encoding the audio component to generate an encoded audio stream; and
multiplexing each bitrate representation of the video component with corresponding scaled SOG coordinate information and the encoded audio stream to generate a plurality of multiplexed media outputs for distribution to one or more subscriber stations.

US Pat. No. 10,341,656

IMAGE DECODING METHOD USING INTRA PREDICTION MODE

INFOBRIDGE PTE. LTD., Si...

1. A method of image decoding, the method comprising:generating a quantization block by inversely scanning quantization coefficient information;
generating a transform block by inversely quantizing the quantization block using a quantization step size;
generating a residual block by inversely transforming the transform block;
reconstructing an intra prediction mode group indicator and a prediction mode index of a current block;
constructing a first group including three intra prediction modes using valid intra prediction modes of left and top blocks of the current block;
determining an intra prediction mode corresponding to the prediction mode index in the first group as an intra prediction mode of the current block when the intra prediction mode group indicator indicates the first group;
generating a prediction block on the basis of the determined intra prediction mode of the current block; and
generating a reconstructed block using the residual block and the prediction block,
when only one of the intra prediction modes of the left and top blocks of the current block is available, two intra prediction modes are added to the first group,
wherein when the intra prediction mode of the left block is not equal to the intra prediction mode of the top block, and the intra prediction mode of the left block and the intra prediction mode of the top block are planar mode and DC mode, the first group includes the intra prediction modes of the left and top blocks and a vertical mode, and
wherein a lowest prediction mode index is assigned to the planar mode when the intra prediction mode of the current block does not belong to the first group and the planar mode is not included in the first group.

US Pat. No. 10,341,655

HEVC ENCODING DEVICE AND METHOD FOR DETERMINING INTRA-PREDICTION MODE USING THE SAME

AJOU UNIVERSITY INDUSTRY-...

1. A high efficiency video coding (HEVC) encoding device for determining an intra-prediction mode of an image, the HEVC encoding device comprising:a candidate group updater configured to select a plurality of representative modes as a candidate group from among intra-prediction modes and update the candidate group using a plurality of minimum modes selected from the candidate group, the plurality of representative modes representing a range where there is an optimal mode; and
an optimal mode selector configured to select any one mode as an optimal mode from among a plurality of minimum modes selected from the updated candidate group,
wherein, upon the penultimate update of the candidate group, the candidate group updater updates a candidate group before the penultimate update by using a DC mode and a planar mode in addition to a plurality of minimum modes selected from the candidate group before the penultimate update,
wherein the candidate group updater updates the candidate group by adding and subtracting variable mode values to/from each of the plurality of minimum modes, while proceeding from a second update of the candidate group to a penultimate update of the candidate group,
wherein the variable mode value is decreased by a predetermined ratio as a number of update repetitions is increased.

US Pat. No. 10,341,654

COMPUTING DEVICE FOR CONTENT ADAPTIVE VIDEO DECODING

1. A non-transitory computer-readable device having instructions stored which, when executed by a processor, cause the processor to perform operations for decoding a bitstream encoded via a plurality of encoders, the operations comprising:identifying, from the bitstream, a first portion of a video covering a first period of time in the video and a second portion of the video covering a second period of time in the video, wherein each of the first portion of the video and the second portion of the video comprises a plurality of entire video frames;
identifying the first portion of the video covering the first period of time as comprising a first degree of action associated with an action scene;
identifying the second portion of the video covering the second period of time as comprising a second degree of action associated with a slow scene;
identifying, for the first portion of the video, a first model chosen from a plurality of predefined models based on the first portion of the video being the action scene, wherein each of the plurality of predefined models comprises a coding model associated with a set of composition features used for generating one of the first portion or the second portion and wherein the coding model defines a coding tool for encoding and decoding a portion of the bitstream comprising the set of composition features;
identifying, for the second portion of the video, a second model chosen from the plurality of predefined models based on the second portion of the video being the slow scene;
routing the first portion of the video to a first decoder of a plurality of decoders based on the first model;
decoding the first portion of the video by the first decoder according to the first model;
routing the second portion of the video to a second decoder based on the second model, wherein the plurality of decoders comprises a generic decoder and wherein the routing of each of the first portion and the second portion of the video to one of the plurality of decoders comprises routing one of the first portion or the second portion to the generic decoder when one of the first portion or the second portion is associated with a generic model and when content of one of the first portion of the video or the second portion of the video does not match a predetermined model; and
decoding the second portion of the video by the second decoder according to the second model, wherein the first model and the second model are each a different model.

US Pat. No. 10,341,653

METHOD AND SYSTEM FOR REDUCING SLICE HEADER PARSING OVERHEAD IN VIDEO CODING

Texas Instruments Incorpo...

1. A method for decoding a slice of picture from a bit stream when weighted prediction is enabled for the slice, the method comprising:decoding a first sequence of luminance weight flags corresponding to a first plurality of reference pictures from the bit stream;
decoding a first sequence of chrominance weight flags corresponding to the first plurality of reference pictures from the bit stream, wherein the encoded first sequence of chrominance weight flags follows the encoded first sequence of luminance weight flags in the bit stream;
decoding luminance weighting factors for each luminance weight flag of the first sequence of luminance weight flags that is set to indicate weighted prediction of a luminance component of a corresponding reference picture is enabled, wherein the encoded luminance weighting factors follow the encoded first sequence of chrominance weight flags in the bit stream; and
decoding chrominance weighting factors for each chrominance weight flag of the first sequence of chrominance weight flags that is set to indicate weighted prediction of chrominance components of a corresponding reference picture is enabled, wherein the encoded chrominance weighting factors follow the encoded luminance weighting factors in the bit stream.
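
The parsing order above (all luminance flags, then all chrominance flags, then the conditional luminance factors, then the conditional chrominance factors) can be shown with a toy stream reader; the reader callables and the toy values are assumptions, since the claim does not define the entropy coding of each element.

def parse_weight_tables(read_flag, read_factor, num_refs):
    """read_flag() and read_factor() are assumed callables that consume the
    next flag or weighting factor from the bit stream, in stream order."""
    luma_flags = [read_flag() for _ in range(num_refs)]        # first: luminance weight flags
    chroma_flags = [read_flag() for _ in range(num_refs)]      # then: chrominance weight flags
    luma_weights = {i: read_factor() for i, f in enumerate(luma_flags) if f}
    chroma_weights = {i: (read_factor(), read_factor())        # Cb and Cr factors per set flag
                      for i, f in enumerate(chroma_flags) if f}
    return luma_flags, chroma_flags, luma_weights, chroma_weights

if __name__ == "__main__":
    stream = iter([1, 0, 1, 1, 0, 0, 5, -3, 2, 4])             # toy stream values
    nxt = lambda: next(stream)
    print(parse_weight_tables(nxt, nxt, num_refs=3))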

US Pat. No. 10,341,652

IMAGE CODING METHOD, IMAGE DECODING METHOD, IMAGE CODING APPARATUS, IMAGE DECODING APPARATUS, AND IMAGE CODING AND DECODING APPARATUS

SUN PATENT TRUST, New Yo...

1. An image coding apparatus, comprising:a processor; and
a memory storing thereon a computer program, which when executed by the processor, causes the processor to perform operations including:
performing Sample Adaptive Offset (SAO) processing on a luminance signal, a chrominance Cb signal, and a chrominance Cr signal which are included in a target block which is locally decoded;
performing arithmetic coding on a first flag indicating whether or not an SAO parameter for the target block is identical to an SAO parameter for a left neighboring block immediately left of the target block, the SAO parameter for the target block indicating details of the SAO processing;
performing arithmetic coding on the SAO parameter for the target block, when the SAO parameter for the target block is different from the SAO parameter for the left neighboring block; and
performing arithmetic coding on a second flag indicating whether or not the SAO parameter for the target block is identical to an SAO parameter for an upper neighboring block immediately above the target block,
wherein, in the performing of the arithmetic coding on the first flag, the arithmetic coding is performed on the first flag for the luminance signal, the chrominance Cb signal, and the chrominance Cr signal by using only a single first context,
in the performing of the arithmetic coding on the first flag and the performing of the arithmetic coding on the second flag, a same context determination method is used to determine both: the single first context to be used in the arithmetic coding on the first flag; and a single second context to be used in the arithmetic coding on the second flag,
the same context determination method is used to determine each of the single first context and the single second context to be shared for luminance signals, chrominance Cb signals, and chrominance Cr signals which are included in a same picture,
the single first context is used in common for coding a plurality of the first flag in a same picture,
the single second context is used in common for coding a plurality of the second flag in a same picture, and
in the performing of the SAO processing,
each of pixels included in the target block is classified to one of categories,
the each of the pixels is added with an offset value corresponding to the classified one of the categories, and
the SAO parameter includes: information indicating a method of classifying to the categories; and information indicating the offset value.

US Pat. No. 10,341,651

IMAGE CODING METHOD, DECODING METHOD, CODING DEVICE AND DECODING DEVICE

TONGJI UNIVERSITY, Shang...

1. An image coding method comprising:determining a coding mode of a coding block; and
performing hybrid coding on the coding block using a plurality of coding modes, comprising performing coding on pixel sample segments in the coding block using one of two coding modes which are palette coding and string copy coding,
wherein, the coding block is a coding region of an image, comprising at least one of the following: a largest coding unit, LCU, a coding tree unit, CTU, a coding unit, CU, a sub-region of the CU, a prediction unit, PU, a transform unit, TU, and an asymmetric partition, AMP,
the pixel sample segments comprise any one of the following: a pixel, a pixel component, and a pixel index; wherein
when performing coding on any one of the pixel sample segments in the coding block using palette coding, the method comprises:
constructing or acquiring a palette and performing palette coding on the pixel sample segments to generate palette parameters related to palette decoding;
when performing coding on any one of the pixel sample segments in the coding block using string copy coding, the method comprises:
performing string copy coding on any one of the pixel sample segments to generate copy parameters related to string copy coding, obtaining a string of reference pixel samples matching with the pixel sample segments from a set of the reconstructed reference pixel samples according to a copy path shape mode of the string copy coding of the coding block.

US Pat. No. 10,341,650

EFFICIENT STREAMING OF VIRTUAL REALITY CONTENT

ATI TECHNOLOGIES ULC, Ma...

1. A method of processing Virtual Reality (VR) content, the method comprising:receiving tracking information including at least one of user position information and eye gaze point information;
using one or more processors to:
predict, based on the user tracking information, a user viewpoint of a next frame of a sequence of frames including video data to be displayed,
estimate, for a video portion in a previously encoded frame, a corresponding location of the video portion in the next frame based on the user tracking information, wherein the video portion in the previously encoded frame is encoded using a first encoding mode;
render the video portion in the next frame to be displayed at the estimated corresponding location in the next frame;
identify, based on the estimated corresponding location of the video portion in the next frame, the video portion in the previously encoded frame;
encode the video portion in the next frame using the first encoding mode; and
encode another portion of the next frame using a second encoding mode determined from a prediction mode map.

US Pat. No. 10,341,648

AUTOMATED DETECTION OF PROBLEM INDICATORS IN VIDEO OF DISPLAY OUTPUT

Amazon Technologies, Inc....

1. A system comprising:an electronic data store that stores a digital video, wherein the digital video comprises a plurality of frames, wherein the digital video comprises a recording of display output from a computing device; and
a hardware processor in communication with the electronic data store, the hardware processor configured to execute computer-executable instructions to at least:
select a set of frames from the digital video;
for each of a plurality of frame pairings within the set of frames, determine a pixel change value for each of a plurality of pixel locations, wherein the pixel change value for each pixel location represents a mathematical difference between an intensity value at the pixel location in a first frame of the frame pairing and an intensity value at the pixel location in a second frame of the frame pairing;
determine adjusted pixel change values for each of the plurality of pixel locations, wherein the adjusted pixel change value for each individual pixel location comprises the lowest pixel change value determined within a window that includes the individual pixel location and one or more adjacent pixel locations;
determine, for each pixel location, a final pixel change value for the set of frames, wherein the final pixel change value for each individual pixel location comprises the highest adjusted pixel change value determined for the individual pixel location in any frame pairing within the set of frames;
generate a target motion score representing motion in a target area over the set of frames, wherein the target area is smaller than a size of each frame within the set of frames and has a fixed position across each of the set of frames, wherein the target motion score is based on the highest final pixel change value determined for any pixel location within the target area;
generate a peripheral motion score representing motion outside of the target area over the set of frames, wherein the peripheral motion score is based on the highest final pixel change value determined for any pixel location outside of the target area;
determine that isolated motion occurred within the target area over the set of frames based at least in part on a comparison of the target motion score to the peripheral motion score;
based at least in part on the final pixel change values of pixel locations within the target area, identify that a shape of the motion within the target area matches a target motion shape associated with a problem indicator; and
store an indication that a problem indicator appeared in the digital video during at least a portion of the set of frames.
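
The pixel-change pipeline above (per-pair differences, a per-window minimum, a per-location maximum over frame pairings, then target versus peripheral motion scores) can be sketched with numpy and scipy; the window size, the target rectangle, and the toy input are assumptions.

import numpy as np
from scipy.ndimage import minimum_filter

def final_change_map(frames, window=3):
    """frames: list of 2-D grayscale arrays from the selected set of frames."""
    adjusted = []
    for a, b in zip(frames[:-1], frames[1:]):
        change = np.abs(b.astype(float) - a.astype(float))     # per-pair pixel change values
        adjusted.append(minimum_filter(change, size=window))   # lowest value within the window
    return np.max(adjusted, axis=0)                            # highest adjusted value per location

def motion_scores(final_map, target_slice):
    mask = np.zeros_like(final_map, dtype=bool)
    mask[target_slice] = True
    target_score = final_map[mask].max()                       # motion inside the target area
    peripheral_score = final_map[~mask].max()                  # motion outside the target area
    return target_score, peripheral_score

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    frames = [rng.integers(0, 5, (32, 32)) for _ in range(4)]
    frames[2][10:14, 10:14] += 80                              # simulated indicator in the target area
    fmap = final_change_map(frames)
    print(motion_scores(fmap, (slice(8, 16), slice(8, 16))))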

US Pat. No. 10,341,647

METHOD FOR CALIBRATING A CAMERA AND CALIBRATION SYSTEM

ROBERT BOSCH GMBH, Stutt...

1. A method for calibrating a dynamic vision sensor (DVS) camera, the method comprising:detecting a two-dimensional imaging trajectory of a moving calibration object by the DVS camera;
detecting a three-dimensional reference trajectory of the moving calibration object by a detection device that determines the reference trajectory on the basis of a plurality of accelerations of the moving calibration object at a plurality of detection times and corresponding calibration object positions, as measured by an acceleration sensor on the moving calibration object and received by the detection device from a transmitter on the moving calibration object, the imaging trajectory representing a trajectory of the calibration object imaged in image coordinates of the DVS camera and the reference trajectory representing the trajectory in world coordinates;
reading in the imaging trajectory and the reference trajectory by an interface device that supplies the imaging trajectory and the reference trajectory to an ascertainment device that includes a processing unit; and
calibrating the DVS camera by the ascertainment device on the basis of the imaging trajectory and the reference trajectory.

US Pat. No. 10,341,646

VARIABLE FOCAL LENGTH LENS SYSTEM WITH OPTICAL POWER MONITORING

Mitutoyo Corporation, Ka...

1. A variable focal length (VFL) lens system, comprising:a VFL lens;
a VFL lens controller that controls the VFL lens to periodically modulate an optical power of the VFL lens over a range of optical powers at an operating frequency;
an objective lens that inputs workpiece light arising from a workpiece surface during a workpiece imaging mode and transmits the workpiece light along an imaging optical path that passes through the VFL lens;
a camera that receives the workpiece light transmitted by the VFL lens along the imaging optical path during the workpiece imaging mode and provides a corresponding workpiece image exposure; and
an optical power monitoring configuration comprising a monitoring beam generator comprising a light source and a beam pattern element that inputs light from the light source and outputs a monitored beam pattern, wherein:
the optical power monitoring configuration transmits the monitored beam pattern along at least a portion of the imaging optical path to travel through the VFL lens to the camera during an optical power monitoring mode;
the camera provides at least a first monitoring image exposure including the monitored beam pattern during at least a first phase timing of the periodic modulation of the VFL lens during the optical power monitoring mode; and
a dimension of the monitored beam pattern in the first monitoring image exposure is related to an optical power of the VFL lens during the first phase timing.

US Pat. No. 10,341,645

BACKLIGHT AND IMAGE DISPLAY DEVICE USING THE SAME

LG DISPLAY CO., LTD., Se...

1. A backlight unit for a display device, comprising:a light path conversion sheet;
a light source above a first side of the light path conversion sheet; and
a reflecting plate above a second side opposite to the first side of the light path conversion sheet, the reflecting plate reflecting light directly emitted from the light source without intervening any optical members in substantially parallel with a light travel direction between the light source and a center portion of the reflecting plate,
wherein the light path conversion sheet directs the light reflected from the reflecting plate in a direction substantially perpendicular to the light travel direction between the light source and the center portion of the reflecting plate.

US Pat. No. 10,341,644

OMNISTEREO CAPTURE AND RENDER OF PANORAMIC VIRTUAL REALITY CONTENT

GOOGLE LLC, Mountain Vie...

1. A system comprising:at least one processor;
memory storing instructions that, when executed by the at least one processor, cause the system to perform operations including:
receiving a set of images based on captured video streams collected from at least one stereo pair of cameras;
calculating optical flow between images from the set of images to generate a plurality of image frames that are not part of the set of images, the calculating of the optical flow including analyzing image intensity fields for selected columns of pixels associated with the set of images;
interleaving the plurality of image frames into the set of images at the respective selected columns of pixels and stitching together a portion of the plurality of image frames and the set of images based at least in part on the optical flow; and
generating, using the portion of the plurality of image frames and the set of images, an omnistereo panorama.

US Pat. No. 10,341,643

PROCESS AND SYSTEM FOR ENCODING AND PLAYBACK OF STEREOSCOPIC VIDEO SEQUENCES

3DN, LLC, Ottawa (CA)

1. A method for displaying images, comprising:in a conventional two-dimensional (2D) viewing mode:
driving a 2D image of a plurality of 2D images to a head mountable display; and
in a stereoscopic three-dimensional (3D) viewing mode:
driving to the head mountable display a left image of a plurality of left images for left eye viewing, wherein at least a portion of the plurality of left images are generated using time interpolation to increase a frame rate of the display, and a right image of a plurality of right images for right eye viewing, the left image and the right image being time-synchronized with parallax for perception of depth in the stereoscopic 3D viewing mode, wherein the driving comprises simultaneous dual presentation of the left image and the right image.

US Pat. No. 10,341,642

DISPLAY DEVICE, CONTROL METHOD, AND CONTROL PROGRAM FOR STEREOSCOPICALLY DISPLAYING OBJECTS

KYOCERA CORPORATION, Kyo...

1. A display device, comprising:a display configured to three-dimensionally display a predetermined object, by displaying images respectively corresponding to both eyes of a user when the display device is worn;
a sensor configured to detect displacement of a real body in a display space of the object; and
a processor configured to
determine a material of the object, and
cause the display to display the object according to the displacement of the real body detected by the sensor and the determined material of the object, wherein, in response to that movement of the real body in which the real body comes in contact with the object at a contact position and then moves away therefrom without maintaining contact with the object is detected by the sensor, the processor is configured to execute processing corresponding to the contact position of the object.

US Pat. No. 10,341,641

METHOD FOR PERFORMING IMAGE PROCESS AND ELECTRONIC DEVICE THEREOF

Samsung Electronics Co., ...

1. An electronic device comprising:a first image sensor configured to calculate a packet rate output in dynamic vision sensor and to measure a motion speed of a subject by the packet rate output in the dynamic vision sensor;
a second image sensor configured to be synchronized with a system clock; and
a processor operatively coupled to the first image sensor and the second image sensor, wherein the processor is configured to:
identify at least one subject having a motion based on first information obtained using the first image sensor, the motion determined as a function of a comparison between a current image frame obtained from the first image sensor and a previous image frame obtained from the first image sensor,
determine at least one region of interest (ROI) corresponding to the motion of the at least one subject from the current image frame,
after determining the at least one ROI, obtain second information corresponding to the at least one ROI, using the second image sensor,
identify a motion speed of the subject included in each of the at least one ROI based on the second information,
determine one ROI among the at least one ROI based on the motion speed,
identify a motion of the subject included in the at least one ROI, and
perform a function corresponding to the motion.

US Pat. No. 10,341,640

MULTI-WAVELENGTH PHASE MASK

The Board of Trustees of ...

1. An apparatus comprising:a phase mask configured and arranged with optics in an optical path to provide modification of a shape of light simultaneously for each of a plurality of wavelengths of the light concurrently passed from a plurality of objects, wherein the modification is simultaneous for each of the plurality of wavelengths of the light concurrently passed from the plurality of objects, independent of one another and from the optical path; and
circuitry configured and arranged to characterize a three-dimensional image of the plurality of objects, that are associated with different colors, based on each of the modified shapes of light and the respective wavelengths that are labeled using different colors.

US Pat. No. 10,341,639

AUTOMATICALLY SCANNING AND REPRESENTING AN ENVIRONMENT WITH COLLISION AVOIDANCE

ABB Schweiz AG, Baden (C...

1. A computer-implemented method comprising:obtaining a first representation of an environment employing a 3D camera coupled to a robotic arm based on a first scanning path;
simulating a second scanning path of the 3D camera comprising movements of the robotic arm using the first representation of the environment;
determining the 3D camera cannot reach a portion of the second scanning path so as to complete the second scanning path in response to simulating the second scanning path;
adjusting the second scanning path in response to determining the robotic arm cannot reach the portion of the second scanning path;
obtaining a second representation of the environment employing the 3D camera based on the adjusted second scanning path; and
wherein the second representation of the environment is different from the first representation of the environment.

US Pat. No. 10,341,638

METHOD AND APPARATUS OF DEPTH TO DISPARITY VECTOR CONVERSION FOR THREE-DIMENSIONAL VIDEO CODING

MediaTek Inc., Hsin-Chu ...

1. A method for three-dimensional or multi-view video coding, the method comprising:receiving input data associated with a conversion region of a current picture in a current dependent view, wherein the conversion region comprises a grid of pixels;
receiving depth data for the grid of pixels associated with the conversion region;
determining the conversion region is partitioned into multiple motion prediction sub-blocks;
determining a single converted DV (disparity vector) from the depth data associated with the conversion region based on at least (a) first depth data associated with a first motion prediction sub-block from the multiple motion prediction sub-blocks, and (b) second depth data associated with a second motion prediction sub-block from the multiple motion prediction sub-blocks; and
processing each of the multiple motion prediction sub-blocks of the conversion region using the single converted DV.

US Pat. No. 10,341,636

BROADCAST RECEIVER AND VIDEO DATA PROCESSING METHOD THEREOF

LG ELECTRONICS INC., Seo...

1. A method for transmitting a signal in a transmitter, the method comprising:generating the signal including a first video sequence and a second video sequence and viewpoint information,
wherein the first video sequence includes first video sections having frames of left viewpoint for first scenes and second video sections having frames of right viewpoint for second scenes,
wherein the first video sections and the second video sections are multiplexed in the first video sequence,
wherein the second video sequence includes third video sections having frames of right viewpoint of the first scenes and fourth video sections having frames of left viewpoint for the second scenes, and
wherein the third video sections and the fourth video sections are multiplexed in the second video sequence; and
transmitting the signal;
wherein the viewpoint information specifies whether viewpoints of the frames correspond to a left viewpoint or a right viewpoint.

US Pat. No. 10,341,635

STEREOSCOPIC IMAGING METHOD AND DEVICE

National Chiao Tung Unive...

1. A stereoscopic imaging method for generating, based on a pair of a first image and a second image that respectively correspond to different viewing angles, a stereoscopic image on a display screen for a viewer, said stereoscopic imaging method comprising:acquiring viewer-related information that includes a pupil distance between pupils of the viewer, a first parameter associated with a negative disparity condition, and a second parameter associated with a positive disparity condition;
upon receipt of positional information associated with a convergence position on the display screen at which the viewer is looking, acquiring, by a processor based on the positional information, a convergence disparity value from an original disparity map that corresponds to the first and second images, the convergence disparity value corresponding to a pixel of the display screen at the convergence position;
generating a disparity transformation model by the processor based on at least the convergence disparity value and the viewer-related information;
transforming, by the processor, the original disparity map into a transformed disparity map based on the disparity transformation model; and
synthesizing, by the processor, the first image and the second image into the stereoscopic image based on the transformed disparity map.

US Pat. No. 10,341,634

METHOD AND APPARATUS FOR ACQUIRING IMAGE DISPARITY

SAMSUNG ELECTRONICS CO., ...

1. A method of acquiring an image disparity by one or more hardware processors, the method comprising:acquiring, from dynamic vision sensors, a first image having a first view of an object and a second image having a second view of the object;
calculating a cost within a preset disparity range of an event of first image and a corresponding event of the second image, wherein the event of the first image and the corresponding event of the second image are generated when an intensity of lighting is greater than a preset threshold;
calculating an intermediate disparity of the event of the first image and an intermediate disparity of the event of the second image based on the cost;
determining whether the event of the first image is a matched event based on the intermediate disparity of the event of the first image and the intermediate disparity of the event of the second image; and
in response to the event of the first image being determined as the matched event, predicting optimal disparities of all events of the first image based on the intermediate disparity of the event of the first image.

US Pat. No. 10,341,633

SYSTEMS AND METHODS FOR CORRECTING ERRONEOUS DEPTH INFORMATION

QUALCOMM Incorporated, S...

7. A method for generating a corrected depth map by an electronic device, comprising:obtaining a first depth map, wherein the first depth map comprises first depth information of a first portion of a scene sampled by a depth sensor at a first sampling;
obtaining a second depth map, wherein the second depth map comprises second depth information of a second portion of the scene sampled by the depth sensor at a second sampling;
obtaining displacement information indicative of a displacement of the depth sensor between the first sampling and the second sampling;
transforming the first depth map based on the displacement information to produce a transformed depth map;
determining a spatio-temporal interpolation between at least two depths of the transformed depth map and at least two depths of the second depth map;
detecting erroneous depth information by comparing one or more depths of the second depth map with a value that is based on at least the spatio-temporal interpolation and a threshold; and
generating a corrected depth map by correcting the erroneous depth information of the second depth map based on the transformed depth map.
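
A simplified sketch of the correction flow above, assuming a purely translational displacement, a two-sample average as the spatio-temporal interpolation, and an absolute-difference threshold test; the claim does not commit to any of these specific choices.

import numpy as np

def correct_depth(first_depth, second_depth, shift, threshold=0.5):
    """shift: (dy, dx) integer displacement of the sensor between the two
    samplings, assumed translational for this sketch."""
    transformed = np.roll(first_depth, shift, axis=(0, 1))     # transform the first depth map
    interpolated = 0.5 * (transformed + second_depth)          # spatio-temporal interpolation
    erroneous = np.abs(second_depth - interpolated) > threshold
    corrected = np.where(erroneous, transformed, second_depth) # replace flagged depths
    return corrected, erroneous

if __name__ == "__main__":
    d1 = np.full((4, 4), 2.0)
    d2 = np.full((4, 4), 2.0)
    d2[1, 2] = 9.0                                             # spurious depth reading
    corrected, bad = correct_depth(d1, d2, shift=(0, 0))
    print(bad.astype(int))
    print(corrected)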

US Pat. No. 10,341,632

SPATIAL RANDOM ACCESS ENABLED VIDEO SYSTEM WITH A THREE-DIMENSIONAL VIEWING VOLUME

1. A method for displaying an environment from a viewpoint, the method comprising:at an input device, receiving user input designating the viewpoint within a viewing volume;
at one or more processors, identifying, from among a plurality of vantages within the viewing volume, a subset of the vantages nearest to the viewpoint comprising at least two of the vantages, each of which has associated video data;
at a data store, retrieving the video data from the subset of the vantages;
at the one or more processors, combining the video data from the subset of the vantages to generate viewpoint video data depicting the environment from the viewpoint; and
at a display device, displaying the viewpoint video data, wherein identifying the subset comprises identifying four vantages of the plurality of vantages that define a tetrahedral shape around the viewpoint.
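
A geometric sketch of the vantage-selection idea above, assuming vantage positions are 3-D points, a nearest-k rule stands in for the claimed tetrahedral test, and inverse-distance weighting stands in for the claimed combination of vantage video data.

import numpy as np

def nearest_vantages(viewpoint, vantage_positions, k=4):
    d = np.linalg.norm(vantage_positions - viewpoint, axis=1)
    order = np.argsort(d)[:k]                 # k vantages nearest to the viewpoint
    return order, d[order]

def blend_videos(vantage_frames, distances, eps=1e-6):
    """vantage_frames: array of shape (k, H, W); inverse-distance weighting is
    an assumed stand-in for combining the vantage video data."""
    w = 1.0 / (distances + eps)
    w /= w.sum()
    return np.tensordot(w, vantage_frames, axes=1)

if __name__ == "__main__":
    positions = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 1]], float)
    idx, dist = nearest_vantages(np.array([0.2, 0.1, 0.1]), positions)
    frames = np.stack([np.full((2, 2), v) for v in [10., 20., 30., 40., 50.]])
    print(idx, blend_videos(frames[idx], dist))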

US Pat. No. 10,341,631

CONTROLLING MODES OF SUB-TITLE PRESENTATION

Harmonic, Inc., San Jose...

1. One or more non-transitory computer-readable storage mediums storing one or more sequences of instructions for creating a sub-titles stream or file composed of sub-titles elements, wherein execution of the one or more sequences of instructions by one or more processors causes:for each sub-titles element in said sub-titles elements, performing:
inserting a sub-titles element into the sub-titles stream or file;
determining whether at least one end-of-block condition, of a set of two or more end-of-block conditions, related to a mode of presentation of sub-titles is satisfied by the inserted sub-titles element; and
upon satisfying said at least one end-of-block condition, inserting into the sub-titles stream or file a datum representative of an end of a block according to each mode of presentation of sub-titles that is satisfied by the inserted sub-titles element.

US Pat. No. 10,341,630

CREATING TIME LAPSE VIDEO IN REAL-TIME

IDL CONCEPTS, LLC, Los A...

1. A computer-implemented method for creating time lapse video in real-time, comprising:receiving a real-time video feed from a video input device;
receiving one or more user configurations related to time lapse video, wherein the one or more user configurations comprise a duration of input that corresponds to a span of content covered by the real-time video feed;
automatically generating frames according to the one or more predetermined user configurations, wherein said generating comprises:
buffering frames of the real-time video feed; and automatically selecting buffered frames according to a frequency set by the one or more predetermined user configurations; and
outputting a time lapse video file, wherein outputting the time-lapse video file comprises outputting the selected buffered frames,
wherein a duration of the time lapse video file is less than a duration of the real-time video feed while the span of content covered is substantially equivalent for both the time lapse video file and the real-time video feed.
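
The buffering-and-selection flow above can be sketched as follows; the configuration fields (input duration, output duration, frame rates) and the fixed-interval selection rule are assumptions about one reasonable implementation.

def select_time_lapse_frames(frame_iter, input_duration_s, output_duration_s,
                             input_fps, output_fps=30):
    """Keep one buffered frame every `step` frames so that input_duration_s of
    live video collapses into output_duration_s of time-lapse output."""
    total_in = int(input_duration_s * input_fps)
    total_out = int(output_duration_s * output_fps)
    step = max(1, total_in // max(1, total_out))      # selection frequency from the configuration
    buffered, selected = [], []
    for i, frame in enumerate(frame_iter):
        buffered.append(frame)                        # buffer frames of the real-time feed
        if i % step == 0:
            selected.append(buffered[-1])             # automatically select buffered frames
    return selected

if __name__ == "__main__":
    frames = range(600)                               # stand-in for 20 s of 30 fps input
    out = select_time_lapse_frames(frames, input_duration_s=20,
                                   output_duration_s=2, input_fps=30)
    print(len(out))                                   # about 60 frames for a 2 s clip at 30 fps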

US Pat. No. 10,341,629

TOUCH SCREEN WIFI CAMERA

HIPCAM LTD., Givat Shmue...

1. An in-place imaging device, comprising:(a) a housing, said housing comprising:
(i) a camera;
(ii) a wireless communication module;
(iii) a processing unit; and
(iv) a touch sensitive screen configured for displaying content and receiving input, wherein a lens of said camera and said touch sensitive screen are disposed on a same surface; and
(b) a support stand mechanically and electronically coupled to said housing, said support stand including:
(i) a temperature sensor embedded in said support stand, for sensing the temperature in an immediate area surrounding the in-place imaging device and displaying said temperature on said touch sensitive screen;
wherein the imaging device is adapted to capture footage with said camera and transmitting said footage via said wireless communication module to a network access point.

US Pat. No. 10,341,628

MONOCHROME-COLOR MAPPING USING A MONOCHROMATIC IMAGER AND A COLOR MAP SENSOR

Google LLC, Mountain Vie...

1. A method for providing a color, high-resolution image of a scene comprising:capturing a high-resolution monochromatic image of the scene;
capturing a low-resolution color image of the same scene, a resolution of the captured low-resolution color image being lower than a resolution of the captured high-resolution monochromatic image;
mapping, using a nonlinear grayscale that is downscaled to a color scale based on a number of pixels of the captured high-resolution monochromatic image of the scene and another number of pixels of the captured low-resolution color image of the same scene, monochrome luminosities from the captured high-resolution monochromatic image to color luminosities corresponding to colors from the captured low-resolution color image of the scene to produce a map, the map correlating the monochrome luminosities in the captured high-resolution monochromatic image to the color luminosities corresponding to colors from the captured low-resolution color image of the scene;
using the map, colorizing the captured high-resolution monochromatic image of the scene with the colors corresponding to the color luminosities of the captured low-resolution color image of the scene, the colorizing providing a color, high-resolution image of the scene; and
providing the color, high-resolution image of the scene.
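
A toy numpy sketch of the luminosity-to-color mapping above: the low-resolution color image is upscaled by nearest-neighbor repetition, the high-resolution grayscale is quantized into a small number of levels as a stand-in for the claimed nonlinear downscaled grayscale, and each level receives the mean color observed for it. All of these simplifications are assumptions made for illustration.

import numpy as np

def colorize(mono_hi, color_lo, levels=16):
    """mono_hi: (H, W) high-resolution grayscale image with values in [0, 255].
    color_lo: (h, w, 3) low-resolution color image; H, W must be integer
    multiples of h, w for this nearest-neighbor sketch."""
    fy = mono_hi.shape[0] // color_lo.shape[0]
    fx = mono_hi.shape[1] // color_lo.shape[1]
    color_up = np.repeat(np.repeat(color_lo, fy, axis=0), fx, axis=1)  # crude upscale
    bins = (mono_hi.astype(int) * levels // 256).clip(0, levels - 1)   # downscaled grayscale levels
    lut = np.zeros((levels, 3))
    for b in range(levels):
        mask = bins == b
        if mask.any():
            lut[b] = color_up[mask].mean(axis=0)      # map: mean color per grayscale level
    return lut[bins]                                  # colorized high-resolution image

if __name__ == "__main__":
    mono = np.tile(np.linspace(0, 255, 8), (8, 1))
    lowc = np.zeros((4, 4, 3))
    lowc[..., 0] = 200                                # reddish low-resolution image
    print(colorize(mono, lowc).shape)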

US Pat. No. 10,341,627

SINGLE-HANDED FLOATING DISPLAY WITH SELECTABLE CONTENT

INTERMEC IP CORP., Fort ...

1. A barcode scanning device comprising:a stabilized component having a projector;
a nonstabilized component having a light emitting component; and
a camera adapted to:
scan a work item including a barcode;
wherein the barcode scanning device is adapted to:
decode the barcode scanned by the camera to identify the work item;
project, via the projector and in response to decoding the barcode, a first user interface on a surface of the identified work item such that the projected first user interface is within a field of view of the camera, wherein the first user interface comprises one or more commands and/or options associated with the identified work item that are selectable by the barcode scanning device based on a position of the stabilized component relative to a position of the nonstabilized component, wherein the one or more commands and/or options of the projected first user interface are selected based on a position of a light indicator projected by the light emitting component within the projected first user interface;
detect a position of the light indicator relative to the projected first user interface by recognizing presence of the light indicator within a particular portion of a grid structure associated with the field of view of the camera and correlating the position of the light indicator detected by the camera with a command or option of the one or more commands and/or options of the projected first user interface based on the grid structure, wherein the grid structure is generated using grid technology and is independent of the projected first user interface;
receive an indication of a selection of the command or option based on a position of the selection indicator relative to a position of the projected first user interface; and
project a second user interface corresponding to the command or option in the first user interface in response to receiving the selection.

US Pat. No. 10,341,626

IMAGE PROJECTION SYSTEM, PROJECTOR, AND CONTROL METHOD FOR IMAGE PROJECTION SYSTEM

SEIKO EPSON CORPORATION, ...

1. An image projection system comprising:a first projector; and
a second projector, wherein
the first projector includes:
a first projecting section configured to project a first image; and
a first imaging section configured to capture a range including at least a part of the first image projected by the first projecting section and at least a part of a second image projected by the second projector,
the second projector includes a second projecting section configured to project the second image, and
the image projection system sets a target color on the basis of a first captured image obtained by capturing, with the first imaging section, at least a part of the first image projected by the first projecting section and calculates, on the basis of a second captured image obtained by capturing, with the first imaging section, at least a part of the second image projected by the second projecting section, first correction data for correcting a color of a projected image of the second projector to the target color, the correcting occurring based upon lattice points, the image projection system generating a correspondence table in which specified coordinates of the lattice points in data of the first captured image and coordinates of stored lattice points in pattern image data are registered in association with each other, wherein:
the first correction data is data for correcting a color of a first point in the second captured image to the target color, the first correction data being a first value, the first value being a difference between the target color and an imaging value of the first point in the second captured image,
the second projector includes a second imaging section configured to capture a third captured image, the third captured image being obtained by capturing at least a part of the second image projected by the second projecting section, and
the image projection system calculates, on the basis of the third captured image and the first correction data, second correction data for correcting a color of a second point to the target color by adding the first value and a second value that is a difference between an imaging value of the first point in the third captured image and an imaging value of the second point in the third captured image.
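
The correction arithmetic in the last two limitations can be written out directly. In the hedged sketch below each "imaging value" is reduced to a single scalar per channel (an assumption; the claim operates on captured images and lattice points): the first correction is the difference between the target color and point 1 in the second captured image, and the second correction adds the difference between points 1 and 2 in the third captured image.

    def first_correction(target, p1_in_second_capture):
        """Difference between the target color and point 1 in the second captured image."""
        return target - p1_in_second_capture

    def second_correction(first_value, p1_in_third_capture, p2_in_third_capture):
        """Extend the correction to point 2 using the second projector's own capture."""
        return first_value + (p1_in_third_capture - p2_in_third_capture)

    fc = first_correction(target=0.80, p1_in_second_capture=0.72)                   # 0.08
    sc = second_correction(fc, p1_in_third_capture=0.70, p2_in_third_capture=0.65)  # 0.13
    print(round(fc, 3), round(sc, 3))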

US Pat. No. 10,341,625

DISPLAY APPARATUS, DISPLAY CONTROL METHOD, AND COMPUTER-READABLE STORAGE MEDIUM

CASIO COMPUTER CO., LTD.,...

1. A display apparatus comprising:a display screen for displaying a video based on video data, wherein the display screen comprises:
a light source configured to be controlled to emit light and to temporarily stop emission of the light, based on the video data; and
a light modulator configured to modulate the light emitted by the light source; and
a processor configured to:
control the light source to emit the light based on the video data and the light modulator to modulate the light emitted by the light source, to display the video; and
while controlling the light source to emit the light based on the video data to display the video:
determine whether a time-out period set to a first length has passed;
in response to determining that the time-out period set to the first length has passed, perform a first determination, based on the video data, of whether to temporarily stop emission of the light so that the display screen is black; and
in response to determining, in the first determination, to temporarily stop emission of the light so that the display screen is black:
control the light source to temporarily stop emission of the light;
set the time-out period to a second length shorter than the first length;
determine whether the time-out period set to the second length has passed;
in response to determining that the time-out period set to the second length has passed, perform a second determination, based on the video data, of whether to temporarily stop emission of the light; and
control the light source based on a result of the second determination.
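
A compact Python sketch of the two-stage timeout control: after the first, longer timeout a black-out decision is made from the video data; once emission is stopped, the check is repeated on the shorter second timeout. The concrete timeout values and the "frame is black" test are assumptions used only to make the control flow concrete.

    import time

    FIRST_TIMEOUT = 5.0    # seconds, assumed first length
    SECOND_TIMEOUT = 1.0   # seconds, assumed shorter second length

    def frame_is_black(frame_mean, threshold=0.02):
        """Assumed test: treat a near-zero mean luminance as a black frame."""
        return frame_mean < threshold

    def run_light_source(frame_means, emit, stop_emission):
        """Drive the light source from a stream of per-frame mean luminances."""
        deadline = time.monotonic() + FIRST_TIMEOUT
        stopped = False
        for mean in frame_means:
            if not stopped:
                emit(mean)                       # display the video normally
            if time.monotonic() < deadline:      # time-out period not yet passed
                continue
            if frame_is_black(mean):             # first / second determination
                stop_emission()                  # temporarily stop emission
                stopped = True
                deadline = time.monotonic() + SECOND_TIMEOUT
            else:
                stopped = False                  # resume emission
                deadline = time.monotonic() + FIRST_TIMEOUT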

US Pat. No. 10,341,624

PROJECTION IMAGE DISPLAY APPARATUS

PANASONIC INTELLECTUAL PR...

1. A projection image display apparatus comprising:a plurality of light sources including a first light source and a second light source;
a light combiner combining light rays emitted from the plurality of light sources into a combined light;
a light modulation element modulating the combined light; and
a projection optical system projecting an image emitted from the light modulation element,
wherein the plurality of light sources are each controlled by pulse width modulation signals, each of which contains a plurality of pulses, and duty ratios of the plurality of pulses of the pulse width modulation signals differ from each other,
the duty ratios of the plurality of pulses of the pulse width modulation signals to the light sources are set so that a total luminance value of the light sources, which is determined based on the duty ratios of the plurality of pulses of the pulse width modulation signals to the light sources, becomes a target luminance value of an image projected from the projection optical system,
the duty ratios of the plurality of pulses of the pulse width modulation signals to the light sources increase as the target luminance value increases,
when the target luminance value is less than a predetermined value, which is less than 100%, a duty ratio of the first light source increases faster than a duty ratio of the second light source increases, as the target luminance value increases, and
when the target luminance value is equal to or greater than the predetermined value, the duty ratio of the first light source increases slower than the duty ratio of the second light source increases, as the target luminance value increases.
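
One way to read the piecewise duty-ratio behaviour is as two linear segments meeting at the predetermined value (a "knee"). The sketch below uses an assumed parameterisation, not the patented mapping: below the knee the first source's duty ratio rises faster, above it the second source's rises faster, and both increase monotonically with the target luminance.

    def duty_ratios(target, knee=0.5):
        """Return (duty_first, duty_second) for a target luminance in [0, 1]."""
        if target < knee:
            d1 = 0.8 * (target / knee)          # first source rises faster here
            d2 = 0.2 * (target / knee)
        else:
            frac = (target - knee) / (1.0 - knee)
            d1 = 0.8 + 0.2 * frac               # first source rises slower here
            d2 = 0.2 + 0.8 * frac
        return d1, d2

    for t in (0.25, 0.5, 0.75, 1.0):
        print(t, duty_ratios(t))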

US Pat. No. 10,341,623

OPTICAL PROJECTION SYSTEM AND ENERGY CONTROL METHOD THEREFOR USING SELECTION UNIT

Coretronic Corporation, ...

1. An optical projection system, comprising:a light source module capable of emitting at least one light beam;
an optical engine for receiving the light beam and modulating the light beam according to at least one image signal to form an image beam;
a thermoelectric generator for absorbing heat in the optical projection system and converting the heat into electrical energy;
a storage unit for storing the electrical energy, wherein a state of charge of the storage unit is allowed to reach at least a first level or a second level, and the second level is larger than the first level;
a first electronic device and a second electronic device for receiving the electrical energy stored in the storage unit, wherein the first electronic device has a first threshold voltage, the second electronic device has a second threshold voltage, the first threshold voltage is a minimum voltage required to make the first electronic device operable, the second threshold voltage is a minimum voltage required to make the second electronic device operable, and the second threshold voltage is larger than the first threshold voltage; and
a selection unit, wherein the selection unit outputs a first selection signal to the storage unit to turn on the first electronic device having the first threshold voltage when the state of charge of the storage unit reaches the first level and turn on the second electronic device having the second threshold voltage when the state of charge of the storage unit reaches the second level, and the selection unit outputs a second selection signal to the storage unit to selectively shut down at least one of the first electronic device and the second electronic device according to (1) the state of charge of the storage unit and (2) the first threshold voltage of the first electronic device and the second threshold voltage of the second electronic device, wherein the second electronic device with the second threshold voltage is set by the selection unit to have higher priority to be shut down over the first electronic device with the first threshold voltage when the first electronic device and the second electronic device are in an operating state simultaneously.
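
A hedged sketch of the selection logic: devices are enabled as the state of charge reaches their respective levels, and when charge falls the device with the higher threshold voltage is shut down first. The charge levels, the voltage figures, and the minimal device model are illustrative assumptions.

    class Device:
        def __init__(self, name, threshold_v):
            self.name, self.threshold_v, self.on = name, threshold_v, False

    def update_selection(soc, level1, level2, dev1, dev2):
        """dev2 has the higher threshold voltage and is shut down first."""
        if soc >= level2:
            dev1.on = dev2.on = True        # enough charge for both devices
        elif soc >= level1:
            dev2.on = False                 # higher-threshold device goes first
            dev1.on = True
        else:
            dev1.on = dev2.on = False       # shed both loads

    dev1, dev2 = Device("fan", 3.0), Device("status display", 4.2)
    for soc in (0.2, 0.5, 0.9, 0.5):
        update_selection(soc, level1=0.4, level2=0.8, dev1=dev1, dev2=dev2)
        print(soc, dev1.on, dev2.on)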

US Pat. No. 10,341,622

MULTI-HALF-TONE IMAGING AND DUAL MODULATION PROJECTION/DUAL MODULATION LASER PROJECTION

Dolby Laboratories Licens...

1. A method for reducing halo artifacts in a final image rendered upon a dual modulation display system by creating a Point Spread Function (PSF) of a desired size with a first, pre-modulator that illuminates a second, primary modulator comprising the step of preparing a dual modulation energization signal comprising a pre-modulator energization signal comprising a plurality of half-tone images, each of the plurality of half-tone images to be displayed or energized on a pre-modulator of a dual modulation display system over a plurality of sub-frame time periods during a single frame time period, wherein further the single frame period is a modulation period of the primary modulator and the sub-frame time period is a modulation period of the pre-modulator and the pixel elements of the primary modulator are switched on/off once during the single frame period and the pixel elements of the pre-modulator are switched on/off once during a single sub-frame period and further that the single frame period comprises a plurality of sub-frame time periods, thereby increasing the number of levels in the pre-modulator without increasing PSF size, in synchronization with a primary modulator signal comprising an image to be displayed or energized on a primary modulator of the dual modulation display system, the primary modulator signal further comprising a bit sequence of N bits per pixel wherein the higher order bits are spread out across the frame synchronized with the pre-modulator energization signal.
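
For intuition only, the sketch below spreads a desired pre-modulator level over K binary sub-frames within one primary-modulator frame, which is one way a binary pre-modulator can present more than two effective levels per frame without enlarging the PSF. The sub-frame count and the even-spacing scheme are assumptions, not the patented half-tone sequence.

    def subframe_states(level, k=4):
        """Return k on/off states whose average approximates `level` in [0, 1]."""
        on_count = round(level * k)
        # Spread the "on" sub-frames as evenly as possible across the frame.
        return [(i + 1) * on_count // k > i * on_count // k for i in range(k)]

    for lvl in (0.0, 0.25, 0.6, 1.0):
        print(lvl, subframe_states(lvl))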

US Pat. No. 10,341,621

SYSTEMS AND METHODS FOR CREATING FULL-COLOR IMAGE IN LOW LIGHT

Chromatra, LLC., Beverly...

1. A color imaging system, comprising:a radiation sensitive sensor configured to generate, in response to incident electromagnetic radiation from a scene:
a first set of electrical signals indicative of a first channel comprising a first spectrum of wavelengths of electromagnetic radiation, a first array of radiation sensitive pixels that are enabled to detect the first spectrum of wavelengths of electromagnetic radiation, and the first array of radiation sensitive pixels comprises clear, unfiltered pixels;
a second set of electrical signals indicative of a second channel comprising a second spectrum of wavelengths of electromagnetic radiation, and a second array of radiation sensitive pixels that are enabled to detect the second spectrum of wavelengths of electromagnetic radiation, and the second array of radiation sensitive pixels comprises a filter; and
an image processor coupled to the radiation sensitive sensor and having circuitry configured to:
receive the first set of electrical signals indicative of the first channel and the second set of electrical signals indicative of the second channel;
derive an output based on the first and second sets of electrical signals to channels of a red-green-blue (RGB) display to generate a full-color image of the scene based on the first and second sets of electrical signals by combining the first and second sets of electrical signals into a set of color vectors, wherein a color vector comprises an ordered set of numbers describing a color;
translate the set of color vectors into colors in a color space with at least one of using a color lookup table that records associations between color vectors and colors, or using predetermined formulas based on definitions of the first and second channels; and
display the full-color image.
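
A hedged illustration of the color-vector translation: the clear-pixel and filtered-pixel signals are combined into an ordered pair and matched against a lookup table of stored color vectors. The two-channel vector, the table contents, and nearest-neighbour matching are assumptions.

    import math

    COLOR_LUT = {                     # (clear, filtered) vector -> RGB, illustrative only
        (1.0, 0.2): (255, 40, 40),
        (1.0, 0.8): (40, 255, 40),
        (0.3, 0.3): (40, 40, 255),
    }

    def to_rgb(clear, filtered):
        """Pick the RGB entry whose stored color vector is nearest the input."""
        vec = (clear, filtered)
        key = min(COLOR_LUT, key=lambda k: math.dist(k, vec))
        return COLOR_LUT[key]

    print(to_rgb(0.9, 0.75))          # entry closest to the vector (0.9, 0.75)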

US Pat. No. 10,341,620

IMAGE SENSOR AND IMAGE-CAPTURING DEVICE

NIKON CORPORATION, Tokyo...

1. An image sensor comprising:a first microlens;
a first filter that is transmissive to a first wavelength of light having passed through the first microlens;
a first photoelectric converter that generates charges by performing photoelectric conversion of the first wavelength light transmitted through the first filter;
a second filter that is transmissive to a second wavelength of the light having passed through the first microlens;
a second photoelectric converter that generates charges by performing photoelectric conversion of the second wavelength light transmitted through the second filter;
a second microlens;
a third filter that is transmissive to the first wavelength of light having passed through the second microlens;
a third photoelectric converter that generates charges by performing photoelectric conversion of the first wavelength light transmitted through the third filter;
a fourth filter that is transmissive to the second wavelength of light having passed through the second microlens;
a fourth photoelectric converter that generates charges by performing photoelectric conversion of the second wavelength light transmitted through the fourth filter;
a first output unit that outputs a first signal based upon at least one of the charges generated by the first photoelectric converter and the charges generated by the third photoelectric converter; and
a second output unit that outputs a second signal based upon at least one of the charges generated by the second photoelectric converter and the charges generated by the fourth photoelectric converter.

US Pat. No. 10,341,619

METHODS, SYSTEMS, AND PRODUCTS FOR EMERGENCY SERVICES

1. A system, comprising:a connection for communicating with a network;
a processor in communication with the connection; and
memory storing instructions that cause the processor to effectuate operations, the operations comprising:
receiving an indication that an emergency notification was sent from a communications device;
determining a first location of the communications device;
querying for devices connected to the network that have a view of the first location;
based on the querying, identifying a nearby mobile device located at a second location;
in response to the querying, receiving a list of nearby devices, the list indicative of nearby device locations associated with the nearby devices, wherein the nearby devices comprise the nearby mobile device;
determining that a first nearby device of the nearby devices has a first camera resolution value; and
based on the first camera resolution value not meeting a minimum requirement, excluding the first nearby device from the list.
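
The resolution-based exclusion step reduces to a simple filter; the device record layout, the megapixel unit, and the minimum value below are assumptions.

    MIN_RESOLUTION_MP = 2.0   # assumed minimum requirement, in megapixels

    def filter_nearby_devices(devices):
        """Keep only devices whose camera meets the minimum resolution."""
        return [d for d in devices
                if d.get("camera_resolution_mp", 0) >= MIN_RESOLUTION_MP]

    nearby = [
        {"id": "phone-1", "camera_resolution_mp": 12.0},
        {"id": "sensor-7", "camera_resolution_mp": 0.3},
    ]
    print(filter_nearby_devices(nearby))   # sensor-7 is excluded from the list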

US Pat. No. 10,341,618

INFRASTRUCTURE POSITIONING CAMERA SYSTEM

Trimble Inc., Sunnyvale,...

1. A camera system comprising:a housing;
a first imaging system integrated with the housing, the first imaging system comprising:
a first lens, and
a first image sensor for detecting light from the first lens to generate first data, wherein:
the first data is image data,
the first imaging system has a first field of view, and
the first field of view is between 20 degrees and 40 degrees;
a second imaging system integrated with the housing, the second imaging system comprising:
a second lens, and
a second image sensor for detecting light from the second lens to generate second data, wherein:
the second data is image data,
the second imaging system has a second field of view,
the second field of view is between 20 degrees and 40 degrees,
the first field of view and the second field of view combine to form a third field of view,
the third field of view is larger than the first field of view, and
the third field of view is larger than the second field of view;
a processor, integrated with the housing, configured to generate third data from the first data and/or the second data, wherein the third data corresponds to position information of an object; and
a communication interface configured to transmit the third data.

US Pat. No. 10,341,617

PUBLIC SAFETY CAMERA IDENTIFICATION AND MONITORING SYSTEM AND METHOD

Purdue Research Foundatio...

10. A system for determining a travel path, comprising:a network of at least one camera;
a communication hub coupled to the network of at least one camera;
at least one electronic communication device;
a data processing system coupled to the communication hub, the data processing system comprising one or more processors configured to:
(a) establish an interface with a 3rd-party mapping system via the electronic communication device,
(b) receive a start point and an end point by a user on the interface for a preselected zone,
(c) generate input data for the 3rd-party mapping system based on the start and end points,
(d) provide the input data to the 3rd-party mapping system,
(e) receive output data from the 3rd-party mapping system associated with a path from the start point to the end point,
(f) identify waypoints in the output data,
(g) identify a camera from a predetermined list of cameras of the preselected zone closest to a line between each of the two consecutive waypoints,
(h) determine the center of a viewing angle of the identified camera from a list of predetermined viewing angles for each of the cameras in the list of cameras of the preselected zone,
(i) calculate a path from the start point through each of the viewing angle centers to the end point,
(j) set the viewing angle center between each of the two consecutive waypoints as a new start point and iterate steps (c) through (i) until the end point is one of the two consecutive waypoints, at which iteration the incremental path is calculated from a viewing angle center representing the last pair of consecutive waypoints to the end point, and
(k) display the calculated path on the electronic communication device,
wherein the predetermined list of cameras is determined by:
(A) receiving a name of an organization,
(B) identifying a range of internet protocol (IP) addresses associated with the organization,
(C) querying each IP address in the range of the IP addresses,
(D) receiving a response from the IP addresses in response to the queries,
(E) verifying the received response is from a camera by obtaining an image file from the IP address and analyzing the image file, and
(F) adding the IP address to the predetermined list of cameras, and
wherein location of each camera is determined by:
(A) using an IP address to physical address translator, and
(B) verifying the location information by using a street-view of a 3rd-party mapping software.
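
Step (g) amounts to a closest-point-to-segment search over the predetermined camera list. The sketch below treats coordinates as plane x/y pairs (an assumption that ignores map projection) and returns the camera nearest the straight line between two consecutive waypoints.

    import math

    def point_segment_distance(p, a, b):
        """Distance from point p to the segment a-b (plane coordinates)."""
        (px, py), (ax, ay), (bx, by) = p, a, b
        dx, dy = bx - ax, by - ay
        if dx == dy == 0:
            return math.hypot(px - ax, py - ay)
        t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
        return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

    def closest_camera(cameras, wp1, wp2):
        """cameras maps a camera id to its (x, y) location."""
        return min(cameras, key=lambda cid: point_segment_distance(cameras[cid], wp1, wp2))

    cams = {"cam-A": (1.0, 1.0), "cam-B": (4.0, 0.5), "cam-C": (8.0, 3.0)}
    print(closest_camera(cams, (0.0, 0.0), (5.0, 0.0)))   # -> "cam-B"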

US Pat. No. 10,341,616

SURVEILLANCE SYSTEM AND METHOD OF CONTROLLING THE SAME

HANWHA TECHWIN CO., LTD.,...

1. A network camera comprising:a camera configured to capture images of a monitoring area;
a communication interface configured to communicate with a server and a beacon terminal; and
a processor configured to transmit a first beacon signal to the beacon terminal to receive a second beacon signal corresponding to the first beacon signal from the beacon terminal, generate beacon information and image information, in response to receiving the second beacon signal, and transmit the beacon information and the image information to the server via the communication interface,
wherein the beacon information comprises location information of the beacon terminal, and the image information comprises an image of a monitoring target that carries the beacon terminal.

US Pat. No. 10,341,615

SYSTEM AND METHOD FOR MAPPING OF TEXT EVENTS FROM MULTIPLE SOURCES WITH CAMERA OUTPUTS

Honeywell International I...

1. A video surveillance system comprising:a CCTV keyboard having a controller that groups each of a plurality of surveillance cameras into a respective one of a plurality of zones, wherein each of the plurality of zones contains a respective one of a plurality of transaction devices within a respective field of view of a respective one of the plurality of surveillance cameras associated with the respective one of the plurality of zones, and wherein each of the plurality of zones is a respective physical location where the respective one of the plurality of transaction devices is located;
a capture unit that receives transaction data from the plurality of transaction devices; and
a switching unit connected to the CCTV keyboard that selectively switches between the plurality of surveillance cameras to display respective video from the respective one of the plurality of surveillance cameras associated with a first one of the plurality of zones in response to a first selection from the CCTV keyboard selecting the first one of the plurality of zones,
wherein, responsive to the first selection, the capture unit and the switching unit display the respective video of live transactions from the respective one of the plurality of surveillance cameras associated with the first one of the plurality of zones,
wherein, when the first one of the plurality of zones includes more than one of the plurality of transaction devices, the CCTV keyboard is configured to receive a second selection selecting a selected transaction device of the more than one of the plurality of transaction devices,
wherein, responsive to the second selection, the capture unit and the switching unit display text corresponding to the transaction data associated with the live transactions from the selected transaction device superimposed on the respective video of the live transactions from the respective one of the plurality of surveillance cameras associated with the first one of the plurality of zones; and
wherein superimposition of the text on the respective video of the live transactions includes placing the text corresponding to the transaction data associated with the live transactions on top of the respective video of the live transactions.

US Pat. No. 10,341,614

BIOLOGICAL IMAGING DEVICE

NEC CORPORATION, Tokyo (...

1. A biological imaging device, comprising:an emitting unit that emits parallel light to a first part of a finger;
an imaging unit that images the first part and a second part connected to the first part, wherein the first part is a fingerprint part between a distal interphalangeal joint and a fingertip and the second part is a part between a proximal interphalangeal joint and the distal interphalangeal joint; and
a finger root guide on which a root of the finger is placed,
wherein the imaging unit captures a blood vessel of the second part, the parallel light emitted by the emitting unit propagating inside the finger and radiated from the second part.

US Pat. No. 10,341,613

VIDEO SHARING PLATFORM PROVIDING FOR POSTING CONTENT TO OTHER WEBSITES

Crackle, Inc., Culver Ci...

1. A method for use in providing content, comprising:hosting a network site on a computer network, where the network site is remote from a plurality of client computers and accessible by the client computers over the computer network;
displaying on the network site links to one or more videos uploaded over the network from multiple client computers of the plurality of client computers;
generating one or more video files from the uploaded one or more videos in a format that is supported for playback on one or more portable video players;
displaying on the network site a tool for searching through the one or more videos available through the network site and accessible over the computer network;
displaying on the network site a result of a search through the one or more videos;
displaying on the network site procedures for allowing downloading of video that is representative of the result of the search on one or more portable video players;
causing downloading of one or more generated video files that are representative of the result of the search to one of the portable video players in response to the procedures being followed, wherein each transferred video file is playable on the portable video player, and wherein the downloading is performed in pieces from two or more client computers on the network;
updating the video that is representative of the result of the search in the portable video player;
displaying on the network site an option to be activated by a user to create a film strip widget that is representative of the result of the search, wherein the film strip widget includes display of the still images for the corresponding plurality of videos, code comprising identifiers that are used to identify one or more video files to be represented in the film strip widget and a command to start an on-demand playback of the created on-demand video clip for any video included in the film strip widget;
displaying on the network site an option to create an RSS (really simple syndication) feed corresponding to a search term, wherein the RSS feed is configured to provide notifications to the user of updates to the result of the search corresponding to at least the search term;
subscribing the user to the created RSS feed;
identifying when new video is shared that corresponds to the search term;
including the new video in the RSS feed;
identifying the user as being subscribed to the RSS feed; and
notifying the user, in response to the including the new video in the RSS feed and identifying the user as being subscribed to the RSS feed, when the new video is available; and
posting the film strip widget that is representative of the result of the search to a different network site in response to the option being selected.
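
A small sketch of the feed-update flow around the RSS limitations: when a newly shared video matches a feed's search term it is added to the feed and subscribed users are notified. The data structures and the title/tag match test are assumptions.

    from dataclasses import dataclass, field

    @dataclass
    class Feed:
        search_term: str
        items: list = field(default_factory=list)
        subscribers: set = field(default_factory=set)

    def on_video_shared(video, feeds, notify):
        for feed in feeds:
            term = feed.search_term.lower()
            if term in video["title"].lower() or term in (t.lower() for t in video["tags"]):
                feed.items.append(video)               # include the new video in the feed
                for user in feed.subscribers:          # users identified as subscribed
                    notify(user, f"New video for '{feed.search_term}': {video['title']}")

    feed = Feed(search_term="skateboarding", subscribers={"user-42"})
    on_video_shared({"title": "Skateboarding at dusk", "tags": ["sports"]},
                    [feed], notify=lambda user, msg: print(user, msg))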

US Pat. No. 10,341,612

METHOD FOR PROVIDING VIRTUAL SPACE, AND SYSTEM FOR EXECUTING THE METHOD

COLOPL, INC., Tokyo (JP)...

1. A method, comprising:defining a first virtual space, wherein the first virtual space is associated with a first user to be associated with a first head-mounted device (HMD);
defining a second virtual space, wherein the second virtual space is associated with a second user, different from the first user, to be associated with a second head-mounted device (HMD);
specifying a plurality of first potential match users as candidates to be matched with the first user, wherein the plurality of first potential match users comprise the second user;
presenting in the first virtual space information representing the plurality of first potential match users;
detecting a first input from the first user;
detecting a first period during which the first user designates the second user in accordance with the detected first input;
specifying a plurality of second potential match users as candidates to be matched with the second user, wherein the plurality of second potential match users comprise the first user;
presenting in the second virtual space information representing the plurality of second potential match users;
detecting a second input from the second user;
detecting a second period during which the second user designates the first user in accordance with the detected second input; and
matching the first user and the second user in accordance with the first period and the second period satisfying a predetermined relation.
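
The matching step only requires that the two designation periods satisfy a predetermined relation; the sketch below assumes one plausible relation, a temporal overlap of at least a minimum duration on a shared clock.

    def periods_overlap(p1, p2, min_overlap=2.0):
        """Each period is (start, end) in seconds on a shared clock."""
        overlap = min(p1[1], p2[1]) - max(p1[0], p2[0])
        return overlap >= min_overlap

    def should_match(first_period, second_period):
        return periods_overlap(first_period, second_period)

    print(should_match((10.0, 15.0), (12.0, 18.0)))   # True: 3 s of mutual designation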

US Pat. No. 10,341,611

SYSTEM AND METHOD FOR VIDEO CONFERENCING

Inuitive Ltd., RaAnana (...

1. A system for video conferencing, comprising:a data processor configured for receiving from a remote location a stream of imagery data of a remote user and displaying an image of said remote user on a display device, receiving a stream of imagery data of an individual in a local scene in front of said display device, extracting a head orientation of said individual, and varying a view of said image responsively to said head orientation;
wherein said variation of said view comprises varying a displayed orientation of said remote user such that a rate of change of said orientation matches a rate of change of said head orientation;
wherein said stream of imagery data comprises range data in a form of a depth map providing depth information at a lower resolution for a group of pixels of the image data.

US Pat. No. 10,341,610

METHOD AND APPARATUS USING AN INTEGRATED FEMTOCELL AND RESIDENTIAL GATEWAY DEVICE

1. A device, comprising:a processing system including a processor; and
a memory that stores executable instructions that, when executed by the processing system, facilitate performance of operations, comprising:
sending a cellular phone number to a gateway device associated with the device, wherein the gateway device comprises a femtocell, and wherein the sending is performed when a cellular phone associated with the cellular phone number is within a communication range of the femtocell;
determining whether to use a first service associated with the cellular phone number or a second service associated with a voice over internet protocol phone number when initiating an outgoing call or when answering an incoming call as presented options on a user interface, wherein the determining is performed based on an input received through the user interface as a selection from among the presented options, wherein the voice over internet protocol phone number is provided to the femtocell;
receiving the incoming call or initiating the outgoing call associated with one of the cellular phone number or the voice over internet protocol phone number through the femtocell responsive to receiving the selection and according to the selection from among the presented options;
performing an outgoing voice over internet protocol call when receiving a user initiated phone call signal from one of a voice over internet protocol phone or the device by processing the user initiated phone call as a voice over internet protocol phone call; and
modifying a presentation of a video program being presented by the device for a duration of the outgoing call or the incoming call.

US Pat. No. 10,341,609

GROUP VIDEO SYNCHRONIZATION

MOTOROLA SOLUTIONS, INC.,...

1. An apparatus comprising:a first Push-to-Talk (PTT) button;
a display configured to display video;
an over-the-air receiver(s) configured to receive a PTT audio transmission along with first video synchronization information, wherein the PTT audio transmission is initiated by a user pressing a second PTT button on a PTT radio, and wherein the PTT audio transmission is not part of the video;
an over-the-air transmitter(s) configured to transmit second video synchronization information;
logic circuitry configured to receive the first video synchronization information and advance or retard the video based on the first video synchronization information, and also configured to receive a trigger that the first PTT button was pressed, and in response to the received trigger, determine the second video synchronization information for the video, and instruct the transmitter to transmit the second video synchronization information via an over-the-air transmission to other radios.

US Pat. No. 10,341,608

COMMUNICATION SYSTEM AND COMMUNICATION METHOD

1. A communication system comprising:a plurality of different base sites communicably connected to each other, the plurality of different base sites including first and second base sites;
a video display device disposed in the first base site, the video display device being configured to display a video image at a first area corresponding to a first flow line on a floor of the first base site, the first flow line corresponding to a first walking route in the first base site;
an imaging device configured to capture the video image of a second area of the second base site, the second area corresponding to a second flow line on a floor of the second base site, the second flow line corresponding to a second walking route in the second base site; and
a communication device configured to transmit the video image captured by the imaging device to the video display device,
wherein the video display device is configured to display the video image at the first area, and
the video image is displayed at the first area by linking the second flow line to the first flow line.

US Pat. No. 10,341,607

DISPLAY DEVICE

MITSUMI ELECTRIC CO., LTD...

1. A display device that includesan attachment part mountable on a head of a user,
a control device to control the attachment part, and
a transmission cable to connect the attachment part with the control device,
the display device comprising:
an imager;
a first converter configured to convert a digital signal from the imager into an analog signal;
a second converter configured to convert the analog signal into a video signal;
a laser light generator configured to generate a laser light modulated depending on the video signal;
an optical scanner configured to scan the laser light; and
an optical projection system configured to project the scanned laser light to form an image,
wherein the imager, the first converter, the optical scanner, and the optical projection system are placed in the attachment part,
wherein the second converter and the laser light generator are placed in the control device, and
wherein the analog signal and the laser light are transmitted via the transmission cable.

US Pat. No. 10,341,606

SYSTEMS AND METHOD OF TRANSMITTING INFORMATION FROM MONOCHROME SENSORS

SA Photonics, Inc., Los ...

1. An imaging system comprising:a plurality of cameras comprising a plurality of optical sensor arrays, different sensor arrays of the plurality of optical sensor arrays configured to obtain a monochrome image of a scene and produce image data,
a multiplexing unit configured to multiplex image data obtained by different sensor arrays of the plurality of optical sensor arrays and generate a single image stream; and
a transmission line configured to accept the generated single image stream,
wherein the plurality of optical sensor arrays comprises three optical sensor arrays, the three optical sensor arrays comprising respective arrays of pixels, a pixel in the array of pixels associated with a unique set of coordinates designating the position of the pixel in the array of pixels, and
wherein the single image stream comprises a plurality of multiplexed pixels comprising information from co-located pixels of the three optical sensor arrays, the information from the co-located pixels of the three optical sensor arrays being time synchronized.
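
A minimal sketch of multiplexing co-located, time-synchronized pixels from three monochrome sensor arrays into a single serial stream; the interleaving order and the flat-array output are assumptions.

    import numpy as np

    def multiplex(a, b, c):
        """Interleave three same-shaped monochrome frames into one serial stream,
        keeping the three samples of each (row, col) position together."""
        assert a.shape == b.shape == c.shape
        multiplexed = np.stack([a, b, c], axis=-1)    # (H, W, 3): co-located samples
        return multiplexed.reshape(-1)                # single 1-D image stream

    def frame(seed):
        return np.random.default_rng(seed).integers(0, 256, (4, 4), dtype=np.uint8)

    stream = multiplex(frame(1), frame(2), frame(3))
    print(stream[:9])   # first three pixels, three time-synchronized samples each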

US Pat. No. 10,341,605

SYSTEMS AND METHODS FOR MULTIPLE-RESOLUTION STORAGE OF MEDIA STREAMS

WatchGuard, Inc., Allen,...

1. A method comprising, by a computer system:continuously receiving, from a plurality of cameras, raw video frames at an initial resolution, wherein the plurality of cameras are arranged to provide a 360-degree view relative to a point of reference;
for each camera of the plurality of cameras, for each raw video frame, as the raw video frame is received:
downscaling the raw video frame to a first resolution to yield a first scaled video frame;
downscaling the raw video frame to a second resolution distinct from the first resolution to yield a second scaled video frame;
identifying a location of a target in at least one of the raw video frame, the first scaled video frame, and the second scaled video frame;
cropping at least one video frame selected from among the raw video frame, the first scaled video frame, and the second scaled video frame based, at least in part, on the location of the target;
downscaling the cropped at least one video frame to a third resolution to yield a third scaled video frame; and
storing the first scaled video frame, the second scaled video frame, and information related to the cropped at least one video frame as part of a first video stream, a second video stream, and a third video stream, respectively; and
blending together a video stream of each of the plurality of cameras into a 360-degree video stream, wherein the video stream of each of the plurality of cameras comprises at least one of the first video stream, the second video stream, and the third video stream.
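
A condensed per-frame sketch of the pipeline: each raw frame is downscaled to two resolutions, cropped around the target location, and the crop is downscaled to a third resolution, with the three results appended to three separate streams. The resolutions, crop size, and stride-based downscaling are assumptions.

    import numpy as np

    def process_frame(raw, target_xy, streams, crop=128):
        """Downscale twice, crop around the target, downscale the crop, and store."""
        first = raw[::2, ::2]                  # crude 1/2-resolution downscale
        second = raw[::4, ::4]                 # crude 1/4-resolution downscale
        x, y = target_xy
        x0, y0 = max(0, x - crop // 2), max(0, y - crop // 2)
        cropped = raw[y0:y0 + crop, x0:x0 + crop]
        third = cropped[::2, ::2]              # third resolution from the crop
        streams["first"].append(first)
        streams["second"].append(second)
        streams["third"].append(third)

    streams = {"first": [], "second": [], "third": []}
    process_frame(np.zeros((720, 1280, 3), dtype=np.uint8),
                  target_xy=(640, 360), streams=streams)
    print({name: s[0].shape for name, s in streams.items()})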

US Pat. No. 10,341,604

CAMERA SYSTEM, CAMERA, INTERCHANGEABLE LENS, AND STORAGE MEDIUM STORING CONTROL PROGRAM FOR CAMERA SYSTEM THEREON

Olympus Corporation, Tok...

1. A camera system, comprising a lens-changeable camera body and an interchangeable lens, the camera system including:a usage history information collecting unit configured to collect a plurality of usage history information related to a usage state of the camera system;
a usage history information storage unit configured to store the collected usage history information;
an information extracting unit configured to extract, from a plurality of the usage history information stored in the usage history information storage unit, usage history information related to the interchangeable lens being attached; and
a lens-related information storage unit configured to store the extracted usage history information related to the interchangeable lens.

US Pat. No. 10,341,603

IDENTIFYING AND TRACKING DIGITAL IMAGES WITH CUSTOMIZED METADATA

Lifetouch Inc., Eden Pra...

1. A method of generating a digital image file, the method comprising:receiving a digital image of a subject;
receiving first barcode data;
receiving second barcode data, at least one of the first barcode data and the second barcode data including information identifying the subject in the digital image;
storing the first barcode data and the second barcode data in a plurality of metadata fields in the digital image file, wherein at least some of the first barcode data or the second barcode data are fragmented between metadata fields; and
storing the digital image file in a memory device of a digital camera, wherein the digital image file is a single digital data file encoded in the memory device according to a standard digital image file format.
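
An illustrative sketch of fragmenting barcode data across several metadata fields and reassembling it; the field names and the fixed fragment size are assumptions, and a real implementation would write actual EXIF/XMP fields of the image file.

    FIELDS = ["UserComment1", "UserComment2", "UserComment3"]   # assumed field names

    def fragment(data, field_names=FIELDS, chunk=8):
        """Split the barcode data into chunk-sized pieces spread across fields."""
        pieces = [data[i:i + chunk] for i in range(0, len(data), chunk)]
        if len(pieces) > len(field_names):
            raise ValueError("not enough metadata fields for the barcode data")
        return dict(zip(field_names, pieces))

    def reassemble(fields, field_names=FIELDS):
        return "".join(fields.get(name, "") for name in field_names)

    fields = fragment("SUBJ-000123|SESSION-77")
    print(fields)
    print(reassemble(fields))   # -> "SUBJ-000123|SESSION-77"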

US Pat. No. 10,341,602

TV POWER SUPPLY

SHENZHEN CHINA STAR OPTOE...

1. A TV power supply, comprising a main power supply and a voltage conversion circuit electrically connected to the main power supply;the main power supply comprising a first output terminal for outputting a backlight driving voltage and a second output terminal for outputting a motherboard driving voltage;
the voltage conversion circuit being configured to convert the backlight driving voltage or a motherboard driving voltage to a standby voltage, the voltage conversion circuit comprising an input terminal electrically connected to the first output terminal or the second output terminal and a third output terminal for outputting the standby voltage;
wherein the voltage conversion circuit comprises a first resistor, a second resistor, a third resistor, a first transistor, a second transistor, and a zener diode;
one terminal of the first resistor is electrically connected to a first node and the other terminal of the first resistor is electrically connected to an emitter electrode of the second transistor;
one terminal of the second resistor is electrically connected to the first node and the other terminal of the second resistor is electrically connected to one terminal of the third resistor;
the other terminal of the third resistor is electrically connected to a base electrode of the first transistor;
a collector electrode of the first transistor is electrically connected to the terminal of the third resistor, the emitter electrode of the first transistor is electrically connected to a second node;
the base electrode of the second transistor is electrically connected to the terminal of the third resistor and the collector electrode of the second transistor is electrically connected to the second node;
a cathode of the zener diode is electrically connected to the base electrode of the first transistor and the anode of the zener diode is grounded;
the first node is the input terminal of the voltage conversion circuit, the second node is the third output terminal of the voltage conversion circuit;
the voltage difference between a stable voltage of the zener diode and the conduction voltage drop of the emitter junction of the first transistor is equal to the standby voltage.
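
The final limitation states the standby voltage directly: it equals the zener stabilization voltage minus the base-emitter (emitter-junction) conduction drop of the first transistor. A worked example with assumed component values:

    V_ZENER = 5.8   # V, assumed zener stabilization voltage
    V_BE = 0.7      # V, typical silicon emitter-junction conduction drop
    V_STANDBY = V_ZENER - V_BE
    print(f"standby voltage = {V_STANDBY:.1f} V")   # 5.1 V with these assumed values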

US Pat. No. 10,341,601

IMAGE PROCESSING APPARATUS, IMAGING APPARATUS, AND IMAGE PROCESSING METHOD

CANON KABUSHIKI KAISHA, ...

1. An image processing apparatus that receives an input of an image signal obtained by imaging an object, and performs signal conversion on the image signal to output the image signal to a display apparatus, the image processing apparatus comprising:at least one processor or circuit configured to function as following units:
a calculation unit configured to calculate an absolute luminance value of the object from a luminance value of the object acquired from the image signal and an exposure parameter in the imaging;
a determination unit configured to determine a predetermined absolute luminance code for the luminance value of the object according to input-output characteristics of the display apparatus so that the object is displayed at the absolute luminance value on the display apparatus;
a conversion unit configured to perform signal conversion for converting the image signal based on a relationship between the luminance value of the object and the absolute luminance code; and
a correction unit configured to perform gamma correction on the image signal output from the conversion unit,
wherein the determination unit determines the predetermined absolute luminance code for the luminance value of the object in the image signal after the gamma correction.
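
As a hedged sketch of how an absolute luminance could be derived from a relative pixel luminance and the exposure parameters, the snippet below anchors mid grey to the standard reflected-light calibration L = K * N^2 / (t * S); the constant K, the mid-grey anchoring, and the scene-linear input are assumptions rather than the patented calculation.

    def absolute_luminance(pixel_rel, f_number, shutter_s, iso,
                           k=12.5, mid_grey=0.18):
        """pixel_rel is scene-linear relative luminance in [0, 1]."""
        metered = k * f_number ** 2 / (shutter_s * iso)   # luminance placed at mid grey
        return pixel_rel / mid_grey * metered             # cd/m^2 under the assumptions

    print(absolute_luminance(pixel_rel=0.36, f_number=5.6, shutter_s=1 / 125, iso=100))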

US Pat. No. 10,341,600

CIRCUIT APPLIED TO TELEVISION AND ASSOCIATED IMAGE DISPLAY METHOD

MSTAR SEMICONDUCTOR, INC....

1. A circuit, applied to a television comprising a memory and a display panel, comprising:an image processing circuit, processing image data to generate processed image data, wherein the image processing circuit processes at least one of scaling, color adjustment, brightness adjustment and noise reduction on the image data to generate the processed image data;
a control circuit, generating a control signal according to a switching signal;
an image capturing circuit, capturing the processed image data as predetermined image data according to the control signal and storing the predetermined image data to the memory;
a first output circuit, transmitting the predetermined image data to the display panel according to the control signal; and
a second output circuit, superimposing an interface image onto the processed image data and transmitting the processed image data superimposed with the interface image to the display panel, and stopping superimposing the interface image onto the processed image data and stopping transmitting the processed image data superimposed with the interface image to the display panel according to the control signal;
wherein the memory stores the interface image, the switching signal contains a resolution switching signal, and the second output circuit further sets a parameter corresponding to the resolution switching signal according to the control signal.