US Pat. No. 10,171,929

POSITIONAL AUDIO ASSIGNMENT SYSTEM

Lightbox Video Inc., Tor...

1. A method performed by one or more electronic devices, the method comprising:
obtaining data representing a video viewable to a user through a head-mounted device in an immersive virtual reality environment and that identifies spatial positions assigned to one or more objects within the video, and
obtaining audio data associated with the video that (i) encodes one or more audio streams corresponding to each of the one or more objects and (ii) identifies, for each of the one or more audio streams, a frame of the video representing a start point of an audio stream;
receiving, from a computing device of a user, an indication of playback of a particular frame representing a start point of a particular audio stream from among the one or more audio streams;
providing, for display in a field-of-view of the video that is viewable to the user on the computing device, a visual notification representing metadata associated with a particular object corresponding to the particular audio stream, the visual notification being displayed in a particular spatial position within the field-of-view;
receiving, from the computing device of the user, user input data associated with playback of the video;
determining a gaze point of the user based on the received user input data;
evaluating the gaze point of the user with respect to the particular spatial position within the field-of-view; and
based on evaluating the gaze point with respect to the particular spatial position within the field-of-view, selectively adjusting audio data provided to the computing device of the user.
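The gaze-evaluation step of this claim can be illustrated with a minimal Python sketch; the function name, the distance threshold, and the gain values are illustrative assumptions, not taken from the patent:

```python
import math

def adjust_gain(gaze, notification_pos, radius=0.2, boost=2.0, base=1.0):
    """Return an audio gain for the particular stream: boosted when the
    user's gaze point falls within `radius` of the notification's spatial
    position in the field-of-view; otherwise the base gain."""
    dist = math.dist(gaze, notification_pos)
    return boost if dist <= radius else base

# Gaze directly on the notification -> boosted stream gain
print(adjust_gain((0.5, 0.5), (0.55, 0.5)))  # 2.0
# Gaze elsewhere in the field-of-view -> base gain
print(adjust_gain((0.1, 0.9), (0.55, 0.5)))  # 1.0
```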

US Pat. No. 10,171,928

BINAURAL SYNTHESIS

Facebook, Inc., Menlo Pa...

1. A method comprising:obtaining one or more sets of initial filter coefficients, each set of initial filter coefficients corresponding to an angular position defined by an azimuth angle and an elevation angle; and
adjusting each set of initial filter coefficients with a timbre compensation filter to reduce artefacts associated with a binaural audio output resulting from a binaural synthesis filter;
wherein the adjusted sets of filter coefficients are provided to the binaural synthesis filter to synthesise the binaural audio output based on a monaural input, wherein synthesising the binaural audio output comprises convolving at least one of the adjusted sets of filter coefficients with the monaural audio input.
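The convolution at the heart of this claim is ordinary FIR filtering, and cascading the initial coefficients with a timbre-compensation filter is itself a convolution. A minimal pure-Python sketch (function names and coefficient values are illustrative):

```python
def convolve(signal, coeffs):
    """Full FIR convolution: output length len(signal) + len(coeffs) - 1."""
    out = [0.0] * (len(signal) + len(coeffs) - 1)
    for i, s in enumerate(signal):
        for j, c in enumerate(coeffs):
            out[i + j] += s * c
    return out

def apply_timbre_compensation(initial_coeffs, comp_coeffs):
    # Cascading two FIR filters equals convolving their coefficient sets.
    return convolve(initial_coeffs, comp_coeffs)

def synthesize_binaural(mono, left_coeffs, right_coeffs):
    """Convolve one monaural input with per-ear adjusted coefficients."""
    return convolve(mono, left_coeffs), convolve(mono, right_coeffs)

# A unit impulse returns each ear's (adjusted) coefficients:
left, right = synthesize_binaural([1.0, 0.0, 0.0], [0.9, 0.1], [0.5, 0.5])
# left == [0.9, 0.1, 0.0, 0.0], right == [0.5, 0.5, 0.0, 0.0]
```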

US Pat. No. 10,171,927

METHOD FOR PROCESSING AN AUDIO SIGNAL FOR IMPROVED RESTITUTION

AXD Technologies, LLC, L...

1. A method for processing an audio signal of N.x channels, N being greater than 1 and x being greater than or equal to 0, comprising:
processing the audio signal by a multichannel convolution with a predefined imprint, the predefined imprint being formulated at least by the capture of a reference sound by a set of speakers disposed in a reference space,
wherein the method further comprises:
selecting two or more imprints from a plurality of imprints previously formulated in a plurality of different sound contexts; and
combining the selected imprints formulated in different sound contexts to create a new imprint representing a virtual environment.
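Combining previously formulated imprints (impulse responses captured in different sound contexts) can be sketched as a weighted sum; the equal-length responses and the 50/50 weights are illustrative assumptions:

```python
def combine_imprints(imprints, weights):
    """Weighted sum of impulse-response 'imprints' to form a new imprint
    representing a virtual environment. Shorter imprints are zero-padded."""
    assert len(imprints) == len(weights) >= 2
    n = max(len(ir) for ir in imprints)
    new = [0.0] * n
    for ir, w in zip(imprints, weights):
        for i, v in enumerate(ir):
            new[i] += w * v
    return new

hall = [1.0, 0.5, 0.25]    # hypothetical imprint from a concert hall
studio = [1.0, 0.1, 0.0]   # hypothetical imprint from a dry studio
virtual = combine_imprints([hall, studio], [0.5, 0.5])  # ~[1.0, 0.3, 0.125]
```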

US Pat. No. 10,171,926

SOUND PROCESSING APPARATUS AND SOUND PROCESSING SYSTEM

Sony Corporation, Tokyo ...

1. A sound processing apparatus, comprising:
first gain calculating circuitry configured to calculate output gains of a virtual sound outputting unit and two sound outputting units of at least four sound outputting units located close to a sound image localization position as a target position, wherein the first gain calculating circuitry is configured to calculate the output gains of the virtual sound outputting unit and the two sound outputting units based on a positional relationship among the virtual sound outputting unit, the two sound outputting units, and the sound image localization position;
second gain calculating circuitry configured to calculate output gains of other two of the sound outputting units than the two sound outputting units, wherein the second gain calculating circuitry is configured to calculate the output gains of the other two of the sound outputting units based on a positional relationship among the other two of the sound outputting units and the virtual sound outputting unit; and
gain adjusting circuitry configured to:
perform gain adjustment on sounds to be output from the at least four sound outputting units based on the output gains of the at least four sound outputting units; and
output gain adjusted sound signals to the at least four sound outputting units so as to cause the at least four sound outputting units to output sound to a listener.
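A hypothetical two-stage gain calculation in the spirit of this claim, using inverse-distance weighting normalized to constant power; the geometry and the weighting rule are illustrative assumptions, not the patent's actual formulas:

```python
import math

def inverse_distance_gains(target, positions, eps=1e-9):
    """Gains inversely proportional to distance from `target`,
    normalized so the squared gains sum to 1 (constant power)."""
    w = [1.0 / (math.dist(target, p) + eps) for p in positions]
    norm = math.sqrt(sum(x * x for x in w))
    return [x / norm for x in w]

# Stage 1: virtual unit plus the two real units nearest the target image
virtual, near = (0.0, 1.0), [(-1.0, 0.0), (1.0, 0.0)]
g_virtual, g_left, g_right = inverse_distance_gains((0.0, 0.5),
                                                    [virtual] + near)

# Stage 2: the other two units render the virtual unit's signal,
# with gains based on their positions relative to the virtual unit
far = [(-1.0, 2.0), (1.0, 2.0)]
g_far = [g_virtual * g for g in inverse_distance_gains(virtual, far)]
```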

US Pat. No. 10,171,925

MEMS DEVICE

INFINEON TECHNOLOGIES AG,...

1. A method, comprising:
patterning a first conductive material to form a first electrode on a first bonding layer;
depositing a first dielectric layer over the first electrode;
patterning a second conductive material over the first dielectric layer to form a membrane spaced apart from the first electrode by the first dielectric layer;
depositing a second dielectric layer over the second conductive material;
patterning a third conductive material over the second dielectric layer to form a second electrode; and
removing portions of the first dielectric layer and the second dielectric layer disposed over a central portion of the membrane, wherein an overlapping area of a fixed portion of the membrane with the second electrode is less than a maximum overlapping area.

US Pat. No. 10,171,923

BINAURAL HEARING SYSTEM AND METHOD

CIRRUS LOGIC, INC., Aust...

1. A system for binaural signal processing, the system comprising:
a first speaker and a second speaker respectively configured to be mounted proximal to, and to deliver respective first and second acoustic signals to, the left and right ears of a user;
a first microphone and a second microphone respectively configured to be mounted proximal to the left and right ears of a user; and
a binaural processing device for receiving respective first and second acoustic signals from each of the first and second microphones and for modifying each of the first and second acoustic signals to produce the modified first and second acoustic signals, wherein sound captured at both ears is used to modify the first acoustic signal to produce the modified first signal and sound captured at both ears is used to modify the second acoustic signal to produce the modified second signal, and wherein the binaural processing device is operable when distal from the left and right ears of the user;
wherein the first and second speakers, the first and second microphones and the binaural processing device are connected by a signal network configured to pass signals from the first and second microphones to the binaural processing device and from the binaural processing device to the speakers,
wherein the signal network comprises a single wire chained bus loop having a chained configuration in which data from upstream on the single wire chained bus loop is recovered by each of the first and second speakers and the first and second microphones and re-modulated downstream onto the single wire chained bus loop, and
wherein the first and second speakers are positioned downstream of the binaural processing device on the single wire chained bus loop, and the first and second microphones are positioned downstream of the first and second speakers on the single wire chained bus loop.

US Pat. No. 10,171,922

HEARING ASSISTANCE SYSTEM WITH OWN VOICE DETECTION

Starkey Laboratories, Inc...

2. An apparatus configured to be worn by a wearer, comprising:
a first microphone configured to produce a first microphone signal;
a second microphone configured to produce a second microphone signal;
a voice detector including an adaptive filter configured to produce a filter output signal using the second microphone signal and an error signal produced by subtracting the filter output signal from the first microphone signal, the voice detector configured to:
detect a voice of the wearer by comparing a power of the error signal to a power of the first microphone signal; and
produce an indication of detection in response to the voice of the wearer being detected;
a sound processor configured to produce an audio output signal using the second microphone signal and the indication of detection; and
a speaker configured to produce an audible signal using the audio output signal.
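The voice detector can be sketched with a plain LMS adaptive filter: the second microphone signal is filtered toward the first, and own voice is flagged when the residual error power is small relative to the first microphone's power. The step size, tap count, and threshold here are illustrative assumptions:

```python
def lms_voice_detect(mic1, mic2, mu=0.1, taps=4, threshold=0.5):
    """Adapt an FIR filter from mic2 toward mic1 (LMS); report 'own
    voice detected' when the error power is small versus mic1 power."""
    w = [0.0] * taps            # adaptive filter coefficients
    buf = [0.0] * taps          # recent mic2 samples
    err_power = sig_power = 1e-12
    for x, d in zip(mic2, mic1):
        buf = [x] + buf[:-1]
        y = sum(wi * bi for wi, bi in zip(w, buf))   # filter output
        e = d - y                                    # error signal
        w = [wi + mu * e * bi for wi, bi in zip(w, buf)]
        err_power += e * e
        sig_power += d * d
    return (err_power / sig_power) < threshold

# Highly correlated signals (wearer speaking near both mics) -> detected
print(lms_voice_detect([1.0] * 50, [1.0] * 50))       # True
# Uncorrelated signals -> not detected
print(lms_voice_detect([1.0, -1.0] * 25, [0.0] * 50)) # False
```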

US Pat. No. 10,171,921

MICROPHONE MATCHING UNIT AND HEARING AID

1. A method for performing a microphone matching of a hearing aid comprising a first microphone, a second microphone and a receiver in a predetermined spatial arrangement relative to each other, the method comprising the steps of:
generating an output sound signal by means of the receiver;
picking up a first input sound signal by the first microphone and a second input sound signal by the second microphone while the output sound signal is generated;
converting the first input sound signal into a first electrical microphone output signal by means of the first microphone and the second input sound signal into a second electrical microphone output signal by means of the second microphone;
determining a first microphone response of the first microphone, and a second microphone response of the second microphone at a given point in time;
determining a microphone response difference between the first microphone response and the second microphone response;
determining a matching difference between the microphone response difference and a predetermined reference microphone response difference; and
adapting at least a first microphone gain of the first microphone according to the matching difference to reduce the matching difference between the microphone response difference and the predetermined reference microphone response difference,
wherein the first microphone response is determined from a first estimate of a first feedback path from the receiver to the first microphone and wherein the second microphone response is determined from a second estimate of a second feedback path from the receiver to the second microphone.
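Working in decibels turns the matching step into subtraction; a minimal sketch of one adaptation step (the dB domain and the one-shot update are assumptions, not the patent's method):

```python
def match_gain(resp1_db, resp2_db, ref_diff_db, gain1_db=0.0):
    """One matching step: measure the response difference, compare it
    with the reference difference, and adjust the first microphone's
    gain so the matching difference shrinks (all values in dB)."""
    diff = resp1_db - resp2_db          # microphone response difference
    matching_diff = diff - ref_diff_db  # deviation from the reference
    return gain1_db - matching_diff     # adapted first-microphone gain

# Feedback-path estimates give -38 dB and -40 dB responses; with a
# reference difference of 0 dB, the first gain drops by 2 dB:
print(match_gain(-38.0, -40.0, 0.0, 0.0))  # -2.0
```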

US Pat. No. 10,171,920

TEST APPARATUS FOR BINAURALLY-COUPLED ACOUSTIC DEVICES

ETYMONIC DESIGN INCORPORA...

1. An acoustic coupler assembly for carrying an acoustic device, the acoustic coupler assembly comprising:
a coupler body extending in length from a lateral outer body end to a lateral inner body end, the body having a sound test cavity extending laterally between the lateral inner and outer body ends and the sound test cavity having lateral inner and outer test cavity openings and a laterally extending sound test cavity centerline;
an acoustic device speaker mount covering the lateral outer body end and having a speaker mount opening sized to grasp a speaker of an acoustic device received in the speaker mount opening, the speaker mount opening abutting the lateral outer test cavity opening; and
an acoustic device microphone mount connected to the coupler body, the acoustic device microphone mount including a microphone mount clip sized to grasp a microphone assembly of an acoustic device when the microphone assembly is received in the microphone mount clip.

US Pat. No. 10,171,919

THERMAL AND THERMOACOUSTIC NANODEVICES AND METHODS OF MAKING AND USING SAME

The Regents of the Univer...

1. A nanodevice comprising:
a solid substrate;
a first solid supporting material block and a second solid supporting material block, wherein the first and second supporting material blocks are in physical contact with the same surface of the solid substrate,
wherein the section of the solid substrate defined inbetween the first and second supporting material blocks does not comprise an additional supporting material block; and
at least one ultrathin film block comprising a first face and an opposite second face, wherein:
the first face comprises a solid material nucleation layer,
the opposite second face comprises an electrically conducting layer,
a section of the first face of each ultrathin film block is in physical contact with the first supporting material block,
a distinct section of the first face of each ultrathin film block is in physical contact with the second supporting material block, such that each ultrathin film block spans the width of the section of the solid substrate defined inbetween the first and second supporting material blocks, and
the at least one ultrathin film block does not have physical contact with the solid substrate, such that the at least one ultrathin film block is suspended over the solid substrate;
wherein the at least one ultrathin film block has an average thickness that is equal to or lower than about 50 nm.

US Pat. No. 10,171,918

MEMS MICROPHONE MODULES AND WAFER-LEVEL TECHNIQUES FOR FABRICATING THE SAME

Heptagon Micro Optics Pte...

1. A module comprising:
a MEMS microphone module including:
a first substrate;
a second substrate on which is mounted a MEMS microphone device, wherein the second substrate is separated from the first substrate by a first spacer;
an integrated circuit device mounted on the first substrate and arranged to perform processing of signals from the MEMS microphone device; and
a cover separated from the second substrate by a second spacer;
an opening in the cover or in the second spacer through which sound can enter; and
a second module joined to the MEMS microphone module, wherein the second module and the MEMS microphone module are side-by-side, and wherein interior regions of the second module and the MEMS microphone module are separated from one another by the first and second spacers.

US Pat. No. 10,171,917

LATERAL MODE CAPACITIVE MICROPHONE

GMEMS Technologies Intern...

1. A capacitive microphone comprising a first electrical conductor and a second electrical conductor configured to have a relative spatial relationship therebetween,
wherein a mutual capacitance can be generated between the first electrical conductor and the second electrical conductor;
wherein said relative spatial relationship and said mutual capacitance can both be varied by an acoustic pressure impacting upon the first electrical conductor and/or the second electrical conductor along a range of impacting directions in 3D space;
wherein said mutual capacitance is varied the most by an acoustic pressure impacting upon the first electrical conductor and/or the second electrical conductor along one direction among said range of impacting directions, said one direction being defined as the primary direction;
wherein the first electrical conductor has a first projection along said primary direction on a conceptual plane that is perpendicular to said primary direction;
wherein the second electrical conductor has a second projection along said primary direction on the conceptual plane;
wherein the first projection and the second projection have a shortest distance Dmin therebetween, and Dmin remains greater than zero regardless of whether the first electrical conductor and/or the second electrical conductor is (are) impacted by an acoustic pressure along said primary direction or not;
wherein the second electrical conductor, as one plate of a capacitor, moves up and down along the primary direction, and laterally moves over, or glides over, the first electrical conductor along the primary direction,
wherein the capacitive microphone further comprises a substrate, the substrate is viewed as said conceptual plane, and the first electrical conductor and the second electrical conductor are constructed above the substrate side-by-side;
wherein the first electrical conductor is fixed relative to the substrate, the second electrical conductor comprises a membrane that is movable relative to the substrate, and said primary direction is perpendicular to the membrane plane; and
wherein the capacitive microphone further comprises an air flow restrictor that restricts the flow rate of air that flows in/out of the gap between the membrane and the substrate, and the air flow restrictor comprises a groove and an insert that can insert into the groove.

US Pat. No. 10,171,916

SYSTEM AND METHOD FOR A HIGH-OHMIC RESISTOR

INFINEON TECHNOLOGIES AG,...

1. A circuit comprising:
a high-resistance resistor comprising:
a plurality of semiconductor junction devices coupled in series, each semiconductor junction device of the plurality of semiconductor junction devices comprising a parasitic doped well capacitance configured to insert a parasitic zero in a noise transfer function of the high-resistance resistor, wherein each semiconductor junction device of the plurality of semiconductor junction devices comprises a diode connected transistor,
a plurality of additional capacitors, wherein ones of the plurality of additional capacitors are formed in parallel with corresponding ones of the plurality of semiconductor junction devices, and each additional capacitor of the plurality of additional capacitors are configured to adjust a parasitic pole in the noise transfer function of the high-resistance resistor in order to compensate for the parasitic zero,
a capacitive sensor configured to generate a signal output voltage, and
an amplifier coupled to the capacitive sensor and configured to receive the signal output voltage at a high impedance input of the amplifier, wherein the high-resistance resistor has a first terminal coupled to the capacitive sensor and the high impedance input of the amplifier.

US Pat. No. 10,171,915

DISPLAY DEVICE FOR GENERATING SOUND BY VIBRATING PANEL

LG Display Co., Ltd., Se...

1. A display device, comprising:
a display panel configured to emit light;
a support structure at a rear of the display panel;
a sound generation actuator supported by the support structure and configured to vibrate the display panel to generate sound; and
a cap member surrounding the sound generation actuator and secured to the support structure at an area of the support structure, the area being near the sound generation actuator; and
wherein the sound generation actuator includes a lower plate, a magnet disposed on the lower plate, a center pole disposed on the central region of the lower plate, a bobbin disposed to surround the center pole, and a coil wound around the bobbin.

US Pat. No. 10,171,913

SUSPENSION DEVICE FOR A LOUDSPEAKER, MANUFACTURING METHOD AND ASSOCIATED LOUDSPEAKERS

FOCAL JMLAB, La Talaudie...

1. Process for manufacturing a suspension device for a loudspeaker comprising:
providing an annular outer edge able to fasten the suspension device to a frame, an annular inner edge able to fasten the suspension device to a membrane, a suspension hoop extending annularly between the outer and inner edges, said suspension hoop being able to absorb movement stresses produced at the inner edge by means of deforming the suspension hoop thus forming at least one resonance mode, the suspension hoop comprises at least one annular protuberance positioned in such a way as to minimize at least one suspension hoop resonance mode, the mass of at least one of these annular protuberances being between 150% and 400% of the mass of a part of the suspension hoop whereupon the annular protuberance is positioned;
exciting the inner edge of the suspension device,
measuring the movements of the suspension hoop in relation to a stable state of the suspension hoop during a characterization period,
detecting the position of the first local maximum of the movements of the suspension hoop in relation to a stable state of the suspension hoop, and
defining a position of a protuberance corresponding to a projection of the first local maximum on the suspension hoop in the stable state.

US Pat. No. 10,171,912

ANALOG DEVICE CONNECTION

Hewlett-Packard Developme...

1. A method, comprising:
detecting, in a control device, an analog connection to an audio output device;
transmitting a first signal from the control device to the audio output device using the analog connection, wherein the first signal comprises a first resistance value applied by the control device across the analog connection within a predetermined time period after the analog connection is detected; and
selectively enabling a feature of the control device when a second signal is received by the control device from the audio output device using the analog connection, wherein the second signal comprises a second resistance value that is different from the first resistance value that is applied by the audio output device across the analog connection in response to the first signal, where the second signal indicates the audio output device is an approved audio output device for the feature.
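The resistance-based handshake reduces to: apply R1, expect a different R2 back within the window. A hypothetical sketch (the function names and resistance values are illustrative, not from the patent):

```python
def handshake(control_resistance, device_response_fn, within_window=True):
    """Sketch of the claimed handshake: the control device applies R1
    across the analog connection; an approved audio output device
    answers with a different resistance R2 in response."""
    r1 = control_resistance
    r2 = device_response_fn(r1)          # None models no response
    return within_window and r2 is not None and r2 != r1

# An 'approved' headset answers 1 kohm with 2.2 kohm -> feature enabled:
print(handshake(1000, lambda r: 2200))   # True
# No response within the window -> feature stays disabled:
print(handshake(1000, lambda r: None))   # False
```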

US Pat. No. 10,171,911

METHOD AND DEVICE FOR OUTPUTTING AUDIO SIGNAL ON BASIS OF LOCATION INFORMATION OF SPEAKER

SAMSUNG ELECTRONICS CO., ...

1. A method of processing an audio signal, the method performed by a device and comprising:
dividing the audio signal into a first signal and a second signal;
obtaining relative position information between a first speaker and a second speaker;
determining a first gain for the first signal and a second gain for the second signal, based on the relative position information;
obtaining a third signal by mixing the second signal, to which the second gain is applied, and the first signal;
obtaining a fourth signal by mixing the first signal, to which the first gain is applied, and the second signal;
outputting the third signal to the first speaker; and
outputting the fourth signal to the second speaker,
wherein the determining of the first gain for the first signal and the second gain for the second signal comprises:
setting a central axis based on positions of the first speaker and a user;
determining the first gain as a first value inversely proportional to a distance between the second speaker and the central axis; and
determining the second gain as a second value proportional to the distance between the second speaker and the central axis.
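The gain rule and cross-mixing of this claim can be sketched directly; the proportionality constants k1 and k2 are illustrative assumptions:

```python
def speaker_gains(dist_second_to_axis, k1=1.0, k2=1.0):
    """First gain inversely proportional, second gain proportional,
    to the distance between the second speaker and the central axis."""
    return k1 / dist_second_to_axis, k2 * dist_second_to_axis

def mix(first, second, g1, g2):
    """third = first + g2*second (to the first speaker);
       fourth = second + g1*first (to the second speaker)."""
    third = [a + g2 * b for a, b in zip(first, second)]
    fourth = [b + g1 * a for a, b in zip(first, second)]
    return third, fourth

g1, g2 = speaker_gains(2.0)                     # (0.5, 2.0)
third, fourth = mix([1.0, 0.0], [0.0, 1.0], g1, g2)
```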

US Pat. No. 10,171,910

METHODS AND DEVICES FOR REPRODUCING STEREO AUDIO

D2A Audio LLC, Morgan Hi...

1. An audio system comprising:
an input configured to receive left and right stereo input signals;
a left filter configured to receive the left stereo input signal and isolate left low frequency signal and left high frequency signal;
a right filter configured to receive the right stereo input signal and isolate right low frequency signal and right high frequency signal;
left and right high frequency speakers;
top and bottom low frequency speakers, positioned to output sound in opposite directions, wherein the bottom low frequency speaker is positioned to output sound toward an external supporting surface;
left high frequency amplifier configured to receive and amplify the left high frequency signal and drive the left high frequency speaker with the amplified left high frequency signal;
right high frequency amplifier configured to receive and amplify the right high frequency signal and drive the right high frequency speaker with the amplified right high frequency signal;
a summing amplifier configured to receive the left and right low frequency signals and generate a combined low frequency signal; and
a low frequency woofer amplifier coupled to the top and bottom low frequency speakers and configured to receive the combined low frequency signal, output an amplified combined low frequency signal and drive the top and bottom low frequency speakers with the amplified combined low frequency signal.
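The crossover-and-sum topology can be sketched with a one-pole low-pass per channel, the high band taken as its residual, and a summed mono low band feeding the woofer amplifier; the filter type and its alpha coefficient are illustrative assumptions:

```python
def one_pole_lowpass(x, alpha=0.2):
    """Simple one-pole low-pass filter; the matching high-pass
    output is obtained as input minus low-pass output."""
    y, low = 0.0, []
    for s in x:
        y += alpha * (s - y)
        low.append(y)
    return low

def split_and_sum(left, right, alpha=0.2):
    left_low = one_pole_lowpass(left, alpha)
    right_low = one_pole_lowpass(right, alpha)
    left_high = [s - l for s, l in zip(left, left_low)]
    right_high = [s - l for s, l in zip(right, right_low)]
    # Summing amplifier: one combined low-frequency signal for
    # the top and bottom low-frequency speakers
    combined_low = [a + b for a, b in zip(left_low, right_low)]
    return left_high, right_high, combined_low
```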

US Pat. No. 10,171,909

PROCESSING OF SIGNALS FROM LUMINAIRE MOUNTED MICROPHONES FOR ENHANCING SENSOR CAPABILITIES

General Electric Company,...

1. An outdoor luminaire comprising:
a luminaire unit comprising LED modules;
a sensor module attached to the luminaire unit, wherein the sensor module comprises:
a housing and a plurality of microphones seated within the housing; and
a computing module operably connected to the plurality of microphones, the computing module comprising a processor and a memory, the memory storing program logic configured to cause the processor to:
receive information comprising a plurality of acoustic output signals from the corresponding plurality of microphones, and any of detection directionality and location for each of the plurality of microphones; and
process, using the received information, the plurality of acoustic output signals to:
select acoustic output signals which are above a predefined noise floor level associated with each of the plurality of microphones and stored in the memory of the computing module,
identify a desirable acoustic signal at least in one of the selected acoustic output signals using analysis of the received plurality of acoustic output signals, and
correlate the acoustic output signals with any of the detection directionalities and locations of the plurality of microphones.
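The noise-floor selection step amounts to a per-microphone threshold against floors stored in memory; a minimal sketch with hypothetical microphone ids:

```python
def select_signals(levels, noise_floors):
    """Keep only microphones whose acoustic output level is above that
    microphone's stored noise floor (both dicts keyed by mic id)."""
    return {mic: level for mic, level in levels.items()
            if level > noise_floors[mic]}

# m2 sits below its stored floor and is dropped:
print(select_signals({"m1": 0.8, "m2": 0.1}, {"m1": 0.5, "m2": 0.5}))
```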

US Pat. No. 10,171,908

RECORDING MEETING AUDIO VIA MULTIPLE INDIVIDUAL SMARTPHONES

EVERNOTE CORPORATION, Re...

1. A method of recording audio information from a meeting having a plurality of participants including a first participant and a second participant, comprising:
at a computing system with one or more processors and memory:
establishing a first connection with a first audio input device of a plurality of audio input devices, the first connection configured to enable the computing system to receive a first audio stream recorded by the first audio input device;
establishing a second connection with a second audio input device of the plurality of audio input devices, the second connection configured to enable the computing system to receive a second audio stream recorded by the second audio input device;
determining that the first and second audio input devices correspond respectively to the first and second participants;
receiving the first and second audio streams during the meeting;
measuring relative volume levels of the first and second audio streams;
identifying from the first audio stream first audio fragments corresponding to speech by the first participant based on: (i) a stored voice profile of the first participant, or (ii) the relative volume levels;
storing as a first audio channel the first audio fragments;
identifying from the second audio stream second audio fragments corresponding to speech by the second participant based on: (i) a stored voice profile of the second participant, or (ii) the relative volume levels;
storing as a second audio channel the second audio fragments, the first and second audio channels being separate from each other and being associated with the first and second participants, respectively;
wherein, in response to the first and second participants speaking at the same time, storing the first audio fragments and the second audio fragments includes:
simultaneously storing the first audio fragments as the first audio channel and the second audio fragments as the second audio channel; and
filtering the first audio fragments and the second audio fragments to separate speech by the first participant from speech by the second participant; and
providing, at the computing system, a storyboard audio channel that includes the first and second audio fragments and identifies the first and second participants as speakers corresponding to the first and second audio fragments, respectively, wherein the identifying is based on which of the first and second audio channels contains the first and second audio fragments.
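Attribution by relative volume levels, including the simultaneous-speech case in which fragments are stored to both channels, can be sketched per frame; the 2x dominance ratio is an illustrative assumption:

```python
def attribute_fragments(stream1, stream2):
    """Assign each frame to the channel of the participant whose stream
    is clearly louder; comparable levels (simultaneous speech) are
    stored to both channels for later filtering."""
    ch1, ch2 = [], []
    for a, b in zip(stream1, stream2):
        pa, pb = abs(a), abs(b)
        if pa > 2 * pb:          # first participant dominates
            ch1.append(a)
        elif pb > 2 * pa:        # second participant dominates
            ch2.append(b)
        else:                    # both speaking: store simultaneously
            ch1.append(a)
            ch2.append(b)
    return ch1, ch2

ch1, ch2 = attribute_fragments([1.0, 0.1, 0.5], [0.1, 1.0, 0.5])
# ch1 == [1.0, 0.5], ch2 == [1.0, 0.5]
```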

US Pat. No. 10,171,907

FAN NOISE CONTROLLING SYSTEM

Chung Yuan Christian Univ...

1. A fan noise controlling system, being connected to a fan device and comprising:
a first cylinder-shaped limiter, being connected to the fan device, and located at a wind-exporting side of the fan device; wherein the inner wall of the first cylinder-shaped limiter is provided with a sound wave reflective surface thereon, such that the first cylinder-shaped limiter is used for limiting the transmission path of at least one fan noise signal made by the fan device;
a plurality of error sensors, being disposed on the inner wall of the first cylinder-shaped limiter;
a noise controlling module, being electrically connected to the plurality of error sensors; and
a first loudspeaker, being disposed on an axis of the fan device and electrically connected to the noise controlling module; moreover, the first loudspeaker is located in the interior of the first cylinder-shaped limiter;
wherein the noise controlling module generates at least one anti-noise signal according to a plurality of error noise signals collected by the plurality of error sensors at consecutive previous time points, and the first loudspeaker is driven by the noise controlling module to broadcast the anti-noise signal in the first cylinder-shaped limiter, so as to attenuate the fan noise made by the fan device.
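Generating anti-noise from error samples collected at consecutive previous time points can be sketched as a sign-inverted FIR prediction; the predictor coefficients are illustrative assumptions:

```python
def anti_noise(error_history, coeffs):
    """Predict the next noise sample from the most recent error samples
    (newest first after reversal) and invert its sign, so the
    loudspeaker output cancels the predicted fan noise."""
    pred = sum(c * e for c, e in zip(coeffs, reversed(error_history)))
    return -pred

# With a slowly varying noise, the last error sample predicts the next:
print(anti_noise([0.2, 0.4, 0.4], [1.0]))  # -0.4
```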

US Pat. No. 10,171,906

CONFIGURABLE MICROPHONE ARRAY AND METHOD FOR CONFIGURING A MICROPHONE ARRAY

1. A method for automatically configuring a microphone array, the microphone array comprising a plurality of microphone capsules, the method being performed by the microphone array and comprising:
scanning sound signals from a plurality of directions by combining output signals of said plurality of microphone capsules;
detecting a sound signal from a first direction and detecting the first direction;
determining that the detected sound signal corresponds to a first predefined control sound signal, the first predefined control sound signal being one of a group of at least two predefined control sound signals and comprising a first tone sequence that is automatically generated;
decoding the first tone sequence by a configuration controller, wherein a first electronic control signal according to the first tone sequence is obtained; and
providing the first electronic control signal to a directivity controller of the microphone array, the directivity controller being adapted for configuring the microphone array according to the first electronic control signal;
wherein the configuring comprises:
eliminating the first direction from scanning sound signals when the first tone sequence is a first predefined tone sequence, and
cancelling an elimination of a second direction from scanning sound signals when the first tone sequence is a second predefined tone sequence different from the first predefined tone sequence, the second direction being different from the first direction.
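The two configuration actions (eliminating a direction from scanning, cancelling an elimination) can be sketched as a small controller keyed on the decoded tone sequence; the tone-sequence names are hypothetical:

```python
class DirectivityController:
    """Track directions excluded from scanning, driven by decoded
    control-tone sequences."""
    def __init__(self):
        self.eliminated = set()

    def handle(self, tone_sequence, direction):
        if tone_sequence == "ELIMINATE":      # first predefined sequence
            self.eliminated.add(direction)
        elif tone_sequence == "RESTORE":      # second predefined sequence
            self.eliminated.discard(direction)

    def scan_directions(self, all_directions):
        return [d for d in all_directions if d not in self.eliminated]

ctrl = DirectivityController()
ctrl.handle("ELIMINATE", 90)
print(ctrl.scan_directions([0, 90, 180]))  # [0, 180]
ctrl.handle("RESTORE", 90)
print(ctrl.scan_directions([0, 90, 180]))  # [0, 90, 180]
```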

US Pat. No. 10,171,905

HEADPHONES WITH FREQUENCY-TARGETED RESONANCE CHAMBERS

TRANSOUND ELECTRONICS CO....

1. A headphone device, comprising:
a housing, the housing including a first chamber, a second chamber, a first through-hole, wherein the first chamber and the second chamber are separated by a first wall, and the first through-hole is in the first wall;
a loudspeaker assembly in the housing, the loudspeaker assembly including a yoke, a magnet, a washer, a voice coil, and a diaphragm, wherein the yoke, the magnet, the washer, and the voice coil are positioned corresponding to the first through-hole, the diaphragm being connected on the voice coil in the first chamber;
a first annular portion in the housing, wherein the first annular portion includes a first auxiliary hole and a second auxiliary hole, each of the first auxiliary hole and the second auxiliary hole overlapping a portion of the first through-hole, wherein the first auxiliary hole is covered with a first sound-proof material and the second auxiliary hole is covered with a second sound-proof material and wherein the first sound-proof material filters a first frequency range and the second sound-proof material filters a second frequency range, the first frequency range being substantially different from the second frequency range.

US Pat. No. 10,171,904

WIRELESS NOISE-CANCELLING EARPLUG

QON OY, Kempele (FI)

1. A wireless noise-cancelling earplug comprising:
a housing comprising a first cylindrical part and a second cylindrical part, within which an active noise cancellation (ANC) circuit is configured to produce anti-noise, a speaker is configured to emit the anti-noise as a sound wave, and a battery is configured to power the ANC circuit;
a sealing bud disposed about a portion of the second cylindrical part of the housing, the sealing bud and the housing forming a passive noise reduction unit configured to fully occlude an ear canal;
an audio cavity configured to guide the sound wave from the speaker out of the earplug;
at least one microphone configured to measure ambient noise and to feed the measured ambient noise to the ANC circuit,
wherein the earplug and the housing as viewed from one side are L-shaped, comprising a stem portion that extends between outer extremities of the housing along a first axis and a bar portion that extends along a second axis between an outer extremity of the housing and an outermost point of the passive noise reduction unit, wherein:
the stem portion has a length of 25 mm or less;
the bar portion has a length of 23 mm or less; and
an inner angle between the first axis and the second axis is 85 to 120 degrees,
wherein at least the ANC circuit, the speaker, and a first part of the audio cavity are arranged within the second cylindrical part.

US Pat. No. 10,171,903

PORTABLE BINAURAL RECORDING, PROCESSING AND PLAYBACK DEVICE

1. An accessory for binaural recording and playback for a multimedia device comprising:a headphone, said headphone having a left ear piece that houses an inwardly facing left speaker and an outwardly facing, left, non-directional recording microphone therein and a right ear piece that houses an inwardly facing right speaker and an outwardly facing, right, non-directional recording microphone therein;
a dongle, said dongle having a microprocessor and a memory;
an audio codec housed in said dongle and in communication with said microprocessor, said audio codec having audio signal processing functionality accomplished through components selected from the group consisting of microphone preamplifiers, microphone amplifiers, analog audio signal to digital audio signal convertors, digital audio signal processors, and digital audio signal to analog audio signal convertors;
an application program in said memory, executed by said microprocessor, communicating with an operating system of said multimedia device to allow a video interface of said multimedia device to operate said audio signal processing functionality of said audio codec in said dongle;
a right three-conductor-wire analog transmission cable connected between said dongle and said right ear piece;
a left three-conductor-wire analog transmission cable connected between said dongle and said left ear piece;
a digital signal transmission cable operatively connected at a first end to said dongle, and configured at a second end for connection to a multimedia device;
wherein said audio codec is in communication with said headphone, and operatively powered by said multimedia device when connected;
wherein said dongle is a parasitically powered dongle without its own power source, receiving said parasitic power from said multimedia device when connected; and
wherein said right, non-directional recording microphone and said left, non-directional recording microphone receive sound and transmit an audio signal to said multimedia device through said audio codec.

US Pat. No. 10,171,902

IN-EAR MONITOR

Campfire Audio LLC, Port...

1. A tunable in-ear monitor that produces sound when operationally connected to an external audio source comprising:an in-ear monitor housing;
at least one low frequency driver having a first outlet sound port;
at least one high frequency driver having a second outlet sound port;
at least one crossover component;
a spout extending outward from a face of said in-ear housing, said spout having an inner face and an outer face separated by a thickness, with at least one sound exit port formed through said thickness;
at least one sound tube having an input end and an output end, said input end affixed to at least one of said drivers and said sound tube output end affixed to said spout;
at least one sonic dampener affixed in said sound tube at an adjustable length for frequency response tuning, and wherein said sound tube's input end is affixed to said low frequency driver about said first outlet sound port and said output end affixed to said spout;
at least one tunable resonator box with a first end directly affixed to said high frequency driver's second outlet sound port, wherein said resonator box has an opposing side wall structure having an open proximal end and a distal end wall with an orifice therethrough, said orifice concentric with said high frequency driver's second outlet sound port; and
an electrical circuit operationally connected to provide input audio signals from said external audio source, directly, or indirectly through a crossover component, to all drivers in said housing, so as to enable the generation of an output sound from said drivers;
wherein said drivers are mechanically connected to said spout so as to transfer the driver's generated sound into said sound exit port; and
wherein said crossover component is a stacked metalized plastic film chip capacitor;
wherein said spout has at least one resonator box recess formed on said inner face connected to said sound exit port, and a second, output end of said resonator box is inserted and matingly engaged into said resonator box recess.

US Pat. No. 10,171,901

SOUND PICKUP DEVICE AND SOUND PROCESSING DEVICE

YAMAHA CORPORATION, Hama...

15. A sound processing device comprising:a housing;
a mounting mechanism configured to mount the housing to an object;
a sound pickup portion comprising at least one microphone;
a first output terminal that outputs a sound signal corresponding to a sound picked up by the at least one microphone;
a connector configured to mount the sound pickup portion to the housing;
a sensor that detects a vibration of the housing;
a second output terminal that outputs a vibration signal corresponding to the vibration detected by the sensor;
a sound signal processor configured to:
add a first sound effect to the sound signal output from the first output terminal;
produce a vibration sound signal based on the vibration signal output from the second output terminal; and
synthesize the sound signal with the added sound effect, with one of the vibration sound signal or a sound signal produced by adding a second sound effect to the vibration sound signal, to generate and output a synthesized sound signal.

US Pat. No. 10,171,900

SPEAKER AND SHOWER

Kohler Co., Kohler, WI (...

1. An assembly comprising:a speaker supportable for movement relative to a reference external to the speaker, the speaker including
a speaker housing, and
speaker components supported in the speaker housing and operable to produce an audio output;
a sensor operable to sense a direction of movement of the speaker during movement of the speaker relative to the external reference; and
control components operable to
determine the direction of movement of the speaker relative to the external reference, and
control the speaker components based on the direction of movement of the speaker relative to the external reference;
wherein, when the speaker is sensed to be moving in a first direction relative to the reference, an operational characteristic of the speaker components is controlled to increase or advance during the movement in the first direction, and wherein, when the speaker is sensed to be moving in a second direction relative to the reference different from the first direction, the operational characteristic of the speaker components is controlled to decrease or retreat during the movement in the second direction.
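The direction-dependent control in the final wherein clause can be sketched in a few lines. This is a hypothetical illustration, not the patented implementation: the claim leaves the "operational characteristic" generic, so volume is assumed here, and the sensed directions are reduced to the labels "first" and "second".

```python
def adjust_characteristic(value, direction, step=1, lo=0, hi=10):
    """Advance the operational characteristic (assumed: volume) while the
    speaker moves in the first direction, retreat it while it moves in the
    second direction, clamped to the range [lo, hi]."""
    if direction == "first":
        return min(hi, value + step)
    if direction == "second":
        return max(lo, value - step)
    return value  # no sensed movement: leave the characteristic unchanged
```

Each sensed movement sample maps to one increment or decrement, so the characteristic increases for as long as motion in the first direction continues.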

US Pat. No. 10,171,899

LIGHT AND SOUND COMBINATION

Dong Guan Bright Yinhuey ...

1. A light and sound combination comprising:a lighting module; and
an audio module detachably mounted on the lighting module;
wherein:
the audio module includes a main body and a sounding assembly mounted in the main body;
the main body has an outer face provided with a plurality of slideways and a plurality of locking slots corresponding to the slideways;
each of the locking slots has a substantially L-shaped profile and includes an upright section connected to one of the slideways and a transverse section connected to the upright section;
the lighting module includes a housing and an LED (light emitting diode) board mounted in the housing;
the housing has a bottom provided with a receiving groove and a top provided with a projection;
the LED board is mounted in the receiving groove of the housing; and
the projection of the housing has an inner face provided with a plurality of limit blocks each locked in the transverse section of one of the locking slots.

US Pat. No. 10,171,898

CUP

ATAKE DIGITAL TECHNOLOGY ...

1. A cup, comprising:a cup body, a top part of the cup body comprising a rim,
a cup lid, the cup lid comprising a bottom cover and a top cover connected with a top end of the bottom cover, and a bottom end of the bottom cover being connected with the rim, the top cover being fixed with the top end of the bottom cover through a rotary clamping structure, the rotary clamping structure comprising a clamping block located at the bottom cover, and a sliding rail located at the top cover, the sliding rail comprising an importing part formed along an axial direction of the top cover and a clamping groove part formed along a periphery direction of the top cover, the clamping block being configured to slide into the importing part along the clamping groove part of the sliding rail, and the bottom cover and the top cover cooperatively forming a sealed mounting cavity; and
an audio broadcast device, the audio broadcast device being assembled in the mounting cavity.

US Pat. No. 10,171,897

SPEAKER MOUNT AND ASSEMBLY AND METHOD OF DISENGAGEMENT THEREOF

Swarm Holdings LLC, Salt...

1. A speaker mount, comprising:a. a speaker baffle;
b. a support member extending from the speaker baffle and having an elevated region spaced therefrom and a closer region closer to the speaker baffle than the elevated region;
c. a tab movably coupled to the support member such that it can travel between the elevated region and the closer region and including leeway in the coupling between the tab and the support member such that the tab can tip relative to the support member, the tab including a finger extending away from the support member and shaped to engage with a surface when the speaker mount is installed, thereby causing the tab to tip relative to the support member; and
d. wherein each of the tab and the support member includes mating teeth facing each other that are positioned to press against each other and thereby engage when the tab tips relative to the support member when the finger engages with a surface and to be spaced apart and thereby not engage when the tab is not tipped, and when so engaged to each other when the tab tips due to the finger engaging with a surface, prevent travel of the tab from the closer region to the elevated region.

US Pat. No. 10,171,896

ELECTRONIC DEVICE WITH SIDE SPEAKER HOLE

Samsung Electronics Co., ...

1. An electronic device comprising:a housing including a first face facing a first direction, a second face facing a second direction opposite the first direction, a side face facing a third direction perpendicular to each of the first and second directions and surrounding at least a portion of a space between the first and second faces;
a first plate disposed on the first face and exposed in the first direction; and
a second plate disposed on the second face and exposed in the second direction,
wherein the first plate includes a plurality of first edge regions, wherein at least one of the first edge regions includes, in at least a portion thereof, at least one first curved region that curves toward the second plate and/or toward a side face, and
at least one speaker hole disposed on the side face between the first curved region and the second plate, wherein the side face is not part of the first plate and is not part of the second plate.

US Pat. No. 10,171,895

HYDROPHOBIC MESH COVER

Apple Inc., Cupertino, C...

1. An acoustic module, comprising:an acoustic chamber having a tapered geometry such that a first end of the acoustic chamber is larger than a second end of the acoustic chamber;
a port comprising a plurality of openings, the port being adjacent an external environment and the first end of the acoustic chamber;
a semi-permeable barrier material disposed within the acoustic chamber; and
an audio component at the second end of the acoustic chamber, the audio component being configured to emit acoustic waves that move moisture within the acoustic chamber toward and through the semi-permeable barrier material and the port.

US Pat. No. 10,171,894

METHOD FOR ADJUSTING RECEPTION PARAMETER OF OPTICAL LINE TERMINAL AND OPTICAL LINE TERMINAL

Huawei Technologies Co., ...

1. A method for adjusting a reception parameter of an optical line terminal (OLT), comprising:determining a transmission rate of a to-perform-sending optical network unit (ONU);
generating a reset signal before the to-perform-sending ONU sends an optical signal, wherein the reset signal is used to trigger the OLT to perform a reset operation;
adjusting a signal characteristic of the reset signal according to the transmission rate, to generate an adjusted signal;
extracting a signal characteristic of the adjusted signal, and generating a first signal and a second signal according to the signal characteristic of the adjusted signal, wherein the first signal indicates the reset signal, and the second signal indicates the transmission rate of the to-perform-sending ONU;
performing the reset operation according to the first signal; and
after the reset operation is completed, adjusting the reception parameter of the OLT according to the second signal,
wherein the extracting the signal characteristic of the adjusted signal, and generating a first signal and a second signal comprises:
receiving, by a physical layer chip, the adjusted signal sent by a Media Access Control (MAC) layer chip;
extracting, by the physical layer chip, the signal characteristic of the adjusted signal, and generating the first signal and the second signal according to the signal characteristic of the adjusted signal;
sending, by the physical layer chip, the first signal to an optical receiving component, wherein the first signal is used to trigger the optical receiving component to perform the reset operation; and
after the optical receiving component completes the reset operation, sending, by the physical layer chip, the second signal to the optical receiving component.

US Pat. No. 10,171,893

BATTERY-POWERED WIRELESS LONG LIFE TEMPERATURE AND HUMIDITY SENSOR MODULE

Archimedes Controls Corp....

1. A battery-powered wireless sensor module for measuring air temperature and relative humidity, and wirelessly transmitting data to a sensor controller, comprising:a plastic shell holding components of the wireless sensor module;
an airflow grill for allowing air flow into the wireless sensor module;
a circuit board within the shell;
one or more lithium batteries;
a wireless module with a transmission frequency of less than 1 GHz, with a transmission distance of more than 100 meters;
a microcontroller unit processor, for executing an adaptive algorithm for controlling transmission of data from the wireless sensor module;
a temperature and humidity sensor, for sensing air temperature and relative humidity and sending the data to the microcontroller unit processor;
a power management circuit, for measuring the battery voltage and shutting down the power when it is lower than a threshold voltage; and
a memory comprising an adaptive algorithm for controlling the transmission of data from the wireless sensor module based on changes in the temperature or relative humidity, wherein the adaptive algorithm is configured to be executable by the microcontroller unit processor to cause the wireless sensor module to:
measure a current temperature and a current relative humidity;
determine whether an absolute value of a difference between a previous temperature and the current temperature is greater than a temperature set point;
determine whether an absolute value of a difference between a previous relative humidity and the current relative humidity is greater than a humidity set point;
if the absolute value of the difference between the previous temperature and the current temperature is greater than the temperature set point or if the absolute value of the difference between the previous relative humidity and the current relative humidity is greater than the humidity set point, transmit the data from the wireless sensor module; and
if the absolute value of the difference between the previous temperature and the current temperature is less than the temperature set point and if the absolute value of the difference between the previous relative humidity and the current relative humidity is less than the humidity set point, transmit the data from the wireless sensor module if a time since a last transmission of the data is greater than a maximum value.
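The claimed adaptive algorithm reduces to a simple decision rule: transmit when either reading moves past its set point, otherwise only when a maximum quiet interval has elapsed. A minimal sketch follows; the set-point values, the interval, and all names are hypothetical, since the claim specifies no concrete numbers.

```python
# Hypothetical constants; US 10,171,893 does not specify values.
TEMP_SET_POINT = 0.5      # degrees C
HUMIDITY_SET_POINT = 2.0  # percent RH
MAX_INTERVAL = 300.0      # max seconds between forced transmissions

def should_transmit(prev_temp, cur_temp, prev_rh, cur_rh,
                    last_tx_time, now):
    """Return True when the module should transmit sensor data."""
    # Transmit immediately if either reading moved past its set point.
    if abs(cur_temp - prev_temp) > TEMP_SET_POINT:
        return True
    if abs(cur_rh - prev_rh) > HUMIDITY_SET_POINT:
        return True
    # Otherwise transmit only once the maximum quiet interval elapses,
    # so the controller still receives periodic heartbeats.
    return (now - last_tx_time) > MAX_INTERVAL
```

Suppressing transmissions while readings are stable is what lets the module run for years on lithium batteries, since the sub-GHz radio dominates power draw.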

US Pat. No. 10,171,892

SYSTEM AND METHOD FOR MONITORING WATER LEVEL ON A ROOF

1. A drain monitor for monitoring water level on a roof, the drain monitor comprising:a base for attaching to the roof;
a riser attached to the base and projecting from the roof;
a water level sensor for measuring water level on the roof, the water level sensor comprising an attachment member and a vertical member, wherein
the attachment member comprises a first attachment end and a second attachment end,
the first attachment end is adjustably attached to the riser such that the first attachment end is inserted into an attachment slot on the riser,
the attachment member is directed away from the riser,
the vertical member is attached to the second attachment end and extends downward towards the roof, and
the vertical member comprises a float sensor that floats up and down on the vertical member to measure water level; and
a communication system positioned on the riser for transmitting measurement data received from the water level sensor.

US Pat. No. 10,171,891

SENSOR DEPLOYMENT MECHANISM AT A MONITORED LOCATION

Senseware, Inc., Vienna,...

1. A device, comprising:a wireless node module board having a first controller, a first set of one or more sensors that generates first sensor data, and a wireless transceiver that transmits the first sensor data to a gateway device at a monitored location via wireless communication for delivery to a host system that is remote from the monitored location, the wireless node module board having a wired communication interface for adding sensor capabilities to the device, the wired communication interface including a serial data signaling connection and a serial clock signaling connection; and
an extension sensor module board connected to the wireless node module board via the wired communication interface, the extension sensor module board having a second controller that communicates second sensor data to the first controller via the wired communication interface for transmission to the gateway device, the second sensor data generated by a second set of one or more sensors supported by the extension sensor module board.

US Pat. No. 10,171,890

SYSTEM AND METHOD FOR BATTERY MANAGEMENT AND ANTENNA ELEVATION IN A PIT MOUNTED AUTOMATIC METER READING UNIT

Cooper Technologies Compa...

1. An automatic meter reading (AMR) system adapted to be mounted in a utility meter pit, the AMR system comprising:an AMR device including:
a meter connection configured to provide consumption data from a utility meter in the utility meter pit;
processing electronics configured to receive the consumption data via the meter connection and convert the consumption data into a transmittable signal;
an antenna configured to wirelessly transmit the transmittable signal to a remote device; and
an enclosure that houses the processing electronics and the antenna therein to provide protection thereto from ambient conditions in the utility meter pit;
wherein the enclosure defines a dome-shaped antenna compartment therein configured to house the antenna, the antenna compartment protruding out from a remainder of the enclosure so as to provide for positioning of the antenna at a location extended out therefrom; and
a cover adaptor mateable with the AMR device and with a cover of the utility meter pit, the cover adaptor comprising:
a flanged end portion; and
a projection portion protruding outwardly from the flanged end portion, the projection portion comprising a hollow interior formed therein that is open on an end of the projection portion that is distal from the flanged end portion;
wherein the projection portion is sized and constructed so as to be positionable through a hole formed in the cover of the utility meter pit and so as to receive and secure the dome-shaped antenna compartment of the AMR device enclosure in the hollow interior thereof;
wherein the positioning of the projection portion of the cover adaptor through the hole in the cover of the utility meter pit and the securing of the dome-shaped antenna compartment of the AMR device enclosure in the hollow interior of the projection portion mounts the antenna of the AMR device at a height approximately flush with a top surface of the cover of the utility meter pit; and
wherein the dome-shaped antenna compartment comprises protrusions formed on an outer surface thereof and the projection portion comprises grooves formed thereon, with the protrusions mating with the grooves via a twist-lock type mating, so as to provide for selective mating and separation of the AMR device from the cover adaptor.

US Pat. No. 10,171,889

METHOD AND SYSTEM FOR MANAGING INTERNAL AND EXTERNAL CALLS FOR A GROUP OF COMMUNICATION CLIENTS SHARING A COMMON CUSTOMER IDENTIFIER

BCE INC., Verdun (CA)

1. A method of implementing a virtual private branch exchange (PBX) feature for a customer associated with a plurality of VoIP communication clients each capable of placing and receiving VoIP telephone calls, comprising:receiving at a control entity providing the virtual PBX feature information regarding a call comprising a source address/sub-address pair and a destination identifier;
at the control entity, consulting a database to determine using the information regarding the call if the call is an external inbound call directed to the customer or an internal call from one of the VoIP communication clients associated with the customer that identifies a different particular one of the VoIP communication clients, the database comprising a plurality of records each record storing a customer identifier and at least one address/sub-address pair of at least one communication client of the associated customer, at least one of the stored records further storing a plurality of sub-addresses each associated with a respective alias of a communication device;
responsive to determining that the call is an external inbound call directed to the customer, causing the call to be routed to each of the plurality of VoIP communication clients associated with the customer without being passed through a PBX or key system;
responsive to determining that the call is an internal call by matching the source address/sub-address pair of the received call information to a stored record in the database and an alias in the received call information to an alias in the record, causing the call to be routed to the particular one of the VoIP communication clients associated with the alias without being passed through a PBX or key system.
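The internal/external routing decision can be sketched as a record lookup. The record layout below is an assumption for illustration: each customer record holds the address/sub-address pairs of its clients plus an alias map, which is one plausible reading of the claimed database, not the patent's actual schema.

```python
def route_call(records, dest_customer, source_pair, alias=None):
    """Classify a call against a hypothetical record layout:
    records[customer] = {"pairs": set of (address, sub_address),
                         "aliases": {alias: (address, sub_address)}}.
    Internal calls route to the single client named by the alias;
    external inbound calls ring every client of the customer."""
    rec = records[dest_customer]
    if source_pair in rec["pairs"] and alias in rec["aliases"]:
        return ("internal", [rec["aliases"][alias]])
    return ("external", sorted(rec["pairs"]))
```

In both branches the call is routed directly to the clients, which is the point of the claim: no PBX or key system sits in the path.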

US Pat. No. 10,171,888

VIDEO PROCESSING METHOD, TERMINAL AND SERVER

HUAWEI TECHNOLOGIES CO., ...

1. A video processing method, comprising:sending, to a server, a request for acquiring a media presentation description (MPD) file of a video;
receiving the MPD file from the server, the MPD file comprising region information of a region that can be independently decoded in the video;
determining, according to the region information, a region used for playback on a terminal from the region that can be independently decoded;
determining a to-be-acquired media segment according to the MPD file;
acquiring a location in which data content corresponding to the region for the playback on the terminal is stored in the media segment;
acquiring, according to the location in which the data content corresponding to the region for the playback on the terminal is stored in the media segment, the data content corresponding to the region for the playback on the terminal from the media segment stored in the server; and
playing, according to the data content corresponding to the region for the playback on the terminal, a picture of the region for the playback on the terminal, the media segment comprising at least two subsegments, and acquiring the location in which the data content corresponding to the region for the playback on the terminal is stored in the media segment comprises:
acquiring, from the server, a segment index and a subsample index of the media segment, the segment index indicating a location in which each subsegment comprised in the media segment is stored in the media segment, and the subsample index indicating a location in which each subsample corresponding to the region that can be independently decoded is stored in each subsegment;
determining a to-be-acquired subsegment according to the segment index; and
determining, according to a location in which the subsegment is stored in the media segment and a location in which a subsample corresponding to the region for the playback on the terminal in the region that can be independently decoded is stored in the subsegment, the location in which the data content corresponding to the region for the playback on the terminal is stored in the media segment.
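The final determining step is two nested offset lookups: the segment index gives where the subsegment sits in the media segment, and the subsample index gives where the region's subsample sits within that subsegment. A minimal sketch, assuming both indexes are represented as `(offset, size)` maps (a representation chosen here for illustration, not mandated by the claim):

```python
def region_byte_range(segment_index, subsample_index, subsegment_id, region_id):
    """segment_index: {subsegment_id: (offset, size)} relative to the segment.
    subsample_index: {subsegment_id: {region_id: (offset, size)}} relative
    to each subsegment. Returns the absolute (offset, size) of the region's
    data content within the media segment."""
    seg_off, _seg_size = segment_index[subsegment_id]
    sub_off, sub_size = subsample_index[subsegment_id][region_id]
    # Subsample offsets are subsegment-relative, so add the subsegment's
    # own offset to get the position inside the whole media segment.
    return seg_off + sub_off, sub_size
```

With the absolute byte range in hand, the terminal can fetch only the region it will display, rather than the full segment.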

US Pat. No. 10,171,887

METHODS AND SYSTEMS FOR INTELLIGENT PLAYBACK

Comcast Cable Communicati...

1. A method, comprising:receiving, by a computing device, media content for playback;
determining, by the computing device based on an arrival rate of the received media content, a parameter relating to the received media content;
determining, by the computing device based upon the parameter, a safe point, wherein the safe point comprises a point in time when a remainder of the received media content can be presented at a constant pre-defined playback speed;
causing, by the computing device, output of the received media content at a first playback speed until the safe point is reached; and
when the safe point is reached, causing, by the computing device, output of the received media content at a second playback speed.
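Under a constant-arrival-rate model, the safe point has a closed form: while playing at a reduced first speed, the buffer grows, and the safe point is the earliest instant at which the remaining content finishes downloading no later than it is played back at the normal second speed. The sketch below is one such model, with speeds measured in content-seconds consumed per wall-clock second; the patent does not commit to this formula.

```python
def safe_point(total_len, arrival_rate, slow_speed=0.5, normal_speed=1.0):
    """Earliest wall-clock time (seconds) at which playback can switch
    from slow_speed to normal_speed without underrunning, assuming a
    constant arrival_rate in content-seconds per second (assumed model)."""
    if arrival_rate >= normal_speed:
        return 0.0  # content arrives at least as fast as it is consumed
    # At time t, (total_len - slow_speed * t) content-seconds remain.
    # Require: arrival_rate * (t + remaining / normal_speed) >= total_len,
    # which solves to the ratio below.
    num = total_len * (normal_speed - arrival_rate)
    den = arrival_rate * (normal_speed - slow_speed)
    return num / den
```

For example, a 100-second asset arriving at 0.75x with initial playback at 0.5x reaches its safe point after about 66.7 seconds, after which normal-speed playback cannot stall.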

US Pat. No. 10,171,886

CHANNEL-BASED METHOD AND SYSTEM FOR RELAYING CONTENTS

KAKAO CORP., Jeju-si (KR...

1. A content relay method performed at a channel server, wherein the channel server includes at least one processor,
wherein the processor is configured to:
select a channel among a plurality of channels generated at a channel operator terminal in response to a channel search request from a user terminal or a display device and provide the channel; and
request the display device to display a content execution screen of a user terminal connected to the channel,
wherein the channel is selected based on an option associated with the display device,
wherein the channel is provided to share the content execution screen of the user terminal which connects the channel through the display device,
wherein the channel is generated based on an installation area of a display device or a group that is present within a predetermined distance to the installation area of a display device,
wherein the channel operator terminal selects a user terminal for relaying the content execution screen from among a plurality of user terminals based on channel connection priority of each user terminal, random priority, or arbitrary designation, if the plurality of user terminals is connected to the channel.

US Pat. No. 10,171,885

APPARATUS AND METHODS FOR MANAGING DELIVERY OF CONTENT IN A NETWORK WITH LIMITED BANDWIDTH USING PRE-CACHING

Time Warner Cable Enterpr...

1. An apparatus for management and distribution of content in a content delivery network, said apparatus comprising:at least one interface configured to communicate with a plurality of computerized client devices operatively coupled to said content delivery network;
one or more storage apparatus configured to:
store a plurality of digitally rendered content for distribution to subsets of said plurality of computerized client devices;
store data representative of one or more rules to guide said distribution of individual ones of said plurality of digitally rendered content; and
store classification data related to said individual ones of said plurality of said digitally rendered content; and
a processing unit in data communication with said at least one interface and said one or more storage apparatus, said processing unit comprising computerized logic configured to:
based at least in part on said classification data, identify individual ones of said plurality of digitally rendered content that are high probability of viewership (HpoV) content for one of said subsets of said plurality of computerized client devices;
identify data representative of one or more rules to guide said distribution of said HpoV content from among said data representative of one or more rules to guide said distribution of said individual ones of said plurality of said digitally rendered content;
cause transmission of both (i) said HpoV content and (ii) said data representative of one or more rules to guide distribution of said HpoV content to said one of said subsets of said plurality of computerized client devices, where said transmission to said one of said subsets of said plurality of computerized client devices is configured to occur when network resource demand is below a predetermined threshold; and
schedule, using at least the computerized logic, said transmission of said HpoV content and said data representative of one or more rules to a different future time when said network resource demand is above said predetermined threshold.
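The last two limitations amount to a demand-gated scheduler: push the high-probability-of-viewership content immediately when network demand is below the threshold, otherwise defer the job to a later slot. A deliberately simplified sketch, with the deferral interval an assumption (the claim only requires "a different future time"):

```python
def schedule_transmission(demand, threshold, now, defer_by=6 * 3600):
    """Return the wall-clock time at which to transmit HPoV content and
    its distribution rules. Transmit now during low demand; otherwise
    defer by a fixed interval (hypothetical: six hours) toward an
    expected off-peak window."""
    if demand < threshold:
        return now
    return now + defer_by
```

A production system would re-evaluate demand at the deferred time rather than transmit unconditionally, but the gating logic is the same.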

US Pat. No. 10,171,884

METHOD AND APPARATUS FOR SHARING INTERNET ASSETS OR CONTENT URLS VIA A SECOND DISPLAY DEVICE

SONY INTERACTIVE ENTERTAI...

1. A control display device for an IPTV content providing system having one or more IPTVs (internet protocol TVs) as first display devices, the control display device comprising:a storing device stores a list of IPTV recipients for sharing contents, the recipients being associated with the IPTV content providing system;
a content browsing device browses contents in the IPTV content providing system;
a sending device sends a share request to a server and sends therewith at least one IPTV recipient from the list of IPTV recipients for sharing contents;
a receiving device receives notification from the server that content is available for sharing on the IPTV recipient's IPTV device; and
a forwarding device forwards only a request to share the content to the server and not the content itself, and the server transmits the content, via a control display device of the IPTV recipient, to one of the first display devices of the IPTV recipient to display the content, wherein the server transmits the content based solely on the request and without an IPTV recipient interaction;
wherein when content to be shared is IPTV content, the control display device of the IPTV recipient lacks authentications necessary to preview the content on the control display device of the IPTV recipient before the control display device of the IPTV recipient sends the content to the IPTV recipient's first display device which is independent of the control display device.

US Pat. No. 10,171,882

BROADCAST SIGNAL FRAME GENERATION DEVICE AND BROADCAST SIGNAL FRAME GENERATION METHOD USING BOOTSTRAP INCLUDING SYMBOL FOR SIGNALING BICM MODE OF PREAMBLE AND OFDM PARAMETER TOGETHER

Electronics and Telecommu...

1. An apparatus for generating broadcast signal frame, comprising:
a time interleaver configured to generate a time-interleaved signal by performing interleaving on a BICM output signal; and
a frame builder configured to generate a broadcast signal frame including a bootstrap and a preamble using the time-interleaved signal,
wherein the bootstrap includes a symbol for signaling a BICM mode and OFDM parameters of L1-Basic of the preamble, together.

US Pat. No. 10,171,881

BACKUP MODULE AND METHOD

MT Digital Media Limited,...

1. A method for operating a data processing apparatus to backup display of a sequence of interrupted content items through a module of the data processing apparatus using a CPU and a software program, comprising:
identifying a series of user invoked interruptions, each interruption comprising a transition between the display of a first content item and the display of a second content item, wherein the first content item and the second content item are in a sequence of at least three interrupted content items and the sequence of interrupted content items include content items from at least two different content domains;
storing, at the data processing apparatus, interruption records each including a locator to a said first content item subject to a corresponding user invoked interruption, and further including a record of the order in which said interruptions occurred; and
initiating display in a last in first out order of the sequence of interrupted content items responsive to a sequence of backup signals, such that each of the sequence of backup signals causes display of a previous interrupted content item of the sequence of at least three interrupted content items.
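The last-in-first-out replay described above can be sketched as a simple stack of interruption records. This is a minimal illustration with hypothetical names (`BackupModule`, `interrupt`, `back`, and the example locators), not the patented implementation:

```python
# Sketch of the claimed backup behavior: each user-invoked interruption is
# recorded with a locator to the item that was left, and each backup signal
# replays the most recently interrupted item (LIFO order).
class BackupModule:
    def __init__(self):
        self._interruptions = []  # ordered record of interruption locators

    def interrupt(self, first_item_locator):
        # A transition away from `first_item_locator` to some second item.
        self._interruptions.append(first_item_locator)

    def back(self):
        # A backup signal displays the previous interrupted item, if any.
        return self._interruptions.pop() if self._interruptions else None


module = BackupModule()
# Three interruptions spanning more than one content domain:
for locator in ["tv://news", "web://article", "app://game"]:
    module.interrupt(locator)

replay = [module.back(), module.back(), module.back()]
```

Each `back()` call pops the newest record first, so `replay` lists the interrupted items in reverse order of interruption.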

US Pat. No. 10,171,880

SYSTEMS AND METHODS FOR MODELING AUDIENCE STABILITY OF A MEDIA ASSET SERIES

Rovi Guides, Inc., San J...

1. A method for modeling consistency of audiences viewing groups of media assets, the method comprising:
receiving a data packet from a user equipment of a plurality of user equipment;
extracting, from the data packet, an indication of a first media asset;
identifying a first subset of the plurality of user equipment, the first subset comprising each user equipment on which the first media asset was generated for display;
identifying a second subset of the first subset, the second subset comprising each user equipment on which a second media asset was generated for display, wherein the first media asset and the second media asset are part of a group of media assets;
calculating a score for audience consistency for the group of media assets based on the number of user equipment in the second subset comprising each user equipment on which the first media asset and the second media asset were generated for display relative to the number of user equipment in the first subset comprising each user equipment on which the first media asset was generated for display;
ranking the group of media assets among a plurality of groups of media assets based on the calculated score for audience consistency for the group of media assets; and
selecting a group of the plurality of groups of media assets with the highest rank to target with an advertisement.
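The scoring step above reduces to a ratio of audience set sizes: the equipment that displayed both assets divided by the equipment that displayed the first. A toy sketch under assumed data shapes (the `viewers` map, asset IDs, and group names are illustrative, not from the claim):

```python
# `viewers` maps a media asset to the set of user-equipment IDs on which it
# was generated for display (hypothetical example data).
viewers = {
    "s1e1": {1, 2, 3, 4},
    "s1e2": {1, 2, 3},       # same group as s1e1
    "m1":   {1, 2, 5, 6},
    "m2":   {5},             # same group as m1
}

def consistency(first, second):
    # |equipment showing both assets| / |equipment showing the first asset|
    first_set, second_set = viewers[first], viewers[second]
    return len(first_set & second_set) / len(first_set)

groups = {"series": ("s1e1", "s1e2"), "movies": ("m1", "m2")}
scores = {name: consistency(a, b) for name, (a, b) in groups.items()}

# Rank the groups and target the highest-scoring one with an advertisement.
target = max(scores, key=scores.get)
```

Here the "series" group retains 3 of its 4 first-asset viewers (score 0.75), while "movies" retains only 1 of 4 (score 0.25), so "series" is selected.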

US Pat. No. 10,171,879

CONTEXTUAL ALERTING FOR BROADCAST CONTENT

INTERNATIONAL BUSINESS MA...

10. A computer usable program product comprising one or more computer-readable storage devices, and program instructions stored on at least one of the one or more storage devices, the stored program instructions comprising:
program instructions to analyze, after receiving a content at a device usable to present the content to a user, a portion of the received content to identify a context present in the portion, the context comprising a type of a subject-matter of the portion;
program instructions to select, corresponding to the context of the portion, a contextual rating rule from a set of contextual rating rules;
program instructions to compute a rating value of the portion using a first rating value in the contextual rating rule, the rating value of the portion being distinct from a rating associated with the content by a distributor of the content;
program instructions to present, on a presentation device, the portion with the rating value of the portion;
program instructions to collect information related to the context of the portion;
program instructions to construct an overlay with the information, wherein the information is configured in the overlay to attract an attention of the user to the portion, and wherein the information for the overlay is selected based on content usage habits of the user;
program instructions to overlay the portion with the overlay during a presentation of the portion;
program instructions to determine that the portion has not yet been presented during a presentation of the content on the presentation device;
program instructions to construct a notification, the notification comprising the rating value of the portion;
program instructions to receive an image of the user during the presenting;
program instructions to analyze the image to determine that the user is not attentive during the presenting; and
program instructions to send a notification to the user prior to presenting the portion on the presentation device responsive to determining that the user is not attentive during the presenting.

US Pat. No. 10,171,878

VALIDATING DATA OF AN INTERACTIVE CONTENT APPLICATION

Comcast Cable Communicati...

1. A method comprising:
receiving a first interactive content application for distribution, via a network, to one or more user devices;
evaluating data within the first interactive content application to determine an application type corresponding to the first interactive content application;
associating the first interactive content application with one or more application profiles for the application type, wherein the one or more application profiles comprise an accessibility setting that indicates the application type is accessible from a second interactive content application;
performing, by one or more computing devices, a validation on data of the first interactive content application by applying one or more validation rules established in the one or more application profiles;
based on a determination that the validation has failed, causing, by the one or more computing devices, the application type to be inaccessible from the second interactive content application by modifying the accessibility setting; and
transmitting, to at least one of the one or more user devices, content associated with the second interactive content application.

US Pat. No. 10,171,877

SYSTEM AND METHOD FOR DYNAMICALLY SELECTING SUPPLEMENTAL CONTENT BASED ON VIEWER EMOTIONS

DISH Network L.L.C., Eng...

1. A system, comprising:
one or more cameras;
a database storing relationships between content categories and supplemental-content categories, and corresponding emotional responses;
one or more processors; and
a memory device storing computer instructions that, when executed by the one or more processors, cause the one or more processors to:
provide content having a content category to a content receiver for presentation to a viewer;
receive at least one first image of the viewer taken by the one or more cameras while the content is being presented to the viewer;
analyze the at least one first image to detect a first viewer emotion in reaction to the content currently being presented to the viewer;
select supplemental content having a supplemental-content category to provide to the viewer based on the detected first viewer emotion and the content category;
provide the supplemental content to the content receiver for presentation to the viewer;
receive at least one second image of the viewer taken by the one or more cameras while the supplemental content is being presented to the viewer;
analyze the at least one second image to detect a second viewer emotion in reaction to the supplemental content currently being presented to the viewer;
determine an emotional response of the viewer based on the detected first viewer emotion and the detected second viewer emotion; and
update a relationship between the content category and the supplemental-content category in the database based on the determined emotional response of the viewer.
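The update step above can be sketched as adjusting a stored weight toward the observed emotional response. All names, weights, and the aggregation rule here are assumptions for illustration, not the patented method:

```python
# Database rows keyed by (content category, supplemental-content category),
# each storing an emotional-response weight (hypothetical structure).
relationships = {("sports", "energy-drink"): 0.5}

def emotional_response(first_emotion, second_emotion):
    # Toy aggregation of the two detected viewer emotions: count how many of
    # the two readings were positive (assumption, not from the claim).
    positive = {"happy", "excited"}
    return (first_emotion in positive) + (second_emotion in positive)

def update_relationship(content_cat, supp_cat, first_emotion, second_emotion):
    key = (content_cat, supp_cat)
    response = emotional_response(first_emotion, second_emotion)
    # Nudge the stored weight toward the observed response (0, 1, or 2).
    relationships[key] = round(0.8 * relationships[key] + 0.1 * response, 3)
    return relationships[key]

weight = update_relationship("sports", "energy-drink", "happy", "excited")
```

With both readings positive, the stored weight moves from 0.5 to 0.6, so this pairing would be favored for future supplemental-content selection.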

US Pat. No. 10,171,876

MEDIA SWITCH DEVICE, MEDIA SWITCH SYSTEM AND MEDIA SWITCH METHOD

ATEN INTERNATIONAL CO., L...

1. A media switching method, implemented in a media switch device, the media switch device comprising at least one media input port, a media input/extension composite port, and a media output port, the media input/extension composite port configured to be coupled to either a source device or to another media switch device, the media output port configured to be coupled to either a sink device or to yet another media switch device, the at least one media input port each configured to be coupled to a source device, the media switching method comprising:
receiving a response command from the media input/extension composite port to determine whether the media input/extension composite port is coupled to a source device or another media switch device; and
receiving a query command from the media output port to determine whether the media output port is coupled to a sink device or yet another media switch device.

US Pat. No. 10,171,875

METHOD FOR PROVIDING PREVIOUS WATCH LIST OF CONTENTS PROVIDED BY DIFFERENT SOURCES, AND DISPLAY DEVICE WHICH PERFORMS SAME

LG ELECTRONICS INC., Seo...

1. A display device, comprising:
a display;
an interface unit configured to receive a request from a remote controller; and
a controller configured to:
in response to a first request received from the remote controller, display a list of previously-viewed content including at least a first item corresponding to a first previously-viewed content, a second item corresponding to a second previously-viewed content and a third item corresponding to a third previously-viewed content, wherein the first item, the second item, and the third item are displayed in an order that the previously-viewed content has been viewed,
in response to a second request received from the remote controller, display a fourth content on the display,
in response to a third request from the remote controller to display the list of previously-viewed content, delete the first item in the list of previously-viewed content and display the list of previously-viewed content including the second item, the third item and a fourth item corresponding to the fourth displayed content,
in response to a fourth request received from the remote controller, display a fifth content on the display, and
in response to a fifth request from the remote controller to display the list of previously-viewed content, delete the second item in the list of previously-viewed content and display the list of previously-viewed content including the third item, the fourth item and a fifth item corresponding to the fifth displayed content,
wherein the controller is further configured to display the list of previously-viewed content on an initial screen when the display device is turned on.
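The eviction behavior described in the claim (the list always shows the most recent three items, and each newly viewed content pushes out the oldest) maps naturally onto a bounded queue. A minimal sketch with illustrative item names:

```python
from collections import deque

# The previously-viewed list holds the three most recent items; appending a
# new item evicts the oldest, matching the claimed delete-and-display steps.
history = deque(["first", "second", "third"], maxlen=3)

history.append("fourth")   # the first item is deleted from the list
after_fourth = list(history)

history.append("fifth")    # the second item is deleted from the list
after_fifth = list(history)
```

`deque(maxlen=3)` performs the deletion implicitly on each append, so the list shown after each request is always the last three viewed contents in viewing order.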

US Pat. No. 10,171,874

RECEIVING DEVICE, RECEIVING METHOD, AND PROGRAM

Saturn Licensing LLC, Ne...

1. A receiving device for receiving an audio/video (AV) content broadcast, comprising:
processing circuitry configured to:
obtain from an information processing device, using a first uniform resource identifier (URI), a description document described in a predetermined computer language for displaying another content different from the AV content;
execute the description document;
determine, based on the first URI, whether the information processing device is associated with a broadcaster which broadcasts the AV content;
control execution of the description document according to a mode that is set based on the determination of whether the information processing device is associated with the broadcaster;
process a transition from the description document to a further description document; and
change between first and second modes based on a second URI of an information processing device from which the further description document is to be obtained,
wherein the processing circuitry is configured to
control the execution of the description document in the first mode in which the description document can perform a process when the information processing device is determined to be associated with the broadcaster, and
control the execution of the description document in the second mode in which the description document cannot perform the process when the information processing device is determined not to be associated with the broadcaster.

US Pat. No. 10,171,873

MULTIMEDIA SYSTEM FOR MOBILE CLIENT PLATFORMS

1. A computer program video player product stored on a non-transitory computer readable medium and loadable into the internal memory of a client computing device, comprising software code portions for performing, when the video player product is run on a computer, a method comprising:
sequentially reading a plurality of distinctive Internet addresses associated with a plurality of discrete continuous media objects, wherein said discrete continuous media objects are formed from synchronized video and audio segments of a continuous synchronized audio and video;
determining a playback rate by executing software code portions stored exclusively on the memory of the client computing device and executed by a processor on the client computing device, based on varying wireless bandwidth, said playback rate being adjusted for each discrete continuous media object by the computer program video player product, acting autonomously, by selecting which discrete continuous media object is played being made by the computer program video player product, adjusting digital video decoding steps for playback performance resulting from varying bandwidth connection speeds;
further adjusting playback performance by using intrinsic player decoding algorithms that optimize digital video decoding in order to maintain visual continuity and playback;
playing back a video at the determined playback rate consisting of at least a subset of said plurality of discrete continuous media objects; wherein said discrete continuous media objects are individually decoded by said video player product during playback through digital video decoding optimizing algorithms for receiving, parsing and selecting the playback of a sequence of discrete continuous media objects;
wherein said discrete continuous media objects are created by transcoding an input continuous media object including a video segment forming part of a discrete audiovisual file into an optimal audiovideo format at an optimal encoding rate reflecting available cellular network bandwidth; dynamically decoding by the client computing device the transcoded continuous media objects into discrete files by splitting the transcoded stream into specified intervals and scanning after the specified intervals for a next I-frame, wherein each discrete interval is split at that next I-frame to create another discrete continuous media object; and assigning each of the discrete continuous media objects a distinctive Internet address;
and wherein said discrete continuous media objects are obtained by the player as discrete audiovisual files using protocols which access content through file and directory structures to the exclusion of synchronous or asynchronous bitstreaming;
wherein the continuous media objects are maintained by content servers serving the discrete continuous media objects to wireless clients during transmission to wireless devices;
wherein said continuous media objects are audiovideo files including metadata.
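The splitting step described above (scan past a specified interval, then cut at the next I-frame) can be sketched as follows. The function name, the frame positions, and the interval length are all assumptions for illustration:

```python
# Given the positions of I-frames in a transcoded stream, each split point is
# the first I-frame at or after the end of the current interval; each split
# starts another discrete continuous media object.
def split_points(i_frames, interval):
    points, cursor = [], interval
    for frame in i_frames:
        if frame >= cursor:
            points.append(frame)        # split here at the next I-frame
            cursor = frame + interval   # scan again after the next interval
    return points

# Example: I-frames every 7 frames in a 60-frame stream, 15-frame intervals.
splits = split_points(range(0, 60, 7), interval=15)
```

Each resulting segment boundary lands exactly on an I-frame, so every discrete media object can be decoded independently and assigned its own Internet address.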

US Pat. No. 10,171,872

METHODS AND SYSTEMS FOR IMPLEMENTING A LOCKED MODE FOR VIEWING MEDIA ASSETS

Rovi Guides, Inc., San J...

1. A method for implementing a locked mode in an interactive media guidance application, comprising:
receiving, using control circuitry, a request to initiate a locked mode for a specified time period on a user equipment device, wherein a specified user of the user equipment device is only allowed access to media assets selected for the locked mode during the specified time period;
receiving, using the control circuitry, first information relating to a first plurality of media assets selected for viewing by a plurality of users having similar characteristics;
receiving, using the control circuitry, second information relating to a second plurality of media assets, the second plurality of media assets being presented to the specified user during a period of time when locked mode is not initiated;
determining, using the control circuitry, a media asset, of the plurality of media assets, that is of interest to the plurality of users based on the received first and second information;
determining, using the control circuitry, whether the specified user has characteristics similar to the plurality of users; and
during the locked mode and in response to determining that the specified user has characteristics similar to the plurality of users, transmitting, using the control circuitry, an instruction to the interactive media guidance application to present the media asset to the specified user without receiving input from the specified user.

US Pat. No. 10,171,869

METHODS AND APPARATUS TO DETERMINE ENGAGEMENT LEVELS OF AUDIENCE MEMBERS

The Nielsen Company (US),...

13. An apparatus, comprising:
a detector to analyze image data depicting an environment in which media is to be presented by a first media device to determine whether the environment includes a second media device emanating a glow, the image data captured with a sensor; and
a calculator to determine an engagement for a person in the environment with respect to the first media device, the calculator to determine the engagement based on a distance between the person and the second media device emanating the glow.

US Pat. No. 10,171,868

METHOD FOR PROCESSING AUDIO DATA, TERMINAL AND TELEVISION

Qingdao Hisense Electroni...

18. A television, comprising:
an input interface configured to receive audio data from a plurality of television channels;
a channel switcher configured to switch the plurality of television channels;
an audio capturer configured to capture audio data of the plurality of television channels;
a buffer configured to buffer the audio data captured by the audio capturer; and
a control reader configured to control to read the audio data in the buffer,
an audio post-processor configured to perform a preset sound processing to the audio data read by the control reader; and
an audio player configured to play the audio data processed by the audio post-processor,
wherein, the control reader is configured to:
set a first queue for managing at least one configured information control node of audio data to be played, wherein each of the at least one configured information control node contains attribute information for the audio data to be played;
unchain a unit of the at least one configured information control node from the first queue;
read audio data to be played corresponding to the unit of the at least one configured information control node according to the attribute information for the audio data to be played in the unit of the at least one configured information control node; and
chain the unit of the at least one configured information control node to a second queue which is used for managing at least one configured information control node of played audio data.
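The control-reader flow above (unchain a node from the to-play queue, read the buffered audio its attributes describe, then chain it onto the played queue) can be sketched with two queues. The node structure, buffer contents, and function name are assumptions for illustration:

```python
from collections import deque

# Each information control node carries attribute information (offset and
# length) locating its audio data in the buffer (hypothetical structure).
buffer = b"abcdefghij"
to_play = deque([{"offset": 0, "length": 4}, {"offset": 4, "length": 6}])
played = deque()

def read_next():
    node = to_play.popleft()                          # unchain from first queue
    data = buffer[node["offset"]:node["offset"] + node["length"]]
    played.append(node)                               # chain to second queue
    return data

chunk = read_next()
```

Moving nodes between the two queues rather than copying audio keeps the attribute records intact for the played-audio bookkeeping the claim describes.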

US Pat. No. 10,171,867

SERVICE GUIDE ENCAPSULATION

SHARP KABUSHIKI KAISHA, ...

1. A method for decoding a service guide associated with a video bitstream comprising:
(a) receiving a service guide fragment within said service guide;
(b) receiving a service guide delivery unit structure that is a transport container for said service guide fragment and that is used for encapsulating service guide fragments within said video bitstream;
(c) receiving a unit header structure within said service guide delivery unit structure;
(d) receiving an extension offset field within said unit header structure, wherein said extension offset field is zero in said service guide delivery unit structure corresponding to a particular service guide delivery unit structure specification;
(e) receiving said extension offset field within said unit header structure, wherein said extension offset field is ignored for values other than zero in said service guide delivery unit structure corresponding to said particular service guide delivery unit structure specification; and
(f) decoding said service guide.

US Pat. No. 10,171,866

DISPLAY SYSTEM, DISPLAY DEVICE, HEAD-MOUNTED DISPLAY DEVICE, DISPLAY CONTROL METHOD, CONTROL METHOD FOR DISPLAY DEVICE, AND COMPUTER PROGRAM

SEIKO EPSON CORPORATION, ...

1. A display system comprising:
a transmitting device configured to transmit video data; and
a first display device and a second display device configured to display videos on the basis of the video data transmitted by the transmitting device, wherein
the transmitting device includes a data transmitting section configured to wirelessly transmit the video data formed by continuous frames to the first display device and the second display device,
the first display device includes:
a first video receiving section configured to receive the video data transmitted by the transmitting device; and
a first display section configured to display a video on the basis of the video data received by the first video receiving section, to only a first eye of the user to visually recognize the video,
the second display device, that is separate from the first display device, includes:
a second video receiving section configured to receive the video data transmitted by the transmitting device; and
a second display section configured to display a video on the basis of the video data received by the second video receiving section, to only a second eye of the user to visually recognize a video, and
the display system detects deviation between (1) timing of displaying frames of the video displayed by the first display section and visually recognized by the first eye and (2) timing of displaying frames of the video displayed by the second display section and visually recognized by the second eye.
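The deviation detection at the end of the claim amounts to comparing when corresponding frames appear on the two per-eye displays. A minimal sketch with assumed example timestamps (milliseconds) for each display section:

```python
# Display timestamps of corresponding frames on the first (left-eye) and
# second (right-eye) display sections — illustrative data, not from the claim.
left_times = [0.0, 16.7, 33.4, 50.1]
right_times = [0.0, 16.9, 33.4, 50.8]

def frame_deviation(a, b):
    # Per-frame timing deviation between the two display sections.
    return [round(abs(x - y), 1) for x, y in zip(a, b)]

deviation = frame_deviation(left_times, right_times)
```

Nonzero entries flag frames where the two eyes would see the video out of step, which is the condition the display system is claimed to detect.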

US Pat. No. 10,171,865

ELECTRONIC DEVICE AND COMMUNICATION CONTROL METHOD

KABUSHIKI KAISHA TOSHIBA,...

1. An electronic apparatus comprising:
a memory;
one or more hardware processors configured to:
acquire content data comprising first encoded data of a video image and second encoded data of a user interface;
decode the second encoded data to generate second decoded data of the user interface without decoding the first encoded data; and
store the second decoded data of the user interface in the memory;
a transmitter configured to transmit the content data comprising the first encoded data and the second encoded data, to a first electronic apparatus, wherein the first encoded data and the second encoded data are decoded to generate third decoded data of the video image and fourth decoded data of the user interface at the first electronic apparatus, respectively; and
a receiver configured to receive, while the video image based on the third decoded data and the user interface based on the fourth decoded data are displayed on a first screen of the first electronic apparatus, first operation data regarding a first user operation that is performed on the user interface displayed on the first screen of the first electronic apparatus,
wherein the one or more hardware processors are further configured to:
specify a first process, inputted by the first user operation, to control playback of the video image displayed on the first screen of the first electronic apparatus based on both the second decoded data of the user interface stored in the memory and the first operation data; and
execute the first process.

US Pat. No. 10,171,864

INTERACTIVE TELEVISION APPLICATIONS

Sky CP Limited, (GB)

1. A method of providing an interactive application user interface simultaneously with a video program display within a display area of a television receiver, the method comprising:
in response to a user requesting access to interactive applications through interfacing with the television receiver, opening a menu of interactive applications superimposed on the video program display, with the video program display occupying substantially the entire display area;
in response to selection of one of the interactive applications by the user from the menu of interactive applications, opening an interactive application window which in a first state comprises a plurality of menu items, with the application window and video program display arranged in a split-screen arrangement; and
in response to the user selecting one of the menu items, causing the application window to enter a second state in which content relating to the selected menu item is displayed, wherein a proportion of the display area occupied by the application window is greater in the second state than in the first state and the proportion of the display area occupied by the video program display in the second state is less than in the first state.

US Pat. No. 10,171,863

INTERACTIVE ADVERTISEMENT

1. A receiver comprising:
at least one input component to receive audiovisual content;
at least one output component communicatively coupled with at least one display device;
a plurality of tuners;
one or more processors communicatively coupled with the at least one input component, at least one output component, and the plurality of tuners, the one or more processors configured to cause the receiver to perform:
outputting an advertising filter menu for display to the at least one display device, the advertising filter menu comprising menu items allowing for user specification of one or more product characteristics;
processing indicia of one or more selections made with one or more of the menu items of the advertising filter menu, the one or more selections indicating one or more specified product characteristics;
identifying a location corresponding to a user;
receiving a program service transmission, the program service transmission comprising content for at least one channel;
receiving a plurality of product advertisements at the receiver and identifying respective location specifications associated with the plurality of product advertisements, the plurality of product advertisements for products shown on the at least one channel of the program service transmission;
processing the plurality of product advertisements and storing the plurality of product advertisements in memory;
selecting a subset of the plurality of product advertisements based at least in part on the one or more specified product characteristics and comparing the respective location specifications associated with the plurality of product advertisements with a threshold distance with respect to the location corresponding to the user, and eliminating from inclusion in the subset at least one product offering advertisement which does not satisfy the threshold distance;
outputting the at least one channel for display;
selecting at least a first product advertisement of the subset of the plurality of product advertisements and outputting the first product advertisement for display;
receiving a user input following the output of the first product advertisement of the subset of the plurality of product advertisements;
modifying subsequent advertisement selection so that a selection of at least a second product advertisement is based at least in part on the user input responsive to the output of the first product advertisement; and
outputting the second product advertisement for display.
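The subset-selection step above combines a characteristic match with a location threshold test. A toy sketch where the advertisement records, distance units, and function name are all assumptions for illustration:

```python
# Keep only advertisements that match the user-specified characteristic and
# whose location specification lies within a threshold distance of the user.
ads = [
    {"id": "a", "category": "pizza", "location": 2.0},
    {"id": "b", "category": "pizza", "location": 25.0},
    {"id": "c", "category": "cars",  "location": 1.0},
]

def select_ads(ads, wanted_category, user_location, threshold):
    subset = []
    for ad in ads:
        near = abs(ad["location"] - user_location) <= threshold
        if ad["category"] == wanted_category and near:
            subset.append(ad["id"])
        # Ads failing either test are eliminated from the subset.
    return subset

subset = select_ads(ads, wanted_category="pizza",
                    user_location=0.0, threshold=10.0)
```

Ad "b" matches the characteristic but fails the threshold distance, so it is eliminated, mirroring the claimed elimination of non-qualifying product offering advertisements.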

US Pat. No. 10,171,862

INTERACTIVE VIDEO SEARCH AND PRESENTATION

1. An interactive video presentation search improvement method comprising:
receiving, by a processor of a remote control device configured to control functions for a video presentation device, inquiry data comprising a plurality of video object based questions, wherein said remote control device comprises a memory device, a display device, and a light fidelity (Li-Fi) hardware device comprising circuitry, a transceiver, and a light source device;
storing, by said processor, said inquiry data within said memory device;
receiving, by said processor from a user based on video data being presented via said video presentation device, a command associated with said inquiry data;
presenting, by said processor via said display device in response to said command, said plurality of video object based questions;
receiving, by said processor in response to said presenting, a selection for a first question of said plurality of video object based questions, said first question associated with a video object of said video data being presented via said video presentation device;
enabling, by said processor executing said circuitry, said light source device such that a light is visible on said video object being presented via said video presentation device;
identifying, by said processor based on results of said enabling, said video object with respect to said first question by:
retrieving via a video retrieval device of said remote control device, a visual image of said video object; and
transmitting via said Li-Fi hardware device to said video presentation device, said visual image, wherein said video presentation device analyzes said visual image and presents said information adjacent to said video object;
executing, by said processor based on results of said identifying and via said transceiver, an Internet based search associated with locating answers to said first question; and
presenting, by said processor to said user based on results of said executing, information associated with said first question with respect to said video object.

US Pat. No. 10,171,859

SYSTEMS, MEDIA, AND METHODS FOR PROVIDING AN ALGORITHMICALLY SORTED WATCHLIST OR WISHLIST

BLAB VENTURES LLC, Austi...

1. A computer-implemented system for maintaining an algorithmically sorted watchlist comprising:
a) a digital processing device comprising an operating system configured to perform executable instructions and a memory;
b) a computer program including instructions executable by the digital processing device to create an application, the application configured for:
i) presenting an interface allowing a first user to create a watchlist comprising a plurality of digital media items, the watchlist having an order indicating a priority for the first user to consume each item;
ii) presenting an interface allowing the first user to rate media items they have consumed;
iii) presenting an interface allowing the first user to recommend one or more consumed media items to a second user, the second user having a social connection to the first user within a social network;
iv) presenting an interface allowing the first user to ask the second user a question pertaining to a media item;
v) presenting an interface allowing the first user to discuss a media item with the second user; and
vi) algorithmically updating the watchlist, the update based on social graph distance between the first user and the second user and user activity including: the second user adding a media item to a watchlist, the second user consuming a media item, the second user rating a media item, the second user recommending a media item, the second user discussing a media item, and aggregated activity of a community of users within the social network, wherein a scope of the community of users is customizable by the first user indicating a number of users, a distance between users on the social graph, one or more demographic characteristics, or one or more groups within the social network.
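The algorithmic update in element vi) scores each watchlist item from connected users' activity, discounted by social graph distance. A toy sketch in which the activity weights, the discount rule, and the item names are all assumptions for illustration:

```python
# Each kind of second-user activity contributes a weight (hypothetical values).
ACTIVITY_WEIGHT = {"added": 1.0, "consumed": 2.0, "rated": 1.5, "recommended": 3.0}

def score(item_activity, graph_distance):
    # Closer social connections (smaller graph distance) count for more.
    base = sum(ACTIVITY_WEIGHT[a] for a in item_activity)
    return base / graph_distance

watchlist = {
    "film-x": score(["recommended", "consumed"], graph_distance=1),
    "film-y": score(["added"], graph_distance=1),
    "film-z": score(["recommended"], graph_distance=3),
}

# Re-sort the watchlist so higher-scoring items are consumed first.
ordered = sorted(watchlist, key=watchlist.get, reverse=True)
```

A strong recommendation from a distant connection ("film-z") ends up weighted the same as a weak signal from a close one ("film-y"), illustrating how graph distance shapes the ordering.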

US Pat. No. 10,171,858

UTILIZING BIOMETRIC DATA TO ENHANCE VIRTUAL REALITY CONTENT AND USER RESPONSE

ADOBE SYSTEMS INCORPORATE...

1. In a digital medium environment for providing an immersive virtual reality experience, a computer-implemented method of customizing digital content based on user biometrics, comprising:identifying biometric data corresponding to a user of a virtual reality device;
determining baseline biometric characteristics of the user of the virtual reality device based on the biometric data;
determining a stimulus category for the user of the virtual reality device from a plurality of stimulus categories based on the baseline biometric characteristics by: clustering a plurality of users based on a plurality of baseline biometric characteristics for the plurality of users, generating biometric data metrics for each of the plurality of stimulus categories based on the clustered plurality of users, and comparing the baseline biometric characteristics of the user to the biometric data metrics corresponding to the plurality of stimulus categories;
in response to identifying additional biometric data corresponding to the user of the virtual reality device, selecting virtual reality content to provide to the user of the virtual reality device based on the stimulus category and the additional biometric data; and
providing the selected virtual reality content via the virtual reality device.
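The category-determination step above (cluster users by baseline biometrics, derive a per-category metric, compare the new user against the metrics) can be sketched as nearest-centroid assignment. The 1-D heart-rate feature, category names, and values are assumptions for illustration.

```python
# Illustrative sketch of the claimed stimulus-category assignment: compute a
# biometric metric (here, the mean of a 1-D baseline such as resting heart
# rate) per category from clustered users, then assign a new user to the
# category with the closest metric. All specifics are assumed, not from the
# patent.

def cluster_centroids(baselines, categories):
    """Mean baseline value per stimulus category."""
    sums, counts = {}, {}
    for value, cat in zip(baselines, categories):
        sums[cat] = sums.get(cat, 0.0) + value
        counts[cat] = counts.get(cat, 0) + 1
    return {cat: sums[cat] / counts[cat] for cat in sums}

def assign_category(user_baseline, centroids):
    """Pick the stimulus category whose metric is closest to the user."""
    return min(centroids, key=lambda c: abs(centroids[c] - user_baseline))

centroids = cluster_centroids(
    [58, 62, 60, 85, 90, 88],
    ["calm", "calm", "calm", "excitable", "excitable", "excitable"],
)
print(assign_category(72, centroids))  # 'calm'
```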

US Pat. No. 10,171,857

NETWORK DATA DELIVERY SPONSORSHIP SIGNATURES IN STREAMING MANIFEST FILES

Verizon Patent and Licens...

1. A method, comprising:retrieving, by a media playing device and from a campaign portal that stores information regarding toll-free or reduced toll data campaigns for content hosted by content servers, a manifest file associated with streaming content, wherein the manifest file includes a first sequence of multiple first Uniform Resource Identifiers (URIs) that identify first network storage locations of multiple first content segments of the streaming content that correspond to a first bitrate, and wherein a first sponsorship signature of multiple sponsorship signatures is appended to at least one of the multiple first URIs of the manifest file;
retrieving, by the media playing device and from the campaign portal, the multiple first URIs, and the first sponsorship signature, from the manifest file;
sending, by the media playing device, the first sponsorship signature to a charging node in a Public Land Mobile Network (PLMN) for controlling the charging of data delivery associated with the streaming content to one of a user of the media playing device or a sponsor of the streaming content based on the first sponsorship signature, wherein the charging node determines whether a campaign associated with the first sponsorship signature is currently valid;
determining whether a bitrate associated with the media playing device has changed from the first bitrate to a second bitrate;
when the bitrate associated with the media playing device has changed from the first bitrate to the second bitrate:
retrieving, by the media playing device and from the campaign portal, a second sequence of multiple second URIs that identify second network storage locations of multiple second content segments of the streaming content that correspond to the second bitrate; and
engaging, via the PLMN by the media playing device using the multiple first URIs, in a content streaming session to receive the streaming content when the bitrate associated with the media playing device has not changed from the first bitrate to the second bitrate.

US Pat. No. 10,171,856

VIEWER-AUTHORED CONTENT ACQUISITION AND MANAGEMENT SYSTEM FOR IN-THE-MOMENT BROADCAST IN CONJUNCTION WITH MEDIA PROGRAMS

FX NETWORKS, LLC, Los An...

1. A method for providing viewer-derived content for broadcast presentation in conjunction with a broadcast of a media program by a provider of the media program, comprising:(a) receiving viewer registration information uniquely associated with a viewer via an application executing on a viewer device, the application for collecting the viewer registration information, viewer-authored content and viewer-authored content metadata associated with the viewer-authored content;
(b) receiving the viewer-authored content and the viewer-authored content metadata in a content management system (CMS);
(c) processing the viewer-authored content according to the viewer-authored content metadata to generate the viewer-derived content;
(d) queuing the viewer-derived content with other viewer-derived content generated from viewer-authored content from other viewers for consideration for the broadcast presentation in conjunction with the broadcast of the media program;
(e) determining if the viewer-derived content complies with broadcast regulations or quality standards;
(f) selecting the viewer-derived content for broadcast presentation in conjunction with a live broadcast of the media program if the viewer-derived content complies with the broadcast regulations or the quality standards; and
(g) providing the viewer-derived content for broadcast in conjunction with the live broadcast of the media program;
wherein:
the viewer-authored content comprises a plurality of independent media files, each media file comprising an intra-compressed image;
the step of processing the viewer-authored content according to the viewer-authored content metadata to generate the viewer-derived content comprises the steps of:
generating an animated image file from all of the plurality of independent media files;
generating a compressed video file from the animated image file, the compressed video file having a size smaller than the animated image file and mimicking and serving as a proxy for the animated image file; and
transmitting the compressed video file to the viewer device for presentation by the application executing on the viewer device.

US Pat. No. 10,171,855

METHOD AND APPARATUS FOR SYNCHRONIZING VIDEO LIVE BROADCAST

Huawei Technologies Co., ...

1. A method, comprising:sending, by a user equipment, a video stream synchronization request to a first network side device, wherein the video stream synchronization request requests to acquire a live video of the first network side device, the acquired live video to be played synchronously by the user equipment with the live video of the first network side device, wherein the first network side device receives the video stream synchronization request after it is forwarded to the first network side device from a base station that connects the user equipment to a network;
receiving, by the user equipment, a video stream playback position synchronization parameter sent by the first network side device, wherein the video stream playback position synchronization parameter comprises a playback position parameter at a video stream sending moment and a system frame number (SFN) at the video stream sending moment, and wherein the SFN at the video stream sending moment is added to the video stream playback position synchronization parameter by the base station that connects the user equipment to the network, or the SFN is added to the video stream playback position synchronization parameter by a second network side device that receives the playback position parameter from the first network side device;
acquiring, by the user equipment, an SFN at a video stream receiving moment; and
adjusting, by the user equipment according to the SFN at the video stream sending moment and the SFN at the video stream receiving moment, the playback position parameter at the video stream sending moment.
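The final adjustment step reduces to simple arithmetic once the two SFNs are known. The sketch below assumes LTE-style system frame numbers (10 ms radio frames, SFN wrapping at 1024); the claim itself does not fix these constants.

```python
# Minimal sketch of the claimed adjustment: advance the sender's playback
# position by the air-interface delay measured in system frames. The 10 ms
# frame length and modulo-1024 SFN are LTE assumptions.

FRAME_MS = 10       # one system frame = 10 ms (assumed)
SFN_MODULO = 1024   # SFN wraps 0..1023 (assumed)

def adjust_playback_position(position_ms, sfn_sent, sfn_received):
    """Advance the playback position by the SFN delta, handling wrap-around."""
    delta_frames = (sfn_received - sfn_sent) % SFN_MODULO
    return position_ms + delta_frames * FRAME_MS

print(adjust_playback_position(5000, 100, 112))  # 5120
print(adjust_playback_position(5000, 1020, 4))   # wraps: 8 frames -> 5080
```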

US Pat. No. 10,171,854

CONTENT SUPPLY DEVICE, CONTENT SUPPLY METHOD, PROGRAM, TERMINAL DEVICE, AND CONTENT SUPPLY SYSTEM

SATURN LICENSING LLC, Ne...

1. A content supply device that supplies a plurality of pieces of streaming data that include content of a same subject and differ in bit rate through a same channel, according to an adaptive streaming technique, the content supply device comprising:processing circuitry configured to
generate a zapping stream by delimiting the streaming data according to a timing and a duration in common with another channel and to supply the zapping stream to a reception side via a network,
generate a viewing stream by delimiting the streaming data according to a duration of an integer multiple of the duration of the zapping stream and to supply the viewing stream to the reception side via the network; and
generate a metafile that is for the reception side to receive the zapping stream and the viewing stream, wherein
the metafile includes a media presentation description, and
the circuitry is configured to generate the media presentation description by introducing an attribute indicating that an adaptation set corresponding to the viewing stream is asymmetrically aligned with an adaptation set corresponding to the zapping stream.

US Pat. No. 10,171,853

SYSTEMS AND METHODS FOR MANAGING AVAILABLE BANDWIDTH IN A HOUSEHOLD

Rovi Guides, Inc., San J...

1. A method for managing available bandwidth in a household, the method comprising:receiving, from a user device, a request to stream a first media asset;
retrieving, from stored metadata associated with the first media asset, a minimum bandwidth value for streaming the first media asset;
comparing the minimum bandwidth value to a household bandwidth value in a household bandwidth state database, wherein the household bandwidth value indicates a bandwidth currently available in the household;
in response to determining that the minimum bandwidth value is greater than the household bandwidth value, identifying a stream of a second media asset that is consuming bandwidth in the household;
determining a time remaining for completing the stream of the second media asset;
comparing a duration value of a third media asset in a media asset database with the time remaining, wherein the third media asset has an associated bandwidth value less than the household bandwidth value; and
in response to determining that the duration value of the third media asset is greater than the time remaining, generating for display on the user device a message that indicates the bandwidth currently available in the household is insufficient to stream the first media asset and that has an option to stream the third media asset instead of the first media asset.
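The claim's decision flow can be condensed into three outcomes: stream the requested asset, offer a lighter third asset that outlasts the wait, or simply wait for the in-progress stream to finish. The function below is a hedged sketch of that branching; parameter names and units are assumptions.

```python
# Sketch of the claimed bandwidth decision: compare the requested asset's
# minimum bandwidth to the household's available bandwidth; if short, check
# whether a lower-bandwidth third asset would outlast the time remaining on
# an in-progress stream. Names and units (Mbps, seconds) are illustrative.

def choose_action(min_bw, household_bw, time_remaining_s,
                  third_asset_duration_s):
    if min_bw <= household_bw:
        return "stream_first_asset"
    # Insufficient bandwidth; an existing stream frees up in time_remaining_s.
    if third_asset_duration_s > time_remaining_s:
        # Third asset outlasts the wait: offer it as an alternative.
        return "offer_third_asset"
    return "wait_for_bandwidth"

print(choose_action(8, 10, 300, 600))   # stream_first_asset
print(choose_action(12, 10, 300, 600))  # offer_third_asset
print(choose_action(12, 10, 600, 300))  # wait_for_bandwidth
```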

US Pat. No. 10,171,852

BROADCAST SIGNAL TRANSMISSION DEVICE, BROADCAST SIGNAL RECEPTION DEVICE, BROADCAST SIGNAL TRANSMISSION METHOD, AND BROADCAST SIGNAL RECEPTION METHOD

LG ELECTRONICS INC., Seo...

1. A method of providing a broadcast service, the method comprising:receiving a media content through an external input source, the media content including a video component having video watermarks and an audio component having audio watermarks;
extracting the audio watermarks and the video watermarks from the media content, wherein an audio watermark of the audio watermarks includes a watermark payload including server information and interval information;
generating a Uniform Resource Locator (URL) for a recovery data using the server information and the interval information;
requesting the recovery data to a recovery server using the generated URL, the recovery data including information on the media content; and
receiving the recovery data from the recovery server,
wherein the server information is used to identify the recovery server and the interval information identifies an interval of the media content in which the audio watermark is embedded,
wherein the recovery data includes an identifier of a broadcast stream for the media content and the interval information which was used to request the recovery data,
wherein the recovery data further includes a service element describing information about a broadcast service related to the media content,
wherein the service element includes a service identifier for identifying the broadcast service, version information indicating a version of service information for the broadcast service, Service Layer Signaling (SLS) protocol information and SLS protocol version information, and
wherein the SLS protocol information indicates whether a transport protocol used to transmit SLS of the broadcast service is a Real-Time Object Delivery over Unidirectional Transport (ROUTE) protocol or a MPEG Media Transport (MMT) protocol, and the SLS protocol version information indicates a version of the transport protocol.
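The URL-generation step in the claim combines the watermark's server information and interval information into a recovery request. The template below is purely illustrative; the actual URL format is defined by the applicable broadcast watermarking specification, not by this sketch.

```python
# Illustrative sketch of building a recovery-data URL from the audio
# watermark payload: server_info identifies the recovery server, and
# interval_code identifies the interval of the media content in which the
# watermark is embedded. The path template is an assumption.

def build_recovery_url(server_info, interval_code):
    return f"https://{server_info}/recovery/{interval_code}"

url = build_recovery_url("wm.example-broadcaster.com", "000512")
print(url)  # https://wm.example-broadcaster.com/recovery/000512
```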

US Pat. No. 10,171,851

VIDEO CONTENT DISTRIBUTION SYSTEM AND CONTENT MANAGEMENT SERVER

COLOPL, INC., Tokyo (JP)...

1. A video content distribution system comprising:a user terminal on which contents are viewable; and
a content management server connected to the user terminal via a communication network,
wherein the user terminal comprises:
a first reception unit configured to receive field-of-view video data from the content management server;
a display control unit configured to generate instructions for displaying on a display unit a field-of-view video based on the received field-of-view video data;
a viewpoint switch request signal generating unit configured to generate, in response to input operation on the user terminal, a viewpoint switch request signal for requesting a switch from a first viewpoint to a second viewpoint in the field-of-view video displayed on the display unit; and
a first transmission unit configured to transmit the generated viewpoint switch request signal to the content management server,
wherein the content management server comprises:
a second reception unit configured to receive the viewpoint switch request signal from the user terminal;
a viewing start time determining unit configured to determine a first viewing start time at which the display unit starts displaying the field-of-view video from the first viewpoint, and a second viewing start time at which the display unit starts displaying the field-of-view video from the second viewpoint;
a viewing stop time determining unit configured to determine a first viewing stop time at which the display unit stops displaying the field-of-view video from the first viewpoint, and a second viewing stop time at which the display unit stops displaying the field-of-view video from the second viewpoint;
a viewing period determining unit configured to determine a first viewing period in which the field-of-view video is displayed from the first viewpoint based on the first viewing start time and the first viewing stop time, and to determine a second viewing period in which the field-of-view video is displayed from the second viewpoint based on the second viewing start time and the second viewing stop time;
a total user charge amount calculating unit configured to determine a total amount to be charged to the user based on a combination of charges for a first viewing duration at the first viewpoint and a second viewing duration at the second viewpoint, wherein a charge per unit time for the first viewpoint is different from a charge per unit time for the second viewpoint; and
a second transmission unit configured to transmit to the user terminal field-of-view video data that is associated with one of the first viewpoint or the second viewpoint,
wherein the content management server is configured to continue transmitting the field-of-view video data that is associated with the first viewpoint at least for a time period from a time when the first transmission unit transmits the viewpoint switch request signal to the second transmission unit to a time when the first reception unit receives the field-of-view video data that is associated with the second viewpoint from the content management server, or at least for a time period from a time when the second reception unit receives the viewpoint switch request signal to a time when the second transmission unit transmits the field-of-view video data that is associated with the second viewpoint, and
wherein the viewing stop time determining unit and the viewing start time determining unit are configured to determine the first viewing stop time and the second viewing start time, respectively, when the second reception unit receives the viewpoint switch request signal.

US Pat. No. 10,171,850

TRUNK MANAGEMENT METHOD AND APPARATUS FOR VIDEO SURVEILLANCE SYSTEMS

Hangzhou Hikvision System...

1. A trunk management method for a video surveillance system, the video surveillance system including a first video server, a plurality of clients each having a predetermined priority, and a plurality of surveillance equipment items, the first video server accessing the plurality of surveillance equipment items over a bandwidth-limited backbone network, the method comprising:sending a video service request to a first surveillance equipment item of the plurality of surveillance equipment items by a first client of the plurality of clients;
establishing, by the first client, a new video session between the first video server and the first client;
determining whether there is an existing video session between the first video server and the first surveillance equipment item;
(i) if there is not an existing video session between the first video server and the first surveillance equipment item, determining whether a network bandwidth between the first video server and the plurality of surveillance equipment items reaches full load,
(a) if the network bandwidth does not reach full load, establishing, by the first video server, a new video session between the first surveillance equipment item and the first video server and updating a connection priority of the first surveillance equipment item as the priority of the first client, wherein the connection priority of each of the plurality of surveillance equipment items is a priority of the connection between the first video server and this surveillance equipment item, and
(b) if the network bandwidth has reached full load, querying a lowest connection priority among connection priorities of all surveillance equipment items connected to the first video server,
if the priority of the first client is higher than the lowest connection priority among the connection priorities of all surveillance equipment items connected to the first video server, disconnecting a connection between a surveillance equipment item having the lowest connection priority and the first video server, kicking away all clients connected to the surveillance equipment item having the lowest connection priority, and establishing, by the first video server, a new video session between the first surveillance equipment item and the first video server;
(ii) if there is an existing video session between the first video server and the first surveillance equipment item, querying priorities of all clients connected to the first surveillance equipment item;
(iii) determining whether the priority of the first client is higher than a highest priority among the priorities of all the clients connected to the first surveillance equipment item;
(a) if the priority of the first client is higher than a highest priority among the priorities of all the clients connected to the first surveillance equipment item, updating the connection priority of the first surveillance equipment item as the priority of the first client, and
(b) if the priority of the first client is not higher than the highest priority among the priorities of all the clients connected to the first surveillance equipment item, the existing connection priority of the first surveillance equipment item is not updated;
wherein the connection priority of each surveillance equipment item of the plurality of surveillance equipment items is set to the highest priority among the priorities of all clients that are connected to that surveillance equipment item.
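The admission logic above can be sketched as a small state machine over active trunk sessions: join an existing session (raising its connection priority if the new client outranks it), open a new one when bandwidth allows, or preempt the lowest-priority connection. Data structures and the session cap are illustrative assumptions standing in for the bandwidth-limited backbone.

```python
# Sketch of the claimed trunk admission logic. sessions maps each
# surveillance equipment item to its connection priority (the highest
# priority among its connected clients). max_sessions stands in for the
# full-load bandwidth check; all names are assumptions.

def request_video(sessions, equipment, client_priority, max_sessions):
    if equipment in sessions:
        # Existing session: raise the connection priority if the new
        # client outranks all clients already connected.
        if client_priority > sessions[equipment]:
            sessions[equipment] = client_priority
        return "joined_existing"
    if len(sessions) < max_sessions:
        sessions[equipment] = client_priority
        return "new_session"
    victim = min(sessions, key=sessions.get)
    if client_priority > sessions[victim]:
        del sessions[victim]          # disconnect lowest-priority trunk
        sessions[equipment] = client_priority
        return f"preempted_{victim}"
    return "rejected"

s = {"cam1": 3, "cam2": 1}
print(request_video(s, "cam3", 2, max_sessions=2))  # preempted_cam2
print(request_video(s, "cam4", 1, max_sessions=2))  # rejected
```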

US Pat. No. 10,171,849

BROADCAST SIGNAL TRANSMISSION DEVICE, BROADCAST SIGNAL RECEPTION DEVICE, BROADCAST SIGNAL TRANSMISSION METHOD, AND BROADCAST SIGNAL RECEPTION METHOD

LG ELECTRONICS INC., Seo...

1. A method of transmitting a broadcast signal by a broadcast signal transmitter, the method comprising:generating service signaling information for signaling a broadcast service and service data of the broadcast service, wherein the service data comprises service components included in the broadcast service and wherein one of the service components is a stereoscopic video which is encoded by Scalable High Efficiency Video Coding (SHVC);
generating a service list table, the service list table comprising bootstrap information for the service signaling information;
processing the service components, the service signaling information, and the service list table as Internet protocol (IP) packets;
processing the IP packets to generate a broadcast signal and transmitting the broadcast signal through a broadcast network,
wherein the stereoscopic video includes a multi-view view position Supplemental Enhancement Information (SEI) message indicating left and right view;
wherein view position information in the multi-view view position SEI message indicates orders of views from left to right; and
wherein the view position information is set to 0 for a left-most view and increasing by 1 for next view from left to right.

US Pat. No. 10,171,848

DIGITAL BROADCASTING RECEIVER AND METHOD FOR CONTROLLING THE SAME

LG Electronics Inc., Seo...

1. A method of processing data in a broadcast transmitting system, the method comprising:encoding data in at least one first data packet by an encoder,
wherein the at least one first data packet includes a first header and a first payload,
wherein the first header includes stuffing indication information related to stuffing data in the at least one first data packet,
wherein the first payload includes at least one second data packet,
wherein the at least one second data packet has a second header and a second payload,
wherein the second payload includes two or more IP (Internet protocol) packets which carry service components of a service, and
wherein the second header includes information for indicating a number of the two or more IP packets included in the second payload; and
transmitting a transmission frame including the encoded data by a transmitter,
wherein the transmission frame further includes fast service acquisition information providing information necessary to locate service signaling information, service type information of the service, and channel information of the service, and
wherein the service signaling information includes access information of the service components.

US Pat. No. 10,171,847

INFORMATION DEVICE AND DISTRIBUTION DEVICE

FUNAI ELECTRIC CO., LTD.,...

1. An information device comprising:a communication component that communicates with an external device; and
a controller that downloads video data from the external device through the communication component, performs processing to convert a format of the video data into a playable format and executes an application for playing the video data,
the controller sending a download request for each divided video data to sequentially download the divided video data, with the divided video data being obtained by dividing up the video data,
the processing by the controller to convert the format of the video data into the playable format including dividing the divided video data into a plurality of divided files and producing a playlist file for a playback instruction of the divided files every time the divided video data is downloaded.
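The conversion loop in the claim (split each downloaded chunk into files, then rewrite a playlist naming them in order) resembles incremental HLS-style playlisting. The M3U-like output and naming scheme below are assumptions for illustration, not the device's actual format.

```python
# Sketch of the claimed per-download conversion: each divided chunk of video
# is split into smaller files, and a playlist instructing playback order is
# regenerated every time a chunk arrives. Filenames and the minimal
# M3U-style header are illustrative assumptions.

def split_chunk(chunk_index, n_parts):
    """Return the filenames a downloaded chunk is divided into."""
    return [f"chunk{chunk_index:03d}_part{p}.ts" for p in range(n_parts)]

def write_playlist(all_parts):
    """Produce a playlist listing every part file in playback order."""
    return "\n".join(["#EXTM3U"] + all_parts)

parts = []
for chunk in range(2):               # two sequential downloads so far
    parts += split_chunk(chunk, 3)   # each chunk divided into 3 files
    playlist = write_playlist(parts) # playlist rewritten per download
print(playlist)
```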

US Pat. No. 10,171,846

SYSTEM AND METHOD FOR ROUTING MEDIA

1. A method for managing streaming of video content to a client device, the method comprising:providing the video content to a content distribution network for storage in a plurality of geographically separated resources of the content distribution network;
dynamically selecting one or more advertisement media clips based on statistical information associated with a user of the client device;
receiving, from the client device via a packet-based telecommunication network, signaling to have the stored video content streamed to the client device;
and
in response to the received signaling, transmitting to the client device, via the packet-based telecommunication network and in one or more files having a format compatible with a media player on the client device, (i) an identification of one or more of the resources of the content distribution network available to facilitate streaming of one or more segments of the stored video content to the client device, the identification being dependent at least in part on a relationship between a geographic location of the client device and geographic locations of the resources of the content distribution network, and (ii) an identification of an advertising server, the identification of the advertising server being dependent at least in part on a relationship between the geographic location of the client device and a geographic location of the advertising server,
wherein the one or more files, when processed by the client device, cause the client device to communicate with the identified one or more resources of the content distribution network and the advertising server to cause the one or more segments of the stored video to be streamed to the client device by the identified one or more resources of the content distribution network and cause the one or more selected advertisement media clips to be streamed from the advertising server to the client device.

US Pat. No. 10,171,845

VIDEO SEGMENT MANAGER AND VIDEO SHARING ACCOUNTS FOR ACTIVITIES PERFORMED BY A USER ON SOCIAL MEDIA

International Business Ma...

1. A computer program product comprising:one or more computer readable storage media and program instructions stored on at least one of the one or more computer readable storage media, the program instructions comprising:
program instructions to identify a plurality of multimedia files that are of interest to a user based on historical activity of the user viewing multimedia files, wherein multimedia files of interest are determined based on metadata stored on one or more databases;
program instructions to determine a ranking of individual multimedia files within the plurality of multimedia files that are of interest to the user based upon an algorithm for generating a novel multimedia file, wherein determining further comprises using a criterion for each of the plurality of the user interested multimedia files;
program instructions to create a catalog of the identified plurality of multimedia files that are of interest to the user, wherein the catalog includes the identified plurality of multimedia files organized into one or more groups of multimedia files based on user preferences and characteristics of the multimedia files;
program instructions to analyze a plurality of catalogs that include multimedia files that are of interest to the user based upon an algorithm, wherein the plurality of catalogs includes the created catalog of the identified plurality of multimedia files that are of interest to the user;
program instructions to select one or more multimedia file segments from the catalog of the identified plurality of multimedia files that are of interest to the user;
program instructions to, responsive to receiving, from the user, a selection of one or more multimedia file frames from the plurality of catalogs that include multimedia files that are of interest to the user, determine a similarity value for the selected one or more multimedia file frames according to the algorithm;
program instructions to generate the novel multimedia file, wherein the novel multimedia file is generated by combining the selected one or more multimedia file segments;
sending, by one or more processors, the one or more novel multimedia file frames to another user;
program instructions to determine an order for the plurality of multimedia files that are of interest to the user according to user preferences associated with the user, wherein the user preferences dictate a truncation of user interested multimedia file frames comprising the plurality of multimedia files that are of interest to the user;
program instructions to truncate a user interested multimedia file according to user preferences, program instructions to display the truncated user interested multimedia file frames, wherein displaying further comprises presenting a searchable index of a plurality of novel multimedia files;
program instructions to, responsive to displaying the novel multimedia file, identify one or more novel multimedia file frames included in the novel multimedia file of interest to the user;
program instructions to send the one or more novel multimedia file frames to another user; and
program instructions to update user preferences information of an inputted keyword by the user, a description in one of the plurality of user interested multimedia files, and a user profile associated with the user in another application.

US Pat. No. 10,171,844

METHOD AND SYSTEM OF ADVERTISING

HARBIN INSTITUTE OF TECHN...

1. A method of advertising, the method comprising:1) recording a source video using a camera, storing the source video in a memory unit, segmenting the source video into individual scenes using a first CPU by a clustering-based approach, and storing the individual scenes in the memory unit;
2) obtaining relevant information about objects in the source video for each individual scene using a second CPU and a first GPU by region-wise convolutional characteristics based detection, and storing the relevant information about the objects in the source video in the memory unit;
3) searching, in a database, for advertisement objects matching the objects in the source video by garment retrieval and a category-based strategy using a second GPU and a third CPU;
4) performing optimization processing of the advertisement objects matching the objects in the source video to obtain a candidate advertisement using a fourth CPU;
5) adding the candidate advertisement into the source video, optimizing a distance between the candidate advertisement and a target object in the source video and an area of overlapping regions between the candidate advertisement and the objects in the source video using a fifth CPU to obtain a video that comprises the candidate advertisement; and
6) distributing the video that comprises the candidate advertisement to a plurality of displays and displaying the video that comprises the candidate advertisement on the plurality of displays,
wherein the segmenting a source video into individual scenes using the first CPU by the clustering-based approach comprises the following steps:
1.1) calculating a number N of categories for clustering using the first CPU according to a duration of a video, and transmitting the number N from the first CPU to the memory unit;
1.2) randomly selecting N frames as initial centers using the first CPU according to the duration, and transmitting the initial centers from the first CPU to the memory unit;
1.3) calculating a distance from each frame to its time-proximal center, updating the center using the first CPU to obtain an updated center, and transmitting the updated center from the first CPU to the memory unit; and
1.4) repeating 1.3) until convergence or reaching a maximum running number.
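Steps 1.1 through 1.4 describe a k-means-style loop: choose N from the video duration, seed N random centers, repeatedly reassign frames to their nearest center and update, and stop on convergence or an iteration cap. The sketch below uses a 1-D frame feature as a stand-in; everything beyond the claim's outlined loop is an assumption.

```python
# Sketch of the claimed clustering-based scene segmentation (steps 1.1-1.4).
# Frames are represented by a single scalar feature (e.g. mean brightness);
# seconds_per_scene, the feature, and the seed are illustrative assumptions.
import random

def segment_scenes(frames, duration_s, seconds_per_scene=60, max_iters=100):
    n = max(1, round(duration_s / seconds_per_scene))   # step 1.1: choose N
    random.seed(0)
    centers = random.sample(frames, n)                  # step 1.2: random init
    for _ in range(max_iters):                          # step 1.4: iterate
        # Step 1.3: assign each frame to its nearest center, then update.
        clusters = [[] for _ in range(n)]
        for f in frames:
            i = min(range(n), key=lambda k: abs(f - centers[k]))
            clusters[i].append(f)
        new_centers = [sum(c) / len(c) if c else centers[i]
                       for i, c in enumerate(clusters)]
        if new_centers == centers:                      # convergence
            break
        centers = new_centers
    return sorted(centers)

frames = [10, 11, 12, 50, 52, 90, 91, 93]
print(segment_scenes(frames, duration_s=180))  # three scene centers
```

A production segmenter would cluster multi-dimensional frame descriptors rather than a scalar, but the control flow matches the claim's four sub-steps.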

US Pat. No. 10,171,843

VIDEO SEGMENT MANAGER

International Business Ma...

1. A method comprising:identifying, by one or more processors, a plurality of multimedia files that are of interest to a user based on historical activity of the user viewing multimedia files;
determining, by one or more processors, an order for the plurality of multimedia files that are of interest to the user according to user preferences associated with the user, wherein the user preferences dictate a truncation of user interested multimedia file frames comprising the plurality of multimedia files that are of interest to the user;
creating, by one or more processors, a catalog of the identified plurality of multimedia files that are of interest to the user, wherein the catalog includes the identified plurality of multimedia files organized into one or more groups of multimedia files based on user preferences and characteristics of the multimedia files;
selecting, by one or more processors, one or more multimedia file segments from the catalog of the identified plurality of multimedia files that are of interest to the user;
generating, by one or more processors, a novel multimedia file, wherein the novel multimedia file is generated by combining the selected one or more multimedia file segments;
truncating, by one or more processors, a user interested multimedia file according to user preferences; and
displaying, by one or more processors, the truncated user interested multimedia file frames.


US Pat. No. 10,171,842

HRD DESCRIPTOR AND BUFFER MODEL OF DATA STREAMS FOR CARRIAGE OF HEVC EXTENSIONS

QUALCOMM Incorporated, S...

1. A method of processing video data, the method comprising:obtaining a data stream comprising a plurality of elementary streams and a High Efficiency Video Coding (HEVC) timing and Hypothetical Reference Decoder (HRD) descriptor, wherein the HEVC timing and HRD descriptor comprises a target schedule index syntax element indicating an index of a delivery schedule;
identifying, based on a set of parameters, a syntax element in an array of syntax elements in a video parameter set (VPS), wherein:
the VPS comprises a plurality of HRD parameters syntax structures, wherein each respective HRD parameters syntax structure of the plurality of HRD parameters syntax structures comprises a respective set of HRD parameters,
each respective syntax element of the array of syntax elements specifies an index of an HRD parameters syntax structure in the plurality of HRD parameters syntax structures, and
the set of parameters comprises a parameter having a value equal to a value of the target schedule index syntax element; and
identifying, based on an index specified by the identified syntax element, a particular HRD parameters syntax structure in the plurality of HRD parameters syntax structures as being applicable to a particular elementary stream that is part of the operation point, the plurality of elementary streams including the particular elementary stream.

US Pat. No. 10,171,841

METHOD AND DEVICE FOR ENCODING/DECODING VIDEO BITSTREAM

ZHEJIANG UNIVERSITY, Han...

1. A decoding method, comprising:decoding, by a processor, a slice bitstream to obtain parameter set indication information carried in the slice bitstream and used for indicating a camera parameter set;
acquiring, by the processor, camera parameters from the camera parameter set indicated by the parameter set indication information; and
decoding, by the processor, the slice bitstream according to the acquired camera parameters;
wherein each camera parameter set comprises V*F*M camera parameters, V represents the number of the viewpoints comprised in the camera parameter set, F represents the number of the camera parameter subsets comprised in the camera parameter set, and M represents the number of the types of the camera parameters comprised in the camera parameter set, and V, F and M are positive integers, wherein the camera parameters corresponding to the different viewpoints at the same moment form a camera parameter subset.
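The V*F*M organization above implies a straightforward flat-table lookup. A sketch under one assumed memory layout (subsets/moments outermost, viewpoints next, parameter types innermost; the claim does not fix the ordering):

```python
def camera_param(params, V, F, M, viewpoint, subset, ptype):
    """Look up one camera parameter in a flat V*F*M table.
    Layout assumption: subset (moment) is the outer dimension,
    viewpoint the middle, parameter type the inner."""
    assert len(params) == V * F * M
    assert 0 <= viewpoint < V and 0 <= subset < F and 0 <= ptype < M
    return params[(subset * V + viewpoint) * M + ptype]

def camera_param_subset(params, V, F, M, subset):
    """One camera parameter subset: the parameters of all viewpoints
    at the same moment, per the claim's definition."""
    start = subset * V * M
    return params[start:start + V * M]
```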

US Pat. No. 10,171,840

METHOD FOR PRODUCING VIDEO CODING AND PROGRAMME-PRODUCT

SIEMENS AKTIENGESELLSCHAF...

1. Method for video coding with the procedural steps:provision of a prediction error matrix;
conversion of the prediction error matrix by coefficient sampling into a series of symbols; and
performing context-adaptive arithmetic encoding of the symbols on the basis of symbol frequencies, for which the distribution is selected depending on an already encoded symbol;
wherein:
the context-adaptive arithmetic encoding of the symbols includes, for a symbol being encoded, selecting from different predetermined distributions of symbol frequencies a particular predetermined distribution of symbol frequencies based on the symbol encoded immediately beforehand; and
the predetermined distribution of symbol frequencies indicates the likelihood of different types of symbols occurring immediately following the type of the symbol encoded immediately beforehand based on known statistical interdependencies between different types of symbols occurring in succession.

US Pat. No. 10,171,839

GENERATING TRANSFORMS FOR COMPRESSING AND DECOMPRESSING VISUAL DATA

Massachusetts Institute o...

1. A method for encoding data, the method comprising:encoding a residual of a first portion of an array of data to generate a first set of coefficients;
decoding the first set of coefficients to generate a decoded representation of the first portion;
computing an estimated covariance function for a residual of a second portion of the array of data based on a model that includes a gradient of a plurality of boundary data values located on a boundary of the decoded representation of the first portion;
computing a set of transform basis functions from the estimated covariance function; and
encoding the residual of the second portion using a first transform that uses the computed set of transform basis functions, including generating a predicted representation of the second portion, and applying the first transform to a difference between the second portion and the predicted representation of the second portion.
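The covariance-to-basis step above is a KLT-style construction: estimate a covariance for the residual, then take its eigenvectors as transform basis functions. A pure-Python sketch that substitutes a simple AR(1) covariance model for the claim's boundary-gradient model and recovers only the first basis function by power iteration (a full transform would take all eigenvectors):

```python
def ar1_covariance(n, rho=0.95):
    """Estimated covariance matrix for an n-sample residual under an
    AR(1) model (an illustrative stand-in for the claim's
    boundary-gradient-based model): Cov[i][j] = rho**|i-j|."""
    return [[rho ** abs(i - j) for j in range(n)] for i in range(n)]

def dominant_basis(cov, iters=200):
    """First transform basis function of `cov` via power iteration."""
    n = len(cov)
    v = [1.0] * n
    for _ in range(iters):
        w = [sum(cov[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v
```

For strongly correlated residuals (rho near 1) the dominant basis function is nearly flat, i.e. DC-like, which matches the intuition that such a transform compacts smooth residual energy into few coefficients.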

US Pat. No. 10,171,838

METHOD AND APPARATUS FOR PACKING TILE IN FRAME THROUGH LOADING ENCODING-RELATED INFORMATION OF ANOTHER TILE ABOVE THE TILE FROM STORAGE DEVICE

MEDIATEK INC., Hsin-Chu ...

1. A method for video encoding a frame divided into a plurality of tiles, each having a plurality of blocks, the method comprising:storing encoding-related information derived from a plurality of blocks in a last block row of a first tile of the frame into a storage device, wherein the encoding-related information comprises a plurality of encoding-related data derived from the blocks in the last block row of the first tile, respectively;
reading the encoding-related information from the storage device; and
performing entropy encoding upon blocks in a first block row of a second tile of the frame based at least partly on the encoding-related information read from the storage device;
wherein the first block row of the second tile is vertically adjacent to the last block row of the first tile, and the entropy encoding of the first block row of the second tile is started before entropy encoding of the last block row of the first tile is accomplished;
wherein the encoding-related information is stored in the storage device before the entropy encoding is performed upon any block in the frame;
wherein the frame is encoded using a first-stage encoding flow and a second-stage encoding flow following the first-stage encoding flow; each of the first-stage encoding flow and the second-stage encoding flow is applied to all blocks within the frame; entropy encoding is performed in the second-stage encoding flow only; the step of storing the encoding-related information into the storage device is performed in the first-stage encoding flow; and the step of reading the encoding-related information from the storage device is performed in the second-stage encoding flow;
wherein the first-stage encoding flow comprises generating a probability table for the frame; each of the blocks in the last block row of the first tile and the blocks in the first block row of the second tile is split into one or more partitions for coding; and the step of performing the entropy encoding upon blocks in the first block row of the second tile comprises:
when encoding a syntax element of a current partition in the first block row of the second tile, determining a table index based at least partly on encoding-related information of at least one specific partition in the last block row of the first tile, wherein the at least one specific partition is located above the current partition; and
selecting a probability set from the probability table for encoding the syntax element of the current partition according to the table index.

US Pat. No. 10,171,837

PREDICTIVE VALUE DATA SET COMPRESSION

HERE Global B.V., Eindho...

1. An apparatus comprising:at least one processor; and
at least one memory including computer program code and operable to store a data set comprising values for a plurality of pixels in an image, the values relating to relative distances of objects represented in the image;
the at least one memory and the computer program code configured to, with the at least one processor, cause the apparatus at least to:
identify an image of a geographic area;
identify a depth data set collected by a distance ranging device, the depth data set comprising depth values correlated to pixels in the image of the geographic area, wherein the depth data set corresponds to one or more objects in the image;
calculate a predicted value for an exponent for a particular point of the image based on values of neighboring points of the particular point and a predicted value for a mantissa of the particular point of the image based on values of neighboring points of the particular point, where the predicted value for the mantissa is calculated based on an identified subset of neighboring points of the image having an exponent value within a predetermined range of the exponent value of the particular point;
calculate a comparator between the predicted value for the exponent for the particular point and an actual value for the exponent for the particular point and a comparative value between an actual mantissa and the predicted mantissa of the particular point; and
at least one of store or communicate the comparator for the particular point to or from the memory.
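The exponent/mantissa prediction above splits each floating-point depth value into its two components and predicts each from the neighbors, with the mantissa predictor restricted to neighbors whose exponent is close to the predicted exponent. A sketch using `math.frexp`, with simple averaging predictors as an illustrative assumption (the claim does not fix the predictor):

```python
import math

def predict_exp_mantissa(neighbors, exp_range=1):
    """Predict the exponent and mantissa of a depth value from its
    neighboring points: the exponent prediction averages all
    neighbors, while the mantissa prediction uses only the neighbors
    whose exponent lies within `exp_range` of the predicted one."""
    parts = [math.frexp(v) for v in neighbors]      # (mantissa, exponent)
    exps = [e for _, e in parts]
    pred_exp = round(sum(exps) / len(exps))
    close = [m for m, e in parts if abs(e - pred_exp) <= exp_range]
    pred_man = sum(close) / len(close) if close else 0.5
    return pred_exp, pred_man

def residuals(value, neighbors):
    """The comparators actually stored or communicated: predicted
    minus actual exponent and mantissa of the particular point."""
    man, exp = math.frexp(value)
    pred_exp, pred_man = predict_exp_mantissa(neighbors)
    return pred_exp - exp, pred_man - man
```

When prediction is good the residuals cluster near zero, which is what makes them cheap to entropy-code.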

US Pat. No. 10,171,836

METHOD AND DEVICE FOR PROCESSING VIDEO SIGNAL

LG ELECTRONICS INC., Seo...

1. A method for processing a video signal by a decoding apparatus, the method comprising:obtaining parallel processing information from the video signal, the parallel processing information indicating a size of a current parallel processing unit;
obtaining an inter-view motion vector of a current coding unit included in the current parallel processing unit using an inter-view motion vector of a neighboring block of the current parallel processing unit and the size of the current parallel processing unit, wherein the neighboring block is adjacent to the current parallel processing unit and not included in the current parallel processing unit, and wherein the current coding unit includes one or more current prediction units; and
obtaining motion vectors of the one or more current prediction units in parallel using the inter-view motion vector of the current coding unit, wherein the inter-view motion vector indicates a corresponding block of a current prediction unit, the corresponding block being positioned in a different view from a current view of the current prediction unit,
wherein obtaining the motion vectors of the one or more current prediction units in parallel includes:
generating a motion vector list for the current prediction unit, wherein a motion vector of the corresponding block is added in the motion vector list when a picture order count (POC) of a reference picture for the corresponding block in the different view is identical to a POC of a reference picture for the current prediction unit in the current view, and
wherein the motion vector of the corresponding block is not added in the motion vector list when the POC of the reference picture for the corresponding block in the different view is different from the POC of the reference picture for the current prediction unit in the current view, and
obtaining a motion vector of the current prediction unit from the motion vector list.
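The per-PU derivation above is parallelisable because every prediction unit reuses the single inter-view motion vector obtained at the coding-unit level, and the corresponding block's motion vector is gated by a POC match. A sketch with assumed dictionary fields (`spatial_candidates`, `corr_ref_poc`, `cur_ref_poc`, `mv_index` are illustrative names, not from the claim):

```python
def derive_pu_motion_vectors(cu_interview_mv, pus):
    """Derive motion vectors for all PUs of a coding unit: the loop
    body is independent per PU, so the PUs can be processed in
    parallel. The corresponding block's MV joins a PU's list only
    when the POCs of the two reference pictures match."""
    results = []
    for pu in pus:  # independent iterations; parallelisable
        mv_list = list(pu["spatial_candidates"])
        if pu["corr_ref_poc"] == pu["cur_ref_poc"]:
            # POC match: admit the corresponding block's motion vector
            mv_list.insert(0, cu_interview_mv)
        results.append(mv_list[pu["mv_index"]])
    return results
```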

US Pat. No. 10,171,835

METHOD AND APPARATUS FOR ENCODING AND DECODING IMAGE

Samsung Display Co., Ltd....

1. A method of encoding video data comprising a plurality of pictures, the method comprising:storing data of at least one first picture in the video data that is already encoded; and
referring to the stored data and using intra-prediction to encode blocks in a current picture following the first picture,
wherein the storing the data comprises:
calculating k similarity values by comparing each pixel data of a first horizontal line of each of k previously encoded pictures with pixel data of a first horizontal line of the current picture, wherein k is a natural number greater than one;
selecting one of the k previously encoded pictures as corresponding to a biggest similarity value of the k calculated similarity values; and
storing pixel data of a first horizontal line of the selected previously encoded picture as third reference data, and
wherein the referring to the stored data and using intra-prediction to encode the blocks in the current picture comprises:
loading the third reference data; and
intra-predicting a block comprising pixels of the first horizontal line of the current picture based on the third reference data.
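The reference-selection step above picks, among the first horizontal lines of k previously encoded pictures, the one most similar to the current picture's first line. A sketch using negated sum of absolute differences as the similarity value (an illustrative metric; the claim only requires a similarity comparison):

```python
def select_reference_line(current_line, previous_lines):
    """Return the first-line pixel data of the previously encoded
    picture with the biggest similarity value to the current
    picture's first line; the result would be stored as the third
    reference data for intra-prediction."""
    def similarity(line):
        # Negated SAD: larger value means more similar.
        return -sum(abs(a - b) for a, b in zip(current_line, line))
    return max(previous_lines, key=similarity)
```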

US Pat. No. 10,171,834

METHODS AND APPARATUS FOR INTRA PICTURE BLOCK COPY IN VIDEO COMPRESSION

MEDIATEK INC., Hsinchu (...

1. A method of intra picture block copy in video compression, comprising:identifying a first block of pixels of a picture as a reference block for reconstructing a second block of pixels of the picture;
determining an overlapped region of the second block that overlaps with the first block, the first block having a first corner, and the second block having a second corner corresponding to the first corner and overlapping the first block;
splitting the overlapped region into a first portion and a second portion along a division line that is parallel to a block vector or a diagonal line of the overlapped region, the block vector indicating a spatial relationship between the first corner of the first block and the second corner of the second block, and the diagonal line of the overlapped region being defined based on a third corner of the overlapped region that is at a same position as the second corner of the second block;
reconstructing pixels in the first portion of the overlapped region based on a first set of pixels of the first block in a manner that values of the reconstructed pixels in the first portion change in a direction from a border of the overlapped region adjacent to the first set of pixels of the first block to the division line; and
reconstructing pixels in the second portion of the overlapped region based on a second set of pixels of the first block in a manner that values of the reconstructed pixels in the second portion change in a direction from a border of the overlapped region adjacent to the second set of pixels of the first block to the division line,
wherein the first set of pixels of the first block is adjacent to the first portion of the overlapped region, and the second set of pixels of the first block is adjacent to the second portion of the overlapped region.

US Pat. No. 10,171,833

ADAPTIVE SWITCHING OF COLOR SPACES, COLOR SAMPLING RATES AND/OR BIT DEPTHS

Microsoft Technology Lice...

1. A computing device comprising:one or more buffers configured to store video in a sequence; and
a video encoder or image encoder configured to perform operations comprising:
encoding the video in the sequence, including:
switching color spaces, color sampling rates and/or bit depths spatially and/or temporally between at least some units of the video within the sequence during the encoding, the color spaces including an RGB-type color space and a YCoCg color space, wherein the encoder is configured to select between:
for lossy coding, using color space conversion operations to switch between the RGB-type color space and the YCoCg color space; and
for lossless coding, using invertible color space conversion operations to switch between the RGB-type color space and the YCoCg color space; and
selectively performing deblock filtering of previously reconstructed content according to one or more rules, including adjusting strength of the deblock filtering depending on whether primary components of two adjacent blocks have non-zero residual values; and
outputting encoded data in a bitstream, the encoded data including one or more signals indicating how the color spaces, the color sampling rates and/or the bit depths switch between the at least some units of the video within the sequence.

US Pat. No. 10,171,832

MOVING PICTURE CODING DEVICE, MOVING PICTURE CODING METHOD, AND MOVING PICTURE CODING PROGRAM, AND MOVING PICTURE DECODING DEVICE, MOVING PICTURE DECODING METHOD, AND MOVING PICTURE DECODING PROGRAM

JVC KENWOOD Corporation, ...

1. A moving picture decoding device that decodes a bitstream in which a moving picture is coded using motion compensation in units of blocks acquired by dividing each picture of the moving picture, the moving picture decoding device comprising:a decoding unit configured to decode information representing a motion vector predictor to be selected from a motion vector predictor candidate list having a predefined number of motion vector predictor candidates, together with a motion vector difference;
a motion vector predictor candidate generating unit configured to derive a plurality of motion vector predictor candidates by making a prediction based on a motion vector of one of decoded blocks that are neighboring to a decoding target block in space or time and construct a motion vector predictor candidate list;
a motion vector predictor redundant candidate removing unit configured to compare whether values of vectors are the same among motion vector predictor candidates predicted from a decoded block neighboring in space and remove the motion vector predictor candidates having the same values of vectors from the motion vector predictor candidate list with at least one being left without comparing whether or not a value of vector of a motion vector predictor predicted from a decoded block that is neighboring in space and a value of vector of a motion vector predictor predicted from a decoded block neighboring in time are the same;
a motion vector predictor candidate adding unit configured to repeatedly add the motion vector predictor candidates to the motion vector predictor candidate list until the number of motion vector predictor candidates reaches the predefined number if the number of the motion vector predictor candidates in the motion vector predictor candidate list is smaller than the predefined number, whereby the number of the motion vector predictor candidates in the motion vector predictor candidate list reaches the predefined number;
a motion vector predictor candidate number limiting unit configured to remove the motion vector predictor candidates exceeding the predefined number from the motion vector predictor candidate list if the number of the motion vector predictor candidates in the motion vector predictor candidate list is greater than the predefined number, whereby the number of the motion vector predictor candidates in the motion vector predictor candidate list is limited to the predefined number;
a motion vector predictor selecting unit configured to select a motion vector predictor from the motion vector predictor candidate list based on information representing the decoded motion vector predictor to be selected; and
a motion vector calculating unit configured to calculate a motion vector used for motion compensation by adding the selected motion vector predictor and the motion vector difference together,
wherein the motion vector predictor candidate adding unit repeatedly adds more than one (0,0) motion vectors allowing duplication as the motion vector predictor candidates, and
wherein the motion vector predictor redundant candidate removing unit compares whether values of vectors are the same between a first motion vector predictor candidate predicted from a first decoded block neighboring in space and a second motion vector predictor candidate predicted from a second decoded block neighboring in space and removes, when the values of vectors are the same, the second motion vector predictor candidate from the motion vector predictor candidate list.

US Pat. No. 10,171,831

MOVING PICTURE CODING DEVICE, MOVING PICTURE CODING METHOD, AND MOVING PICTURE CODING PROGRAM, AND MOVING PICTURE DECODING DEVICE, MOVING PICTURE DECODING METHOD, AND MOVING PICTURE DECODING PROGRAM

JVC KENWOOD Corporation, ...

1. A moving picture decoding device that decodes a bitstream in which a moving picture is coded using motion compensation in units of blocks acquired by dividing each picture of the moving picture, the moving picture decoding device comprising:a decoding unit configured to decode information representing a motion vector predictor to be selected from a motion vector predictor candidate list having a predefined number of motion vector predictor candidates, together with a motion vector difference;
a motion vector predictor candidate generating unit configured to derive a plurality of motion vector predictor candidates by making a prediction based on a motion vector of one of decoded blocks that are neighboring to a decoding target block in space or time and construct a motion vector predictor candidate list;
a motion vector predictor redundant candidate removing unit configured to compare whether values of vectors are the same among motion vector predictor candidates predicted from a decoded block neighboring in space and remove the motion vector predictor candidates having the same values of vectors from the motion vector predictor candidate list with at least one being left without comparing whether or not a value of vector of a motion vector predictor predicted from a decoded block that is neighboring in space and a value of vector of a motion vector predictor predicted from a decoded block neighboring in time are the same;
a motion vector predictor candidate adding unit configured to repeatedly add the motion vector predictor candidates to the motion vector predictor candidate list until the number of motion vector predictor candidates reaches the predefined number if the number of the motion vector predictor candidates in the motion vector predictor candidate list is smaller than the predefined number, whereby the number of the motion vector predictor candidates in the motion vector predictor candidate list reaches the predefined number;
a motion vector predictor selecting unit configured to select a motion vector predictor from the motion vector predictor candidate list based on information representing the decoded motion vector predictor to be selected; and
a motion vector calculating unit configured to calculate a motion vector used for motion compensation by adding the selected motion vector predictor and the motion vector difference together,
wherein the motion vector predictor candidate adding unit repeatedly adds more than one (0,0) motion vectors allowing duplication as the motion vector predictor candidates.
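The candidate-list rules in this claim and the preceding one combine three operations: duplicates are removed only among the spatially predicted candidates (spatial and temporal candidates are never compared with each other), (0,0) vectors are then added repeatedly, duplicates allowed, until the list reaches the predefined number, and any excess is trimmed to that number. A sketch, with the list size of 2 as an illustrative value:

```python
def build_mvp_list(spatial_cands, temporal_cands, predefined_num=2):
    """Construct a motion vector predictor candidate list of exactly
    `predefined_num` entries, per the claimed adding/removing rules."""
    mvp = []
    for mv in spatial_cands:
        if mv not in mvp:          # spatial-vs-spatial comparison only
            mvp.append(mv)
    mvp.extend(temporal_cands)     # added without any comparison
    while len(mvp) < predefined_num:
        mvp.append((0, 0))         # padding may duplicate (0,0)
    return mvp[:predefined_num]    # limit to the predefined number
```

Note that a temporal candidate equal to a spatial one survives, because the redundancy check deliberately skips spatial-vs-temporal comparisons.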

US Pat. No. 10,171,830

MOVING PICTURE CODING DEVICE, MOVING PICTURE CODING METHOD, AND MOVING PICTURE CODING PROGRAM, AND MOVING PICTURE DECODING DEVICE, MOVING PICTURE DECODING METHOD, AND MOVING PICTURE DECODING PROGRAM

JVC KENWOOD Corporation, ...

1. A moving picture coding device that codes moving picture data in units of blocks acquired by partitioning each picture of the moving picture data, the moving picture coding device comprising:a candidate list constructing unit configured to derive motion information of a coded block included in a picture that is different in time from a picture including a coding target block that is a target for the coding, derive a temporal motion information candidate of the coding target block based on the derived motion information of the coded block, derive a plurality of candidates based on motion information of a plurality of coded neighboring blocks located at predetermined positions neighboring to the coding target block in space, derive spatial motion information candidates based on the plurality of derived candidates, and construct a list of motion information candidates including the derived temporal motion information candidate and the derived spatial motion information candidates; and
a coding unit configured to code information representing whether or not the coding is performed in a merging prediction mode, code an index designating a predetermined motion information candidate included in the list in a case where the coding is determined to be performed in the merging prediction mode, derive the motion information of the coding target block based on the motion information candidate designated by the coded index, and code the coding target block,
wherein the candidate list constructing unit does not compare all possible combinations of the spatial motion information candidates with each other but compares predefined partial combinations of the spatial motion information candidates with each other and, in a case where there are candidates having the same moving information out of the candidates, derives one spatial motion information candidate from the candidates of which the motion information is the same.

US Pat. No. 10,171,829

PICTURE ENCODING DEVICE AND PICTURE ENCODING METHOD

JVC KENWOOD Corporation, ...

1. A picture encoding device that encodes a picture and encodes a difference quantization parameter in a unit of a quantization coding block which is divided from the picture and is a management unit of a quantization parameter, comprising:a quantization parameter derivation unit that derives a quantization parameter of a first quantization coding block;
a prediction quantization parameter derivation unit that derives a prediction quantization parameter using the quantization parameters of two quantization coding blocks which precede the first quantization coding block in order of encoding;
a difference quantization parameter derivation unit that derives a difference quantization parameter of the first quantization coding block, using a difference between the quantization parameter of the first quantization coding block and the prediction quantization parameter; and
an encoder that encodes the difference quantization parameter,
wherein the prediction quantization parameter derivation unit derives the prediction quantization parameter using two quantization parameters: one quantization parameter of a previous quantization coding block which immediately precedes the first quantization coding block to be encoded in order of encoding and another quantization parameter of a quantization coding block which precedes the previous quantization coding block in order of encoding and is not spatially neighboring the first quantization coding block to be encoded.
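The prediction above uses the QPs of the two blocks preceding the current one in encoding order. A round-trip sketch, with a rounded average as the illustrative combination (the claim specifies which two QPs are used, not how they are combined) and an assumed fallback `init_qp` for the first two blocks:

```python
def encode_delta_qps(qps, init_qp=26):
    """Difference quantization parameters: each block's predicted QP
    is derived from the QPs of the immediately preceding block and
    the block before that, in encoding order."""
    deltas = []
    for i, qp in enumerate(qps):
        if i >= 2:
            pred = (qps[i - 1] + qps[i - 2] + 1) // 2  # rounded average
        else:
            pred = init_qp  # not enough preceding blocks yet
        deltas.append(qp - pred)
    return deltas

def decode_delta_qps(deltas, init_qp=26):
    """Inverse: reconstruct the QPs from the difference stream using
    the same predictor."""
    qps = []
    for i, d in enumerate(deltas):
        pred = (qps[i - 1] + qps[i - 2] + 1) // 2 if i >= 2 else init_qp
        qps.append(pred + d)
    return qps
```

Because encoder and decoder share the predictor, only the small differences need to be entropy-coded.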

US Pat. No. 10,171,828

MODIFICATION OF UNIFICATION OF INTRA BLOCK COPY AND INTER SIGNALING RELATED SYNTAX AND SEMANTICS

ARRIS Enterprises LLC, S...

1. In a processing device for processing a video sequence having a plurality of pictures, each picture having a plurality of slices, a method of processing a slice of a current picture, comprising:determining when a slice of the current picture excludes any predictive coding derived from another picture;
when the slice of the current picture is designated to exclude any predictive coding derived from another picture, setting a flag to a first logic state;
when the slice of the current picture is not designated to exclude any predictive coding derived from another picture, setting the flag to a second logic state; and
bypassing at least a portion of predicted weight processing of inter picture processing of the slice of the current picture according to the logic state of the flag when coding if the flag is in the first logic state,
wherein the slice of the current picture is of one of an intra coding type (I-slice), a predictive coding type (P-slice) and bi-predictive coding type (B-slice),
wherein the processing of the slice is performed according to a slice header having inter picture processing,
wherein bypassing at least a portion of the predicted weight processing of the inter picture processing of the current picture according to the logic state of the flag comprises:
skipping at least a portion of the inter picture processing of the slice of the current picture including the at least a portion of the predicted weight processing according to the flag and a determination that the slice is a P-type slice or a B-type slice, and
wherein the skipped at least a portion of the inter picture processing comprises:
B-slice motion vector difference signaling;
entropy coding method signaling processing;
collocated reference picture signaling;
weighted prediction signaling processing; and
integer motion vector signaling processing.

US Pat. No. 10,171,827

IMAGE CODING METHOD AND IMAGE DECODING METHOD

SUN PATENT TRUST, New Yo...

1. An image decoding device that decodes an image having a plurality of blocks, said image decoding device comprising:a processor; and
a memory having a program stored thereon, the program causing the processor to execute operations including
decoding the blocks sequentially based on probability information indicating a data occurrence probability,
wherein, in the decoding, the probability information is updated depending on data of a first target block to be decoded among the blocks, after decoding the first target block and before decoding a second target block to be decoded next among the blocks, and
wherein, in the decoding, a third target block in the blocks is decoded based on the probability information (i) which is updated depending on the data of the first target block, the first target block being a neighboring block above the third target block and (ii) which is not updated depending on the data of the second target block, and
wherein the third target block (i) is located on a left end of the image, (ii) is different from the second target block, and (iii) is decoded after decoding the first target block.

US Pat. No. 10,171,826

METHOD AND APPARATUS FOR ENCODING RESIDUAL BLOCK, AND METHOD AND APPARATUS FOR DECODING RESIDUAL BLOCK

SAMSUNG ELECTRONICS CO., ...

1. An apparatus for decoding an image, the apparatus comprising:a splitter which splits the image into a plurality of maximum coding units, hierarchically splits a maximum coding unit among the plurality of maximum coding units into a plurality of coding units based on split information of a coding unit, and determines a transformation residual block from a coding unit among the plurality of coding units based on split information of the transformation residual block, wherein the transformation residual block includes a plurality of sub residual blocks;
a parser which obtains, from a bitstream, a coded block flag indicating whether the transformation residual block includes at least one non-zero effective transformation coefficient,
when the coded block flag indicates that the transformation residual block includes at least one non-zero effective transformation coefficient, determines whether a current sub residual block is a left-upper residual block among a plurality of sub residual blocks in the transformation residual block,
when the current sub residual block is a left-upper sub residual block, obtains transformation coefficients of the left-upper sub residual block based on a significance map indicating a location of a non-zero transformation coefficient in the left-upper sub residual block and level information of the non-zero transformation coefficient in the left-upper sub residual block obtained from the bitstream,
when the current sub residual block is not a left-upper sub residual block, obtains, from the bitstream, an effective coefficient flag of the current sub residual block without considering an effective coefficient flag of another sub residual block, the effective coefficient flag of the current sub residual block indicating whether at least one non-zero effective transformation coefficient exists in the current sub residual block,
when the effective coefficient flag indicates that at least one non-zero transformation coefficient exists in the current sub residual block, obtains transformation coefficients of the current sub residual block based on a significance map indicating a location of the non-zero transformation coefficient in the current sub residual block and level information of the non-zero transformation coefficient in the current sub residual block obtained from the bitstream;
when the effective coefficient flag indicates that no non-zero effective transformation coefficient exists in the current sub residual block, determines the transformation coefficients of the current sub residual block as zero; and
an inverse-transformer which performs inverse-transformation on the transformation residual block including the current sub residual block,
wherein the transformation coefficients of the current sub residual block are a subset of transformation coefficients of the transformation residual block,
the transformation coefficients of the current sub residual block are obtained after or before transformation coefficients of another sub residual block among the plurality of sub residual blocks in the transformation residual block,
wherein the level information of a non-zero transformation coefficient includes information regarding a sign and an absolute value of the non-zero transformation coefficient,
when the split information of the coding unit of a current depth indicates a split, the coding unit of the current depth is split into the plurality of coding units of the lower depth, independently from neighboring coding units,
when the split information of the coding unit of the current depth indicates a non-split, one or more transformation residual blocks including the transformation residual block are obtained from the coding unit of the current depth, and
wherein the current sub residual block is one of a plurality of square sub residual blocks of equal size that are included in the transformation residual block.
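The flag-driven parsing order in this claim (a whole-block coded block flag, unconditional parsing of the left-upper sub block, then a context-independent effective coefficient flag per remaining sub block) can be sketched as follows. `BitSource` and its `read_flag`/`read_coeffs` methods are hypothetical stand-ins for a real entropy decoder:

```python
class BitSource:
    """Toy stand-in for a bitstream parser (hypothetical API)."""
    def __init__(self, flags, coeff_lists):
        self.flags = list(flags)              # pre-decoded flag values
        self.coeff_lists = list(coeff_lists)  # pre-decoded coefficient runs

    def read_flag(self):
        return self.flags.pop(0)

    def read_coeffs(self, n):
        # Stands in for significance-map + level (sign/magnitude) parsing.
        return self.coeff_lists.pop(0)

def decode_transform_block(src, num_sub_blocks, sub_block_size):
    # Coded block flag for the whole transformation residual block.
    if not src.read_flag():
        return [[0] * sub_block_size for _ in range(num_sub_blocks)]
    out = []
    for i in range(num_sub_blocks):
        if i == 0:
            # Left-upper sub block: coefficients are always parsed.
            out.append(src.read_coeffs(sub_block_size))
        elif src.read_flag():
            # Effective coefficient flag says non-zero data follows.
            out.append(src.read_coeffs(sub_block_size))
        else:
            # Flag is zero: every coefficient of this sub block is zero.
            out.append([0] * sub_block_size)
    return out
```

Note the flag for a non-left-upper sub block is read without consulting any neighbouring sub block's flag, matching the claim.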

US Pat. No. 10,171,825

PARALLEL COMPRESSION OF IMAGE DATA IN A COMPRESSION DEVICE

MATROX GRAPHICS INC., Do...

1. A method of compressing a stream of pictures in parallel in a compression device, wherein the compression device includes a plurality of components to be coupled in series to perform an image data compression process for compressing image data into compressed image data, and wherein each one of the plurality of components is to perform a partial compression operation that is part of the image data compression process, the method comprising:
processing a first portion of a first picture of a stream of pictures in a first component from the plurality of components of the compression device, while simultaneously processing a second portion of a second picture of the stream of pictures in a second component from the plurality of components of the compression device, wherein the processing of the first portion of the first picture is performed according to partial compression statistics associated with the second picture, and wherein the partial compression statistics result from the processing of one or more portions of the second picture in one or more of the plurality of components of the compression device when compression of the second portion of the second picture in the compression device is not yet completed.
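The overlap described here, where two series-coupled components work on portions of two different pictures in the same cycle, can be illustrated with a toy round-based schedule (all names are illustrative; real components would be hardware stages feeding statistics forward):

```python
def pipeline_schedule(num_components, portions):
    """Return, per cycle, the (component, work-item) pairs that run in
    parallel: in each cycle, component k handles the item that component
    k-1 finished in the previous cycle, so several pictures are in flight."""
    timeline = []
    for cycle in range(len(portions) + num_components - 1):
        stage_work = []
        for comp in range(num_components):
            idx = cycle - comp
            if 0 <= idx < len(portions):
                stage_work.append((comp, portions[idx]))
        timeline.append(stage_work)
    return timeline

# pic2's first portion enters the pipeline before pic1's, so its partial
# statistics already exist when pic1 is processed.
sched = pipeline_schedule(2, [("pic2", "p0"), ("pic1", "p0"), ("pic2", "p1")])
# In cycle 1, component 0 processes pic1/p0 while component 1 processes pic2/p0.
```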

US Pat. No. 10,171,824

SYSTEM AND METHOD FOR ADAPTIVE FRAME RE-COMPRESSION IN VIDEO PROCESSING SYSTEM

MEDIATEK INC., Hsinchu (...

1. A method of video decoding, comprising:
receiving a video bitstream;
decoding the video bitstream to generate a reconstructed frame;
determining whether to re-compress the reconstructed frame for buffering based on a characteristic of the reconstructed frame that is provided in the video bitstream when a size of the reconstructed frame is greater than a first threshold; and
re-compressing the reconstructed frame and storing the re-compressed reconstructed frame into a buffer of a decoder system when the reconstructed frame is determined to be re-compressed for buffering, wherein
the characteristic of the reconstructed frame includes whether the reconstructed frame is a reference frame and an initial picture quantization parameter associated with the reconstructed frame, and
the determining whether to re-compress the reconstructed frame for buffering when the size of the reconstructed frame is greater than the first threshold comprises:
determining that the reconstructed frame is not to be re-compressed for buffering when the reconstructed frame is the reference frame and the initial picture quantization parameter is not greater than a second threshold.
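The decision rule in this claim reduces to a small predicate. The sketch below assumes frames at or below the size threshold are buffered without re-compression, which the claim leaves open; the parameter names and threshold values are illustrative:

```python
def should_recompress(frame_size, is_reference, init_qp,
                      size_threshold, qp_threshold):
    """Return True when the reconstructed frame should be re-compressed
    before being stored in the decoder's frame buffer (sketch)."""
    if frame_size <= size_threshold:
        return False  # assumption: small frames are buffered as-is
    # Per the claim: a reference frame whose initial picture QP is not
    # greater than the second threshold is NOT re-compressed (its detail
    # matters for later prediction).
    if is_reference and init_qp <= qp_threshold:
        return False
    return True
```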

US Pat. No. 10,171,823

IMAGE DECODING DEVICE AND IMAGE CODING DEVICE

SHARP KABUSHIKI KAISHA, ...

1. An image decoding device that decodes coded data that is hierarchically coded to reconstruct a decoded picture of a higher layer which is a target layer, the image decoding device comprising:
a parameter set decoding circuit that decodes a parameter set; and
a predicted image generation circuit that generates a predicted image by inter-layer prediction with reference to decoded pixels of a reference layer picture,
wherein the parameter set decoding circuit decodes a color format identifier and derives a luma chroma width ratio depending upon a chroma format, which is specified by the color format identifier,
wherein the parameter set decoding circuit decodes: (i) a scaled reference layer offset syntax which is decoded in a chroma pixel unit of the target layer picture, and (ii) a reference layer offset syntax which is decoded in a chroma pixel unit of the reference layer picture,
wherein the scaled reference layer offset syntax specifies an offset between a top-left sample of a reference region in the target layer picture and a top-left sample of the target layer picture, and the reference layer offset syntax specifies an offset between a top-left sample of the reference region in the reference layer picture and a top-left sample of the reference layer picture,
wherein the predicted image generation circuit derives a reference position by using a scaled reference layer offset, a reference layer offset, and a scale,
wherein the scaled reference layer offset is derived by multiplying a value of the scaled reference layer offset syntax by a first luma chroma width ratio that is set to the luma chroma width ratio of the target layer picture,
wherein the reference layer offset is derived by multiplying a value of the reference layer offset syntax by a second luma chroma width ratio that is set to the luma chroma width ratio of the reference layer picture, and
wherein the scale is derived by using the scaled reference layer offset and the reference layer offset.

US Pat. No. 10,171,822

IMAGE TRANSMISSION DEVICE, IMAGE TRANSMISSION METHOD, AND IMAGE TRANSMISSION PROGRAM

CIAO, INC., (JP)

1. An apparatus for transmitting images, including:
a base server being situated at a point where an image is to be taken, and being connected to an imaging device; and
an aggregation server being connected to said base server through an electrical communication channel,
said base server including:
a reference image transmitter for transmitting image data (hereinafter, referred to as “reference image data”) of a frame acting as a reference (hereinafter, referred to as “reference frame”) to said aggregation server at a predetermined timing among images of a plurality of consecutive frames sequentially obtained through said imaging device;
an extracted area computer for selecting an image (hereinafter, referred to as “background image”) acting as a background among images of a plurality of consecutive frames sequentially obtained through said imaging device, and sequentially computing a third area surrounding both a first area and a second area for each of frames individually following said reference frame selected among a plurality of consecutive frames sequentially obtained through said imaging device, said first area surrounding an area in which a difference is generated between an image of said each of frames and said background image, said second area surrounding an area in which a difference is generated between an image of a frame immediately prior to said each of frames and said background image; and
an extracted image transmitter for sequentially extracting image data of said third area out of said each of frames, and transmitting the thus extracted image data to said aggregation server,
said aggregation server including an image synthesizer for synthesizing a moving image based on said reference image data transmitted from said base server, and said image data of said third area extracted out of said each of frames.
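The "third area" is the union of two difference bounding boxes: the current frame against the background, and the immediately preceding frame against the background. A minimal sketch on 2-D integer frames; the diff threshold and frame representation are assumptions:

```python
def bbox_of_diff(frame, background, thresh=0):
    """Bounding box (top, left, bottom, right) of pixels differing from the
    background by more than `thresh`, or None if nothing differs."""
    rows = [r for r, (fr, bg) in enumerate(zip(frame, background))
            if any(abs(a - b) > thresh for a, b in zip(fr, bg))]
    cols = [c for c in range(len(frame[0]))
            if any(abs(fr[c] - bg[c]) > thresh
                   for fr, bg in zip(frame, background))]
    if not rows or not cols:
        return None
    return (rows[0], cols[0], rows[-1], cols[-1])

def third_area(cur, prev, background):
    """Union of the current-frame and previous-frame difference boxes,
    i.e. the area whose image data is extracted and transmitted."""
    boxes = [b for b in (bbox_of_diff(cur, background),
                         bbox_of_diff(prev, background)) if b]
    if not boxes:
        return None
    return (min(b[0] for b in boxes), min(b[1] for b in boxes),
            max(b[2] for b in boxes), max(b[3] for b in boxes))
```

Transmitting only this union keeps the moving foreground (including where it just left) while the static background is reconstructed from the reference image data.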

US Pat. No. 10,171,821

SCALABLE VIDEO ENCODING METHOD AND APPARATUS AND SCALABLE VIDEO DECODING METHOD AND APPARATUS USING UP-SAMPLING FILTER ACCOMPANIED BY CONVERSION OF BIT DEPTH AND COLOR FORMAT

SAMSUNG ELECTRONICS CO., ...

1. A scalable video encoding method comprising:
determining a reference layer image from among base layer images so as to inter layer predict an enhancement layer image, wherein the reference layer image corresponds to the enhancement layer image;
determining a phase between pixels of the enhancement layer image and the reference layer image, according to a scaling factor between the enhancement layer image and the reference layer image and a color format difference of the enhancement layer image and the reference layer image;
selecting at least one filter coefficient set corresponding to the determined phase, from filter coefficient data comprising filter coefficient sets that respectively correspond to phases;
generating an up-sampled reference layer image by extending a resolution of the reference layer image according to the scaling factor by performing interpolation filtering on the reference layer image by using the selected filter coefficient set;
obtaining a prediction error between the up-sampled reference layer image and the enhancement layer image;
generating an enhancement layer bitstream comprising the prediction error; and
generating a base layer bitstream by encoding the base layer images.

US Pat. No. 10,171,820

DIGITAL IMAGE RECOMPRESSION

Dropbox, Inc., San Franc...

1. A system, comprising:
one or more processors;
storage media; and
one or more programs stored in the storage media and configured for execution by the one or more processors, the one or more programs comprising instructions configured for:
obtaining compressed image data that is a coded representation of a digital image;
decoding the compressed image data to obtain at least one block of quantized discrete cosine transform (DCT) coefficients corresponding to a sample block of the digital image, the block of quantized DCT coefficients comprising a DC coefficient and a plurality of non-zero AC coefficients;
determining probability estimates for binary symbols of binarized representations of the plurality of non-zero AC coefficients based, at least in part, on classifying each non-zero AC coefficient of the plurality of non-zero AC coefficients as being part of at most one of: (a) a top-edge row of AC coefficients of the block of quantized DCT coefficients, (b) a left-edge column of AC coefficients of the block of quantized DCT coefficients, or (c) a sub-block of AC coefficients of the block of quantized DCT coefficients;
wherein a particular non-zero AC coefficient of the plurality of non-zero AC coefficients is classified as being part of (c) the sub-block of AC coefficients;
wherein determining probability estimates for binary symbols of a binarized representation of the particular non-zero AC coefficient is based on:
an AC coefficient corresponding in position to the particular non-zero AC coefficient in an above quantized DCT block of coefficients,
an AC coefficient corresponding in position to the particular non-zero AC coefficient in a left quantized DCT block of coefficients, and
an AC coefficient corresponding in position to the particular non-zero AC coefficient in an above-left quantized DCT block of coefficients;
arithmetic coding the binary symbols based, at least in part, on the probability estimates; and
based, at least in part, on the arithmetic coding, storing further compressed image data that is a coded representation of the digital image, the further compressed image data being lossless with respect to the compressed image data, the further compressed image data requiring fewer bytes to store in storage media than required by the compressed image data.
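The mutually exclusive position classes used for context selection can be sketched directly from (row, column) coordinates. Treating row 0 and column 0 (minus the DC position) as the edge groups is an assumption about how the claim's three classes partition the block:

```python
def classify_ac(row, col):
    """Classify a coefficient of a quantized DCT block into at most one of
    the three AC context groups named in the claim; (0, 0) is DC, not AC."""
    if row == 0 and col == 0:
        return "dc"
    if row == 0:
        return "top-edge row"
    if col == 0:
        return "left-edge column"
    return "sub-block"
```

For a "sub-block" coefficient, the probability estimate would then be conditioned on the same-position coefficients in the above, left, and above-left blocks, as the claim recites.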

US Pat. No. 10,171,818

SCANNING ORDERS FOR NON-TRANSFORM CODING

Microsoft Technology Lice...

1. A method comprising:
identifying, by a computing device, a scanning order for scanning a first block, the first block being associated with a transform coding mode and having an associated size and an associated prediction mode;
identifying, by the computing device, a second block that is associated with a non-transform coding mode, the second block being part of a same image as the first block and having the same associated size and the same associated prediction mode as the first block;
determining whether to scan the second block according to a scanning order inverse to the scanning order for scanning the first block, the determining being based on the prediction mode associated with the second block and the size associated with the second block, wherein, if the prediction mode is an intra-prediction mode and the size is smaller than a predetermined size, the second block is scanned according to the inverse scanning order; and
scanning, by the computing device, the second block according to the inverse scanning order, in response to determining that the prediction mode is an intra-prediction mode and the size of the second block is smaller than the predetermined size.
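The claim does not fix which forward scan the transform-coded block uses; assuming the familiar zig-zag order for illustration, scanning the small intra non-transform block in the inverse order is just a reversal:

```python
def zigzag_order(n):
    """Zig-zag scan positions for an n x n block, low to high frequency:
    anti-diagonals in order, alternating traversal direction per diagonal."""
    return sorted(((r, c) for r in range(n) for c in range(n)),
                  key=lambda p: (p[0] + p[1],
                                 p[0] if (p[0] + p[1]) % 2 else p[1]))

def scan(block, order):
    """Flatten a 2-D block into a 1-D list following `order`."""
    return [block[r][c] for r, c in order]

forward = zigzag_order(2)   # order used for the transform-coded first block
inverse = forward[::-1]     # order used for the intra non-transform second block
```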

US Pat. No. 10,171,817

IMAGE PROCESSING DEVICE AND METHOD

SONY CORPORATION, Tokyo ...

1. An image processing device, comprising:
at least one processor configured to:
set a binary parameter that corresponds to a binary data processing rate of a decoder,
wherein the decoder with at least one first setting is indicated by an encoded stream, and
wherein the encoded stream corresponds to binary data;
calculate a first maximum processing amount of the encoded stream based on the set binary parameter;
calculate a second maximum processing amount of the binary data based on the set binary parameter;
calculate a target bit that indicates a target rate of the encoded stream, based on the calculated first maximum processing amount of the encoded stream and the calculated second maximum processing amount of the binary data;
control a quantization rate based on the calculated target bit;
quantize input data based on the controlled quantization rate;
binarize the quantized input data to obtain the binary data;
arithmetically code the binary data to generate the encoded stream such that the encoded stream is decoded in the decoder without one of an overflow condition or an underflow condition in the decoder; and
transmit the set binary parameter and the encoded stream to the decoder.

US Pat. No. 10,171,816

METHOD AND APPARATUS FOR MOTION COMPENSATION PREDICTION

NTT DOCOMO, INC., Tokyo ...

1. A video decoding method for motion compensation performed under an inter-frame prediction to decode a target picture, the method comprising computer executable steps executed by a processor of a video decoder to implement:
(a) decoding a residual and a motion vector received from an encoder;
(b) referencing to the motion vector to retrieve a reference sample from a reference picture stored in a reference picture memory, wherein the reference picture stored in the reference picture memory and the reference sample retrieved from the reference picture are both represented with a first bit depth;
(c) performing a scaling-up operation and a first fractional sample interpolation in a first direction on the retrieved reference sample to generate a first set of fractional samples represented with a second bit depth to which the first bit depth is scaled up by a scaling-up factor, wherein the second bit depth is constant and set equal to a number of bits available to represent the fractional sample, and the scaling-up factor is set equal to the second bit depth minus the first bit depth and is variable to keep the second bit depth constant and independent from a change of the first bit depth;
(d) performing a second fractional sample interpolation on the first set of fractional samples in a second direction to generate a second set of fractional samples represented with the second bit depth;
(e) referencing fractional parts of the motion vector to derive a bidirectional prediction sample from the first and second sets of fractional samples, the bidirectional prediction sample being represented with the second bit depth, wherein the bidirectional prediction sample is generated by combining two second sets of fractional samples, the two second sets being different from each other;
(f) scaling down and clipping the bidirectional prediction sample from the second bit depth to the first bit depth to generate a prediction picture represented with the first bit depth; and
(g) adding the prediction picture and the residual to reconstruct the target picture represented with the first bit depth,
wherein the fractional sample interpolation applies an 8-tap FIR (Finite Impulse Response) filter having a set of coefficients equal to [−1, 4, −11, 40, 40, −11, 4, −1] to generate a quarter-pel sample.
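These are the standard HEVC-style quarter-pel taps: they sum to 64, a gain of 2^6 that is removed in the scale-down/clip of step (f) together with the bit-depth scaling factor. A minimal sketch of one filter application; the surrounding sample-fetch and rounding logic is assumed:

```python
QPEL_TAPS = [-1, 4, -11, 40, 40, -11, 4, -1]  # 8-tap quarter-pel FIR coefficients

def qpel_filter(samples, pos):
    """Apply the 8-tap FIR around integer position `pos` (needs 3 samples
    of left context and 4 of right context). The result keeps the extra
    log2(64) = 6 bits of precision, consistent with holding the second
    bit depth constant until the final scale-down stage."""
    return sum(t * samples[pos - 3 + k] for k, t in enumerate(QPEL_TAPS))
```

On a flat signal of value v the filter returns 64*v, confirming unity gain after the >> 6 in the scale-down.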

US Pat. No. 10,171,814

MOVING PICTURE DECODING DEVICE, MOVING PICTURE DECODING METHOD AND MOVING PICTURE DECODING PROGRAM

JVC KENWOOD Corporation, ...

1. A moving picture decoding device adapted to decode a bitstream in which moving pictures are coded in units of blocks obtained by partitioning each picture of the moving pictures, comprising:
a first bitstream decoding unit configured to set a predefined number of merge candidates;
a second bitstream decoding unit configured to decode information indicating indices of the candidates;
a spatial merge candidate generation unit configured to derive spatial merge candidates from a first predefined number of blocks neighboring a prediction block subject to decoding;
a temporal merge candidate generation unit configured to derive a temporal merge candidate from a block that exists at the same position as or near the prediction block subject to decoding, in a decoded picture different from the picture containing the prediction block subject to decoding;
a merge candidate addition unit configured to add the spatial merge candidates and the temporal merge candidates to a merge candidate list;
a merge candidate supplying unit configured to add one or more merge candidates to the merge candidate list up to the predefined number of merge candidates as an upper limit when the number of merge candidates included in the merge candidate list is smaller than the predefined number of merge candidates;
a coding information selection unit configured to select a merge candidate from the merge candidates added to the merge candidate list; and
a motion compensation prediction unit configured to perform inter prediction of the prediction block subject to decoding by the merge candidate thus selected,
wherein the second bitstream decoding unit derives the indices of the merge candidates based on the number of the merge candidates;
the spatial merge candidate generation unit stops deriving the spatial merge candidates when the number of the derived spatial merge candidates reaches a second predefined number smaller than the first predefined number; and
the merge candidate supplying unit adds a merge candidate having a motion vector of (0,0).
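The list construction, the spatial-derivation cap, and the zero-motion-vector padding can be sketched together. Candidate pruning and index decoding are omitted, and all names are illustrative:

```python
def build_merge_list(spatial, temporal, max_merge, spatial_cap):
    """Sketch of merge candidate list construction: spatial candidates
    (capped at a second predefined number smaller than the first),
    then temporal candidates, then (0, 0) padding up to the limit."""
    merge_list = []
    # Spatial derivation stops once the smaller spatial cap is reached.
    merge_list.extend(spatial[:spatial_cap])
    merge_list.extend(temporal)
    merge_list = merge_list[:max_merge]
    # Supplying unit: pad with zero-motion candidates up to the
    # predefined number of merge candidates.
    while len(merge_list) < max_merge:
        merge_list.append((0, 0))
    return merge_list
```

With a fixed list length, the decoded index always selects a valid candidate, which is why the supplying unit pads short lists.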

US Pat. No. 10,171,813

HIERARCHY OF MOTION PREDICTION VIDEO BLOCKS

QUALCOMM Incorporated, S...

1. A method of decoding video data according to a merge mode, the method comprising:
obtaining an index value for a current video block coded in the merge mode;
generating a set of candidate predictive blocks for the merge mode based on spatial and temporal neighbors to the current video block;
limiting the set of generated candidate predictive blocks for the merge mode to a subset of generated candidate predictive blocks for the merge mode, wherein the subset of generated candidate predictive blocks for the merge mode is limited to be smaller than the set of generated candidate predictive blocks for the merge mode;
selecting a predictive video block from the subset of generated candidate predictive blocks for the merge mode based on the index value; and
generating motion information for the current video block according to the merge mode based on motion information of the predictive video block, wherein generating the motion information for the current video block includes inheriting motion information from the predictive video block.

US Pat. No. 10,171,812

DATA OUTPUT APPARATUS, DATA OUTPUT METHOD, AND DATA GENERATION METHOD

PANASONIC INTELLECTUAL PR...

1. A data output apparatus comprising:
a decoder that decodes a video stream to generate a first video signal;
an acquirer that acquires one or more pieces of metadata corresponding to one or more first conversion modes in which a luminance range of a video signal is converted;
an interpreter that interprets one of the one or more pieces of metadata to acquire characteristic data indicating a luminance range of the first video signal, and conversion auxiliary data for converting the luminance range of the first video signal;
a control information generator that converts the characteristic data into control information according to a predetermined transmission protocol;
a converter that supports one or more second conversion modes in which a luminance range of a video signal is converted, the converter for performing conversion processing of the luminance range of the first video signal in one of the one or more second conversion modes based on the conversion auxiliary data to generate a second video signal with a luminance range narrower than the luminance range of the first video signal; and
an outputter that outputs the second video signal and the control information to a display apparatus in accordance with the transmission protocol,
wherein the interpreter further determines which of the data output apparatus and the display apparatus is to perform the conversion processing, based on the one or more first conversion modes, the one or more second conversion modes, and one or more third conversion modes in which a luminance range of a video signal is converted, the one or more third conversion modes being supported by the display apparatus,
the interpreter further determines a conversion mode which is included in the one or more first conversion modes and is included in at least one of the one or more second conversion modes and the third conversion modes, as a conversion mode of the conversion processing to be performed by the data output apparatus or the display apparatus,
the acquirer acquires a plurality of pieces of metadata corresponding to a plurality of first conversion modes including the one or more first conversion modes,
the converter supports a plurality of second conversion modes including the one or more second conversion modes, and
the interpreter determines, as a conversion mode of the conversion processing to be performed by the data output apparatus or the display apparatus, a conversion mode with highest reproducibility for a master image which is an image that is output without conversion of the luminance range, from among a plurality of conversion modes which are included in the plurality of first conversion modes, and are included in at least one of the plurality of second conversion modes and the third conversion modes.
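The interpreter's selection logic is an intersection followed by an argmax over reproducibility. A sketch with illustrative mode names and an assumed per-mode reproducibility score table (how reproducibility is quantified is not specified in the claim):

```python
def pick_conversion_mode(metadata_modes, converter_modes, display_modes,
                         reproducibility):
    """Choose the conversion mode and where conversion runs: a mode must be
    in the metadata's first conversion modes AND supported by either the
    output apparatus (second modes) or the display (third modes); among
    those, pick the one with the highest reproducibility for the master."""
    usable = [m for m in metadata_modes
              if m in converter_modes or m in display_modes]
    if not usable:
        return None
    mode = max(usable, key=lambda m: reproducibility[m])
    where = "output-apparatus" if mode in converter_modes else "display"
    return mode, where
```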

US Pat. No. 10,171,811

METHOD AND APPARATUS FOR DETERMINING REFERENCE PICTURE SET OF IMAGE

SAMSUNG ELECTRONICS CO., ...

1. A method of decoding a video, the method comprising:
obtaining, by at least one processor, information of the number of reference picture sets from a bitstream, wherein the reference picture sets are included in a sequence parameter set and a reference picture set includes a plurality of reference pictures;
determining, by the at least one processor, whether an index of a current reference picture set of a current picture is equal to the number of reference picture sets, wherein the number of reference picture sets is based on the information of the number of the reference picture sets and the index of the current reference picture set indicates the current reference picture set among reference picture sets;
when the index of the current reference picture set of the current picture is equal to the number of reference picture sets, obtaining, by the at least one processor, delta index information about a difference between the index of the current reference picture set of the current picture and an index of a reference picture set (reference RPS) of the current picture from the bitstream;
determining, by the at least one processor, the index of the reference RPS based on the delta index information;
determining, by the at least one processor, the current reference picture set of the current picture based on the index of the reference RPS of the current picture and a delta RPS which is a difference value between a picture order count (POC) value of a reference picture in the current reference picture set of the current picture and a picture order count (POC) value of a reference picture in the reference RPS of the current picture; and
predictive decoding, by the at least one processor, the current picture by using a reference picture included in one of reference picture sets including the current reference picture set of the current picture.
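The index test and delta-based derivation can be sketched as two small helpers. The sign convention of the delta index and applying a single delta to every POC entry are simplifying assumptions made for illustration:

```python
def reference_rps_index(rps_idx, num_rps, delta_idx):
    """When the current RPS index equals the signalled count, the current
    RPS is not one of the parameter-set sets; its reference RPS is found
    by stepping back by the decoded delta index."""
    if rps_idx == num_rps:
        return rps_idx - delta_idx
    return rps_idx  # otherwise the index selects a stored RPS directly

def derive_rps_pocs(ref_rps_pocs, delta_rps):
    """POC values of the current RPS: reference-RPS POCs shifted by the
    delta RPS (the POC difference recited in the claim)."""
    return [poc + delta_rps for poc in ref_rps_pocs]
```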

US Pat. No. 10,171,810

TRANSFORM COEFFICIENT CODING USING LEVEL-MODE AND RUN-MODE

Cisco Technology, Inc., ...

1. A method comprising:
obtaining a two-dimensional array of integer samples representing a block of quantized transform coefficients for a video frame;
converting the two-dimensional array of integer samples to a one-dimensional array of integer samples using a scan pattern, wherein each integer sample is represented with a level that is an absolute value of the sample and a sign bit if the level is greater than zero;
converting the one-dimensional array of integer samples to a bit-stream by processing the one-dimensional array of samples in sequential order, wherein converting the one-dimensional array of samples to a bit-stream comprises:
encoding the samples in a level-mode and a run-mode to form an encoded bit-stream;
wherein in the level-mode:
encoding each sample individually and encoding a level;
encoding a sign bit if the level is greater than zero; and
switching to the run-mode for a next sample when the level is less than a first threshold;
wherein in the run-mode:
for each non-zero level, encoding a combined event of:
length of a zero-run corresponding to a number of zeros since a last non-zero level;
whether the level is greater than one; and
when the level is one, encoding the sign bit;
when the level is greater than one:
encoding the level and sign bit jointly, and
encoding the combined event with a code equal to 2*(level−2)+sign;
switching to the level-mode for the next sample when the level is greater than a second threshold.
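Run-mode's combined event (zero-run length, a level-greater-than-one flag, and for levels above one the joint code 2*(level−2)+sign) can be sketched as below; the input representation and the handling of the mode-switch threshold are assumptions:

```python
def run_mode_events(levels_and_signs, run_exit_threshold):
    """Produce run-mode events from a 1-D list of (level, sign) pairs,
    where zeros are (0, 0) and sign is 0 or 1. Returns the event list and
    the index at which coding switches back to level-mode (or None)."""
    events, zero_run = [], 0
    for i, (level, sign) in enumerate(levels_and_signs):
        if level == 0:
            zero_run += 1
            continue
        if level == 1:
            # Combined event plus a separately coded sign bit.
            events.append((zero_run, "level==1", sign))
        else:
            # Level and sign coded jointly: code = 2*(level-2) + sign.
            events.append((zero_run, "level>1", 2 * (level - 2) + sign))
        zero_run = 0
        if level > run_exit_threshold:
            return events, i + 1  # switch to level-mode for the next sample
    return events, None
```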

US Pat. No. 10,171,809

VIDEO ENCODING APPARATUS AND VIDEO ENCODING METHOD

FUJITSU LIMITED, Kawasak...

1. A video encoding apparatus comprising:
a processor configured to:
divide a plurality of orthogonal transform coefficients included in each of a plurality of blocks into a plurality of coefficient groups each of which includes a predetermined number of the orthogonal transform coefficients, the plurality of blocks being obtained by dividing a picture included in a video, the plurality of orthogonal transform coefficients being obtained by orthogonally transforming, for each block, a prediction error signal obtained on the basis of difference between a value of each pixel of the picture and a prediction signal of the pixel;
determine, for each of the predetermined number of orthogonal transform coefficients included in a target coefficient group from among the plurality of coefficient groups, a candidate possible to minimize a cost obtained on the basis of a coding error and an amount of coding among a plurality of quantized-coefficient candidates to be used for quantizing the orthogonal transform coefficient, to be a quantized coefficient of the orthogonal transform coefficient, the target coefficient group being selected from among the plurality of coefficient groups sequentially from the coefficient group including the orthogonal transform coefficients corresponding to lowest frequencies;
determine, for a target coefficient group, whether to substitute all the predetermined number of quantized coefficients included in the target coefficient group by zero, on the assumption that a quantized coefficient that is not zero is included in the coefficient group corresponding to higher frequencies than those of the target coefficient group, the target coefficient group being selected from among the plurality of coefficient groups sequentially from the coefficient group including the orthogonal transform coefficients corresponding to the lowest frequencies;
determine, for a target coefficient group, a first candidate for the quantized coefficient corresponding to a highest frequency among the quantized coefficients that are included in the target coefficient group and are not zero, on the assumption that all the quantized coefficients included in the coefficient groups corresponding to higher frequencies than those of the target coefficient group are zero, the target coefficient group being selected from among the plurality of coefficient groups sequentially from the coefficient group including the orthogonal transform coefficients corresponding to the lowest frequencies;
calculate the coding error of the coefficient groups from the coefficient group adjacent to the coefficient group including a second candidate for the quantized coefficient on a higher frequency side to the coefficient group including the first candidate, the second candidate being the quantized coefficient with the highest frequency among the quantized coefficients not being zero, obtained from the coefficient group corresponding to lower frequencies than those of the coefficient group including the first candidate;
update the second candidate to the first candidate when a comparison cost obtained by subtracting the coding error of the coefficient group corresponding to higher frequencies than those of the coefficient group including the first candidate from the cost obtained for the first candidate is lower than a value obtained by adding the coding error to the comparison cost calculated for the second candidate, and determine the second candidate at the time when the second candidate for the coefficient group corresponding to highest frequencies among the plurality of coefficient groups is updated, to be the quantized coefficient that is not zero and corresponds to a highest frequency; and
calculate the coding error of the coefficient groups from the coefficient group adjacent to the coefficient group including the second candidate on a higher frequency side to the coefficient group including the first candidate,
wherein
the second candidate includes a third candidate and a fourth candidate, the third candidate being updated to the first candidate also when all the predetermined number of quantized coefficients included in the coefficient group including the first candidate are substituted by zero, the fourth candidate being not updated to the first candidate when all the predetermined number of quantized coefficients included in the coefficient group including the first candidate are substituted by zero, and
the determining a quantized coefficient that is not zero and corresponds to a highest frequency determines the third candidate as the quantized coefficient that is not zero and corresponds to the highest frequency when all the quantized coefficients included in the coefficient groups corresponding to higher frequencies than those of the coefficient group including the third candidate are zero, and
determines the fourth candidate as the quantized coefficient that is not zero and corresponds to the highest frequency when the quantized coefficient that is not zero is included in any one of the coefficient groups corresponding to higher frequencies than those of the coefficient group including the third candidate.

US Pat. No. 10,171,808

IN-LOOP ADAPTIVE WIENER FILTER FOR VIDEO CODING AND DECODING

Intel Corporation, Santa...

1. A video encoder having an input to receive video and a channel output comprising:a transform/quantizer having an input and at least one output;
an adder having three inputs and an output coupled to said transform/quantizer input, one of said adder inputs coupled to receive said video;
an inverse quantizer having an input coupled to said transform/quantizer output;
an adaptive Wiener filter having a first input coupled to said inverse quantizer output and one of said adder inputs, said filter having a second input coupled to receive reconstructed image data, said filter to set filter taps based on the reconstructed image data, said filter having an output coupled to one of said adder inputs; and
an entropy coding having an input coupled to said transform/quantizer output, said entropy coding coupled to said channel output.

US Pat. No. 10,171,807

PICTURE-LEVEL QP RATE CONTROL FOR HEVC ENCODING

ARRIS Enterprises LLC, S...

1. A method of controlling a bit rate of an encoded video comprising a plurality of pictures, each of the plurality of pictures being of one of a plurality of picture types, comprising:(a) defining a window of M pictures comprising a plurality of window pictures;
(b) defining a parameter set for each picture type T, each parameter set comprising:
a quantization parameter (QT);
a first parameter (αT);
a second parameter (βT);
(c) estimating a number of bits R needed to encode a current picture of picture type T according to:

wherein:
QcurT is a value of QT of the current picture of type T;
αcurT is a value of αT of the current picture of type T;
βcurT is a value of βT of the current picture of type T;
(d) estimating a number of bits Ri needed to encode each remaining picture i of the window of M pictures of picture type T according to:

wherein:
QiT is a value of QT of each remaining picture i of type T;
αiT is a value of αT of each remaining picture i of type T;
βiT is a value of βT of each remaining picture i of type T;
(e) determining, for the current picture and each remaining picture i of the window of M pictures and from the estimated number of bits needed to encode the current picture Rcur and the estimated number of bits needed to encode each remaining picture i of the window of M pictures, if a maximum video buffer boundary Bupper or a minimum video buffer boundary Blow are exceeded;
(f) if the maximum video buffer boundary Bupper or the minimum video buffer boundary Blow are exceeded, adjusting QcurT for the current picture of picture type T and QiT of each remaining picture i of picture type T, and repeating (d)-(f); and
(g) if the maximum video buffer boundary Bupper and the minimum video buffer boundary Blow are not exceeded, designating QcurT as a value for coding the current picture:
coding the current picture according to QcurT,
after coding the current picture according to QcurT:
updating αT and βT for the picture type T of the current picture;
setting a next remaining picture as the current picture and performing steps (c)-(g);
determining the actual number of bits Rr used to code the current picture;
determining a difference between the estimated number of bits Rcur to encode the current picture and the actual number of bits used to code the current picture; and
updating αT and βT for the picture type T of the current picture only if the difference between the estimated number of bits Rcur to encode the current picture and the actual number of bits used to code the current picture Rr exceeds a value Δ;
wherein updating αT and βT for the picture type T of the current picture only if the difference between the estimated number of bits Rcur to encode the current picture and the actual number of bits used to code the current picture Rr exceeds a value Δ comprises:
computing updated values for αT and βT for the picture type of the current picture that minimize the difference between the estimated number of bits Rcur to encode the current picture and the actual number of bits used to code the current picture Rr.
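The control flow of steps (c) through (g) can be sketched as follows. The bit-estimation formula itself is not reproduced in the claim text above, so a power-law model R = α·Q^β is assumed purely for illustration, as are the step size and the α/β update rule:

```python
# Hypothetical sketch of the picture-level QP rate-control loop in
# claim 1 of US 10,171,807. The estimation model and update rule below
# are assumptions, not the patented formulas.

def estimate_bits(q, alpha, beta):
    """Estimated bits for a picture coded at QP q (assumed power model)."""
    return alpha * q ** beta

def adjust_qp_for_buffer(q_cur, bits_window, b_upper, b_low):
    """Raise QP when the window of estimates would overflow the buffer,
    lower it when the buffer would underflow (step of 1, illustrative)."""
    total = sum(bits_window)
    if total > b_upper:
        return q_cur + 1
    if total < b_low:
        return q_cur - 1
    return q_cur

def update_model(alpha, beta, r_est, r_actual, delta, lr=0.1):
    """Update alpha/beta only when the estimation error exceeds delta,
    here by simple proportional scaling of alpha."""
    if abs(r_est - r_actual) > delta:
        alpha *= 1.0 + lr * (r_actual - r_est) / max(r_est, 1e-9)
    return alpha, beta
```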

US Pat. No. 10,171,806

ASYMMETRIC DATA DECOMPRESSION SYSTEMS

Realtime Adaptive Streami...

1. A system, comprising:one or more different asymmetric data decompression algorithms, wherein each algorithm of the one or more different asymmetric data decompression algorithms utilizes one or more asymmetric data decompression routines of a plurality of different asymmetric data decompression routines, wherein a first asymmetric data decompression routine of the plurality of different asymmetric data decompression routines is configured to produce decompressed data with a higher data rate for a given data throughput than a second asymmetric data decompression routine of the plurality of different asymmetric data decompression routines; and
a processor configured:
to analyze one or more data parameters from one or more data blocks containing video data, wherein at least one data parameter relates to an expected or anticipated throughput of a communications channel; and
to select two or more different data decompression routines from among a plurality of different data decompression routines based upon, at least in part, the one or more data parameters relating to the expected or anticipated throughput of the communications channel.
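A minimal sketch of the routine-selection idea in the claim, with hypothetical routine names and a made-up throughput model (the claim specifies neither):

```python
# Illustrative sketch (not the patented implementation) of selecting
# asymmetric decompression routines by expected channel throughput,
# per claim 1 of US 10,171,806.

ROUTINES = [
    # (name, decompressed data rate produced per unit of channel throughput)
    ("fast_low_ratio", 1.0),
    ("slow_high_ratio", 2.5),
]

def select_routines(expected_throughput_mbps, target_rate_mbps):
    """Pick every routine whose decompressed data rate at the expected
    throughput meets the target rate; fall back to the fastest routine."""
    chosen = [name for name, rate_factor in ROUTINES
              if expected_throughput_mbps * rate_factor >= target_rate_mbps]
    return chosen or [ROUTINES[0][0]]
```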

US Pat. No. 10,171,805

VIDEO ENCODING DEVICE AND PROGRAM, VIDEO DECODING DEVICE AND PROGRAM, AND VIDEO DISTRIBUTION SYSTEM

Oki Electric Industry Co....

1. A video encoding device which encodes a video signal having a frame sequence, comprising:a predicted image generator which generates a predicted image of a non-key frame, by using a key frame in the frame sequence;
an updated original image generator which
receives an original image of the non-key frame and the predicted image of the non-key frame, and
generates an updated original image by, for each pixel position in the updated original image,
obtaining a difference between a parameter value of the pixel at the pixel position in the original image and a parameter value of the pixel at the pixel position in the predicted image,
comparing the obtained difference with a predetermined quantization error, and
selecting
the parameter value of the pixel at the pixel position in the predicted image if the obtained difference is no larger than the predetermined quantization error, and
the parameter value of the pixel at the pixel position in the original image if the obtained difference is larger than the predetermined quantization error,
to be a parameter value of the pixel at the pixel position in the updated original image;
a first quantizer which quantizes the updated original image outputted by the updated original image generator;
a second quantizer which quantizes the predicted image outputted by the predicted image generator;
a rate control section which compares the quantized updated original image and the predicted image to determine an amount of codes per image; and
an error correction code generator which generates an error correction code, for correcting an error of the updated original image with respect to the non-key frame, using the quantized updated original image based on the determined amount of codes per image, wherein
the parameter value of each pixel is a pixel value of said each pixel in the original or predicted image,
each of the predicted image and the updated original image is of same dimensions as those of the original image, and
the updated original image outputted by the updated original image generator differs from the original image only in that, for each pixel position where a difference between the original image and the predicted image is no larger than the predetermined quantization error, the updated original image has such a parameter value that a difference between the updated original image and the predicted image at said each pixel position is zero, to thereby cause a setting rate of the error correction code generated by the error correction code generator to be reduced.
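The per-pixel selection rule of the updated original image generator is simple enough to state directly in code; images are flat lists of parameter values here for illustration:

```python
# A minimal sketch of the updated-original-image rule in claim 1 of
# US 10,171,805: per pixel, keep the predicted value when it is within
# the quantization error of the original, otherwise keep the original.

def updated_original(original, predicted, q_err):
    out = []
    for o, p in zip(original, predicted):
        # Difference no larger than the quantization error -> predicted value
        out.append(p if abs(o - p) <= q_err else o)
    return out
```

Where the predicted value is kept, the updated original then differs from the prediction by zero at that pixel, which is what lets the error-correction code rate be reduced.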

US Pat. No. 10,171,804

VIDEO FRAME ENCODING SCHEME SELECTION

GOOGLE LLC, Mountain Vie...

1. A method for encoding a video stream, the method comprising:determining, using at least one of a historical video encoding data and a simulated encoding data generated based on at least one measurement of a time of a video encoding, a first estimated time period for encoding a frame of the video stream based on a first algorithm configured to estimate a processing time for encoding the frame, the first algorithm being based on a first input control signal corresponding to at least one variable setting of an encoder processing block, the at least one variable setting having a first predetermined processing time used as a variable in the first algorithm;
determining, using at least one of the historical video encoding data and the simulated encoding data, a second estimated time period for encoding the frame based on a second algorithm configured to estimate a processing time for encoding the frame, the second algorithm being based on a second input control signal corresponding to the at least one variable setting of the encoder processing block, the at least one variable setting having a second predetermined processing time used as a variable in the second algorithm;
measuring an elapsed time period for encoding the frame independent of other frames of the video stream;
comparing the elapsed time period for encoding the frame to at least one of the first estimated time period and the second estimated time period; and
changing an encoding scheme for encoding a subsequent frame of the video stream if the encoding time of the frame is one of less than the first estimated time period and greater than the second estimated time period.
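The comparison in the last two steps can be sketched as follows; the scheme names, and the reading of the first estimate as the "fast" bound and the second as the "slow" bound, are assumptions:

```python
# Hedged sketch of the scheme-selection rule in claim 1 of US 10,171,804:
# measure the per-frame encoding time and switch schemes when it falls
# outside the two estimated time periods.

def next_scheme(current, elapsed, t_est_first, t_est_second):
    """Switch to a slower, higher-quality scheme when encoding beat the
    first estimate; switch to a faster scheme when it exceeded the second."""
    if elapsed < t_est_first:
        return "higher_quality"
    if elapsed > t_est_second:
        return "faster"
    return current
```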

US Pat. No. 10,171,803

IMAGE CAPTURING APPARATUS, CALIBRATION METHOD, AND NON-TRANSITORY COMPUTER-READABLE MEDIUM FOR CALCULATING PARAMETER FOR A POINT IMAGE RESTORATION PROCESS

FUJIFILM Corporation, To...

1. An image capturing apparatus comprising:an image capturing unit;
a display unit, including a processor, that displays an imaged picture imaged by the image capturing unit and a guide linearly shaped along a sagittal direction or a tangential direction in at least one of four corners of the imaged picture, the guide assisting imaging of a calibration image used for calibration in a point image restoration process;
a guide indication control unit included in the display unit that performs display control of the guide assisting the imaging of the calibration image used for the calibration in the point image restoration process; and
a parameter calculation unit that calculates a parameter for the point image restoration process on the basis of the calibration image imaged by the image capturing unit with assistance from the guide.

US Pat. No. 10,171,802

CALIBRATION METHOD AND CALIBRATION DEVICE

DENSO CORPORATION, Kariy...

1. A calibration method for calibrating an attitude of a camera mounted on a vehicle using a plurality of markers each arranged vertically and each positioned at pre-designated height from a road surface, the calibration method comprising:a first process including shooting an image of the plurality of markers with the camera, thereby generating a two-dimensional image;
a second process including converting the two-dimensional image, which is generated in the first process and represents the plurality of markers, into a bird's eye view image on a specific plane, the bird's eye view image reflecting the height of each of the plurality of markers, wherein the specific plane is a road surface or on a plane parallel to the road surface; and
a third process including calculating a parameter of the camera based on a position difference between the plurality of markers in the specific plane obtained in the second process,
wherein
a plurality of columnar marker poles are vertically extended from a road surface, and
markers are positioned on the columnar marker poles so as to face in directions toward the vehicle, each marker at a pre-designated height from the road surface.

US Pat. No. 10,171,801

DISPLAY DEVICE AND DISPLAY METHOD

Japan Display Inc., Toky...

1. A display device comprising:a detector configured to detect position information on a position of a viewer;
a parallax barrier configured to form a first area and a second area, a transmittance of the first area being higher than a transmittance of the second area;
a plurality of light adjustment sets each including a plurality of light sources and a light adjustment layer; and
a display unit configured to display an image including a plurality of parallax images,
wherein
the light sources are disposed on a light source substrate and include a first light source and a second light source,
an optical axis of illumination light from each of the light sources is in a direction perpendicular to the light source substrate, the optical axis having highest brightness,
the light adjustment layer is configured to change a direction of an optical axis of illumination light irradiated from the first light source to a first bending direction having a first angle with the vertical direction and
a direction of an optical axis of illumination light irradiated from the second light source to a second bending direction having a second angle with the vertical direction, the first angle being different from the second angle, and
the parallax barrier is configured to change a position of the first area to:
a first position such that the optical axis of illumination light irradiated from the first light source passes through the first area; and
a second position such that the optical axis of illumination light irradiated from the second light source passes through the first area.

US Pat. No. 10,171,800

INPUT/OUTPUT DEVICE, INPUT/OUTPUT PROGRAM, AND INPUT/OUTPUT METHOD THAT PROVIDE VISUAL RECOGNITION OF OBJECT TO ADD A SENSE OF DISTANCE

MIRAMA SERVICE INC., New...

1. An input/output (I/O) device comprising:a display device that can generate a stereoscopic image in a virtual image display region;
a depth level sensor that measures a distance to an object in a three-dimensional space detection region; and
a control unit that comprises a memory and a processor and performs functions including:
a depth map processing function that sets an overlapping space region of the three-dimensional space detection region, the virtual image display region and an arm movement region in which both hands can move horizontally and vertically with joints of both shoulders being a center of rotation as a manipulation region, and sets a portion other than the manipulation region in the three-dimensional space detection region as a gesture region;
a gesture recognition function that recognizes a gesture of the hands in the gesture region; and
a graphics processing function that manipulates the stereoscopic image based on an output of the depth level sensor in the manipulation region or based on the gesture recognized in the gesture region; and
a calibration processing function that determines display position of the virtual image display region and adjusts automatically correlation of the three-dimensional space detection region and the virtual image display region.

US Pat. No. 10,171,799

PARALLAX IMAGE DISPLAY DEVICE, PARALLAX IMAGE GENERATION METHOD, PARALLAX IMAGE PRINT

FUJIFILM Corporation, To...

1. A parallax image display device comprising a computer, wherein the computer comprises:an image acquiring processor adapted to acquire a right-eye image and a left-eye image used for generating a parallax image enabling a stereoscopic view;
an information volume distribution calculating processor adapted to calculate an information volume distribution of the right-eye image and an information volume distribution of the left-eye image;
a parallax image generating processor adapted to generate the parallax image from the right-eye image and the left-eye image on the basis of the information volume distribution of the right-eye image and the information volume distribution of the left-eye image; and
a first parallax image display adapted to have a parallax image display area in which square reference regions are arranged in a grid pattern,
wherein the parallax image generating processor compares an information volume of the right-eye image and an information volume of the left-eye image in each of the reference regions on the basis of the information volume distribution of the right-eye image and the information volume distribution of the left-eye image, allocates one selected from a group consisting of only a right-eye region for displaying the right-eye image, only a left-eye region for displaying the left-eye image, and both of the right-eye region and the left-eye region in each of the reference regions on the basis of a size of information volumes of the right-eye image and the left-eye image, and displays the right-eye image and the left-eye image in the right-eye region and the left-eye region, respectively, to generate the parallax image,
wherein the information volume of each of the reference regions is at least one of an amount of harmonic signal components of the right-eye image or the left-eye image corresponding to each of the reference regions, a value of a maximum frequency of the right-eye image or the left-eye image corresponding to each of the reference regions, a variance value of a brightness distribution of the right-eye image or the left-eye image corresponding to each of the reference regions, and a difference in pixel value between the right-eye image and the left-eye image corresponding to each of the reference regions,
wherein the parallax image generating processor comprises an information volume comparing processor, a reference region allocating processor, an image reflecting processor, and a brightness regulating processor,
wherein the information volume comparing processor is adapted to compare an information volume of the right-eye image and an information volume of the left-eye image, both corresponding to a same one of the reference regions, in each of the reference regions on the basis of the information volume distribution of the right-eye image and the information volume distribution of the left-eye image, and output a comparison result,
wherein the reference region allocating processor is adapted to change region areas of the right-eye region and the left-eye region depending on size of the information volume and allocate the right-eye region and the left-eye region to different regions in each of the reference regions on the basis of the comparison result,
wherein the image reflecting processor is adapted to generate the parallax image by reflecting the right-eye image and the left-eye image in the right-eye region and the left-eye region, respectively, and
wherein the brightness regulating processor is adapted to increase brightness of the area of the right-eye region or the left-eye region allocated in each of the reference regions as the allocated area is smaller, and decrease brightness of the allocated area as the allocated area is larger.
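The information-volume comparison that drives region allocation might look like the following sketch, using the variance of a region's pixel values (one of the information-volume options the claim lists) and a hypothetical comparison margin:

```python
# Minimal sketch of the per-region allocation in claim 1 of
# US 10,171,799: compare the information volume of each eye's image in
# a reference region and allocate the region accordingly.

def info_volume(region_pixels):
    """Variance of the region's brightness values, standing in for the
    information volume."""
    mean = sum(region_pixels) / len(region_pixels)
    return sum((p - mean) ** 2 for p in region_pixels) / len(region_pixels)

def allocate_region(right_pixels, left_pixels, margin=1e-6):
    r, l = info_volume(right_pixels), info_volume(left_pixels)
    if r - l > margin:
        return "right_eye_region"
    if l - r > margin:
        return "left_eye_region"
    return "both"
```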

US Pat. No. 10,171,798

LIQUID CRYSTAL LENTICULAR LENS ELEMENT, DRIVING METHOD THEREFOR, STEREOSCOPIC DISPLAY DEVICE, AND TERMINAL DEVICE

NLT TECHNOLOGIES, LTD., ...

1. A liquid crystal lenticular lens element, comprising: a first substrate; a second substrate in parallel thereto; a liquid crystal layer provided between both substrates; a first electrode formed on the liquid crystal layer side of the first substrate; and second electrodes comprising a plurality of stripe-shaped electrodes formed on the liquid crystal layer side of the second substrate, wherein:a stripe-shaped repeating structure comprising repeating units placed along an arrangement direction of the second electrodes is formed; and
an asymmetric refractive index distribution based on a mirror operation with respect to a plane bisecting each of the repeating units between the second electrodes to another direction that is perpendicular to the arrangement direction is induced by an electric signal applied to each of the electrodes from outside.

US Pat. No. 10,171,797

SYSTEMS AND METHODS TO CONFIRM THAT AN AUTOSTEREOSCOPIC DISPLAY IS ACCURATELY AIMED

Elwha LLC, Bellevue, WA ...

1. An autostereoscopic display system comprising:a processing circuit configured to:
control an adjustable autostereoscopic display to selectively project images representing a left-eye view and a right-eye view of an image;
control an emitter to emit a tracer beam when at least one of the left-eye view and the right-eye view of the image are selectively not projected;
receive feedback data from a sensor configured to detect reflections of the tracer beam;
determine an impact site of the tracer beam on a viewer based on the feedback data; and
adjust a direction of the tracer beam based on the impact site to intercept a desired impact site of the viewer.

US Pat. No. 10,171,796

MOVING BODY SYSTEM

RICOH COMPANY, LTD., Tok...

1. A moving body system comprising:an imaging device attachable to a moving body including at least one wheel and a steering device which is attached to the moving body and is operated to control a movement direction of the moving body in accordance with steering operation of the steering device;
a visual line direction changing mechanism configured to change a visual line direction of the imaging device; and
a control unit configured to determine at least one steering operation information selected from a group consisting of a steering operation angle of the steering device attached to the moving body, a steering operation speed of the steering device attached to the moving body, and an angle of inclination of the wheel of the moving body, and detect, based on said at least one steering operation information determined by the control unit, changing of the movement direction of the moving body, and change the visual line direction of the imaging device in accordance with the changing of the movement direction of the moving body, detected based on said at least one steering operation information.

US Pat. No. 10,171,795

SYSTEM AND METHOD FOR GENERATING DISPARITY MAP BY MATCHING STEREO IMAGES

Hyundai Motor Company, S...

1. A system for generating a disparity map, the system comprising:an image obtainer obtaining a left image and a right image;
a matching cost calculator calculating a matching cost for each of a plurality of pixels of the left image and the right image;
an accumulation and summation calculator calculating an accumulation value of one of the pixels based on the calculated matching cost, and calculating a relaxation accumulation value, which is an average of values obtained by multiplying each of relation coefficients between a disparity value of the one pixel and disparity values of surrounding pixels of the one pixel with the accumulation value of the one pixel;
a disparity value deriver deriving a disparity value for each of the pixels based on the calculated relaxation accumulation value; and
a disparity map generator generating the disparity map based on the derived disparity value.
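The relaxation accumulation step can be sketched as below. The relation-coefficient function is not defined in the claim, so a simple inverse-distance weight over disparity differences is assumed:

```python
# Illustrative sketch of the relaxation accumulation in claim 1 of
# US 10,171,795: average the pixel's accumulated matching cost weighted
# by relation coefficients to its neighbours' disparities.

def relation_coeff(d, d_neighbour):
    """Assumed coefficient: 1 when disparities agree, decaying with the
    disparity difference."""
    return 1.0 / (1.0 + abs(d - d_neighbour))

def relaxation_accumulation(acc_value, d, neighbour_disparities):
    weighted = [relation_coeff(d, dn) * acc_value
                for dn in neighbour_disparities]
    return sum(weighted) / len(weighted)
```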

US Pat. No. 10,171,794

METHOD FOR SELECTING CAMERAS AND IMAGE DISTRIBUTION SYSTEM CAPABLE OF APPROPRIATELY SELECTING CAMERAS

PANASONIC INTELLECTUAL PR...

1. A method, comprising:obtaining, using sensors included in a first number of cameras capturing images of a same scene, positions and image capture angles of the cameras;
selecting, for display, a second number of the cameras capturing the images by using a processor based on the positions and the image capture angles of the cameras;
determining whether to switch at least one of the second number of the cameras to another camera in a frame after the selecting; and
selecting, when the determining determines that the at least one of the second number of the cameras is to be switched, a new camera for the at least one of the second number of the cameras based on the positions and the image capture angles of the first number of cameras,
wherein the first number is a natural number at least equal to 2,
the second number is a natural number less than the first number, and
in the determining:
when a time elapsed since a previous switching operation is shorter than a first time, the at least one of the second number of the cameras is determined not to be switched to another camera;
when the time elapsed since the previous switching operation is equal to or longer than the first time but shorter than a second time longer than the first time, whether to switch the at least one of the second number of the cameras to another camera is determined in accordance with a first criterion; and
when the time elapsed since the previous switching operation is equal to or longer than the second time, whether to switch the at least one of the second number of the cameras to another camera is determined in accordance with a second criterion, the at least one of the second number of the cameras being more likely to be switched to the another camera according to the second criterion than the first criterion.
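The time-gated switching logic in the determining step reduces to a small decision function; the two criteria are abstracted as callables, since the claim does not define them beyond the second being more permissive than the first:

```python
# A sketch of the elapsed-time switching gate in claim 1 of
# US 10,171,794. Real criteria would score candidate cameras by their
# positions and image capture angles.

def should_switch(elapsed, t1, t2, first_criterion, second_criterion):
    """Switching becomes more likely the longer it has been since the
    previous switch: never before t1, strict test before t2, permissive
    test afterwards."""
    if elapsed < t1:
        return False
    if elapsed < t2:
        return first_criterion()
    return second_criterion()
```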

US Pat. No. 10,171,793

IMAGE CAPTURING DEVICE AND METHOD THEREOF

TELEFONAKTIEBOLAGET LM ER...

1. An image capturing device comprising:an image sensor for capturing a first image of a scene;
a light source for illuminating the scene with a first flash of coded light during capturing of the first image by the image sensor;
a network interface for effecting wireless communications with one or more of a communications network and a further image capturing device,
the image sensor being operative to detect a second flash of coded light emitted by the further image capturing device, and
a processing unit being operative to:
encode information into the first flash during capturing of the first image by the image sensor, the information enabling retrieval of the first image from a first data storage,
capture the first image; and
store the first image in the first data storage;
decode information which is encoded into the second flash, the information enabling retrieval of a second image captured by the further image capturing device from a second data storage;
retrieve the second image from the second data storage using the decoded information from the second flash; and
create a 3D model from the first image and the second image, wherein the 3D model is only created if a time interval between capturing the first image and capturing the second image is below a threshold time interval.
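The time-interval gate on 3D model creation can be sketched as a small function; the model builder is stubbed out and the threshold is a free parameter:

```python
# Hedged sketch of the pairing gate in claim 1 of US 10,171,793: a 3D
# model is built from the two captures only when they were taken close
# enough in time for a consistent reconstruction.

def try_create_3d_model(t_first, t_second, first_img, second_img,
                        threshold, build=lambda a, b: (a, b)):
    """Return a model built from the two images, or None when the
    captures are too far apart in time."""
    if abs(t_first - t_second) < threshold:
        return build(first_img, second_img)
    return None
```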

US Pat. No. 10,171,792

DEVICE AND METHOD FOR THREE-DIMENSIONAL VIDEO COMMUNICATION

The University of Akron, ...

1. A three-dimensional (3D) communication device for communication with one or more other communication devices, comprising:a housing having an elongated channel disposed therein, said channel extending along a longitudinal axis;
a processor;
a display in communication with said processor;
a stereoscopic camera in communication with said processor, said stereoscopic camera configured to capture content in three-dimensions (3D), said stereoscopic camera having a first camera element and a second camera element moveably carried in said channel, wherein said first and second camera elements are coaxial with said longitudinal axis; and
a network communication device coupled to said processor to transmit said captured 3D content to the one or more other communication devices and to receive 3D content that is transmitted from the one or more other communication devices for presentation on said display in three-dimensions;
wherein said processor controls movement of at least one of said first and second camera elements to adjust a distance between said first and second camera elements, such that said distance is based on a position of said stereoscopic camera relative to the content.

US Pat. No. 10,171,791

METHODS AND APPARATUS FOR CONDITIONAL DISPLAY OF A STEREOSCOPIC IMAGE PAIR

QUALCOMM Incorporated, S...

1. A method of displaying data on an electronic display, comprising:determining, via an electronic hardware processor, a vertical disparity between a first digital image and a second digital image representing left and right perspectives of a scene respectively, wherein a horizontal disparity represents a horizontal offset between the left and right perspectives; and
correcting, via the electronic hardware processor, the vertical disparity between the first image and the second image by generating a corrected image;
displaying, on an electronic display, by the electronic hardware processor, the stereoscopic image pair in response to the corrected vertical disparity being below a first threshold; and
displaying, on the electronic display, by the electronic hardware processor, a two dimensional image in response to the corrected vertical disparity exceeding a second threshold.
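The two-threshold display decision can be stated directly; the threshold values are illustrative, and the claim leaves behavior between the two thresholds unspecified:

```python
# Minimal sketch of the display decision in claim 1 of US 10,171,791:
# after vertical-disparity correction, show the stereoscopic pair when
# the residual disparity is small, fall back to 2D when it is large.

def choose_display_mode(corrected_vertical_disparity,
                        t_stereo=2.0, t_2d=5.0):
    if corrected_vertical_disparity < t_stereo:
        return "stereoscopic_pair"
    if corrected_vertical_disparity > t_2d:
        return "two_dimensional"
    return "undefined"  # band between thresholds is left open by the claim
```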

US Pat. No. 10,171,790

DEPTH SENSOR, IMAGE CAPTURE METHOD, AND IMAGE PROCESSING SYSTEM USING DEPTH SENSOR

Samsung Electronics Co., ...

1. An image capture method performed by a depth sensor, the method comprising:emitting a first source signal having a first amplitude towards a scene, and thereafter, emitting a second source signal having a second amplitude different from the first amplitude towards the scene;
receiving, as a first reflected signal, a reflected portion of the first source signal;
receiving, as a second reflected signal, a reflected portion of the second source signal;
demodulating the first reflected signal with an N-times sampling operation to generate a first image;
demodulating the second reflected signal with another N-times sampling operation to generate a second image, wherein N is an integer greater than one; and
interpolating the first and second images to generate a final image, wherein:
the second amplitude is greater than the first amplitude,
the first source signal is used to capture a first point of the scene that is relatively close to the depth sensor, and
the second source signal is used to capture a second point of the scene that is relatively far from the depth sensor.
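One way to read the interpolation step is a per-pixel choice between the two captures by depth; this blend rule is an assumption, since the claim only says the first and second images are interpolated:

```python
# Hedged sketch of the two-amplitude capture in claim 1 of
# US 10,171,790: the low-amplitude image serves near points, the
# high-amplitude image serves far points, and the final image combines
# the two per pixel.

def interpolate_images(img_low_amp, img_high_amp, depths, near_far_split):
    final = []
    for lo, hi, d in zip(img_low_amp, img_high_amp, depths):
        # Near pixels from the low-amplitude capture, far pixels from
        # the high-amplitude capture (assumed hard split).
        final.append(lo if d <= near_far_split else hi)
    return final
```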

US Pat. No. 10,171,789

MULTI-SENSOR VIDEO FRAME SYNCHRONIZATION APPARATUS AND METHODS

Texas Instruments Incorpo...

1. A video controller, comprising:a start-of-frame monitor to monitor a time of receipt of a start-of-frame indication associated with a first image sensor and a start-of-frame indication associated with a second image sensor;
a frame delta calculator operationally coupled to the start-of-frame monitor to calculate a time difference between the time of receipt associated with the first image sensor and the time of receipt associated with the second image sensor; and
a frame period adjuster coupled to the frame delta calculator to alter a frame period determining parameter associated with at least one of the first image sensor or the second image sensor from an original value to an adjusted value in order to decrease the time difference if the time difference is greater than or equal to a frame synchronization threshold value and to reset the frame period determining parameter to equal values at the first and second image sensors if the time difference is less than the frame synchronization threshold value, the frame period adjuster being configured to cause a horizontal blanking period of the first or second image sensor to be increased or decreased in response to the altered frame period determining parameter to decrease the time difference.
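The frame-delta calculation and blanking adjustment can be sketched as follows; the time units, the one-step adjustment, and the representation of the reset case are assumptions:

```python
# Sketch of the synchronization logic in claim 1 of US 10,171,789:
# compare start-of-frame arrival times of two sensors and stretch or
# shrink one sensor's horizontal blanking to pull the difference
# toward zero.

def adjust_frame_period(t_sof_a, t_sof_b, blanking_a, threshold, step=1):
    delta = t_sof_a - t_sof_b
    if abs(delta) < threshold:
        return blanking_a, "reset"  # below threshold: reset to equal values
    # Sensor A leading (earlier SOF) -> lengthen its blanking;
    # sensor A lagging -> shorten it.
    if delta < 0:
        return blanking_a + step, "adjusted"
    return blanking_a - step, "adjusted"
```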

US Pat. No. 10,171,788

PLAYBACK METHOD ACCORDING TO FUNCTION OF PLAYBACK DEVICE

PANASONIC INTELLECTUAL PR...

1. A display device comprising:a first remapper that receives a video signal having a first luminance range, performs electro-optical transfer function (EOTF) conversion associated with the first luminance range on a code value represented by a luminance signal in the video signal to obtain a first luminance value, and converts the first luminance value obtained by the EOTF conversion into a second luminance value associated with a second luminance range different in maximum value from the first luminance range;
a second remapper that receives a graphics signal having the first luminance range and performs the EOTF conversion associated with the first luminance range on a code value represented by a luminance signal in the graphics signal to obtain a third luminance value, but does not perform conversion of the third luminance value obtained by the EOTF conversion;
a synthesizer that synthesizes the video signal having the second luminance value converted by the first remapper with the graphics signal having the third luminance value not converted by the second remapper; and
a display that displays a signal synthesized by the synthesizer.
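The two remapping paths in this claim can be sketched in a few lines. The gamma EOTF and the linear range scaling below are stand-in transfer functions (a real HDR pipeline would use a curve such as PQ), and the alpha blend standing in for the synthesizer is likewise an assumption.

```python
def eotf(code_value, gamma=2.4, peak_nits=10000.0):
    """Placeholder EOTF: normalized code value in [0, 1] -> absolute luminance."""
    return peak_nits * (code_value ** gamma)

def first_remapper(code_value, display_peak=1000.0, src_peak=10000.0):
    """Video path: EOTF conversion, then conversion into the display's range."""
    lum = eotf(code_value, peak_nits=src_peak)
    return lum * (display_peak / src_peak)

def second_remapper(code_value, src_peak=10000.0):
    """Graphics path: EOTF conversion only, no luminance-range conversion."""
    return eotf(code_value, peak_nits=src_peak)

def synthesize(video_lum, graphics_lum, alpha):
    """Simple alpha blend standing in for the claimed synthesizer."""
    return alpha * graphics_lum + (1.0 - alpha) * video_lum
```

The point of the structure is visible in the code: video luminance is rescaled to the display's range while graphics luminance is passed through unconverted, and only then are the two combined.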

US Pat. No. 10,171,787

REPRODUCTION DEVICE, REPRODUCTION METHOD, AND RECORDING MEDIUM FOR DISPLAYING GRAPHICS HAVING APPROPRIATE BRIGHTNESS

SONY CORPORATION, Tokyo ...

1. A reproduction device comprising:
a readout unit configured to read out coded data of an HEVC stream including an extended video that is a video having a first brightness range that is wider than a second brightness range, brightness characteristic information that represents a brightness characteristic of the extended video, and graphics data that is superimposed on the extended video and that has the second brightness range, from a recording medium that has recorded the coded data, the brightness characteristic information, and the graphics data;
a first decoding unit configured to decode the coded data;
a second decoding unit configured to decode the graphics data;
a first conversion unit configured to convert a first pixel value of the graphics, obtained by decoding, to a second pixel value in the brightness characteristic of the extended video represented by the brightness characteristic information, the second pixel value representing brightness that is equivalent to brightness represented by the first pixel value in a brightness characteristic of the graphics; and
a synthesis unit configured to synthesize the extended video, the synthesized extended video being obtained by decoding the coded data, together with the graphics having the second pixel value,
wherein the readout unit is further configured to read out brightness conversion definition information that is recorded in the recording medium and that is used when performing brightness conversion,
wherein the brightness characteristic information and the brightness conversion definition information are inserted as SEI of the HEVC stream including the coded data,
wherein the brightness conversion definition information comprises an indication of a tone map model set from among a plurality of tone map models in order to perform the brightness conversion, and
wherein the readout unit, the first decoding unit, the second decoding unit, the first conversion unit, and the synthesis unit are each implemented via at least one processor.

US Pat. No. 10,171,786

LENS SHADING MODULATION

Apple Inc., Cupertino, C...

8. A system, comprising:
an image capture device comprising a lens;
a memory operatively coupled to the image capture device and having, stored therein, computer program code; and
a programmable control device operatively coupled to the memory and comprising instructions stored thereon to cause the programmable control device to execute the computer program code to:
obtain a first image of a scene captured using the lens, wherein the first image comprises a first plurality of pixels;
determine a lens shading correction level based, at least in part, on a focal distance of the lens used to capture the first image; and
apply the determined lens shading correction level to the first image,
wherein the determined lens shading correction level encompasses both color shading gain and vignetting gain, and wherein the lens shading correction level is a function of normalized diagonal field position from a center of the lens used to capture the first image.
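A minimal sketch of a correction level that is a function of both focal distance and normalized diagonal field position follows. The quadratic falloff model and the focal-distance interpolation weight are assumptions for illustration; the claim states only that the correction depends on these two inputs.

```python
def shading_gain(r_norm, focal_distance_m,
                 max_gain_macro=2.0, max_gain_inf=1.5):
    """Gain applied at normalized diagonal field position r_norm in [0, 1].

    Shading grows toward the corners (r_norm -> 1); the maximum corner gain
    is interpolated between a close-focus and an infinity-focus calibration,
    since lens shading typically changes with focal distance.
    """
    t = min(focal_distance_m / 10.0, 1.0)        # crude focus-distance weight
    max_gain = (1.0 - t) * max_gain_macro + t * max_gain_inf
    return 1.0 + (max_gain - 1.0) * r_norm ** 2  # unity gain at the lens center
```

Applying `shading_gain` per pixel (with `r_norm` computed from the pixel's distance to the image center) yields a single multiplicative map that covers both vignetting and color-shading correction, as the claim describes.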

US Pat. No. 10,171,785

COLOR BALANCING BASED ON REFERENCE POINTS

Disney Enterprises, Inc.,...

1. A computer-implemented method of adjusting coloration, the computer-implemented method comprising:
receiving a selection of one or more source reference points within a source image depicting a source lighting condition;
receiving a selection of one or more target reference points within a target image depicting a target lighting condition distinct from the source lighting condition;
determining a coloration difference between a coloration of the one or more source reference points within the source image and a coloration of the one or more target reference points within the target image; and
normalizing the distinct, depicted source and target lighting conditions by adjusting, by operation of one or more computer processors, the coloration of at least a portion of the source image based on the determined coloration difference and to correspond more closely to the coloration of the target image, whereafter the source image is output.
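The determination-and-adjustment steps can be sketched with per-channel mean offsets; this offset model is an assumed stand-in for whatever coloration model the patented method actually uses, and all function names are illustrative.

```python
def coloration_difference(source_refs, target_refs):
    """Per-channel mean difference between target and source reference points.

    Each reference point is an (R, G, B) tuple sampled from its image.
    """
    n_s, n_t = len(source_refs), len(target_refs)
    src_mean = [sum(p[c] for p in source_refs) / n_s for c in range(3)]
    tgt_mean = [sum(p[c] for p in target_refs) / n_t for c in range(3)]
    return [t - s for s, t in zip(src_mean, tgt_mean)]

def apply_difference(pixels, diff):
    """Shift source pixels toward the target coloration, clamped to 8-bit."""
    return [tuple(int(max(0, min(255, ch + d))) for ch, d in zip(p, diff))
            for p in pixels]
```

In this reading, the source lighting condition is "normalized" toward the target's by adding the reference-point difference to every (or a selected portion of) source pixel.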

US Pat. No. 10,171,784

SOLID STATE IMAGING DEVICE AND IMAGING APPARATUS HAVING A PLURALITY OF ADDITION READ MODES

NIKON CORPORATION, Tokyo...

1. An imaging device, comprising:
a pixel section including (i) a plurality of first pixels that are each configured to output a first signal generated by light from a first filter having a first spectral characteristic, and (ii) a plurality of second pixels that are each configured to output a second signal generated by light from a second filter having a second spectral characteristic different from the first spectral characteristic, the plurality of first pixels and the plurality of second pixels being alternately arranged in a first direction;
a scanning circuit configured to read the first and second signals, respectively, from the respective plurality of first and second pixels that are arranged in the pixel section;
an outputting circuit including (i) a first outputting circuit that is configured to output a first addition signal generated by adding a plurality of the first signals read from the plurality of first pixels, and (ii) a second outputting circuit configured to output a second addition signal generated by adding a plurality of the second signals read from the plurality of second pixels, the pixel section arranged between the first outputting circuit and the second outputting circuit in a second direction crossing the first direction; and
a controlling circuit configured to control the outputting circuit to shift, in the first direction, a pixel position corresponding to:
(1) a sub-set of the plurality of the first signals to be added by the first outputting circuit among the plurality of the first signals read by the scanning circuit from the plurality of the first pixels; and
(2) a sub-set of the plurality of the second signals to be added by the second outputting circuit among the plurality of the second signals read by the scanning circuit from the plurality of the second pixels.

US Pat. No. 10,171,783

RGB SIGNAL TO RGBY SIGNAL IMAGE CONVERTING SYSTEM AND METHOD

Shenzhen China Star Optoe...

2. An RGB signal to RGBY signal image converting method for an RGBY image display apparatus having multiple pixel units, each of which consists of a red sub-pixel unit, a green sub-pixel unit and a blue sub-pixel unit, the method comprising steps of:
receiving RGB input signals Ri, Gi and Bi from a signal transmitting connector connected to the RGBY image display apparatus;
determining whether a color of the RGB input signals is yellow, wherein if the color of the RGB input signals is yellow, further comprises determining a numerical magnitude relationship between the Ri input signal and Gi input signal; and calculating the RGBY output signals Ro, Go, Bo and Yo according to a determining result; and
calculating and outputting RGBY output signals Ro, Go, Bo and Yo used to control gray scale values of the red, green and blue sub-pixel units for the corresponding pixel unit, when the color of the RGB input signals is not yellow, wherein Yo=0, Ro=Ri, Go=Gi and Bo=Bi.
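A sketch of this conversion: the non-yellow branch (Yo=0, pass-through) is taken directly from the claim, while the yellow branch (extracting Yo = min(Ri, Gi) and subtracting it from R and G) and the yellow test itself are assumed common schemes, since the claim only says the yellow-case output depends on the relationship between Ri and Gi.

```python
def is_yellow(ri, gi, bi, thresh=32):
    """Assumed yellow test: strong red and green relative to blue."""
    return ri > bi + thresh and gi > bi + thresh

def rgb_to_rgby(ri, gi, bi):
    """Convert an RGB input signal to an RGBY output signal."""
    if not is_yellow(ri, gi, bi):
        return ri, gi, bi, 0          # per the claim: Yo=0, Ro=Ri, Go=Gi, Bo=Bi
    yo = min(ri, gi)                  # assumption: the shared R/G component drives Y
    return ri - yo, gi - yo, bi, yo   # remainder stays on the R and G sub-pixels
```

The `min(Ri, Gi)` choice is where the claim's Ri-versus-Gi comparison enters: whichever channel is smaller limits how much luminance can be moved onto the yellow sub-pixel.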

US Pat. No. 10,171,782

IMAGE SENSOR AND METHOD OF GENERATING RESTORATION IMAGE

SAMSUNG ELECTRONICS CO., ...

1. An image sensor comprising:
a plurality of non-color pixel sensors each configured to sense a non-color signal; and
a color pixel sensing region including at least one color pixel sensor configured to sense a color signal,
wherein the color pixel sensing region has an area physically greater than an area of each of the non-color pixel sensors,
wherein the color pixel sensing region is encompassed by the non-color pixel sensors, and
wherein at least two non-color pixel sensors are placed between color pixel sensing regions.

US Pat. No. 10,171,781

PROJECTION APPARATUS, METHOD FOR CONTROLLING THE SAME, AND PROJECTION SYSTEM

CANON KABUSHIKI KAISHA, ...

1. A projection apparatus comprising:
a processor; and
a memory having stored thereon instructions that when executed by the processor, cause the processor to:
output an image signal in which a predetermined calibration pattern is synthesized in an input image;
control transmittance or reflectance of a display unit and form an image based on the output image signal;
project an image formed on the display unit by irradiating the display unit with light;
capture the projected image by an image sensor which employs a rolling shutter system in which charge accumulation is performed for each row;
extract the calibration pattern from the captured image and generate a correction parameter for correcting an image to be formed on the display unit depending on a condition of the extracted calibration pattern; and
control image capturing timing so that images before and after update of the display unit are not mixed in the captured image,
wherein controlling the image capturing timing is executed by
recognizing update timing of a predetermined line of the display unit on a basis of a synchronizing signal, and by
starting charge accumulation for a predetermined row of the image sensor during an update period of the display unit.

US Pat. No. 10,171,780

LIGHTING APPARATUS

MAXELL, LTD., Kyoto (JP)...

1. A lighting apparatus comprising:
an illuminator configured to emit illumination light;
a projector configured to emit image-projecting light for projecting an image; and
a sensor configured to emit operation-detecting emission light used for operation detection, and to detect an operation by an operation object in a range including an image projection area of the projector,
wherein the illumination light, the image-projecting light, and the operation-detecting emission light have respective different wavelength distribution characteristics,
regarding a light amount in a wavelength range of light used by the sensor for the operation detection, a light amount of the operation-detecting emission light is the largest among those of the illumination light, the image-projecting light, and the operation-detecting emission light,
the projector has an optical filter configured to cut off or to reduce a wavelength in a non-visible light range, the optical filter being disposed at any position on such an optical path in which light from a light source becomes the image-projecting light,
the illuminator has an optical filter configured to cut off or to reduce a wavelength in the non-visible light range before light emitted from an illumination light source becomes the illumination light, and
the lighting apparatus further comprises:
a controller configured to set a virtual switch area in an illumination area of the operation-detecting emission light and at a position outside an image projection area of the projector, and to control execution of a given process when the sensor detects an operation by the operation object with respect to the virtual switch area; and
the controller is further configured to control, in setting the position of the virtual switch area, a display for a setting guide expression for a user in the image projection area of the projector so that the position of the virtual switch area can be set at a position intended by the user outside the image projection area of the projector.

US Pat. No. 10,171,779

OPTIMIZING DRIVE SCHEMES FOR MULTIPLE PROJECTOR SYSTEMS

MTT Innovation Incorporat...

1. An image projection system, comprising:
an image storage mechanism that stores one or more images to be presented in sequence on a projection screen;
a high dynamic range projector including a light source that only directly projects light to a first imaging element that is configured to modulate the phase of light from the light source for producing images on a projection screen with an average brightness over an entire area of the projection screen and images with a higher than average peak brightness over less than the entire area of the projection screen;
a low dynamic range projector including a light source that only projects light to a second imaging element that is different than the first imaging element that is configured to modulate the intensity of light from the first imaging element and the intensity of light from the light source in the low dynamic range projector, wherein the low dynamic range projector has a smaller dynamic range than the high dynamic range projector; and
control hardware that is configured to analyze data for each image to be projected onto the projection screen on a frame by frame basis to selectively control the low dynamic range projector to supply an approximately uniform amount of light onto the second imaging element such that some images are projected onto the projection screen using only the high dynamic range projector and some images requiring a higher than average brightness level over the entire area of the projection screen than is available from the high dynamic range projector but with a reduced dynamic range are projected on the screen using both the high dynamic range projector and the low dynamic range projector.

US Pat. No. 10,171,777

STREAMING AND STORING VIDEO CONTENT CAPTURED BY AN AUDIO/VIDEO RECORDING AND COMMUNICATION DEVICE

Amazon Technologies, Inc....

1. A method for transmitting and storing video images captured by an audio/video (A/V) recording and communication device, the A/V recording and communication device including a camera and a local storage device, the A/V recording and communication device being connected to a network, the method comprising:
the A/V recording and communication device detecting a person at the A/V recording and communication device;
the camera of the A/V recording and communication device capturing video images from within a field of view of the camera at the A/V recording and communication device;
initiating a call to a client device via the network;
transmitting the video images in a plurality of data packets to the client device via the network;
receiving at least one negative-acknowledgement (NACK) indicating that at least one of the data packets was lost in transmission;
retransmitting the lost data packets to the network;
receiving a message with a list of data packets that were lost in retransmission;
storing copies of the data packets on the list at the local storage device of the A/V recording and communication device;
receiving a notification that the call with the client device has terminated; and
after receiving the notification that the call with the client device has terminated, retrieving the data packets stored at the local storage device of the A/V recording and communication device and retransmitting the retrieved data packets to the network.
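The sender-side flow in this claim can be sketched as a small state machine. The class and callback names are hypothetical, and an in-memory dict stands in for the device's local storage.

```python
class AVStreamSender:
    """Illustrative sender implementing the claimed store-and-forward flow."""

    def __init__(self, send_fn):
        self.send = send_fn          # callable that pushes (seq, packet) to the network
        self.sent = {}               # seq -> packet, retained while the call is live
        self.local_storage = {}      # stands in for the device's local storage

    def transmit(self, seq, packet):
        self.sent[seq] = packet
        self.send(seq, packet)

    def on_nack(self, lost_seqs):
        # First-chance recovery: retransmit packets reported lost in transit.
        for seq in lost_seqs:
            self.send(seq, self.sent[seq])

    def on_retransmission_loss_list(self, seqs):
        # Packets lost even in retransmission are copied to local storage.
        for seq in seqs:
            self.local_storage[seq] = self.sent[seq]

    def on_call_terminated(self):
        # After the call ends, replay everything held in local storage.
        for seq, packet in sorted(self.local_storage.items()):
            self.send(seq, packet)
        self.local_storage.clear()
```

The design point the claim captures is the two-tier recovery: NACKed packets are retried immediately during the call, and only the doubly-lost ones are parked in local storage for upload after the call terminates.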

US Pat. No. 10,171,776

SYSTEMS AND METHODS FOR VIDEO MONITORING USING LINKED DEVICES

Verint Systems Ltd., (IL...

1. A video monitoring system for a predetermined area, comprising:
a map representing the predetermined area, on which icons are placed and configured, the icons representing at least two video monitoring devices comprising a first video monitoring device and a second video monitoring device, the at least two video monitoring devices deployed in the predetermined area;
a first scene of a plurality of scenes, the first scene specifying a relationship between the first video monitoring device and the second video monitoring device, the first video monitoring device being a main video monitoring device for the first scene, wherein the second video monitoring device is logically linked to the main video monitoring device;
a recorder, the recorder configured to record video feed data received from the at least two logically linked video monitoring devices to stable storage; and
a video manager that receives the map, the video feed data, and the first scene, displays the map, the video feed data, and a navigation display based on the first scene, and accepts input from a user to track an entity through the predetermined area,
wherein the first scene comprises linked video data from a plurality of video monitoring devices, and
wherein the video manager is configured to generate graphical representations of a plurality of connections, at least one connection extending between the first scene and the video feed data from the logically linked second video monitoring device, wherein each connection of the at least one connections is indicative of an exit point from the range of view of the first scene that results in entry of a range of view in the video feed data from the logically linked second video monitoring device.

US Pat. No. 10,171,775

AUTONOMOUS VEHICLE VISION SYSTEM

VECNA TECHNOLOGIES, INC.,...

1. A method of autonomously operating an autonomous vehicle, comprising:
providing a vision system having at least two cameras in operable communication with the autonomous vehicle for providing substantially similar views relative to the vehicle, the at least two cameras receiving information relating to the views and the views at least being substantially in the direction of travel of the vehicle, the two cameras alone at least capable of determining distances between the autonomous vehicle and any objects in the autonomous vehicle's path;
providing a stand-alone laser in operable communication with the vehicle only for selectively shining a single discrete mark on at least a portion of the views provided to the at least two cameras, wherein a shape of the single discrete mark is determined from a template stored in a memory of the autonomous vehicle;
determining whether the information received by the at least two cameras is at least ambiguous regarding the views and therefore capable of more than one interpretation with regard to the direction of travel of the vehicle;
in response to determining that the information received by the at least two cameras is ambiguous, activating the laser to project the single discrete mark into the views provided by the at least two cameras, wherein the single discrete mark is projected in the shape determined from the template stored in the memory of the autonomous vehicle;
detecting the projected single discrete mark within a corresponding portion of each of the views provided by the at least two cameras, the detecting based on analyzing objects contained in each of the views against expected shape information of the single discrete mark contained within the template;
in response to detecting the projected single discrete mark within each of the views provided by the at least two cameras, deactivating the laser; and
calculating a distance to an object upon which the projected single discrete mark impinges, the calculating based on image data contained within the corresponding portion in which the projected single discrete mark was detected in each of the views provided by the at least two cameras, in order to thereby resolve the ambiguity.

US Pat. No. 10,171,774

CAMERA CONTROL DEVICE, CAMERA CONTROL METHOD, AND CAMERA CONTROL SYSTEM

PANASONIC INTELLECTUAL PR...

1. A camera control device for controlling cameras comprising:
an entering prediction value calculator configured to calculate a first entering prediction value representing a possibility of a user terminal entering into a monitoring range of a first camera and a second entering prediction value representing a possibility of the user terminal entering a monitoring range of a second camera, based on movement history information about the user terminal with respect to a position of the first camera and with respect to a position of the second camera, the monitoring range of the first camera being a range in which an image of the user terminal is captured by the first camera and the monitoring range of the second camera being a range in which the image of the user terminal is captured by the second camera, the movement history information including fourth information as long-term movement history information about the user terminal, the entering prediction value calculator calculates the first entering prediction value larger than the second entering prediction value when the fourth information indicates a total time of stay of the user terminal in the monitoring range of the first camera for a first predetermined time is longer than the total of the time of stay of the user terminal in the monitoring range of the second camera;
an entering prediction time calculator configured to calculate a first entering prediction time and a second entering prediction time, the first entering prediction time being a prediction time necessary for the user terminal to enter into the monitoring range of the first camera based on first information representing a current position of the user terminal and second information representing the position of the first and the second cameras, the second entering prediction time being a prediction time necessary for the user terminal to enter into the monitoring range of the second camera based on the first information and the second information;
a preparation time calculator configured to calculate a first preparation time necessary for running a first application on the first camera and a second preparation time necessary for running a second application on the second camera; and
a determination unit configured to determine whether preparation for running the first application on the first camera is started based on the first entering prediction value, the first entering prediction time and the first preparation time and whether preparation for running the second application on the second camera is started based on the second entering prediction value, the second entering prediction time and the second preparation time.

US Pat. No. 10,171,773

DYNAMIC VIDEO IMAGE MANAGEMENT

International Business Ma...

1. A computer system for dynamic video image management, the computer system comprising a computer readable memory, a processing unit communicatively coupled to the computer readable memory, a computer readable storage medium, and program instructions stored on the computer readable storage medium for execution by the processing unit via the computer readable memory, the program instructions comprising:
program instructions to collect, with respect to a dynamic video image, a set of dynamic image quality factors;
program instructions to determine, based on the set of dynamic image quality factors, a set of display parameter values of a set of display parameters for a set of computing assets to benefit the set of dynamic image quality factors with respect to the dynamic video image;
program instructions to configure, using the set of display parameter values, the set of computing assets to benefit the set of dynamic image quality factors with respect to the dynamic video image;
program instructions to maintain, to configure the set of computing assets without changing a video camera configuration, the video camera configuration;
program instructions to structure the set of computing assets to include a set of secondary computing assets;
program instructions to maintain, to configure the set of computing assets without changing a set of active display parameter values of a set of active display parameters for a set of active computing assets, the set of active display parameter values of the set of active display parameters for the set of active computing assets;
program instructions to disable, for a threshold temporal period, a modification to the set of active display parameter values of the set of active display parameters for the set of active computing assets;
program instructions to structure the set of secondary computing assets to include a plurality of computing devices which run a plurality of separate operating systems which have a plurality of different applications which include a plurality of separate application windows for presentation on a plurality of different physical display screens, wherein the set of display parameter values is for the plurality of separate application windows; and
program instructions to configure the set of secondary computing assets in a gradual fashion to manage the dynamic video image based on a set of incremental changes to the set of display parameter values.

US Pat. No. 10,171,771

CAMERA SYSTEM FOR VIDEO CONFERENCE ENDPOINTS

Cisco Technology, Inc., ...

1. An apparatus comprising:
a wide lens camera fixedly positioned within a camera housing to provide an overall view of a space;
a first long focus lens camera fixedly positioned within the camera housing at a first angle with respect to the wide lens camera so that the first long focus lens camera provides a view of a first portion of the space;
a second long focus lens camera that is fixedly positioned within the camera housing at a second angle with respect to the wide lens camera and rotated, about a first vertical axis extending through the second long focus lens camera, towards the first long focus lens camera so that the second long focus lens camera provides a view of a second portion of the space; and
a third long focus lens camera that is fixedly positioned within the camera housing at a third angle with respect to the wide lens camera and rotated, about a second vertical axis extending through the third long focus lens camera, towards the first long focus lens camera so that the third long focus lens camera provides a view of a third portion of the space.

US Pat. No. 10,171,770

IMAGE PLAYBACK DEVICE, DISPLAY DEVICE, AND TRANSMISSION DEVICE

Maxell, Ltd., Kyoto (JP)...

1. A video playback apparatus comprising:
a transmission apparatus configured to transmit video data, and
a display apparatus configured to display video based on the video data from the transmission apparatus,
wherein the transmission apparatus includes a first processor programmed to:
receive an encoded data stream,
generate decoded video data in a first format by decoding the encoded data stream,
transmit, to the display apparatus, available interpolation data information indicating a plurality of kinds of interpolation data based on the decoded video data in the first format,
receive, from the display apparatus, selection information indicating a selected kind of interpolation data,
generate video data in a second format and interpolation data from the decoded video data in the first format on the basis of the selection information, and
transmit, to the display apparatus, the generated video data in the second format and the generated interpolation data,
wherein the display apparatus includes a second processor programmed to:
select one of the kinds of interpolation data according to a predetermined priority order and based on the received available interpolation data information, and
transmit the selection information indicating the selected kind of interpolation data,
receive, from the transmission apparatus, the generated video data in the second format and the generated interpolation data, and
display the video resulting from interpolation of the received video data based on the received interpolation data for interpolating differences between the second format and the first format, and
wherein the first processor is further programmed to:
generate interpolation data list information indicating a list of the kinds of interpolation data capable of being generated by the transmission apparatus based on the encoded data stream as the available interpolation data information.

US Pat. No. 10,171,769

SOUND SOURCE SELECTION FOR AURAL INTEREST

International Business Ma...

1. A method comprising:
modifying a video recording by adding to the video recording a viewer-selectable region of a video display plane corresponding to a sub-set of pixels within a set of pixels displayed during playback of the video recording, the viewer-selectable region corresponding to a first sound source recorded by at least one microphone of a plurality of microphones from a three-dimensional scene; and
adjusting an audio signal played by the modified video recording based, at least in part, upon selection of the viewer-selectable region during playback of the modified video recording;
wherein:
the at least one microphone records audio from the first sound source on an audio channel that is distinct from the audio channels of other microphones of the plurality of microphones; and
selection of the viewer-selectable region plays an audio recording made by the at least one microphone corresponding to the first sound source.

US Pat. No. 10,171,768

CURVE PROFILE CONTROL FOR A FLEXIBLE DISPLAY

INTERNATIONAL BUSINESS MA...

1. A method comprising:
tracking curve profiles applied to one or more flexible displays by one or more users in association with presentation of different digital media on the one or more flexible displays;
building predefined rules based on the tracking, the predefined rules defining preferred curve profiles based on curves applied to the one or more flexible displays in presenting the different digital media and comprising mappings between individual characteristics of different digital media and the preferred curve profiles;
storing the predefined rules as candidates for selection to apply in association with presentation of other digital media;
obtaining a first digital media to be presented on a flexible display;
automatically determining a curve profile to apply to the flexible display in association with presentation of the first digital media on the flexible display, the automatically determining being based at least in part on an analysis of the first digital media to be presented, wherein the automatically determining the curve profile comprises:
comparing identified characteristics of the first digital media to at least one mapping provided by the stored predefined rules;
identifying a predefined rule, of the stored predefined rules and based on the comparing, having one or more mappings of digital media characteristics that correspond to the identified characteristics of the first digital media, the digital media characteristics being those shared with second digital media, different from the first digital media, the second digital media being at least a subset of the different digital media presented on the one or more flexible displays; and
selecting the preferred curve profile of the identified predefined rule, wherein the automatically determined curve profile to apply is the selected preferred curve profile or is determined based on the selected preferred curve profile; and
applying the automatically determined curve profile to the flexible display in association with the presentation of the first digital media on the flexible display.

US Pat. No. 10,171,767

IMAGE READER COMPRISING CMOS BASED IMAGE SENSOR ARRAY

HAND HELD PRODUCTS, INC.,...

1. A method for capturing and decoding at least a two dimensional bar code in image data captured by an image reader, the image reader comprising an image sensor array comprising a plurality of pixels in a two-dimensional array, and the image reader further comprising at least one illumination light source, the method comprising:
exposing all or substantially all of the pixels in the image sensor array in a global shutter mode, wherein exposing the all or substantially all of the pixels in the global shutter mode comprises exposing the all or substantially all of the pixels in response to an exposure control timing pulse; and
illuminating at least a portion of the bar code in response to an illumination control timing pulse;
wherein the exposure control timing pulse and the illumination control timing pulse are interdependent.
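The claim leaves "interdependent" open; one common reading is that the illumination pulse must fall inside the global-shutter exposure window so the bar code is lit while every pixel integrates. A minimal sketch of that check, where the containment rule and all names are illustrative assumptions, not the patent's definition:

```python
def pulses_interdependent(exposure, illumination):
    """Check one plausible form of the claimed interdependence: the
    illumination control pulse (start, end) lies entirely within the
    exposure control pulse, so the bar code is illuminated while all
    pixels are integrating."""
    e_start, e_end = exposure
    i_start, i_end = illumination
    return e_start <= i_start and i_end <= e_end
```

For example, with a 10 ms exposure window, an illumination pulse from 2 ms to 8 ms satisfies the check, while one running from 9 ms to 12 ms does not.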

US Pat. No. 10,171,766

IMAGING DEVICE WITH REDUCED DELAY IN DISPLAY

Seiko Epson Corporation, ...

1. An imaging device comprising:
a controller including a circuit;
an image sensor that performs imaging operations at intervals of a predetermined sensor cycle;
an image data generator that generates image data based on output data from the image sensor; and
a display that displays an image represented by the image data within a second display scanning period whose length is shorter than a first display scanning period corresponding to a display cycle that is N times the sensor cycle (N being an integer larger than or equal to “2”) by a margin period which is variable.
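The timing relation this claim recites is simple arithmetic: the first display scanning period is the display cycle (N times the sensor cycle, N >= 2), and the second scanning period shortens it by a variable margin. A sketch, where the millisecond units and the concrete example values are assumptions for illustration:

```python
def display_scanning_periods(sensor_cycle_ms, n, margin_ms):
    """Compute the first display scanning period (the display cycle,
    N times the sensor cycle with N >= 2) and the second, shorter
    scanning period obtained by subtracting the variable margin."""
    if n < 2:
        raise ValueError("N must be an integer >= 2")
    first = n * sensor_cycle_ms
    if not 0.0 < margin_ms < first:
        raise ValueError("margin must leave a positive scanning period")
    return first, first - margin_ms
```

With a roughly 60 Hz sensor (16.7 ms cycle), N = 2, and a 1 ms margin, the display scans in 32.4 ms instead of the full 33.4 ms cycle, which is the delay reduction the title refers to.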

US Pat. No. 10,171,765

BIT LINE BOOST FOR FAST SETTLING WITH CURRENT SOURCE OF ADJUSTABLE SIZE

OmniVision Technologies, ...

16. A method of fast settling an output line circuit, comprising:
maintaining a high potential to a row select (RS) enable to switch on a row select (RS) transistor;
maintaining a cascode control voltage (VCN) to bias a first cascode transistor, wherein the cascode control voltage (VCN) is a positive potential to ensure normal operation of the first cascode transistor;
maintaining a bias control voltage (VBN) to bias a first bias transistor and a second bias transistor, wherein the bias control voltage (VBN) is a positive potential to ensure normal operation of the first bias transistor and the second bias transistor;
maintaining a low potential to a first boost enable signal to open a first boost enable switch;
resetting a floating diffusion (FD) to a reset FD voltage (VRFD) by setting a reset (RST) gate to high to switch on a reset (RST) transistor;
disconnecting the FD from the reset FD voltage (VRFD) by setting the RST gate to low to switch off the RST transistor;
boosting one of a first RST surge current and a second RST surge current to sink a bitline;
reading background charges on the FD, wherein the SF converts a background voltage from its gate terminal and provides an amplified background signal to the bitline on the SF source terminal when enabled by the closed RS transistor;
transferring charges from a TX receiving terminal to a floating diffusion (FD) by setting a transfer (TX) gate to high to switch on a transfer (TX) transistor;
discontinuing the charge transferring to the FD by setting the TX gate to low to switch off the TX transistor;
boosting one of a first TX surge current and a second TX surge current to sink a bitline; and
reading the image charges on the FD, wherein the SF converts an image signal from its gate terminal and provides an amplified image signal to the bitline on the SF source terminal when enabled by the closed RS transistor.

US Pat. No. 10,171,764

APPARATUS, SYSTEM AND METHOD FOR A MANUFACTURED IMAGER SYSTEM

Jabil Inc., St. Petersbu...

1. A method of manufacturing an in-process modifiable imager system, comprising:
fixing the imager system relative to a focal target;
activating an imager in the imager system;
assessing a baseline optical signature at least partially dictated by a prior process step parameter of the activated imager, based on a first optical response of the aspects to the focal target;
responsive to the baseline optical signature, computing via at least one computing processor applying non-transitory computing code of at least one first parameter for material, the first parameter being selected from the group consisting of height, thickness and composition, to be deposited onto or between one or more layers of which the prior process step parameter on the imager is indicative, wherein the at least one first parameter is modified from the prior process step parameter; and
executing a first depositing via a materials deposition process of the at least one first parameter of material on the imager based upon the computing via the at least one computing processor.

US Pat. No. 10,171,763

METHOD FOR FIXED PATTERN NOISE REDUCTION AND USE OF SUCH METHOD

Axis AB, Lund (SE)

1. A method for structural fixed pattern noise reduction in a video stream comprises:
defining a pixel to be processed in a first image frame as a target pixel;
for each target pixel in the first image frame,
defining a first target pixel patch including the target pixel,
defining a first search area comprising the first target pixel patch, and
for each pixel in the first search area,
comparing a first pixel patch around the pixel in the first search area with the first target pixel patch, and
using the pixel in the first search area when calculating an average for the target pixel if similarity between the first pixel patch and the first target pixel patch is within a first threshold;
in a second image frame,
localizing a second target pixel,
defining a second search area comprising the second target pixel, and
for each pixel in the second search area,
comparing a second pixel patch around the pixel in the second search area with the first target pixel patch, and
using the pixel in the second search area when calculating the average for the target pixel if similarity between the second pixel patch and the first target pixel patch is within a second threshold; and
correcting the first image frame based on a value indicative of the average for the target pixel;
wherein the first image frame and the second image frame are separate image frames in the video stream, and wherein the step of localizing a second target pixel comprises estimating a location of the second target pixel using a location of the target pixel and camera directional data.
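The claimed scheme is a patch-similarity average in the spirit of non-local means, computed over a search area in the current frame and a motion-compensated search area in a second frame. A NumPy sketch for a single target pixel; the SSD patch metric and the fixed `shift` offset (standing in for the camera-directional localization of the second target pixel) are assumptions, not details from the claim:

```python
import numpy as np

def fpn_filtered_value(frame1, frame2, target, patch=1, search=2,
                       thr1=10.0, thr2=10.0, shift=(0, 0)):
    """Average for one target pixel across two frames: a pixel in a
    search area contributes when the patch around it matches the
    target patch within a threshold (mean squared difference here)."""
    ty, tx = target
    tgt_patch = frame1[ty - patch:ty + patch + 1,
                       tx - patch:tx + patch + 1].astype(float)
    values = []

    def scan(frame, cy, cx, thr):
        for dy in range(-search, search + 1):
            for dx in range(-search, search + 1):
                py, px = cy + dy, cx + dx
                if py < patch or px < patch:
                    continue  # patch would fall off the frame edge
                p = frame[py - patch:py + patch + 1,
                          px - patch:px + patch + 1].astype(float)
                if p.shape == tgt_patch.shape and np.mean((p - tgt_patch) ** 2) <= thr:
                    values.append(float(frame[py, px]))

    scan(frame1, ty, tx, thr1)                        # first search area
    scan(frame2, ty + shift[0], tx + shift[1], thr2)  # second frame, shifted
    return sum(values) / len(values)
```

Because the structural noise is fixed while scene content moves between frames, averaging matched patches from two frames suppresses the pattern without blurring the scene.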

US Pat. No. 10,171,762

IMAGE SENSING DEVICE

Renesas Electronics Corpo...

1. An image sensing device comprising:
a photoelectric conversion element;
a transfer transistor to read out an electric-charge from the photoelectric conversion element;
a floating diffusion to hold the electric-charge read out via the transfer transistor;
a reset circuit to switch a voltage to be supplied to the floating diffusion when the floating diffusion is reset;
an output wire to output an output signal generated based on the electric-charge held in the floating diffusion; and
a reset control circuit to instruct switching of the voltage supplied by the reset circuit to the floating diffusion, and output a reset control signal,
wherein the reset circuit supplies,
a first reset voltage based on a power-source voltage to the floating diffusion in a first reset operation that resets the floating diffusion and the photoelectric conversion element prior to a light-exposure period for exposing the photoelectric conversion element with light, and
supplies a second reset voltage based on a reset correction voltage lower than the power-source voltage to the floating diffusion and thereafter supplies the first reset voltage, in a second reset operation that resets the floating diffusion during the light-exposure period for exposing the photoelectric conversion element with the light.

US Pat. No. 10,171,761

SOLID-STATE IMAGING DEVICE AND ELECTRONIC DEVICE

Sony Corporation, Tokyo ...

1. A solid-state imaging device, comprising:
a plurality of pixels;
a vertical signal line configured to output a pixel signal of a pixel of the plurality of pixels;
a clipping circuit configured to limit a first voltage of the vertical signal line to a second voltage,
wherein the clipping circuit includes:
a transistor configured to generate the second voltage based on a third voltage of a gate of the transistor,
a sample holding circuit configured to:
hold a reset level of the pixel that is output to the vertical signal line, and
input the reset level to the gate of the transistor, and
a plurality of capacitors; and
a voltage generation circuit configured to:
apply a fourth voltage to a first capacitor of the plurality of capacitors to read the reset level of the pixel, and
apply a fifth voltage to a second capacitor of the plurality of capacitors to read a signal level of the pixel.

US Pat. No. 10,171,760

SOLID-STATE IMAGING DEVICE, METHOD FOR DRIVING SOLID-STATE IMAGING DEVICE, AND ELECTRONIC APPARATUS USING AN AMPLIFIER AND SIGNAL LINES FOR LOW AND HIGH GAIN

BRILLNICS INC., Grand Ca...

1. A solid-state imaging device comprising
a pixel portion in which pixels are arranged,
a readout circuit including an amplifier capable of amplifying a pixel readout voltage read out from the pixels,
a first signal line to which a readout voltage of a low gain is output, and
a second signal line to which the output side of the amplifier is connected and a readout voltage of a high gain is output, wherein
a pixel includes
a photoelectric conversion element which accumulates a charge generated by photoelectric conversion in an accumulation period,
a transfer element capable of transferring a charge accumulated in the photoelectric conversion element in a transfer period,
a floating diffusion to which a charge accumulated in the photoelectric conversion element is transferred through the transfer element,
a source-follower element which converts the charge of the floating diffusion to a voltage signal in accordance with the charge quantity,
a reset element which resets the floating diffusion to a potential of the second signal line or a predetermined potential in a reset period, and
a feedback capacitor having one electrode connected to the floating diffusion and having another electrode connected to the second signal line, wherein
the first signal line connected to an output line of the voltage signal by the source-follower element and connected to the input side of the amplifier.

US Pat. No. 10,171,759

IMAGING DEVICE, METHOD FOR CONTROLLING IMAGING DEVICE, IMAGING SYSTEM, AND METHOD FOR CONTROLLING IMAGING SYSTEM

JVC KENWOOD CORPORATION, ...

1. An imaging device comprising:
a first projection controller configured to control a first infrared projector, capable of projecting infrared light with multiple wavelengths, to project selectively the infrared light with the multiple wavelengths;
an imaging unit configured to image an object in a state where the first infrared projector projects the infrared light; and
a synchronous signal transmitter configured to transmit outward a synchronous signal for synchronizing a timing of projecting infrared light from a second infrared projector controlled by a second projection controller included in another imaging device other than the imaging device, with a timing of projecting the infrared light from the first infrared projector controlled by the first projection controller.

US Pat. No. 10,171,758

MULTI-SPECTRUM IMAGING

Digital Direct IR, Inc., ...

1. An imaging system, comprising:
a first imager comprising an array of thermal infrared (IR) detectors, wherein the first imager is configured to receive incident photonic radiation and generate a thermal IR image, wherein each thermal IR detector comprises a photon absorber member that is configured to absorb thermal IR photonic radiation from the incident photonic radiation and convert the absorbed thermal IR photonic radiation to thermal energy, and reflect remaining photonic radiation in the incident photonic radiation along an optical path of the imaging system, wherein the photon absorber members within the array of thermal IR detectors collectively form a reflecting surface; and
a second imager disposed in said optical path of the imaging system, wherein the second imager is configured to receive the remaining photonic radiation reflected from the reflective surface collectively formed by the photon absorber members within the array of thermal IR detectors of the first imager and generate a second image;
wherein the first imager comprises:
a substrate, wherein each thermal IR detector is formed on the substrate; and
wherein each thermal IR detector further comprises:
a resonator member configured to generate an output signal having a frequency or period of oscillation; and
wherein the photon absorber member comprises an unpowered detector member that is configured for photon exposure, wherein the unpowered detector member comprises a material having a thermal coefficient of expansion that causes the unpowered detector member to distort due to said photon exposure, wherein the unpowered detector member is further configured to apply a mechanical force to the resonator member due to said distortion of the unpowered detector member, and cause a change in the frequency or period of oscillation of the output signal generated by the resonator member due to said mechanical force applied to the resonator member; and
a thermal insulating member configured to thermally insulate the resonator member from the unpowered detector member; and
digital circuitry configured to (i) determine the frequency or period of oscillation of the output signal generated by the resonator member as a result of the mechanical force applied to the resonator member by the unpowered detector member, and to (ii) determine an amount of said photon exposure based on the determined frequency or period of oscillation of the output signal generated by the resonator member.

US Pat. No. 10,171,757

IMAGE CAPTURING DEVICE, IMAGE CAPTURING METHOD, CODED INFRARED CUT FILTER, AND CODED PARTICULAR COLOR CUT FILTER

NEC CORPORATION, Tokyo (...

1. An image capturing device comprising:
a color filter which separates an incident light into a plurality of colors;
a photo sensor which converts the plurality of colors which the color filter has separated into data representing image signals;
a coded infrared cut filter which is provided in front of the color filter in the light traveling direction or between the color filter and the photo sensor, and which cuts a near infrared light and passes the near infrared light; and
a hardware image processor which acquires plural-color information and near infrared information of a pixel based on a plurality of image signals related to lights which pass a cutting portion of the coded infrared cut filter and an image signal related to a light which passes a transmitting portion of the filter.

US Pat. No. 10,171,756

IMAGE-BASED LOCALIZATION OF ULTRAVIOLET CORONA

THE UNITED STATES OF AMER...

1. A method for identifying a fault in an electrical distribution system using an unmanned aerial vehicle (UAV), the method comprising:
capturing an ultraviolet (UV) corona emission image of a corona event, the corona event being associated with a component of the electrical distribution system;
processing the UV corona emission image of the corona event to identify a center and a boundary of the UV corona emission image, the identified center being a UV nucleus of the UV corona emission image and the boundary demarcating an extent of the UV corona emission image;
capturing an image of the corona event in the visible band of the electromagnetic spectrum, such that the captured image includes the identified center of the UV emission image; and
generating and displaying an overlay on the captured image of the corona event, the displayed overlay identifying the center and the boundary of the UV corona emission image.
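The claim requires identifying a center (the UV nucleus) and a boundary of the corona emission image but does not fix the estimators. A sketch using two common choices, both assumptions: the intensity-weighted centroid for the center and a fraction-of-peak threshold mask for the extent:

```python
import numpy as np

def uv_nucleus_and_extent(uv_image, boundary_frac=0.1):
    """Locate the UV nucleus as the intensity-weighted centroid of the
    emission image, and its extent as the pixels above a fraction of
    the peak intensity (assumed definitions for illustration)."""
    img = uv_image.astype(float)
    total = img.sum()
    ys, xs = np.indices(img.shape)
    center = ((ys * img).sum() / total, (xs * img).sum() / total)
    extent = img >= boundary_frac * img.max()
    return center, extent
```

The returned center can then be used to align the visible-band capture, and the boolean extent mask rendered as the overlay boundary.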

US Pat. No. 10,171,755

SYSTEMS AND METHODS FOR DETECTING LIGHT SOURCES

Elbit Systems of America,...

1. A method comprising:
capturing an image including a sub-infrared light emitter;
applying a filter to a pixel of the captured image to isolate a signal strength of a range of frequencies, wherein the filter is tunable to the range of frequencies by tilting the filter;
comparing the signal strength of the filtered pixel to an expected signal strength of a background spectra for the range of frequencies;
as a result of a difference between the signal strength of the filtered pixel and the expected signal strength exceeding a predetermined threshold, identifying the pixel as corresponding to a light emitter;
as a result of the difference between the signal strength of the filtered pixel and the expected signal strength not exceeding a predetermined threshold, identifying the pixel as not corresponding to a light emitter;
capturing an image of a zone having a known spectral profile, the image having an actual spectral profile, wherein the image is captured by a ground-based image capture system proximate to an airport;
comparing the actual spectral profile to the known spectral profile and determining an atmospheric contribution profile based on the comparison;
modifying the expected signal strength of the background spectra for the range of frequencies based on the atmospheric contribution profile; and
transmitting the atmospheric contribution profile or the modified expected signal strength to an aircraft proximate to the airport.
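The detection logic in this claim reduces to a corrected threshold test: derive an atmospheric contribution from a reference zone with a known spectrum, adjust the expected background, and flag pixels that deviate by more than a threshold. A sketch, where the per-band ratio form of the contribution profile and the multiplicative correction are assumptions:

```python
def atmospheric_contribution(actual_profile, known_profile):
    """Per-band ratio of the observed spectral profile of the reference
    zone to its known profile -- one plausible form for the claimed
    atmospheric contribution profile."""
    return [a / k for a, k in zip(actual_profile, known_profile)]

def is_emitter(pixel_signal, expected_background, contribution, threshold):
    """Flag the filtered pixel as a light emitter when its signal
    departs from the atmosphere-corrected background expectation by
    more than the threshold."""
    corrected = expected_background * contribution
    return abs(pixel_signal - corrected) > threshold
```

For instance, if the atmosphere halves the expected background (contribution 0.5), a pixel reading 10 against an uncorrected expectation of 8 deviates by 6 after correction and is flagged.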

US Pat. No. 10,171,754

OVERLAY NON-VIDEO CONTENT ON A MOBILE DEVICE

Sony Interactive Entertai...

1. A mobile device comprising:
a video camera;
a display;
a processor; and
a memory communicatively coupled with the processor and storing computer-readable instructions that, upon execution by the processor, cause the mobile device to:
capture, by at least using the video camera, video content displayed on a video display, the video content having a marker that includes time code, the time code identifying a temporal position corresponding to a portion of time in the video content;
track the video content based on the marker;
receive a user selection of a subportion of an image in the video content;
access non-video content associated with the subportion of the image and synchronized with the temporal position of the video content using the time code; and
present, on the display, the non-video content associated with the subportion of the image at substantially the same time as the video content is captured using the video camera.

US Pat. No. 10,171,753

SHOOTING METHOD, SHOOTING DEVICE AND COMPUTER STORAGE MEDIUM

NUBIA TECHNOLOGY CO., LTD...

1. A shooting method, comprising:
successively collecting images;
reading the collected images, and identifying a light-painting area in a read current image;
extracting the light-painting area, superposing the light-painting area on a corresponding position of a basic image for performing image composition so as to generate a composite image, and taking the composite image as a basic image for next image composition;
capturing the composite image, and encoding the captured composite image to obtain an encoded image; and
generating, by using data of the encoded images, a video file when shooting is ended;
wherein the step of identifying a light-painting area in a read current image comprises:
acquiring a position of a light-painting area in a read previous image; and
searching a preset range of a corresponding position in the read current image for light-painting bright spots matching the pre-stored features, and identifying an area where the light-painting bright spots are located as the light-painting area.
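The compositing loop this claim describes accumulates each frame's light-painting pixels onto a running base image and emits one composite per frame, which later become the video. A sketch with nested lists standing in for images; the `find_bright_spots` callback stands in for the feature-matched bright-spot detection the patent recites, and per-pixel maximum is an assumed superposition rule:

```python
def compose_light_painting(frames, find_bright_spots):
    """Iteratively superpose each frame's light-painting pixels onto a
    running base image, yielding one composite per collected frame."""
    base = [row[:] for row in frames[0]]         # first frame seeds the base
    composites = []
    for frame in frames[1:]:
        for y, x, value in find_bright_spots(frame):
            base[y][x] = max(base[y][x], value)  # superpose the light trail
        composites.append([row[:] for row in base])
    return composites
```

Because each composite becomes the base for the next, the light trail persists and grows across the sequence, which is the characteristic light-painting effect.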

US Pat. No. 10,171,752

IMAGING APPARATUS, DISPLAY METHOD, AND PROGRAM

Olympus Corporation, Tok...

1. An imaging apparatus comprising:
an imaging unit configured to continuously image a subject and generate moving image data of the subject;
a display unit configured to display a moving image corresponding to the moving image data;
a shooting controller configured to control the imaging unit to continuously image the subject in a moving image mode capable of connecting different pieces of the moving image data having different shooting time-points;
a thumbnail generation unit configured to generate resized image data by performing resize processing of reducing a size of image data of at least one frame constituting the moving image data based on the moving image data generated by the imaging unit, and generate a thumbnail representing the moving image data by combining a resized image corresponding to the resized image data with a template having a display area displaying information indicating that a different piece of the moving image data may be connected;
a display controller configured to display the thumbnail generated by the thumbnail generation unit on the display unit; and
an operating unit configured to receive an input of a start signal instructing a start of continuously imaging the subject to the imaging unit and a finish signal instructing a finish of the continuously imaging to the imaging unit,
wherein the moving image mode includes:
a first moving image mode that generates the moving image data by causing the imaging unit to continuously image the subject from a point of input of the start signal until a point of input of the finish signal; and
a second moving image mode that generates the moving image data by causing the imaging unit to continuously image the subject for a prescribed time-span from the point of input of the start signal from the operating unit, the second moving image mode being capable of connecting different pieces of the moving image data generated by the imaging unit at different time-points,
the shooting controller controls the imaging unit to start the continuously imaging in the first moving image mode or the second moving image mode when the operating unit has received the input of the start signal,
the thumbnail generation unit generates the thumbnail when the imaging unit has generated the moving image data in the second moving image mode, and
when the imaging unit has generated the moving image data in the first moving image mode, the thumbnail generation unit generates the thumbnail and thereafter generates trimming image data by performing trimming processing onto an area including the resized image on the thumbnail, and generates a first moving image thumbnail representing the moving image data captured in the first moving image mode by performing, onto the trimming image data, resize processing of enlargement up to an area that covers the display area.

US Pat. No. 10,171,751

SUPERIMPOSING AN IMAGE ON AN IMAGE OF AN OBJECT BEING PHOTOGRAPHED

Chad-Affonso Wathington, ...

1. A system comprising:
an image sensor;
a beam combiner;
a lens array located in the system so as to project light from an object onto the image sensor via the beam combiner; and
an electro-optic display, which is located in the system, so that, when activated, a picture of choice is projected onto the image sensor superimposed with the light of the object via the beam combiner;
wherein no lenses are located between the beam combiner and the image sensor.

US Pat. No. 10,171,750

SOLID-STATE IMAGE SENSOR, IMAGING CONTROL METHOD, SIGNAL PROCESSING METHOD, AND ELECTRONIC APPARATUS

SONY CORPORATION, Tokyo ...

1. A solid-state image sensor, comprising:
a pixel array unit that includes:
a plurality of pixels,
wherein the plurality of pixels comprise a first pixel and a second pixel,
wherein a first sensitivity of the first pixel is highest among the plurality of pixels, and
wherein the first pixel and the second pixel are of different type; and
a control unit configured to:
control, at least one of an analog gain of each of the first pixel and the second pixel or an exposure time for each of the first pixel and the second pixel, based on a ratio of the first sensitivity of the first pixel and a second sensitivity of the second pixel; and
correct, a first difference between the first sensitivity of the first pixel and the second sensitivity of the second pixel, based on the controlled at least one of the analog gain of each of the first pixel and the second pixel or the exposure time for each of the first pixel and the second pixel.
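The control rule here hinges on the ratio of the two pixels' sensitivities. One assumed policy that satisfies the claim's correction step is to put the entire ratio into the second pixel's analog gain (the claim equally allows exposure time, or a mix); this sketch only illustrates that choice:

```python
def control_for_sensitivity_ratio(sensitivity_ratio, base_gain=1.0,
                                  base_exposure=1.0):
    """Assign the full first/second sensitivity ratio to the second
    pixel's analog gain so both pixels report the same level for the
    same light (assumed policy; not the patent's only option)."""
    first_pixel = {"gain": base_gain, "exposure": base_exposure}
    second_pixel = {"gain": base_gain * sensitivity_ratio,
                    "exposure": base_exposure}
    return first_pixel, second_pixel
```

With a ratio of 2 (the first pixel twice as sensitive), a scene that yields 100 on the second pixel is read out at 200 after gain, matching the first pixel's output for the same light.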

US Pat. No. 10,171,748

IMAGE PICKUP APPARATUS, NON-TRANSITORY COMPUTER-READABLE MEDIUM STORING COMPUTER PROGRAM, AND IMAGE PICKUP METHOD

Olympus Corporation, Tok...

9. An image pickup method comprising:
an exposure control step of determining a proper exposure time, and setting, when the proper exposure time is longer than a frame period, a long exposure time equal to or shorter than the frame period and a short exposure time shorter than the long exposure time such that a total time of the short exposure time and one or more long exposure times is equal to the proper exposure time;
an image pickup step of outputting, for every frame period, a long exposure image exposed for the long exposure time and a short exposure image exposed for the short exposure time within an exposure period of the long exposure image, when the proper exposure time is longer than the frame period; and
a synthesizing step of adding the short exposure image of one frame and the long exposure image or long exposure images of one or more frames, to generate a synthetic image corresponding to the proper exposure time, when the proper exposure time is longer than the frame period.
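The exposure-control step is constrained arithmetic: when the proper exposure exceeds the frame period, split it into long exposures no longer than the frame period plus one strictly shorter exposure whose total equals the proper time. A sketch of one division that meets those constraints; the exact split policy (including the exact-multiple case) is an assumption, since the claim only states the constraints:

```python
def split_exposure(proper, frame_period):
    """Return (n_long, long_t, short_t) with
    n_long * long_t + short_t == proper, long_t <= frame_period,
    and short_t < long_t."""
    if proper <= frame_period:
        raise ValueError("proper exposure fits in one frame; no split needed")
    n_long = int(proper // frame_period)
    long_t = frame_period
    short_t = proper - n_long * frame_period
    if short_t == 0.0:  # exact multiple: shrink the longs to leave a remainder
        long_t = proper / (n_long + 0.5)
        short_t = 0.5 * long_t
    return n_long, long_t, short_t
```

The synthesizing step then adds the long and short images to reconstruct a frame exposed for the full proper time.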

US Pat. No. 10,171,747

IMAGE CAPTURING APPARATUS, EXTERNAL APPARATUS, IMAGE CAPTURING SYSTEM, METHOD FOR CONTROLLING IMAGE CAPTURING APPARATUS, COMPUTER PROGRAM, AND COMPUTER-READABLE STORAGE MEDIUM

Canon Kabushiki Kaisha, ...

1. An image capturing apparatus to communicate with an external apparatus via a network, the image capturing apparatus comprising:
an image capturing unit;
a hardware processor; and
a memory storing instructions to be executed by the hardware processor, wherein, when the instructions stored in the memory are executed by the hardware processor, the image capturing apparatus functions as:
a receiving unit configured to receive, from the external apparatus via the network, a synthesizing command for controlling an operation of synthesizing a plurality of images that have been captured by the image capturing unit under different exposure conditions, and an exposure setting command for controlling an operation of obtaining an image that has been generated under a set exposure condition,
a control unit configured to selectively execute, in a case where the receiving unit receives the synthesizing command and the exposure setting command, one of a synthesizing operation and an exposure setting operation,
a determining unit configured to determine the operation executed by the control unit, and
a transmitting unit configured to transmit, to the external apparatus via the network, operation information indicating operations which are specifiable by the synthesizing command and the exposure setting command received by the receiving unit.

US Pat. No. 10,171,745

EXPOSURE COMPUTATION VIA DEPTH-BASED COMPUTATIONAL PHOTOGRAPHY

Dell Products, LP, Round...

1. A method in an electronic information handling system comprising:
recording a first image of a scene at a first exposure level using a three-dimensional (3D) camera;
correlating distances from the 3D camera and exposure levels over a plurality of image elements of the first image;
selecting a first exposure parameter value for at least one of the plurality of image elements having a z-distance value falling within a range of z-distance values;
recording a second image of the scene according to the first exposure parameter value selected for the at least one of the plurality of image elements having a second exposure level; and
constructing a composite image based on at least a portion of the second image for the at least one of the plurality of image elements.
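The parameter-selection step picks an exposure value for the image elements whose z-distance falls in a given range. A sketch of one assumed policy (drive the mean recorded level of the in-range elements toward a mid-tone target); the claim itself does not say how the value is chosen, only that it is selected for that depth range:

```python
def exposure_scale_for_depth(elements, z_min, z_max, target_level=128.0):
    """Select an exposure scale for the image elements whose z-distance
    lies in [z_min, z_max], by driving their mean recorded level toward
    an assumed mid-tone target."""
    in_range = [e for e in elements if z_min <= e["z"] <= z_max]
    if not in_range:
        return 1.0  # nothing at this depth: keep the first exposure
    mean_level = sum(e["level"] for e in in_range) / len(in_range)
    return target_level / mean_level
```

The second capture then uses this scale, and the composite takes its in-range elements from the second image.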

US Pat. No. 10,171,744

IMAGE PROCESSING APPARATUS, IMAGE CAPTURE APPARATUS, AND CONTROL METHOD FOR ADDING AN EFFECT OF A VIRTUAL LIGHT SOURCE TO A SUBJECT

CANON KABUSHIKI KAISHA, ...

1. An image processing apparatus, comprising:
a processor; and
a memory including instructions that, when executed by the processor, cause the processor to function as:
an obtainment unit configured to obtain an image derived from image capture;
a computation unit configured to compute an effect of a virtual light source on a subject included in the image obtained by the obtainment unit, the virtual light source being non-existent at the time of the image capture; and
an output unit configured to output an image derived from addition of the effect of the virtual light source to the subject based on a result of the computation by the computation unit, wherein
the computation unit includes:
an estimation unit configured to, based on the obtained image, estimate an illuminating condition by an ambient light source in an environment where the image was captured;
a determination unit configured to, based on a result of the estimation by the estimation unit, determine an illumination direction of the virtual light source and reflective characteristics of the subject illuminated by the virtual light source; and
a processing unit configured to compute the effect of the virtual light source based on the illumination direction of the virtual light source and the reflective characteristics of the subject determined by the determination unit.

US Pat. No. 10,171,743

IMAGE PICKUP APPARATUS, IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND NON-TRANSITORY COMPUTER-READABLE STORAGE MEDIUM FOR IMPROVING QUALITY OF CAPTURED IMAGE

CANON KABUSHIKI KAISHA, ...

12. An image processing method comprising the steps of:
detecting a saturated pixel of an image sensor based on a single image corresponding to image data output from the image sensor;
estimating a luminance value, which is outside a luminance range of the image sensor, of a pixel that was detected to be the saturated pixel of the image sensor based on the single image;
setting an exposure parameter based on the estimated luminance value; and
combining a plurality of images obtained from the image sensor to output a composite image of the plurality of images, the plurality of images obtained from the image sensor including at least one image obtained using the set exposure parameter.
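The pipeline in this claim is a single-shot HDR loop: detect saturation, estimate the out-of-range luminance, set a shorter exposure from the estimate, and merge the captures. A NumPy sketch of the exposure-setting and combining steps; the 8-bit saturation constant, the full-scale exposure rule, and the substitution merge are all assumptions (the patent's estimation step is not reproduced here):

```python
import numpy as np

SATURATION = 255.0  # assumed full-scale value of an 8-bit sensor

def exposure_for_peak(estimated_peak):
    """Scale the exposure so the estimated out-of-range luminance lands
    back at full scale (assumed rule; the claim only says the exposure
    parameter is set from the estimate)."""
    return SATURATION / estimated_peak

def combine_exposures(base, dark, gain):
    """Merge two captures: keep the base image where it is valid and,
    where it is saturated, substitute the shorter-exposure capture
    scaled back up by 1/gain."""
    saturated = base >= SATURATION
    out = base.astype(float).copy()
    out[saturated] = dark[saturated].astype(float) / gain
    return out
```

For a pixel whose true luminance is 400, a half-exposure capture records 200, and the merge restores 400 in the composite while untouched pixels keep their original values.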

US Pat. No. 10,171,742

IMAGE CAPTURING APPARATUS, METHOD, AND PROGRAM WITH OPERATION STATE DETERMINATION BASED UPON ANGULAR VELOCITY DETECTION

Sony Corporation, Tokyo ...

1. An image capturing apparatus comprising:
an angular velocity detection unit configured to respectively detect angular velocities of movement of the image capturing apparatus at a plurality of times;
an operation determination unit configured to determine a panning operation state of the image capturing apparatus based on the detected angular velocities at the plurality of times, the determined panning operation state being one of a plurality of predetermined classifications of panning operation states; and
a zoom control unit configured to perform zoom control based on the determined panning operation state.