US Pat. No. 10,219,071

SYSTEMS AND METHODS FOR BANDLIMITING ANTI-NOISE IN PERSONAL AUDIO DEVICES HAVING ADAPTIVE NOISE CANCELLATION

Cirrus Logic, Inc., Aust...

9. A method comprising:
receiving a reference microphone signal indicative of ambient audio sounds at the acoustic output of a transducer;
receiving an error microphone signal indicative of an acoustic output of the transducer and the ambient audio sounds at the acoustic output of the transducer;
generating an anti-noise signal from filtering the reference microphone signal with an adaptive filter to reduce the presence of the ambient audio sounds heard by a listener and shaping a response of the adaptive filter in conformity with the error microphone signal and the reference microphone signal by adapting the response of the adaptive filter to minimize the ambient audio sounds present in the error microphone signal;
further adjusting the response of the adaptive filter by combining injected noise with the reference microphone signal;
receiving the injected noise by a copy of the adaptive filter so that the response of the copy of the adaptive filter is controlled by the adaptive filter adapting to cancel a combination of the ambient audio sounds and the injected noise; and
controlling the response of the adaptive filter with the coefficients adapted in the copy of the adaptive filter, whereby the injected noise is not present in the anti-noise signal;
wherein each of a sample rate of the copy of the adaptive filter and a rate of adapting of the adaptive filter is significantly less than a sample rate of the adaptive filter and the sample rate of the copy of the adaptive filter is significantly less than the rate of adapting of the adaptive filter.
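The adaptation described in claim 9 can be illustrated with a minimal LMS sketch. This is a hypothetical reduction, not the patented implementation: the claim does not name LMS, and the slower-rate, noise-injected shadow copy of the filter is omitted; the filter length and step size here are arbitrary.

```python
import numpy as np

def lms_anti_noise(reference, error_mic, n_taps=8, mu=0.01):
    """Filter the reference-mic signal to predict the noise at the
    error mic, emit the negated prediction as anti-noise, and adapt
    the coefficients to minimize the residual (plain LMS, assumed)."""
    w = np.zeros(n_taps)                  # adaptive FIR coefficients
    buf = np.zeros(n_taps)                # delay line of reference samples
    residual = np.zeros(len(reference))
    for n in range(len(reference)):
        buf = np.roll(buf, 1)
        buf[0] = reference[n]
        y = w @ buf                       # predicted noise at the error mic
        e = error_mic[n] - y              # residual after adding anti-noise (-y)
        w += mu * e * buf                 # LMS update drives the residual down
        residual[n] = e
    return residual, w
```

In the claimed system the update would run on a decimated copy of the filter and the converged coefficients would be copied back, so the injected probe noise never reaches the anti-noise output.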

US Pat. No. 10,219,070

ACOUSTIC SYSTEM AND METHOD

1. An acoustic system, comprising:
a coconut endocarp, said endocarp being a round shape,
said endocarp further comprising one opening for installation of a loudspeaker, wherein the opening is formed by two individual cuts,
the first cut being a circular cutout performed as the coconut endocarp rotates about the endocarp's longitudinal axis, the first cut forming a first flat surface exposing fibers of an endocarp membrane,
the second cut being a flat cross sectional cut made adjacent to said first cut, the second cut forming an adjacent flat edge at an angle relative to the first flat surface,
the two cuts creating a direct contact with fibers of the endocarp membrane, the cuts combining to maximize a surface area of exposed fibers of the endocarp,
wherein a loudspeaker is positioned flush against the endocarp such that parasitic acoustic resonances generated from one or more sides of the loudspeaker are directed to a dampening channel for a dissipation of undesired frequencies.

US Pat. No. 10,219,068

HEADSET WITH MAJOR AND MINOR ADJUSTMENTS

Voyetra Turtle Beach, Inc...

1. An audio headset, the headset comprising:
a headband;
a headband endcap at each end of the headband;
a headband slide coupled to each headband endcap;
ear cups operatively coupled to the headband slides, each ear cup comprising a guide for restricting movement of a cross-bar element of a corresponding headband slide away from the ear cup while allowing vertical movement of the cross-bar with respect to the ear cup; and
a second headband located only above the headband slides, said second headband comprising a flexible band coupled to the headband endcaps and said second headband not in contact with the headband slides, wherein an adjustment of force on a user of the headset is enabled by actuation of at least one headband slide in a vertical direction, the actuation of the headband slide limited by a corresponding cross-bar element and guide.

US Pat. No. 10,219,067

AUTO-CALIBRATING NOISE CANCELING HEADPHONE

Harman International Indu...

1. A sound system comprising:
a headphone including a transducer and at least two microphones disposed over the transducer and adapted to receive sound radiated therefrom;
an equalization filter adapted to equalize an audio input signal based on at least one predetermined coefficient; and
a loop filter circuit including a leaky integrator circuit adapted to generate a filtered audio signal based on the equalized audio input signal and a feedback signal indicative of sound received by the at least two microphones, and to provide the filtered audio signal to the transducer; and
a switch adapted to switch between a first position, in which the equalization filter is connected to an audio source for receiving a first audio input signal, and a second position, in which the equalization filter is adapted to receive a second audio input signal;
a controller programmed to:
control the switch to be arranged in the second position in response to a user command;
generate the second audio input signal which is indicative of a test signal;
receive a second feedback signal indicative of a test sound received by the at least two microphones;
calibrate the headphone by updating the at least one predetermined coefficient of the equalization filter based on the second feedback signal; and
control the switch to be arranged in the first position in response to the at least one predetermined coefficient being updated.
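The controller sequence in this claim (switch to the test input, play the test signal, measure the mic feedback, update the equalizer, switch back) can be sketched as follows. The dict-based system model, the per-band gains, and the proportional update rule are illustrative assumptions; Harman's actual calibration algorithm is not disclosed at this level.

```python
def auto_calibrate(eq_gains, measured_db, target_db):
    """Move each EQ band's gain by the gap between the target and the
    measured response (a hypothetical one-shot update)."""
    return [g + (t - m) for g, m, t in zip(eq_gains, measured_db, target_db)]

def run_calibration(system):
    """Sketch of the claimed sequence on a toy model of the switch,
    microphones, and equalization filter."""
    system["switch"] = "test"                    # second switch position
    system["play"]("test-signal")                # second audio input signal
    measured = system["measure"]()               # feedback from the two mics
    system["eq"] = auto_calibrate(system["eq"], measured, system["target"])
    system["switch"] = "audio"                   # back to the first position
    return system
```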

US Pat. No. 10,219,066

INTERCHANGEABLE WEARING MODES FOR A HEADSET

Plantronics, Inc., Santa...

1. A head-mountable sound delivery device, comprising:
a mounting element configured to support the apparatus on a user's head;
a speaker capsule with an external surface having an upwardly-facing elongate groove defined therein;
a C-shaped retention element moveably coupled to the mounting element, into which the speaker capsule may be snapped such that the C-shaped retention element is located in the groove defined in the speaker capsule, thereby to prevent movement of the speaker capsule relative to the C-shaped retention element when the two are engaged, the C-shaped retention element comprising an inner surface, a first free end, and a second free end and a retention formation at each of the first and second free ends that in use engage corresponding retention formations in the speaker capsule.

US Pat. No. 10,219,065

TELECOIL ADAPTER

Otojoy LLC, Santa Barbar...

1. An apparatus comprising:
an adapter including a telecoil, the adapter including an audio plug for audio signals received wirelessly via the telecoil to be communicated to and from the adapter via an electrical connection to an external device, and the adapter further including at least one of the following: an audio jack to physically receive an external audio plug in which the audio jack of the adapter is to receive the audio signals from the audio plug of the adapter via the electrical connection to the external device and/or headphones physically part of the adapter and electrically connected to receive the audio signals from the audio plug of the adapter, wherein the telecoil is incorporated into the adapter and the adapter is integrated into a cable electrically connected to the headphones at one end and including the audio plug at the other end; and
a mobile device including a storage medium, wherein the mobile device further comprises at least one of the following: a smart phone, a tablet, a laptop, a personal digital assistant, or a wearable computing device, the storage medium having stored thereon instructions executable by a computing device to process the audio signals, the audio signals to comprise electrical signals, the electrical signals to be induced in the telecoil by an electromagnetic (EM) field, the EM field to be generated by an external hearing loop, the electrical signals to be induced to further be amplified, and the instructions further executable to generate a quality rating for a hearing loop system associated with the electrical signals, the quality rating to be based at least in part on the electrical signals.
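The claim requires instructions that "generate a quality rating for a hearing loop system" based on the induced electrical signals, without specifying the metric. One plausible sketch rates the loop by the SNR of the telecoil signal; the thresholds and labels below are pure assumptions.

```python
import math

def loop_quality_rating(signal_rms, noise_rms):
    """Hypothetical hearing-loop quality rating from telecoil SNR.
    The patent only requires a rating based at least in part on the
    electrical signals; this mapping is illustrative."""
    snr_db = 20 * math.log10(signal_rms / noise_rms)
    if snr_db >= 30:
        return "good"
    if snr_db >= 15:
        return "fair"
    return "poor"
```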

US Pat. No. 10,219,064

TRI-MICRO LOW FREQUENCY FILTER TRI-EAR BUD TIPS AND HORN BOOST WITH RATCHET EAR BUD LOCK

Acouva, Inc., San Franci...

1. A system for improving use of an in-ear utility device, the system comprising:
Tri-Ear Buds adapted for a connection to an in-ear main trunk support extending from a solid portion of the in-ear utility device, wherein the Tri-Ear Buds are configured to reside in a user's ear canal within a first bend of the ear canal, wherein the Tri-Ear Buds comprise an end configured to reside in the user's ear canal at a distance less than 16 millimeters from the entrance of the user's ear canal;
a ratchet ear bud lock adapted to physically associate with the Tri-Ear Buds to facilitate the connection between the Tri-Ear Buds and the in-ear main trunk support, wherein the ratchet ear bud lock comprises locking features configured for a removal force adjustable from at least one of 0.25 Lbs/0.5 Lbs/0.75 Lbs/1.25 Lbs/1.5 Lbs/2.25 Lbs/2.5 Lbs, by way of decreasing jaws of the locking features; and
a horn boost component adapted to physically associate with the in-ear utility device, and wherein the horn boost component is configured to facilitate an acoustic horn effect.

US Pat. No. 10,219,063

IN-EAR WIRELESS DEVICE WITH BONE CONDUCTION MIC COMMUNICATION

Acouva, Inc., San Franci...

1. A wireless in-ear utility device, comprising:
a housing comprising an oval shaped trunk configured to reside in a user's ear canal within the first bend of the ear canal, the housing comprising a proximal end configured to reside in the user's ear canal at a distance less than 16 millimeters from the entrance of the user's ear canal;
a microphone port located on an external surface of the housing and configured to receive first ambient external sounds from the low/mid/high frequencies (50 Hz to 10,000 kHz); a microphone located within the housing configured to receive the first ambient external sounds via the microphone port, wherein the received first ambient external sounds comprise sounds from the low/mid/high frequencies (50 Hz to 10,000 kHz);
a bone conduction microphone configured to detect resident frequencies to facilitate user voice recognition; a communications module located within the housing and configured for wireless communications, wherein the communication module receives second ambient external sounds from a second in-ear utility device located in the user's second ear, wherein the second ambient external sounds from the second in-ear utility device comprise sounds from the low/mid/high frequencies (50 Hz to 10,000 kHz);
and a processing system located within the housing, wherein the processing system is configured to identify the user based on a frequency profile shape of the user's voice and at least one of the first ambient external sounds and the second ambient external sounds, and to process the first and the second ambient external sounds based on a frequency profile shape of the user's voice.
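Identifying a user by the "frequency profile shape" of a voice can be sketched as comparing normalized band-energy profiles. The three bands, the cosine-similarity test, and the threshold are illustrative assumptions; the patent does not disclose the comparison method.

```python
import numpy as np

def band_profile(signal, fs, bands=((50, 500), (500, 2000), (2000, 10000))):
    """Hypothetical 'frequency profile shape': fraction of spectral
    energy in low/mid/high bands (band edges are assumptions)."""
    spec = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), 1 / fs)
    e = np.array([spec[(freqs >= lo) & (freqs < hi)].sum() for lo, hi in bands])
    return e / e.sum()

def is_user(profile, enrolled, threshold=0.95):
    """Accept when the measured profile closely matches the enrolled one."""
    sim = profile @ enrolled / (np.linalg.norm(profile) * np.linalg.norm(enrolled))
    return sim >= threshold
```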

US Pat. No. 10,219,062

WIRELESS AUDIO OUTPUT DEVICES

Apple Inc., Cupertino, C...

1. A method comprising:
determining that a first audio output device is not wirelessly communicatively coupled to a second audio output device;
detecting a user action associated with a housing configured to store the first audio output device and the second audio output device;
in response to detecting the user action:
allowing the first audio output device to become discoverable by a source communication device, and
detecting a pairing request to wirelessly communicatively couple the first audio output device to the source communication device within a threshold period of time from detecting the user action; and
in response to detecting the pairing request within the threshold period of time:
causing one or more wireless link keys stored on the first audio output device and the second audio output device to be erased, and
wirelessly communicatively coupling the first audio output device with the second audio output device.
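The claimed flow (user action on the case makes an uncoupled bud discoverable; a pairing request arriving within a time window triggers key erasure and re-coupling) can be sketched as a small state update. The dict-based earbud model and field names are hypothetical; this is not Apple's firmware logic.

```python
def on_case_action(bud_a, bud_b, pairing_request_delay, threshold=30.0):
    """Sketch of the claimed pairing flow. `pairing_request_delay` is
    seconds between the user action and the pairing request, or None
    if no request arrived; the 30 s window is an assumption."""
    if bud_a["linked_to"] != bud_b["id"]:            # buds not coupled
        bud_a["discoverable"] = True                 # allow discovery by a source device
        if pairing_request_delay is not None and pairing_request_delay <= threshold:
            bud_a["link_keys"].clear()               # erase stored wireless link keys
            bud_b["link_keys"].clear()
            bud_a["linked_to"] = bud_b["id"]         # re-couple the two buds
            bud_b["linked_to"] = bud_a["id"]
```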

US Pat. No. 10,219,061

LIGHT AND LOUDSPEAKER DRIVER DEVICE

Native Design Limited, (...

1. A combined light and loudspeaker driver device comprising:
a loudspeaker driver having a loudspeaker diaphragm with an opening formed around a central longitudinal axis of the device, the central longitudinal axis defining a forward and a rearward direction of the device;
a housing for supporting the loudspeaker driver,
a light source positioned radially inwardly of the opening of the loudspeaker diaphragm, with respect to the central longitudinal axis and configured to direct light forward and away from the device;
a heat removal element comprising a heat sink having at least an axially central part formed rearwardly of the housing along the central longitudinal axis of the device, and a heat removal column extending from the axially central part of the heat sink in the forward direction along the central longitudinal axis of the device, the light source being mounted at the forward end of the heat removal column; and
a ring radiator tweeter positioned radially inwardly of the opening in the loudspeaker diaphragm and radially outwardly of the light source, with respect to the longitudinal axis.

US Pat. No. 10,219,060

HELMET-WORN DEVICE FOR ELECTRONIC COMMUNICATIONS DURING HIGH MOTION ACTIVITY

HEARSHOT INC., Toronto (...

1. An assembly for transmitting vibrations to a helmet worn by a user, the device comprising:
an element adapted to adhere to an outer surface of the helmet;
an assembly which connects with the element and comprises
a bottom housing having a floor which passes through the element, teeth near the floor which engage with the element, and a sidewall extending upwardly around the perimeter of the bottom housing;
a top housing having an outer sidewall that fits outside of the sidewall of the bottom housing and also having a plurality of apertures in a top surface;
a PCB which sits within the top housing,
a pressure transducer placed atop the floor and in electrical connection with the PCB; and
a mechanical user interface placed above the PCB and having at least one button which extends upwardly and through one of the apertures on the top surface of the top housing.

US Pat. No. 10,219,059

SMART PASSENGER SERVICE UNIT

1. A passenger service unit for an aircraft cabin, comprising:
an oxygen supply module comprising an oxygen canister and a plurality of oxygen masks;
a lighting module comprising a plurality of LED reading light units disposed on a single contiguous flexible printed circuit board;
at least one mini-speaker comprising a horn element, wherein a first mini-speaker of the at least one mini-speaker is integrated with a first LED reading light unit of the plurality of LED reading light units, an LED for illuminating the first LED reading light unit is at least partially disposed in the horn element of the first mini-speaker, and sound waves from the first mini-speaker travel adjacent to the LED, wherein the horn element of the first mini-speaker is shaped and positioned with respect to the first LED reading light unit such that a geometric plane passes through the first LED reading light unit and a circular cross-section of the horn element of the first mini-speaker; and
control circuitry for controlling the oxygen supply module, the lighting module, and the at least one mini-speaker, wherein the control circuitry is connected to
a power converter for converting an external power supply to voltage usable by the control circuitry, and
a single communications interface for communicating with an external management computing system.

US Pat. No. 10,219,058

ELECTRONIC DEVICE HAVING L-SHAPED SOUND CHANNEL AND METHOD FOR MANUFACTURING THE SAME

Chiun Mai Communication S...

1. An electronic device, comprising:
a display;
a cover, attached to the display and defining a notch;
a frame, comprising:
a bottom wall, comprising a first surface and a second surface opposite to the first surface, wherein the bottom wall defines a through hole, and the first surface defines a first recess corresponding to the through hole; and
a side wall, extending from a peripheral edge of the bottom wall;
a sound assembly, comprising:
a sound output hole, defined and surrounded by the notch and the side wall;
a sound channel, formed in the frame and comprising a first channel and a second channel, wherein the first channel is formed by horizontally cutting from an inner surface of the first recess towards the side wall, the second channel is formed by vertically cutting from an upper surface of the side wall towards the bottom wall, and the second channel communicates with the first channel to form an L-shaped sound channel communicating with the sound output hole;
a sealing member, positioned on the first recess and sealing the first recess; and
a sound generating module, positioned on the second surface and corresponding to the through hole, wherein sound generated by the sound generating module is transmitted to the sound output hole through the through hole and the L-shaped sound channel.

US Pat. No. 10,219,057

AUDIO MODULE FOR AN ELECTRONIC DEVICE

Apple Inc., Cupertino, C...

1. An audio module for an electronic device, the audio module comprising:
a driver assembly comprising:
a diaphragm defining a speaker plane; and
a voice coil attached to the diaphragm and positioned adjacent one or more magnets; and
an enclosure surrounding the driver assembly and defining:
a front volume positioned on a first side of the speaker plane and coupled to a sound port;
a back volume positioned on the first side of the speaker plane and on a second side of the speaker plane; and
a resonant cavity coupled to the front volume via a resonant cavity port and separated from the back volume by a resonant cavity cover.

US Pat. No. 10,219,055

LOUDSPEAKER MODULE

GOERTEK INC., Weifang (C...

1. A loudspeaker module, comprising: a module housing, wherein a loudspeaker unit is accommodated in the module housing, the loudspeaker unit comprises a unit housing and a unit front cover combined with each other, and a vibration system and a magnetic circuit system are accommodated in a space defined by the unit housing and the unit front cover, wherein an end surface of at least one sidewall of the unit housing is provided with an ultrasonic surface ultrasonically welded with the module housing, and the module housing is provided with an ultrasonic line which is provided at a position corresponding to the ultrasonic surface and is combined with the ultrasonic surface by ultrasonic welding, and wherein the loudspeaker unit is disposed adjacent to an edge of one side of the module housing, a sidewall of the unit housing is exposed to the outside of the module housing, and the ultrasonic surface is provided on the sidewall of the unit housing exposed to the outside of the module housing.

US Pat. No. 10,219,054

PROTECTIVE MEMBER FOR ACOUSTIC COMPONENT AND WATERPROOF CASE

NITTO DENKO CORPORATION, ...

1. A protective member for an acoustic component, the protective member comprising a sound-transmissive sheet that consists essentially of an elastomer, wherein no lamination of another layer is formed on any major surface of the sound-transmissive sheet, or a lamination of another layer is formed only on an edge region of any major surface of the sound-transmissive sheet; and
wherein the elastomer has a type A hardness in a range from 20 to 80 as measured according to JIS K 6253, and the sound-transmissive sheet has a thickness of 10 to 150 µm.

US Pat. No. 10,219,053

FIBER-TO-COAX CONVERSION UNIT AND METHOD OF USING SAME

Viavi Solutions, Inc., S...

1. An apparatus, comprising:
a housing,
an optical network unit positioned in the housing, the optical network unit configured to convert a fiber optical signal into an electrical signal suitable for transmission via an Ethernet cable, and
an adaptor positioned in the housing and connected to the optical network unit, the adaptor configured to convert the electrical signal into an Ethernet-based RF signal, the adaptor including a port configured to receive a coaxial cable connector.

US Pat. No. 10,219,052

AGILE RESOURCE ON DEMAND SERVICE PROVISIONING IN SOFTWARE DEFINED OPTICAL NETWORKS

FUTUREWEI TECHNOLOGIES, I...

1. A method performed by a controller in signal communication with a reconfigurable optical add-drop multiplexer (ROADM) controlling a first link in an optical network portion of a communications network, the method comprising:
receiving, by the controller, a first request for a first connection, the first link being in a first route in the communications network for the first connection;
sending, by the controller, first commands to the ROADM to allocate first bandwidth to the first link for at least the first connection;
receiving, by the controller, a second request for a second connection, the first link being in a second route in the communications network for the second connection; and
sending, by the controller, second commands to the ROADM to allocate second bandwidth to the first link for at least the first and second connections.
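The request/allocate cycle in this claim can be sketched as a controller holding an in-memory model of one ROADM-controlled link. The capacity units, the admission check, and the idea of rejecting an over-subscribing request are assumptions; the claim itself only recites receiving requests and sending allocation commands.

```python
class RoadmLinkController:
    """Toy model of the claimed controller for a single link; sending
    commands to the ROADM is represented by updating local state."""
    def __init__(self, link_capacity_ghz):
        self.capacity = link_capacity_ghz
        self.allocated = 0.0
        self.connections = {}

    def request_connection(self, conn_id, bandwidth_ghz):
        """Receive a request whose route includes this link and, if it
        fits, allocate bandwidth for the connection."""
        if self.allocated + bandwidth_ghz > self.capacity:
            return False                    # link would be over-subscribed
        self.connections[conn_id] = bandwidth_ghz
        self.allocated += bandwidth_ghz
        return True                         # allocation command issued
```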

US Pat. No. 10,219,051

COMMUNICATION PLATFORM WITH FLEXIBLE PHOTONICS PAYLOAD

1. An apparatus, comprising:
a spacecraft; and
a payload positioned on the spacecraft, the payload is configured to receive a wireless communication and to select a subset of sub-bands of a first optical signal representing the wireless communication, the selecting being such that bandwidth and center frequency of the sub-bands are independently programmable for each sub-band, the payload is configured to send content of the selected subset of sub-bands to one or more entities off of the spacecraft.

US Pat. No. 10,219,050

VIRTUAL LINE CARDS IN A DISAGGREGATED OPTICAL TRANSPORT NETWORK SWITCHING SYSTEM

FUJITSU LIMITED, Kawasak...

1. An optical transport networking switching system comprising:
an Ethernet fabric including a number M of Ethernet switches, each of the M Ethernet switches having a number N of Ethernet switch ports, each of the N Ethernet switch ports having a number P of Ethernet switch sub-ports, wherein a variable i having a value ranging from 1 to M denotes the ith Ethernet switch corresponding to one of the M Ethernet switches, a variable j having a value ranging from 1 to N denotes the jth Ethernet switch port corresponding to one of the N Ethernet switch ports, and a variable k having a value ranging from 1 to P denotes the kth Ethernet switch sub-port corresponding to one of the P Ethernet switch sub-ports, and wherein N, M, and P are greater than one; and
a plug-in universal (PIU) module having M PIU ports, wherein the ith PIU port of the M PIU ports corresponds to the ith Ethernet switch,
wherein the optical transport networking switching system switches optical data units through the Ethernet fabric using the PIU modules and a virtual switch fabric associated with the PIU modules.
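The (i, j, k) coordinate scheme in this claim is just a three-level index over switches, ports, and sub-ports, and can be sketched as a flat numbering. Row-major order and the 0-based flat identifier are illustrative assumptions; only the 1-based coordinates and the i-th-PIU-port-to-i-th-switch correspondence come from the claim.

```python
def sub_port_index(i, j, k, N, P):
    """Flatten the claim's (i-th switch, j-th port, k-th sub-port)
    coordinates, all 1-based, into a 0-based identifier (row-major
    numbering assumed)."""
    return (i - 1) * N * P + (j - 1) * P + (k - 1)

def piu_port_for_switch(i):
    """Per the claim, the i-th PIU port corresponds to the i-th switch."""
    return i
```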

US Pat. No. 10,219,049

OPTICAL RECEPTION APPARATUS, OPTICAL TRANSMISSION APPARATUS, OPTICAL COMMUNICATION SYSTEM, AND SKEW ADJUSTING METHOD

FUJITSU LIMITED, Kawasak...

1. An optical reception apparatus comprising:
a memory; and
a processor coupled to the memory, wherein the processor executes a process comprising:
receiving an optical signal including a plurality of first pilot symbols obtained by modulating values of bits in a predetermined bit pattern by an optical transmission apparatus by a BPSK method in an IQ complex plane, and converting the received optical signal into an electrical signal;
performing suppression processing to suppress fluctuations in amplitude of the electrical signal;
extracting the first pilot symbols from the electrical signal having been subjected to the suppression processing;
first calculating a ratio of an amplitude component to a phase component of each of the first pilot symbols extracted by the extracting; and
transmitting information relating to skew adjustment based on the ratio of the amplitude component to the phase component calculated by the first calculating for each of a plurality of different control values for skew to the optical transmission apparatus.
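The "ratio of an amplitude component to a phase component" of a BPSK pilot can be read as mean |I| over mean |Q|: ideal BPSK lies on the I axis, so IQ-skew leakage shows up in Q, and the ratio grows as the skew control value improves. That reading, and picking the control value with the largest ratio, are assumptions; the patent may define the components differently.

```python
def pilot_ratio(pilot_symbols):
    """Mean |I| over mean |Q| of received BPSK pilot symbols
    (a hypothetical reading of the claimed ratio)."""
    i_mean = sum(abs(s.real) for s in pilot_symbols) / len(pilot_symbols)
    q_mean = sum(abs(s.imag) for s in pilot_symbols) / len(pilot_symbols)
    return i_mean / q_mean

def best_skew_setting(trials):
    """From (control_value, pilot_symbols) trials, pick the control
    value whose pilots show the least I-to-Q leakage."""
    return max(trials, key=lambda t: pilot_ratio(t[1]))[0]
```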

US Pat. No. 10,219,048

METHOD AND SYSTEM FOR GENERATING REFERENCES TO RELATED VIDEO

ARRIS Enterprises LLC, S...

1. A method of generating references to related videos, comprising the steps of:
comparing a keyword-context pair for a primary video to a plurality of keyword-context pairings, wherein:
the keyword-context pair for the primary video comprises: a keyword comprising one or more words identified within closed caption text of the primary video, and a context of the keyword, the context comprising program metadata of the primary video,
the plurality of keyword-context pairings is provided in a knowledge base that is a stored database separate from the primary video, the knowledge base comprising:
a pre-determined listing of a plurality of known keywords, each keyword comprising no express identification of, and no direct reference to, another video,
a plurality of known contexts, and
a plurality of pre-determined rules,
the plurality of keyword-context pairings stored in the knowledge base pairs each one of the known keywords with one or more of the known contexts,
each one of the keyword-context pairings stored in the knowledge base is associated with a corresponding one of the pre-determined rules stored in the knowledge base, and
each one of the pre-determined rules comprises one or more actions that, when performed, identify a reference video from the associated keyword-context pairing, the rules being pre-determined using semantic matching that is based on a contextual meaning of the keyword in the known context of the associated keyword-context pairing, to deduce a reference to another video;
based on the comparing, determining a match of the keyword-context pair with a matching one of the keyword-context pairings in the listing, wherein the keyword-context pair comprises: the keyword identified from the primary video, and the context of the keyword;
taking the one or more actions specified by the rule in the listing associated with the matching one of the keyword-context pairings in the listing;
obtaining, from a result of the one or more actions, information identifying a reference video related to the primary video; and
creating an annotation comprising program metadata of the reference video related to the primary video.
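The lookup at the heart of this claim (match a keyword-context pair against stored pairings, then run the associated rule's action to identify a reference video) can be sketched with a dict keyed by pairings and callables as rules. That representation is a simplifying assumption; the claim's knowledge base and rule engine are more general.

```python
def find_reference_video(keyword, context, knowledge_base):
    """Match the primary video's (keyword, context) pair against the
    stored pairings and run the associated rule's action; returns the
    identified reference video or None when nothing matches."""
    action = knowledge_base.get((keyword, context))
    return action(keyword, context) if action else None
```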

US Pat. No. 10,219,047

MEDIA CONTENT MATCHING USING CONTEXTUAL INFORMATION

GOOGLE LLC, Mountain Vie...

1. A method, comprising:
determining whether a similarity value between characteristics of a portion of a probe media content item and characteristics of a portion of a reference media content of a reference media content item exceeds a threshold, wherein the threshold depends on the characteristics of the portion of the probe media content item and the characteristics of the portion of the reference media content;
in response to determining that the similarity value exceeds the threshold, determining that the probe media content item includes the portion of reference media content of the reference media content item;
upon determining that the probe media content item includes the portion of the reference media content, receiving a pair of media content items comprising the probe media content item and the reference media content item;
receiving metadata associated with the pair, the metadata associated with the pair providing information about the probe media content item, the reference media content item, and the portion of reference media content reused in the probe media content;
classifying the pair into a reuse group of a plurality of reuse groups based on the metadata associated with the pair, each of the plurality of reuse groups associated with a corresponding amount of reuse different from other reuse groups, the corresponding amount of reuse used for determining whether or not the probe media content item is to be flagged for removal;
comparing a first amount of the portion of reference media content to a second amount of reuse associated with the reuse group into which the pair is classified; and
responsive to the first amount of the portion of reference media content being greater than the second amount of reuse associated with the reuse group, flagging the probe media content item for removal.
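The final comparison in this claim (classify the matched pair into a reuse group from its metadata, then flag the probe item if the reused portion exceeds that group's allowed amount) can be sketched as two small functions. The group names, the licensed/unlicensed criterion, and the numeric limits are illustrative assumptions; the claim does not specify them.

```python
def classify_pair(metadata):
    """Toy metadata-based classification into reuse groups; the real
    grouping criteria are not disclosed at this granularity."""
    return "licensed" if metadata.get("licensed") else "unlicensed"

def should_flag(reused_amount, reuse_group, reuse_limits):
    """Flag the probe item for removal when its reused portion exceeds
    the amount of reuse associated with its group."""
    return reused_amount > reuse_limits[reuse_group]
```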

US Pat. No. 10,219,046

DISTRIBUTION MANAGEMENT FOR MEDIA CONTENT ASSETS

PREMIERE DIGITAL SERVICES...

1. A computer-implemented method for distributing media content comprising:
acquiring, into a computer database, title information for a media asset;
maintaining, in the computer database, distribution requirements for one or more retailers and one or more territories, wherein the distribution requirements comprise one or more file requirements for one or more files required to distribute the media asset;
selecting one or more desired retailers from the one or more retailers;
based on the title information and the one or more desired retailers, automatically selecting one or more territories with distribution requirements that match the title information, wherein the title information comprises an original spoken language of the media asset;
displaying the one or more file requirements for the selected one or more desired retailers and the selected one or more territories, wherein the displaying the file requirements comprises interactively indicating whether one or more of the one or more files is available, is not available, or should be automatically created including displaying a cost associated with a creation of the one or more files;
creating an order, wherein the creating is based on the title information, the selected one or more desired retailers, the selected one or more territories, and the one or more file requirements, wherein the title information comprises one or more languages of the one or more files that are to be included in the order, and wherein the automatically selecting the one or more territories comprises selecting the one or more territories with language requirements that match the one or more languages of the one or more files that are to be included in the order;
receiving the one or more files identified by the one or more file requirements; and
based on the order, automatically submitting the one or more received files for distribution of the media asset via the selected one or more retailers in the one or more territories.
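The automatic territory-selection step (keep only territories whose language requirements match the title's original spoken language) can be sketched directly. The record layout and field names are assumptions about how the computer database might be organized.

```python
def select_territories(original_language, territories):
    """Return the names of territories whose language requirements
    include the title's original spoken language (hypothetical
    territory records)."""
    return [t["name"] for t in territories
            if original_language in t["required_languages"]]
```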

US Pat. No. 10,219,045

SERVER, IMAGE PROVIDING APPARATUS, AND IMAGE PROVIDING SYSTEM COMPRISING SAME

LG ELECTRONICS INC., Seo...

1. A server comprising: a memory to store a personal server list and network information of an image providing apparatus corresponding to the personal server list;
an interface to receive a connection request from a terminal in response to a Web address input to the terminal;
a processor electrically coupled to the memory and the interface, and configured to: perform a control operation to transmit information for connection to a personal server to the terminal according to a connection request;
perform a control operation, when login information is received from the terminal, to transmit personal server list information corresponding to the login information to the terminal;
perform a control operation, when the terminal makes a request for information corresponding to a specific personal server list of the personal server list, to transmit, to the terminal, network information of an image providing apparatus corresponding to a corresponding personal server,
wherein a corresponding personal server stores thumbnails for a shared content list and a recommended content list,
wherein the network information comprises public IP information and private IP information of the image providing apparatus,
wherein the private IP information changes whenever the image providing apparatus is turned on, and wherein the interface connects to the image providing apparatus to receive new network information of the image providing apparatus whenever the image providing apparatus is turned on; and
perform a control operation to update the memory with the new network information.
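The last two limitations describe refreshing the stored network information each time the image providing apparatus powers on. That update can be sketched as a simple registry write (the registry layout and field names are assumptions, not from the patent):

```python
def on_device_power_on(registry, device_id, public_ip, private_ip):
    """Overwrite the stored network information for an image providing
    apparatus whenever it powers on, since its private IP changes on
    each power-on per the claim (structure is illustrative)."""
    registry[device_id] = {"public_ip": public_ip, "private_ip": private_ip}
    return registry[device_id]
```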

US Pat. No. 10,219,044

ELECTRONIC PROGRAMMING GUIDE WITH SELECTABLE CATEGORIES

Intel Corporation, Santa...

1. A computer system capable of being used in association with a television and a remote control, the computer system comprising:
a wireless communication interface capable of permitting, when the computer system is in operation, wireless communication between the computer system and a remote control;
a processor to execute program instructions that, when executed, result in performance of operations comprising:
displaying, on the television, a graphical user interface comprising an electronic programming guide that is capable of presenting, in response to user input, user selectable icons and related information associated, at least in part, with video content items that are capable of being selected, via the graphical user interface, for viewing on the television, the video content items to be received, at least in part, by the computer system via the Internet;
the icons comprising video content category icons, a search icon, and a chat icon;
the category icons being associated with one or more respective selectable video content subcategory icons whose selection results in display of one or more associated selectable video content item icons;
the search icon being to facilitate keyword searching for available video content items, the available video content items including at least one video content item associated with currently ongoing video content;
the chat icon being capable of being selected, after the at least one video content item has been selected for display, to permit chatting related to the at least one video content item;
wherein:
the computer system is capable, when the computer system is in the operation, of receiving user-entered information indicating that a user of the computer system has decided that certain video content is to be categorized as being in favorite video content categorization as defined by the user;
the favorite video content categorization is to be associated with another icon that, when selected by the user when the computer system is in the operation, results in displaying, via the graphical user interface, of favorite video content item icons that have been categorized as being in the favorite video content categorization;
the favorite video content item icons are configurable to include at least one favorite video content item icon and at least one other favorite video content item icon;
the at least one favorite video content item icon is associated with other video content that is not yet available for viewing; and
the at least one other favorite video content item icon is associated with broadcast news video content that is currently available for viewing;
wherein the at least one favorite video content item icon and the at least one other favorite video content item icon are configurable to be displayed in separate categorizations.

US Pat. No. 10,219,043

SHARING VIDEO CONTENT FROM A SET TOP BOX THROUGH A MOBILE PHONE

11. A device comprising:
a processing system including a processor of a mobile phone; and
a memory that stores executable instructions that, when executed by the processing system, facilitate performance of operations comprising:
receiving a first signal during presentation of video content at a display device coupled to a media processor, the first signal indicating that sharing of a portion of the video content is to be performed;
receiving a second signal initiating a message via a messaging client of the mobile phone;
obtaining the portion of the video content from a cache accessible to the media processor;
receiving a third signal representing an endpoint of a video clip from the portion of the video content;
editing the video clip in accordance with user input at the mobile phone;
converting a format of the video clip, thereby producing a converted video clip and enabling presentation of the converted video clip at a recipient device; and
transmitting the converted video clip to the recipient device via the messaging client,
wherein the media processor and the mobile phone comprise a natively integrated device, the converted video clip accordingly being produced and transmitted without installation of an application on the mobile phone being required, and wherein the media processor and the mobile phone are mutually authenticated.

US Pat. No. 10,219,042

METHOD AND APPARATUS FOR MANAGING PERSONAL CONTENT

1. A method, comprising:
obtaining, by a system including a processor, first personal content associated with a first mobile communication device, wherein the first personal content is generated based on sensory information obtained by the first mobile communication device and a second mobile communication device, wherein the first mobile communication device obtains a portion of the sensory information from the second mobile communication device in response to a broadcast of a wireless signal by the first mobile communication device representing a notice to obtain additional sensory information as the portion of the sensory information, wherein the sensory information is associated with an environment of the first mobile communication device, and wherein the sensory information comprises images of the environment;
obtaining, by the system, image recognition information associated with an object;
performing, by the system, image recognition on the first personal content using the image recognition information to detect the object being present in a first image of the first personal content;
obtaining, by the system, second personal content associated with the first mobile communication device;
performing, by the system, additional image recognition on the second personal content using the image recognition information to detect the object being present in a second image of the second personal content; and
generating, by the system, combined media content based on the object being present in both the first personal content and the second personal content.
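The generating step fires only when image recognition finds the object in both the first and second personal content. A minimal sketch of that gate (the `detect` predicate stands in for the claim's image-recognition step; names are illustrative):

```python
def combine_media(first_content, second_content, detect):
    """Combine two collections of images only when the target object is
    detected in at least one image of each collection. `detect` is a
    stand-in predicate for the patented image recognition."""
    if any(detect(img) for img in first_content) and \
       any(detect(img) for img in second_content):
        return list(first_content) + list(second_content)
    return None  # object not present in both; no combined media content
```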

US Pat. No. 10,219,041

SYSTEMS AND METHODS FOR TRANSMITTING MEDIA ASSOCIATED WITH A MEASURE OF QUALITY BASED ON LEVEL OF GAME PLAY IN AN INTERACTIVE VIDEO GAMING ENVIRONMENT

ROVI GUIDES, INC., San J...

1. A method for providing supplemental content in 3D interactive video gaming environments, the method comprising:
generating, by processing circuitry, a display of a 3D interactive video gaming environment;
identifying a subset of a plurality of videos, stored on a storage device, the plurality of videos having been provided by a plurality of other users;
selecting, by the processing circuitry, a video of the subset of the plurality of videos for display;
determining a position between a user and a background object in the 3D interactive video gaming environment; and
generating for display the video at the position.

US Pat. No. 10,219,040

VIDEO FRAME BOOKMARKING USER INTERFACE COMPONENT

THE DIRECTV GROUP, INC., ...

7. A system for bookmarking a frame of media content comprising:
(a) a computer comprising a processor;
(b) a media player executing on the computer via the processor; and
(c) a user interface (UI) component that controls playback of the media content in the media player, wherein the UI component comprises:
(1) a circular progress bar;
(2) a scaled circular keyframe within the circular progress bar, wherein the scaled circular keyframe comprises a preview of the frame of the media content located at a time within the media content that is identified by a progress marker;
(3) a bookmark indicator that is displayed on the circular progress bar, wherein the bookmark indicator reflects a location in the media content where the frame is located, and wherein the frame is identified by a user, and wherein the bookmark indicator is different from the progress marker; and
(4) a UI feature for:
(i) accepting user input identifying the frame within the media content;
(ii) accepting user input associating the frame with a bookmark;
(iii) accepting user input selecting the bookmark indicator; and
(iv) in response to the user input selecting the bookmark indicator, playing the media content from the frame identified by the bookmark indicator.

US Pat. No. 10,219,039

METHODS AND APPARATUS TO ASSIGN VIEWERS TO MEDIA METER DATA

The Nielsen Company (US),...

1. A method to impute panelist household viewing behavior, the method comprising:
monitoring, via a first meter, a first media presentation device in a tuning household associated with tuning panelists, the first meter structured to collect first media identification data indicative of first media to which the first media presentation device is tuned without collecting person identifying information indicative of which of the tuning panelists are exposed to the first media;
monitoring, via second meters, second media presentation devices in a set of viewing households associated with viewing panelists, the second meters structured to (i) collect second media identification data indicative of respective second media to which the second media presentation devices are tuned and (ii) collect person identifying information indicative of which of the viewing panelists are exposed to the respective second media;
transmitting, via a network, first data from the first meter, the first data including the first media identification data, the first data not including the person identifying information associated with the tuning panelists;
transmitting, via the network, second data from the second meters, the second data including the second media identification data and the person identifying information associated with the viewing panelists;
calculating, by executing an instruction with a processor, first viewing probabilities for the tuning panelists during a first set of time periods, the first viewing probabilities calculated based on the first data;
identifying a plurality of candidate viewing households from among the set of viewing households based on a similarity of household characteristics with the tuning household;
calculating, by executing an instruction with the processor, second viewing probabilities for the viewing panelists in the plurality of candidate viewing households during a second set of time periods, the second viewing probabilities calculated based on the second data;
identifying, by executing an instruction with the processor, a matching one of the plurality of candidate viewing households that matches the tuning household based on an absolute difference value between an average value of the first viewing probabilities and respective ones of average values of the second viewing probabilities; and
imputing, by executing an instruction with the processor, ones of tuning minutes of the tuning household as viewing minutes for ones of the tuning panelists when the matching one of the plurality of candidate viewing households exhibits viewing activity during one of the second set of time periods that matches one of the first set of time periods, the tuning minutes indicative of when the first media presentation device was tuned to the first media, the viewing minutes indicative of when the ones of the tuning panelists were exposed to the first media to which the first media presentation device was tuned.
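The matching step in this claim picks the candidate viewing household whose average viewing probability is closest to the tuning household's, by absolute difference. A minimal sketch of that comparison (data shapes are assumptions for illustration):

```python
def match_household(tuning_avg, candidate_avgs):
    """Return the candidate viewing household whose average viewing
    probability has the smallest absolute difference from the tuning
    household's average. `candidate_avgs` maps a household id to its
    average probability (illustrative layout)."""
    return min(candidate_avgs, key=lambda h: abs(candidate_avgs[h] - tuning_avg))
```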

US Pat. No. 10,219,038

HANDLING DISRUPTION IN CONTENT STREAMS RECEIVED AT A PLAYER FROM A CONTENT RETRANSMITTER

SLING MEDIA PVT LTD, Ban...

1. A method for handling disruption in content streams received at a player device from a content retransmitter, the method comprising:
the player device monitoring bandwidth of a communication connection utilized to receive a first portion of content at the player device from the content retransmitter wherein the first portion of the content is encoded at a first resolution level; and
the player device signaling the content retransmitter to encode a second portion of the content at a second resolution level when the player device determines that the bandwidth of the communication connection has decreased below a threshold value, wherein the player device determines that the bandwidth of the communication connection has decreased below the threshold value by measuring a number of frames of the first portion of the content dropped by the player device during at least one of decoding the first portion of the content or rendering the first portion of the content.
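The claim infers a bandwidth drop from the number of frames the player drops during decoding or rendering. A minimal sketch of that heuristic (the ratio threshold is an assumption; the patent only requires comparison against a threshold value):

```python
def should_downscale(dropped_frames, total_frames, drop_ratio_threshold=0.05):
    """Infer that connection bandwidth has fallen below a threshold when
    the share of frames dropped during decode/render exceeds a ratio.
    The 5% default is an illustrative assumption."""
    if total_frames == 0:
        return False
    return dropped_frames / total_frames > drop_ratio_threshold
```

When this returns `True`, the player would signal the retransmitter to encode the next portion at the lower second resolution level.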

US Pat. No. 10,219,036

DYNAMICALLY ADJUSTING VIDEO MERCHANDISING TO REFLECT USER PREFERENCES

NETFLIX, INC., Los Gatos...

1. A method, comprising:displaying a plurality of still images associated with a plurality of video assets within a display, wherein the plurality of still images are amenable to interaction;
detecting that a user has selected a first video asset included in the plurality of video assets based on at least one interaction with a first still-image included in the plurality of still-images and corresponding to the first video asset;
determining that the user has continued interest in the first video asset;
in response to the user having continued interest:
determining a target scene included in the first video asset for the user;
displaying a video of the target scene within the display; and
continuing to display the plurality of still images within the display concurrently with displaying the video of the targeted scene, wherein the plurality of still images remains amenable to interaction;
determining that the user has continued interest in the first video asset after displaying the video of the target scene; and
in response to the user having continued interest after displaying the video of the target scene and prior to a request from the user to play back the first video asset:
determining a starting scene included in the first video asset; and
playing back the first video asset from the starting scene within the display.

US Pat. No. 10,219,035

SYSTEM AND METHOD FOR PROVIDING A TELEVISION NETWORK CUSTOMIZED FOR AN END USER

1. A system for treating an individual at an end user location coping with neurodegeneration, comprising:
a first signal feed including content including personal imagery or video comprising persons, places, or things that are personal to the life of the specific individual;
a second signal feed including visual or audiovisual content; and
a control device configured to:
access the first and second signal feeds;
select specific content from the first signal feed;
select specific content from the second signal feed;
combine the selected content from the first signal feed and the selected content from the second signal feed to generate an output feed; and
present the generated output feed with an audiovisual display of the combined selected content from each of the first and second signal feed at the end user location to treat the neurodegeneration of the specific individual by refreshing a memory of the specific individual with the personal imagery or video provided from the first signal feed and visual or audiovisual content provided from the second signal feed;
electronically detect an event at the end user location via a monitoring device; and
select specific content from the first signal feed responsive to the detected event at the end user location and modify the output feed to include the selected specific content.

US Pat. No. 10,219,034

METHODS AND APPARATUS TO DETECT SPILLOVER IN AN AUDIENCE MONITORING SYSTEM

THE NIELSEN COMPANY (US),...

1. An apparatus comprising:
an audio sample selector to select first audio samples of a first audio signal received by a first microphone, the first audio signal associated with media;
an offset selector to select first offset samples corresponding to a second audio signal received by a second microphone, the second audio signal associated with media, the first offset samples offset from the first audio samples by a first offset value;
a cluster analyzer to, if a first count of occurrences of the first offset value satisfies a count threshold, accept the first offset value into a cluster;
a weighted averager to calculate a weighted average of a second offset value and the first offset value in the cluster;
a direction determiner to determine an origin direction of the media based on the weighted average; and
a code reader to, when the origin direction is within a threshold angle, monitor audio codes embedded in the media.
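The cluster-analyzer and weighted-averager elements amount to accepting only offsets whose occurrence count meets a threshold, then averaging them weighted by count. A minimal sketch (the input layout, a map from offset value to occurrence count, is an illustrative assumption):

```python
def cluster_weighted_offset(offset_counts, count_threshold):
    """Accept into the cluster only offsets whose occurrence count meets
    the count threshold, then return their count-weighted average, as in
    the claim's cluster/weighted-average steps (sketch only)."""
    cluster = {off: n for off, n in offset_counts.items() if n >= count_threshold}
    if not cluster:
        return None  # no offset occurred often enough to be trusted
    total = sum(cluster.values())
    return sum(off * n for off, n in cluster.items()) / total
```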

US Pat. No. 10,219,033

METHOD AND APPARATUS OF MANAGING VISUAL CONTENT

SNELL ADVANCED MEDIA LIMI...

1. A method of managing visual content, comprising the steps of:
receiving a stream of video fingerprints, derived in a fingerprint generator by an irreversible data reduction process from respective temporal regions within a particular visual content stream, at a fingerprint processor that is physically separate from the fingerprint generator via a communication network; and
processing said fingerprints in the fingerprint processor to generate metadata which is not directly encoded in the fingerprints; wherein said processing includes:
windowing the stream of video fingerprints with a time window,
deriving frequencies of occurrence of particular fingerprint values or ranges of fingerprint values within each time window by converting each set of particular fingerprint values to a histogram,
determining statistical moments or entropy values of said frequencies of occurrence,
comparing said statistical moments or entropy values with expected values for particular types of content,
generating metadata representing the type of the visual content, and
providing the metadata to a control system for managing video content distribution.
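The histogram-and-entropy step above is standard Shannon entropy over fingerprint-value frequencies within one time window; comparing the result against expected values per content type is the classification the claim describes. A minimal sketch:

```python
from collections import Counter
from math import log2

def window_entropy(fingerprints):
    """Shannon entropy (bits) of fingerprint-value frequencies within one
    time window. High entropy suggests varied content; low entropy
    suggests static content (interpretation is illustrative)."""
    counts = Counter(fingerprints)
    total = len(fingerprints)
    return -sum((n / total) * log2(n / total) for n in counts.values())
```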

US Pat. No. 10,219,032

TUNING BEHAVIOR ENHANCEMENT

ARRIS Enterprises LLC, S...

1. A method for processing content received from a communications network, the method comprising:
in one or more processors of a Customer Premises Equipment (CPE) device communicatively coupled to a communications network:
receiving from the communications network a signal containing a program;
processing the received signal to display the program;
receiving a request to tune to other content;
determining whether the request was generated from a user input device or from a module of the CPE device; and
differentiating, based on a result of the determining, between a user-initiated command to tune to other content and a non-user initiated command to tune to other content, the differentiating comprising:
responsively to the request being generated from a user input device, stopping the processing of the signal containing the program while obtaining the other content, and during a delay after the user-initiated command to tune to other content has been received, displaying one of a mute to black and a mute to still; and
responsively to the request being generated from a module of the CPE device, during a delay after the non-user initiated command to tune to other content has been received, continuing to process the signal containing the program for display of the program while obtaining the other content, and continuing to display the program during the delay.
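The differentiation the claim draws is a single branch on the request source: a user-initiated tune mutes to black (or a still) during the delay, while a device-initiated tune keeps showing the current program. A minimal sketch (frame objects and source labels are stand-ins):

```python
def frame_during_tune_delay(request_source, current_frame, black_frame):
    """During the tune delay, show mute-to-black/still for a
    user-initiated tune, but keep displaying the current program for a
    tune initiated by a CPE module (labels are illustrative)."""
    if request_source == "user":
        return black_frame    # stop processing the program; mute to black or still
    return current_frame      # continue processing and displaying the program
```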

US Pat. No. 10,219,031

WIRELESS VIDEO/AUDIO SIGNAL TRANSMITTER/RECEIVER

Untethered Technology, LL...

1. A system for wirelessly mirroring video from a mobile device to a display screen, the system comprising:
a transmitter device comprising:
a communications connector configured to electronically connect to a standard communications port on a mobile device;
a first one or more video and audio signal processors electronically connected to the communications connector and preconfigured to provide communications with a second one or more video and audio signal processors, the first one or more video and audio signal processors configured to:
receive a video and audio signal from the mobile device via the communications connector;
generate, based on the video and audio signal received from the mobile device, an HDMI video and audio signal; and
generate, based on the HDMI video and audio signal, a wireless network transmission signal;
a first antenna; and
a first RF transceiver electronically connected to the first one or more video and audio signal processors and to the first antenna, the first RF transceiver configured to communicate the wireless network transmission signal wirelessly to a second RF transceiver via the first antenna and without retransmission by additional wireless networking devices; and
a receiver device preconfigured for operation with the transmitter device, the receiver device comprising:
a second antenna;
an HDMI output connector configured to electronically connect to an HDMI input port on a display screen;
the second RF transceiver electronically connected to the second one or more video and audio signal processors and to the second antenna, the second RF transceiver configured to receive the wireless network transmission signal from the first RF transceiver via the second antenna and communicate the wireless network transmission signal to the second one or more video and audio signal processors; and
the second one or more video and audio signal processors electronically connected to the HDMI output connector and preconfigured to provide communications with the first one or more video and audio signal processors, the second one or more video and audio signal processors configured to:
receive the wireless network transmission signal from the second RF transceiver;
generate, based on the wireless network transmission signal, the HDMI video and audio signal; and
output the HDMI video and audio signal to the display screen via the HDMI output connector.

US Pat. No. 10,219,030

MULTI-INTERFACE STREAMING MEDIA SYSTEM

Roku, Inc., Los Gatos, C...

1. A system, comprising:
an audio/visual device; and
a media device for accessing streamed data and operatively coupled to the audio/visual device, wherein the media device is configured to
detect a type of audio/visual interface that is utilized by the media device via an audio/visual connector of the media device,
detect whether the media device is properly connected to an external power source via a power connector and a removable power cord, wherein the removable power cord is operatively coupled to the power connector, and
determine whether additional power is required to fully operate the media device based at least on the type of audio/visual interface that is utilized by the media device, wherein the type of audio/visual interface includes a first type of audio/visual interface capable of fully operating the media device without the additional power and a second type of audio/visual interface not capable of fully operating the media device without the additional power.
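The power determination reduces to checking whether the detected audio/visual interface is of the type that can fully power the media device. A minimal sketch (the interface labels mirror the claim's "first type"/"second type" and are otherwise assumptions):

```python
def additional_power_required(interface_type):
    """Return True when the detected A/V interface cannot fully operate
    the media device on its own, per the claim's two interface types
    (labels are illustrative)."""
    SELF_POWERING = {"first_type"}  # interfaces capable of fully operating the device
    return interface_type not in SELF_POWERING
```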

US Pat. No. 10,219,029

DETERMINING ONLINE CONTENT INSERTION POINTS IN AN ONLINE PUBLICATION

Google LLC, Mountain Vie...

1. A computer-implemented method for determining online content insertion points in an online publication, comprising:
receiving, by a break point identifying (“BPI”) computer device in communication with a memory device, a candidate online publication that includes a plurality of audio segments;
determining, by the BPI computer device, a threshold proportional to a total length of the candidate online publication;
comparing, by the BPI computer device, a portion of each of the plurality of audio segments to a plurality of reference audio segments to identify a number of the plurality of audio segments that match one of the plurality of reference audio segments;
determining, by the BPI computer device and responsive to the number of the plurality of audio segments that match one of the plurality of reference audio segments being above a second threshold, a plurality of break candidates within the candidate online publication;
determining, by the BPI computer device, a first aggregate time for a first break candidate of the plurality of break candidates, the first aggregate time comprising a duration between the first break candidate and a first prior break candidate;
determining, by the BPI computer device, the first aggregate time for the first break candidate is less than the threshold proportional to the total length of the candidate online publication;
excluding, responsive to the first aggregate time for the first break candidate being less than the threshold proportional to the total length of the candidate online publication, the first break candidate as a content insertion point within the candidate online publication, wherein the content insertion point represents a point in the candidate online publication for presenting online content;
determining, by the BPI computer device, a second aggregate time for a second break candidate, the second aggregate time comprising a time between the second break candidate and a second prior break candidate;
determining, by the BPI computer device, the second aggregate time for the second break candidate is greater than the threshold;
selecting, responsive to the second aggregate time for the second break candidate being greater than the threshold, the second break candidate as the content insertion point within the candidate online publication; and
storing the content insertion point in association with the candidate online publication in the memory device.
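The exclude/select logic above keeps a break candidate only when enough time has elapsed since the prior break, against a threshold proportional to total publication length. A minimal sketch that measures from the previously kept break (the 10% ratio and that simplification are assumptions):

```python
def select_insertion_points(break_candidates, total_length, ratio=0.1):
    """Keep a break candidate as a content insertion point only when the
    duration since the previous kept break meets a threshold
    proportional to total length (ratio is an illustrative assumption)."""
    threshold = ratio * total_length
    kept, prev = [], 0.0
    for t in sorted(break_candidates):
        if t - prev >= threshold:  # aggregate time meets the threshold
            kept.append(t)
            prev = t
    return kept
```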

US Pat. No. 10,219,028

DISPLAY, DISPLAY DEVICE, PLAYER, PLAYING DEVICE, AND PLAYING DISPLAY SYSTEM

BOE Technology Group Co.,...

1. A display, comprising:
at least two first display interfaces, a decoder, and at least two data channels, wherein the at least two first display interfaces have a one-to-one correspondence with the at least two data channels, and a correspondence between the at least two first display interfaces and the at least two data channels is determined by the decoder;
wherein, when each of the at least two first display interfaces is connected, via connecting lines, to a corresponding one of at least two second display interfaces of a player, in a one-to-one connection to transmit at least two first image data streams from the at least two second display interfaces of the player to the at least two first display interfaces, the at least two first display interfaces receive the at least two first image data streams, transmit the received at least two first image data streams to the decoder, each first image data stream of the at least two first image data streams including at least one start data frame and a second image data stream, the start data frame carrying a data channel identifier, wherein the data channel identifier indicates a data channel, among the at least two data channels, corresponding to the first image data stream;
the decoder receives the at least two first image data streams transmitted from the at least two first display interfaces, obtains each second image data stream and data channel identifier corresponding to each second image data stream according to the at least one start data frame included in each of the at least two first image data streams, determines the at least two data channels corresponding to the at least two display interfaces according to the data channel identifiers, and transmits each second image data stream to the corresponding data channel; and
each data channel receives a corresponding second image data stream, and outputs the corresponding second image data stream.
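The decoder's routing step reads the data channel identifier from each stream's start frame and forwards the remaining payload to that channel. A minimal sketch (the stream layout, a start-frame dict followed by payload frames, is an assumption):

```python
def route_streams(first_image_streams):
    """Route each first image data stream to the data channel named by
    the channel identifier in its start frame, mirroring the claim's
    one-to-one channel correspondence (stream layout is illustrative)."""
    channels = {}
    for stream in first_image_streams:
        start_frame, payload = stream[0], stream[1:]
        channels[start_frame["channel_id"]] = payload  # second image data stream
    return channels
```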

US Pat. No. 10,219,027

SYSTEM FOR PROVIDING MUSIC CONTENT TO A USER

Music Choice, Horsham, P...

1. A method for providing an enhanced television service to a user of a user device in communication with a television system, the method comprising:
receiving information indicating that the user desires to consume a selected programmed linear video channel; and
in response to receiving the information, displaying on a display device of the user device a user interface screen, wherein the user interface screen comprises:
i) a first display area for displaying scheduled video content transmitted by the television system on the selected programmed linear channel and in accordance with a video content schedule for the selected programmed linear video channel,
ii) a second display area for displaying a group of graphic images;
displaying, in the first display area, first scheduled video content transmitted by the television system on the selected programmed linear channel in accordance with the video content schedule for the selected programmed linear video channel;
while displaying the first scheduled video content: 1) displaying in the second display area a first group of at least four graphic images, wherein the first group of graphic images is displayed in the second display area in a grid pattern having at least two rows and two columns and 2) displaying an artist list for enabling the user to select an artist, wherein the artist list comprises a set of at least two artist images including a first artist image and a second artist image, where each artist image included in the set identifies an artist, and wherein displaying the artist list comprises displaying the first artist image so that the first artist image is not obscured, and further wherein each graphic image included in the first group of graphic images is associated with a different music video associated with the artist identified by the first artist image;
after the first scheduled video content has ended: 1) automatically displaying, in the first display area, second scheduled video content transmitted by the television system on the selected programmed linear channel in accordance with the video content schedule for the selected programmed linear video channel; 2) automatically adding to the displayed artist list a third artist image such that the third artist image is not obscured; and 3) automatically displaying in the second display area a second group of at least four graphic images, wherein the second group of graphic images is displayed in the second display area in a grid pattern having at least two rows and two columns and each graphic image included in the second group of graphic images is associated with a different music video associated with the artist identified by the third artist image;
while displaying the second scheduled video content in the first display area and the second group of graphic images in the second display area, receiving a user input indicating that the user has selected one of the graphic images included in the second group of graphic images; and
after receiving the user input, causing the music video associated with the selected graphic image to be streamed on-demand to the user device.

US Pat. No. 10,219,026

MOBILE TERMINAL AND METHOD FOR PLAYBACK OF A MULTI-VIEW VIDEO

LG ELECTRONICS INC., Seo...

18. A method of controlling a mobile terminal, the method comprising the steps of:
displaying a first frame corresponding to a first playback angle at a first playback time point of a multi-view video and a progress bar corresponding to the multi-view video;
in response to a touch input applied to the progress bar, moving a time indicator displayed at a first position corresponding to the first playback time point to a second position corresponding to a second playback time point of the progress bar;
displaying a first thumbnail image corresponding to the first playback angle at the second playback time point while maintaining display of the first frame, as the time indicator is moved from the first position to the second position;
in response to a touch input applied to the first thumbnail, changing the first thumbnail image into a second thumbnail image to indicate that the first playback angle is increased to a second playback angle at the second playback time point; and
in response to a touch input applied to the second thumbnail image, replacing the first frame with a different frame of the multi-view video and displaying the different frame, the different frame corresponding to the second playback angle indicated by the second thumbnail image at the second playback time point.

US Pat. No. 10,219,025

VIDEO DISTRIBUTION DEVICE, VIDEO DISTRIBUTION METHOD, AND PROGRAM

DWANGO CO., LTD., Chuo-K...

1. A video distribution device comprising:
a transmission range determinator that receives information indicating a display position for a video gallery display screen from a terminal device, the transmission range determinator determining, as a transmission range for video gallery data in which display images of video data items are arranged, a range at approximately a center of which the display position indicated by the received information is positioned and which has a size greater than a possible display range of the terminal device; and
a video gallery display screen generator that generates the video gallery display screen in which the display images of the video data items included in the transmission range determined by the transmission range determinator are arranged according to an arrangement of the display images of the video data items which is defined by the video gallery data, wherein the video gallery display screen generator distributes the generated video gallery display screen to the terminal device;
wherein among the display images of the video data items in the video gallery display screen, an outermost image included in the display range of the terminal device in the video gallery display screen is displayed in a manner such that part of the outermost image is cut by an outer periphery of the display range; and
in the video gallery display screen, the display images of the video data items have the same size and are arranged in a vertical direction and a horizontal direction.

US Pat. No. 10,219,024

TRANSMISSION APPARATUS, METAFILE TRANSMISSION METHOD, RECEPTION APPARATUS, AND RECEPTION PROCESSING METHOD

SATURN LICENSING LLC, Ne...

1. A transmission apparatus, comprising:
processing circuitry configured to
store, in a memory, first acquisition information used for a first client terminal to acquire a first determined number of data streams of first content that are to be delivered by a delivery server via a network, second acquisition information used for the first client terminal and a second client terminal to acquire a second determined number of data streams of a second content that are to be delivered by the delivery server via the network, a first metafile, and a second metafile,
wherein the first metafile includes either first presentation control information to control presentation of the first content or first reference information to refer to a first file that includes the first presentation control information, and the second metafile includes either second presentation control information to control presentation of the second content or second reference information to refer to a second file that includes the second presentation control information,
wherein the first presentation control information includes first presentation time information that designates a start time at which the first content is to be reproduced, and the second presentation control information includes second presentation time information that designates a start time at which the second content is to be reproduced;
transmit, to the first client terminal via the network, the stored first metafile and the stored second metafile based on a first transmission request transmitted from the first client terminal via the network; and
transmit, to the second client terminal via the network, the stored second metafile based on a second transmission request transmitted from the second client terminal via the network, wherein
the first client terminal stops reproducing the second content in response to receiving a notification from the second client terminal.

US Pat. No. 10,219,023

SEMICONDUCTOR DEVICE, VIDEO DISPLAY SYSTEM, AND METHOD OF OUTPUTTING VIDEO SIGNAL

LAPIS SEMICONDUCTOR CO., ...

1. A semiconductor device comprising:
a first selecting processor configured to select one of a first video signal and a second video signal according to a first selection signal;
a selection signal generating processor configured to generate the first selection signal;
a second selecting processor configured to select another one of the first video signal and the second video signal according to a second selection signal; and
a scaling processor configured to scale a size of a video of the another one of the first video signal and the second video signal to a size of a display device,
wherein said first selecting processor is configured to output the one of the first video signal and the second video signal in synchronization with a synchronization signal accompanied with the one of the first video signal and the second video signal,
said second selecting processor is configured to output the another one of the first video signal and the second video signal in synchronization with a synchronization signal accompanied with the another one of the first video signal and the second video signal to the scaling processor,
said scaling processor is configured to output the another one of the first video signal and the second video signal to the first selecting processor, said scaling processor includes a second setting processor configured to store a setting value for the scaling processor to scale the video, said scaling processor is configured to supply a second status signal indicating that the scaling processor changes the setting value to the second selecting processor when the scaling processor changes the setting value to be stored in the second setting processor, and
said second selecting processor is configured to output the another one of the first video signal and the second video signal in synchronization with the synchronization signal accompanied with the another one of the first video signal and the second video signal after the second selecting processor detects that the scaling processor completes changing the setting value to be stored in the second setting processor according to the second status signal.

US Pat. No. 10,219,022

METHOD AND SYSTEM FOR SHARING TELEVISION (TV) PROGRAM INFORMATION BETWEEN SET-TOP-BOXES (STBS)

Wipro Limited, Bangalore...

1. A method of sharing television (TV) program information, the method implemented by one or more set-top-boxes (STBs) and comprising:
obtaining program-specific-information of TV content;
converting, by a Text-to-Speech (TTS) converter, the program-specific information into a voice message;
establishing, using a first Subscriber Identity Module (SIM), a voice call to another STB;
transmitting, from the first SIM, (i) the voice message associated with the program-specific-information to a second SIM associated with the another STB over the voice call using a first modulation scheme, (ii) user interaction-data over the voice call using a second modulation scheme and (iii) one or more control commands over the voice call using a third modulation scheme; and
multiplexing the voice message associated with the program-specific information, control commands, and the user-interaction-data over the voice call.

US Pat. No. 10,219,021

METHOD AND APPARATUS FOR PROCESSING COMMANDS DIRECTED TO A MEDIA CENTER

1. A method, comprising:
associating, by a processing system comprising a processor, a first gesture of a first object of a plurality of objects based on images from image data with a first command for controlling a media center, wherein the image data is captured by a plurality of image sensors;
associating, by the processing system, a second gesture of a second object of the plurality of objects based on the images from the image data with a second command for controlling the media center, wherein the second gesture is based on images detected by the plurality of image sensors;
modifying, by the processing system, the first command to a modified command according to a characteristic of the first gesture;
determining, by the processing system, a conflict between the first command and the second command;
presenting, by the processing system, a notification indicating the conflict and requesting a resolution to the conflict;
determining, by the processing system, if a response to the notification indicates the resolution is to perform the first command or the second command; and
responsive to a first determination that the response indicates the resolution is to perform the first command, processing, by the processing system, the modified command to control the media center responsive to determining the response indicates the resolution is to perform the first command.

US Pat. No. 10,219,020

PORTABLE TERMINAL, INFORMATION PROCESSING APPARATUS, CONTENT DISPLAY SYSTEM AND CONTENT DISPLAY METHOD

Maxell, Ltd., Kyoto (JP)...

1. A display apparatus for the display of video content acquired from a television broadcast and for the display of video content acquired via the internet, comprising:
a digital broadcast receiver configured to receive a digital broadcast signal;
a signal separator for de-multiplexing the digital broadcast signal into video data and audio data;
a video processor configured to convert the format of the video data;
an audio processor configured to convert the format of the audio data;
a network communication module configured to communicate over the internet;
a radio receiver for receiving data from an external mobile terminal;
an infra-red (IR) receiver for receiving commands from a remote controller;
a display panel; and
a system controller configured to control the display apparatus to:
receive a first video content via the received broadcast signal;
display the first video content on the display panel;
receive an identifier for identifying a second video content from the external mobile terminal;
acquire the second video content via the internet using said identifier;
display the second video content on the display panel;
terminate display of the first video content prior to display of the second video content;
execute operation instructions received from the external mobile terminal via the radio receiver while the second video content is being displayed; and
execute commands received from the remote controller, wherein said remote controller is a different device than the external mobile terminal.

US Pat. No. 10,219,019

PORTABLE TERMINAL, INFORMATION PROCESSING APPARATUS, CONTENT DISPLAY SYSTEM AND CONTENT DISPLAY METHOD

Maxell, Ltd., Kyoto (JP)...

1. A video apparatus for outputting video content acquired from a television broadcast and for outputting video content acquired via the internet, comprising:
a digital broadcast receiver configured to receive a digital broadcast signal;
a signal separator for de-multiplexing the digital broadcast signal into video data and audio data;
a video processor configured to convert the format of the video data;
an audio processor configured to convert the format of the audio data;
a network communication module configured to communicate over the internet;
a radio receiver for receiving data from an external mobile terminal;
an infra-red (IR) receiver for receiving commands from a remote controller; and
a system controller configured to control the video apparatus to:
receive a first video content via the received broadcast signal;
output first video signals representing the first video content;
receive an identifier for identifying a second video content from the external mobile terminal;
acquire the second video content via the internet using said identifier;
output second video signals representing the second video content;
terminate output of the first video signals prior to outputting the second video signals;
execute operation instructions received from the external mobile terminal via the radio receiver while the second video signals are being outputted; and
execute commands received from the remote controller, wherein said remote controller is a different device than the external mobile terminal.

US Pat. No. 10,219,018

METHOD OF CONTROLLING DISPLAY DEVICE FOR PROVIDING CONTENT AND DISPLAY DEVICE PERFORMING THE SAME

SAMSUNG ELECTRONICS CO., ...

1. A method of controlling a first device, the method comprising:
transmitting identification information of a first user from the first device to a server;
obtaining, from the server, identification information of a plurality of second users that are related to the first user and content information corresponding to the plurality of second users, the content information comprising information of content that is being displayed on a plurality of second devices respectively corresponding to the plurality of second users;
displaying, on the first device, a user interface (UI) for selecting content that corresponds to the content information; and
in response to selecting content using the UI, playing, on the first device, the selected content,
wherein the UI comprises a plurality of objects, each object comprising identification information of one of the plurality of second users and content information corresponding to the one of the plurality of second users, and the plurality of objects are displayed within the UI in order of closeness between the first user and each of the plurality of second users; and
wherein the order of closeness is determined based on comparing a viewing rate, the viewing rate referring to a time during which content has been viewed by each of the plurality of second users as compared to a total time of the content.
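
The "order of closeness" step of the claim above can be sketched in a few lines. The per-content viewing-rate dictionaries and the mean-absolute-difference comparison below are my assumptions for illustration only; the patent does not specify this particular metric:

```python
def order_by_closeness(first_user_rates, others):
    """Order second users by how closely their viewing rates match the
    first user's (hypothetical sketch; the distance metric is assumed).

    first_user_rates / others[user]: dict mapping content_id to a viewing
    rate, i.e. time viewed divided by the content's total duration.
    """
    def distance(user_rates):
        shared = set(first_user_rates) & set(user_rates)
        if not shared:
            return float("inf")  # no common content: least close
        # mean absolute difference of viewing rates over shared content
        return sum(abs(first_user_rates[c] - user_rates[c]) for c in shared) / len(shared)

    # closest users first, matching the claimed display order of UI objects
    return sorted(others, key=lambda u: distance(others[u]))
```

A UI would then lay out one object per user in this returned order.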

US Pat. No. 10,219,017

APPARATUS AND METHODS FOR MULTICAST DELIVERY OF CONTENT IN A CONTENT DELIVERY NETWORK

Time Warner Cable Enterpr...

1. A method of providing packetized content using a managed webserver to a client device from a content server, said method comprising:
receiving a request for said packetized content from said client device;
determining whether said requested packetized content is to be provided via a multicast to a group of devices; and
when it is determined that said requested packetized content is to be provided via said multicast, causing, by the managed webserver, another client device to assign a persistent transmission control protocol (TCP) port that enables direct bidirectional communication between said client device and said content server for receiving said requested packetized content, and thereafter providing said client device with an instruction to cause said client device to query said another client device with a request to open said persistent TCP port between said client device and said content server previously assigned by said another client device, the TCP port being configured to provide said packetized content, said another client device being configured to, in response to said query:
join a multicast group for receiving said requested packetized content; and
enable said content server to provide said requested packetized content as a unicast stream to said client device via said persistent TCP port.

US Pat. No. 10,219,016

EXCLUDING SPECIFIC APPLICATION TRAFFIC FROM CUSTOMER CONSUMPTION DATA

Time Warner Cable Enterpr...

1. A method comprising:
receiving first data packets over a communication link from a first source, the first data packets destined for delivery to a communication device operated by a user in a network environment, the first data packets assigned delivery information to facilitate conveyance of the first data packets over the communication link to the communication device;
examining the delivery information assigned to the first data packets to control delivery of the first data packets, the delivery information indicating that the first data packets are received from the first source; and
in response to detecting that the first data packets are received from the first source and that communications from the first source are to be excluded from a data delivery count representing an amount of data conveyed to the communication device over the communication link on behalf of the user, communicating the first data packets over a data flow that is not counted in the data delivery count assigned to the user.

US Pat. No. 10,219,015

OFFERING ITEMS IDENTIFIED IN A MEDIA STREAM

Amazon Technologies, Inc....

1. A device comprising:
at least one physical processor; and
one or more memory devices to store computer instructions that, when executed by the at least one physical processor, cause the at least one physical processor to:
receive, from a content provider, a content stream, the content stream including content depicting an item;
provide the content stream to a display device associated with a user;
detect the item within the content stream;
analyze one or more visual elements of one or more images of the content stream with respect to a catalog of items to identify one or more characteristics of the item;
analyze one or more audio elements of the content stream to identify at least one of the one or more characteristics of the item, the one or more audio elements being different from part of an advertisement;
determine an identification of a first catalog item that corresponds to the item within the catalog of items based at least in part on the one or more characteristics of the item;
determine an identification of a secondary catalog item that is similar to the item based at least in part on the one or more characteristics of the item, the secondary catalog item having at least one additional different functionality than the item;
receive a request regarding the first catalog item, the request regarding the first catalog item including first audio input from the user, the first audio input corresponding to a transaction phrase token spoken by the user;
receive identification information of the user, wherein the identification information of the user includes the transaction phrase token that is assigned a transaction rule to automatically approve purchase requests for items that belong to an item type for a certain account and to automatically approve shipment of the first catalog item to a first shipping address associated with the certain account;
transmit, to one or more item offering services, the identification of the first catalog item, the identification of the secondary catalog item, and the identification information of the user; and
receive a second request regarding the secondary catalog item, the second request including second audio input from a second user, the second audio input corresponding to a second transaction phrase token spoken by the second user, and wherein the second transaction phrase token is usable to authorize a purchase of the secondary catalog item without user access to billing information associated with a second account corresponding to the second transaction phrase token and to automatically approve shipment of the secondary catalog item to a second shipping address associated with the second account.

US Pat. No. 10,219,012

METHOD AND APPARATUS FOR TRANSCEIVING DATA FOR MULTIMEDIA TRANSMISSION SYSTEM

Samsung Electronics Co., ...

1. A method for receiving media content in a multimedia system, the method comprising:
receiving one or more multimedia data packets generated based on a data unit, the data unit being fragmented into at least one sub data unit, each multimedia data packet including a packet header and a payload; and
decoding the one or more multimedia data packets to recover the media content,
wherein each of the one or more multimedia data packets comprises type information indicating whether respective payload data included in a payload of a given multimedia data packet comprises either metadata of the data unit or a data element derived from the at least one sub data unit,
wherein the type information comprises a first value indicating that the respective payload data comprises the metadata of the data unit if the respective payload data comprises the metadata of the data unit, and
wherein the type information comprises a second value indicating that the respective payload data comprises the data element derived from the at least one sub data unit if the respective payload data comprises the data element derived from the at least one sub data unit.
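
The two-valued type field described in the claim above can be illustrated with a toy packet format. The one-byte header layout and the 0x00/0x01 values below are assumptions for illustration, not the actual MMT packet syntax:

```python
def make_packet(payload, is_metadata):
    """Build a toy packet with a one-byte type field (hypothetical layout).

    type = 0x00: payload is metadata of the data unit   (first value)
    type = 0x01: payload is a data element from a
                 sub data unit                          (second value)
    """
    type_value = 0x00 if is_metadata else 0x01
    return bytes([type_value]) + payload


def parse_packet(packet):
    """Recover the type label and payload from a toy packet."""
    kind = "metadata" if packet[0] == 0x00 else "data_element"
    return kind, packet[1:]
```

A receiver would branch on the type field to decide whether the payload feeds the metadata path or the media-data reassembly path.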

US Pat. No. 10,219,011

TERMINAL DEVICE AND INFORMATION PROVIDING METHOD THEREOF

SAMSUNG ELECTRONICS CO., ...

1. A terminal device comprising:
a communication interface;
a display; and
a processor configured to:
control the display to output a moving picture;
extract fingerprints from frames of the moving picture while the moving picture is output;
detect an object from a frame of the moving picture while the moving picture is output;
control the communication interface to transmit, to a server, a request comprising a fingerprint extracted from a currently output frame, to query information corresponding to the fingerprint comprised in the request;
in response to transmitting the request to the server, receive, from the server, the information corresponding to the fingerprint comprised in the transmitted request; and
control the display to output the received information,
wherein the processor is further configured to control the communication interface to regularly transmit the request to the server at a predetermined time interval while a target object is not detected,
wherein the processor is further configured to control the communication interface to transmit the request to the server immediately when the target object is detected regardless of the predetermined time interval, and
wherein the processor is further configured to stop transmitting the request to the server while the same target object is continuously detected.
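
The three timing rules in the claim above (regular interval with no target, immediate send on new detection, suppression while the same target persists) can be sketched as a small state holder. The class and method names are mine, chosen only for illustration:

```python
class FingerprintRequester:
    """Hypothetical sketch of the claimed request timing.

    - no target object detected: send one request per `interval` seconds
    - a target object is newly detected: send immediately, ignoring the timer
    - the same target is still detected: send nothing
    """

    def __init__(self, interval):
        self.interval = interval
        self.last_sent = None
        self.current_target = None

    def should_send(self, now, target):
        """Return True if a request should go to the server at time `now`."""
        if target is not None:
            newly_detected = target != self.current_target
            self.current_target = target
            if newly_detected:
                self.last_sent = now    # immediate send on new detection
                return True
            return False                # same target: suppress requests
        self.current_target = None
        if self.last_sent is None or now - self.last_sent >= self.interval:
            self.last_sent = now        # regular interval while no target
            return True
        return False
```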

US Pat. No. 10,219,010

SELECTIVE MEDIA PLAYING METHOD AND APPARATUS ACCORDING TO LIVE STREAMING AND RECORDED STREAMING

HANWHA TECHWIN CO., LTD.,...

1. A media streaming apparatus for playing media on a web browser, comprising at least one processor to implement:
a receiving unit configured to receive media data by using a communication protocol which supports web services, the media data being generated by a media service apparatus;
a first media restoring unit configured to decode the media data by a first decoder written in a script which can be parsed by the web browser;
a second media restoring unit configured to decode the media data by a second decoder embedded in the web browser; and
an output unit configured to output the media data decoded by at least one of the first media restoring unit and the second media restoring unit,
wherein the media data is decoded by the at least one of the first media restoring unit and the second media restoring unit based on a streaming mode,
wherein the media data is decoded by the first media restoring unit when the streaming mode is a live streaming mode, and the media data is decoded by the at least one of the first media restoring unit and the second media restoring unit when the streaming mode is a recorded streaming mode.
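
The mode-based routing in the claim above reduces to a small dispatch table. The function and decoder names below are assumptions for illustration; this sketch returns the set of decoders the claim permits for each mode:

```python
def permitted_decoders(streaming_mode):
    """Return which decoders may handle the media data (hypothetical names).

    'live'     -> the script-based decoder parsed by the browser only
    'recorded' -> either or both of the script-based decoder and the
                  decoder embedded in the web browser
    """
    if streaming_mode == "live":
        return {"script_decoder"}
    if streaming_mode == "recorded":
        return {"script_decoder", "embedded_decoder"}
    raise ValueError(f"unknown streaming mode: {streaming_mode}")
```

The live path avoids the embedded decoder, which a player might do to keep latency low at the cost of script-side decoding work.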

US Pat. No. 10,219,009

LIVE INTERACTIVE VIDEO STREAMING USING ONE OR MORE CAMERA DEVICES

Twitter, Inc., San Franc...

1. A computing device comprising:
at least one processor; and
a non-transitory computer-readable medium having executable instructions that when executed by the at least one processor are configured to execute an interactive streaming application, the interactive streaming application configured to:
join a live broadcast of an event that is shared by an interactive video broadcasting service executing on a server computer;
receive a first video stream of the live broadcast, the first video stream having video captured from a camera device configured as a first video source;
display the video of the first video stream on a display screen of the computing device;
trigger display of a first icon and a second icon on the display screen during a course of the live broadcast, the first icon representing a first user-provided engagement provided by a first viewing device, the second icon representing a second user-provided engagement provided by a second viewing device, the first user-provided engagement being associated with a first timestamp in the first video stream such that the display of the first icon is triggered at a time indicated by the first timestamp, the second user-provided engagement being associated with a second timestamp in the first video stream such that the display of the second icon is triggered at a time indicated by the second timestamp,
wherein the first icon is removed from the display screen when a predetermined interval elapses after the time indicated by the first timestamp, and the second icon is removed from the display when a predetermined interval elapses after the time indicated by the second timestamp;
receive a second video stream of the live broadcast, the second video stream having panoramic video captured from a panoramic video capturing device configured as a second video source;
display a portion of the panoramic video according to a first viewing angle on the display screen;
receive a change to the first viewing angle of the panoramic video; and
display another portion of the panoramic video according to a second viewing angle, the second viewing angle providing a different perspective of the panoramic video than what was provided by the first viewing angle.

US Pat. No. 10,219,008

APPARATUS AND METHOD FOR AGGREGATING VIDEO STREAMS INTO COMPOSITE MEDIA CONTENT

1. A system, comprising:
a processing system including a processor; and
a memory that stores executable instructions that, when executed by the processing system, facilitate performance of operations, comprising:
obtaining a live video stream from each of a plurality of communication devices resulting in a plurality of live video streams, the plurality of live video streams being associated with a common event;
detecting a presentation capability of a device;
determining that a group of the plurality of live video streams are from a same perspective of the common event;
identifying a user associated with a communication device, wherein the plurality of communication devices comprise the communication device;
providing additional bandwidth to the communication device based on the communication device providing a first live video stream, wherein the plurality of live video streams comprises the first live video stream;
selecting one of the group of the plurality of live video streams that are from the same perspective of the common event according to the presentation capability of the device;
aggregating a first portion of the plurality of live video streams to generate a composite video stream for presenting a selectable viewing of the common event, wherein the first portion of the plurality of live video streams includes the one of the group of the plurality of live video streams;
sending the composite video stream to the device for presentation of the composite video stream of the common event at the device, wherein the sending of the composite video stream comprises transmitting the composite video stream to a social media server, wherein the social media server shares the composite video stream with social media members;
providing a graphical user interface to the device, wherein the graphical user interface is presented by the device with the presentation of the composite video stream of the common event, wherein the graphical user interface includes a touchscreen to receive first user-generated input through contact with the touchscreen and a gesture, and wherein the graphical user interface enables adjustment of a viewing of the common event;
receiving first user-generated input from the device, wherein the first user-generated input comprises a request to adjust the presentation of the common event by providing a selection of a moving object;
adjusting the composite video stream according to the first user-generated input to generate a first adjusted composite video stream, wherein each image of the adjusted composite video stream includes a selected moving object within the common event;
providing the first adjusted composite video stream to the device for presentation of the first adjusted composite video stream of the common event at the device;
receiving second user-generated input from the device, wherein the second user-generated input comprises a first gesture with the touchscreen of the graphical user interface that indicates a magnification of the selected moving object on a separate screen, and wherein the second user-generated input comprises a change in location of the device;
adjusting the first adjusted composite video stream to generate a second adjusted composite video stream and a third adjusted composite video stream, wherein the second adjusted composite video stream is adjusted according to the change in location and the third adjusted composite video stream is adjusted according to the magnification of the selected moving object; and
providing the second adjusted composite video stream and the third adjusted composite video stream to the device for presentation of the second adjusted composite video stream at a same time as presentation of the third adjusted composite video stream.

US Pat. No. 10,219,007

METHOD AND DEVICE FOR SIGNALING IN A BITSTREAM A PICTURE/VIDEO FORMAT OF AN LDR PICTURE AND A PICTURE/VIDEO FORMAT OF A DECODED HDR PICTURE OBTAINED FROM SAID LDR PICTURE AND AN ILLUMINATION PICTURE

INTERDIGITAL VC HOLDINGS,...

1. A method for signaling, in a bitstream representing an LDR picture obtained from an HDR picture, both a picture/video format of a decoded version of said LDR picture, denoted an output LDR format, and a picture/video format of a decoded version of said HDR picture, denoted an output HDR format, the method comprising encoding in the bitstream a first syntax element defining the output LDR format, wherein the method further comprises encoding in the bitstream a second syntax element which is distinct from the first syntax element and which defines the output HDR format.

US Pat. No. 10,219,005

SYSTEM AND METHOD FOR REAL-TIME COMPRESSION OF DATA FRAMES

HCL Technologies Italy S....

1. A method for real-time compression of a data frame, the method comprising:
receiving, by a processor, a data frame, wherein the data frame comprises a set of symbols, wherein the length of each symbol is m bits;
identifying, by the processor, a frequency associated with each symbol, from the set of symbols, wherein for each symbol, the frequency corresponds to a number of occurrence of the symbol in the data frame;
sorting, by the processor, the set of symbols to generate a sorted set of symbols, based on descending order of frequency associated with each symbol from the set of symbols;
computing, by the processor, a compression gain associated with each predefined case type, from a set of predefined case types, wherein each predefined case type corresponds to a number of bits (C) used for representing the first (2^C-1) symbols from the sorted set of symbols;
selecting, by the processor, a target predefined case type, from the set of predefined case types, based on comparison of the compression gain associated with each predefined case type, wherein the target predefined case type corresponds to Ct bits;
assigning, by the processor, a Ct-bit compressed code to the first (2^Ct-1) symbols, from the sorted set of symbols, and an (m+Ct)-bit code to the remaining symbols from the sorted set of symbols; and
generating, by the processor, a compressed frame, wherein the compressed frame comprises a header and a sequence of compressed symbols, wherein the sequence of compressed symbols is generated based on the bit code assigned to each symbol, and wherein the header represents the target predefined case type, and the first (2^Ct-1) symbols.
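
The gain computation and case-type selection steps of the claim above can be sketched numerically. The function name, the default case-type range, and the "gain = uncompressed bits minus compressed bits" formulation are my assumptions for illustration:

```python
def choose_case_and_gain(frequencies, m, case_types=(1, 2, 3, 4)):
    """Pick the case type with the best compression gain (hypothetical sketch).

    frequencies: occurrence count of each distinct symbol in the frame.
    m: bit-length of each uncompressed symbol.
    For a case using C bits, the (2**C - 1) most frequent symbols get a
    C-bit code; every other symbol gets an (m + C)-bit code.
    Returns (Ct, gain) for the selected target case type, where gain is
    the number of bits saved relative to the uncompressed frame
    (header overhead is ignored in this sketch).
    """
    freqs = sorted(frequencies, reverse=True)   # descending frequency
    total_bits = sum(freqs) * m                 # uncompressed frame size
    best = None
    for C in case_types:
        top = freqs[:2**C - 1]                  # short-coded symbols
        rest = freqs[2**C - 1:]                 # escape-coded symbols
        compressed = sum(top) * C + sum(rest) * (m + C)
        gain = total_bits - compressed
        if best is None or gain > best[1]:
            best = (C, gain)
    return best
```

For example, with symbol frequencies [100, 10, 5, 1] and m = 8, the 2-bit case wins: the three frequent symbols take 2 bits each and only the rarest symbol pays the 10-bit escape cost.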

US Pat. No. 10,219,004

VIDEO ENCODING AND DECODING METHOD AND APPARATUS USING THE SAME

Electronics and Telecommu...

1. A method for video decoding that supports multi-layer videos, the method comprising:
analyzing a first layer dependency on a current layer based on a video parameter set (VPS) extension;
analyzing a second layer dependency on a current slice in the current layer based on information encoded in a slice unit, wherein the analyzing the second layer dependency on the current slice comprises determining, for the current slice, whether to use the first layer dependency of the VPS extension or the second layer dependency of the slice unit;
constructing a reference picture list for the current slice based on either one or both of the first layer dependency on the current layer and the second layer dependency on the current slice;
predicting a current block included in the current slice by using at least one reference picture included in the reference picture list to generate a prediction block;
generating a residual block of the current block; and
reconstructing the current block by using the prediction block and the residual block,
wherein the generating the residual block comprises entropy-decoding a bitstream to generate a quantized transformed coefficient,
wherein the reference picture list comprises a temporal reference picture belonging to a same layer as the current slice and an inter-layer reference picture belonging to a different layer from the current slice, and
wherein the inter-layer reference picture has a same picture order count (POC) value as the current slice.
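
The reference picture list described above mixes two kinds of references: temporal references from the current slice's own layer, and inter-layer references that share the current slice's POC but belong to a different layer. A minimal sketch, with hypothetical dict-based picture records (the field names are assumptions):

```python
def build_reference_list(current, pictures):
    """Collect temporal references (same layer, different POC) followed by
    inter-layer references (different layer, same POC) for a slice."""
    temporal = [p for p in pictures
                if p["layer"] == current["layer"] and p["poc"] != current["poc"]]
    inter_layer = [p for p in pictures
                   if p["layer"] != current["layer"] and p["poc"] == current["poc"]]
    return temporal + inter_layer
```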

US Pat. No. 10,219,003

INTRA-FRAME PREDICTIVE CODING AND DECODING METHODS BASED ON TEMPLATE MATCHING, ARRAY SCANNING METHOD AND APPARATUS, AND APPARATUS

Huawei Technologies Co., ...

1. An intra-frame predictive coding method based on template matching, comprising:determining N predicted pixel values of a to-be-predicted unit by using a template of an ith shape, wherein the to-be-predicted unit is adjacent to the template of the ith shape, an ith predicted pixel value is determined according to the template of the ith shape, wherein i=1, 2, . . . , N, and N is an integer greater than or equal to 2; and
selecting a predicted pixel value that is among the N predicted pixel values of the to-be-predicted unit and meets a preset condition as an optimal predicted pixel value of the to-be-predicted unit, wherein the optimal predicted pixel value of the to-be-predicted unit is used for coding;
wherein the determining N predicted pixel values of a to-be-predicted unit by using a template of an ith shape comprises:
determining a predicted pixel value of a subunit sj in the to-be-predicted unit by using a template ix of the ith shape, wherein the subunit sj is a region that is in the to-be-predicted unit, and the region of the subunit sj has a same shape as the template ix, and that is adjacent to the template ix; j=1, 2, . . . , M; x=1, 2, . . . , M; M is an integer greater than or equal to 2; s1 ∪ s2 ∪ . . . ∪ sM is equal to the to-be-predicted unit; the subunits s1, s2, . . . , sM are successively farther away from an adjacent reconstructed region; and the template ix has a different size; and
wherein all predicted pixel values of the to-be-predicted unit are determined by successive iterations from a peripheral region of the to-be-predicted unit that is closest to a reconstructed region.

US Pat. No. 10,219,002

DYNAMIC FIDELITY UPDATES FOR ENCODED DISPLAYS

Intel Corporation, Santa...

1. A source device comprising:an interface configured to be coupled to a data link; and
at least one processor coupled to the interface and configured to transmit, via the interface and the data link, a plurality of frames of image data, the plurality of frames including at least one base frame and at least one partial fidelity update frame distinct from and corresponding to the at least one base frame, the at least one partial fidelity update frame being applicable to a portion of the at least one base frame and storing at least one chroma value to replace one or more chroma values of the at least one base frame.

US Pat. No. 10,219,001

INTER-LAYER PREDICTION METHOD FOR MULTI-LAYER VIDEO AND DEVICE THEREFOR

Intellectual Discovery Co...

1. An inter-layer prediction apparatus for a multi-layer video, comprising:a frame buffer configured to store a reconstructed picture in an enhancement layer and a reconstructed picture in a reference layer;
a predictor configured to
determine whether the reconstructed picture in the reference layer is present at a time corresponding to a current picture in the enhancement layer,
determine an inter-layer reference picture for the current picture, in response to the determination that the reconstructed picture is present at the time corresponding to the current picture,
generate a reference picture list for the current picture including the inter-layer reference picture and the reconstructed picture in the enhancement layer, and
generate a predicted picture of the current picture by performing inter prediction on the current picture based on the reference picture list; and
an adder configured to generate a reconstructed picture of the current picture by adding the predicted picture of the current picture and a residual picture of the current picture.

US Pat. No. 10,219,000

TIME STAMP RECOVERY AND FRAME INTERPOLATION FOR FRAME RATE SOURCE

PIXELWORKS, INC., Portla...

1. A method of performing motion vector correction in a sequence of video frames, comprising:receiving, at a processor, a sequence of video frames at a received rate lower than an original frame rate, the sequence of video frames having fewer frames than an original sequence of video frames;
identifying motion vectors for frames in the sequence of video frames;
identifying a high-low pattern of motion vector magnitudes over a period of time;
determining a location of dropped frames from the original sequence of video frames based on the high-low pattern;
generating frame interpolation phases based on the high-low pattern;
adjusting magnitudes of the motion vectors based on the high-low pattern to determine motion vectors for each of the frame interpolation phases; and
interpolating a new frame of video data at each of the frame interpolation phases.
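
The cadence detection and phase adjustment claimed above can be illustrated abstractly: motion vector magnitudes alternate high/low when frames were dropped at a regular cadence, and each interpolation phase receives a proportionally scaled vector. Both functions below are hypothetical sketches; the median split and the linear scaling are assumptions, not the claimed method.

```python
def high_low_pattern(magnitudes):
    """Classify per-frame motion-vector magnitudes as high (1) or low (0)
    against their upper median; a stand-in for the claimed pattern detector."""
    med = sorted(magnitudes)[len(magnitudes) // 2]
    return [1 if m >= med else 0 for m in magnitudes]

def phase_vectors(mv, phases):
    """Scale one motion vector to each interpolation phase (each phase is
    a fraction of the original frame interval) to place interpolated frames."""
    return [(mv[0] * p, mv[1] * p) for p in phases]
```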

US Pat. No. 10,218,999

METHOD AND APPARATUS FOR IMAGE CODING/DECODING

Electronics and Telecommu...

1. An image decoding method comprising:configuring a motion vector candidate list;
modifying the motion vector candidate list based on a number of motion vector candidates in the motion vector candidate list; and
determining a prediction motion vector based on the modified motion vector candidate list,
wherein the modified motion vector candidate list comprises any one or any combination of any two or more of a spatial motion vector candidate, a temporal motion vector candidate, and a (0,0) motion vector,
wherein the configuring of the motion vector candidate list comprises
deriving the spatial motion vector candidate,
deriving the temporal motion vector candidate except when two derived spatial motion vector candidates are present and different from each other, and
adding either one or both of the derived spatial motion vector candidate and the derived temporal motion vector candidate to the motion vector candidate list, and
wherein in response to the number of motion vector candidates in the motion vector candidate list being smaller than a maximum number of motion vector candidates, the modifying of the motion vector candidate list comprises repeatedly adding a specific motion vector candidate to the motion vector candidate list until the motion vector candidate list reaches the maximum number of motion vector candidates, based on only the maximum number of motion vector candidates and the number of motion vector candidates in the motion vector candidate list,
wherein the adding either one or both of the derived spatial motion vector candidate and the derived temporal motion vector candidate to the motion vector candidate list carries out an operation of checking the same motion vector candidate only on the spatial motion vector candidates for removing the same motion vector candidate.
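
The list construction in this claim has two distinctive details: duplicate checking runs only among the spatial candidates, and a (0,0) vector is added repeatedly until the list reaches the maximum number of candidates. A hypothetical Python sketch (the default max_num=2 is an assumption):

```python
def build_mv_candidate_list(spatial, temporal, max_num=2):
    """Assemble a motion vector candidate list: dedupe spatial candidates
    only, append the temporal candidate unless two distinct spatial
    candidates exist, then pad with (0, 0) up to max_num."""
    cands = []
    for mv in spatial:
        if mv not in cands:  # duplicate check among spatial candidates only
            cands.append(mv)
    if len(cands) < 2 and temporal is not None:
        cands.append(temporal)
    while len(cands) < max_num:
        cands.append((0, 0))  # pad with the zero vector
    return cands[:max_num]
```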

US Pat. No. 10,218,998

METHOD AND APPARATUS FOR ENCODING/DECODING IMAGES USING A MOTION VECTOR OF A PREVIOUS BLOCK AS A MOTION VECTOR FOR THE CURRENT BLOCK

SAMSUNG ELECTRONICS CO., ...

1. An image decoding method comprising:hierarchically splitting a maximum coding unit into at least one coding unit based on split information obtained from a bitstream;
determining a current block in a coding unit among the at least one coding unit;
obtaining information regarding a prediction direction to be used to decode the current block, the information indicating one of an L0 direction, an L1 direction, and a bi-direction;
determining motion vector candidates of the current block based on a motion vector of at least one block decoded before decoding of the current block; and
determining at least one motion vector of the current block based on at least one of a motion vector candidate in the L0 direction and a motion vector candidate in the L1 direction, from among the determined motion vector candidates, according to the information regarding a prediction direction,
wherein the determining motion vector candidates of the current block comprises obtaining the motion vector candidates of the current block using a block co-located together with the current block in a temporal reference picture in the L0 direction or the L1 direction,
the image is split into a plurality of maximum coding units including the maximum coding unit,
the maximum coding unit is hierarchically split into the at least one coding unit of depths,
a coding unit of a current depth is one of square data units split from a coding unit of an upper depth,
when the split information indicates a split for the current depth, the coding unit of the current depth is split into four coding units of a lower depth, independently from neighboring coding units, and
when the split information indicates a non-split for the current depth, a coding unit of the current depth is split into one or more prediction units, and the current block is a prediction unit.

US Pat. No. 10,218,997

MOTION VECTOR CALCULATION METHOD, PICTURE CODING METHOD, PICTURE DECODING METHOD, MOTION VECTOR CALCULATION APPARATUS, AND PICTURE CODING AND DECODING APPARATUS

Velos Media, LLC, Plano,...

1. A decoding method of decoding a current block included in a current picture, the current picture being included in a coded video stream, the decoding method comprising:determining a reference picture in the coded video stream, the reference picture being included in one of (i) a first reference picture group of the current block and (ii) a second reference picture group of the current block;
selecting a reference motion vector from among one or more reference motion vectors of a reference block in the reference picture such that in situation (A) when the reference block has a first reference motion vector and a second reference motion vector that respectively correspond to the first reference picture group and the second reference picture group, (i) the first reference motion vector is selected when the reference picture is included in the second reference picture group and (ii) the second reference motion vector is selected when the reference picture is included in the first reference picture group, in situation (B) when the reference block has only one reference motion vector, the only reference motion vector is selected, and in situation (C) when the reference block has no reference motion vector, a zero reference motion vector is selected;
deriving the motion vector of the current block using the selected one reference motion vector; and
decoding the current block using the derived motion vector.
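
Situations (A) through (C) in this claim reduce to a small decision table: with two reference motion vectors, pick the one belonging to the other reference picture group; with exactly one, take it; with none, use a zero vector. A hedged sketch (the argument encoding is an assumption):

```python
def select_reference_mv(ref_in_group, mv_first, mv_second):
    """ref_in_group is "first" or "second" (the group containing the
    reference picture); mv_first/mv_second are the reference block's
    motion vectors for each group, or None when absent."""
    if mv_first is not None and mv_second is not None:  # situation (A)
        return mv_first if ref_in_group == "second" else mv_second
    if mv_first is not None:                            # situation (B)
        return mv_first
    if mv_second is not None:                           # situation (B)
        return mv_second
    return (0, 0)                                       # situation (C)
```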

US Pat. No. 10,218,996

MOTION VECTOR DETECTION APPARATUS AND METHOD OF CONTROLLING MOTION VECTOR DETECTION APPARATUS

CANON KABUSHIKI KAISHA, ...

1. A motion vector detection apparatus, comprising:a processor that executes a program stored in a memory and functions as:
a detecting unit adapted to detect, for each of a plurality of areas of a base image, a motion vector relative to a reference image;
a motion vector determining unit adapted to determine, among motion vectors, motion vectors related to a moving object;
a candidate vector determining unit adapted to determine, based on a point of interest that is a position within an image and a movement direction of the moving object, one or more of the motion vectors related to the moving object as candidate vector(s); and
a calculating unit adapted to calculate a representative vector of the moving object based on the candidate vector(s),
wherein the candidate vector determining unit determines, from among the motion vectors related to the moving object, one or more motion vectors each being detected in, among the plurality of areas, an area that exists on an axis of interest that extends in a different direction from the movement directions of the moving object and passes through the point of interest, as the candidate vector(s).

US Pat. No. 10,218,995

MOVING PICTURE ENCODING SYSTEM, MOVING PICTURE ENCODING METHOD, MOVING PICTURE ENCODING PROGRAM, MOVING PICTURE DECODING SYSTEM, MOVING PICTURE DECODING METHOD, MOVING PICTURE DECODING PROGRAM, MOVING PICTURE REENCODING SYSTEM, MOVING PICTURE REENCODING M

JVC KENWOOD CORPORATION, ...

1. A moving picture encoding system comprising:a first encoder configured to work on a subsequence of a sequence of moving pictures with a standard resolution to implement a first combination of processes for an encoding and a decoding to create a first sequence of encoded bits and a set of decoded pictures with the standard resolution;
a first super-resolution enlarger configured to work on the subsequence of the sequence of moving pictures with the standard resolution to implement an interpolation of pixels with a first enlargement to create a set of super-resolution enlarged pictures with a first resolution higher than the standard resolution;
a first resolution converter configured to work on the set of super-resolution enlarged pictures to implement a process for a first resolution conversion to create a set of super-resolution enlarged and converted pictures with a standard resolution;
a second super-resolution enlarger configured to acquire the set of decoded pictures with the standard resolution from the first encoder to work on the sequence of decoded pictures to implement an interpolation of pixels with a second enlargement to create a set of super-resolution enlarged decoded pictures with a second resolution higher than the standard resolution;
a second resolution converter configured to work on the set of super-resolution enlarged decoded pictures to implement a process for a second resolution conversion to create a set of super-resolution enlarged and converted decoded pictures with a standard resolution; and
a second encoder configured to:
have the set of super-resolution enlarged and converted pictures from the first resolution converter as a set of encoding target pictures, the set of decoded pictures from the first encoder as a set of first reference pictures, and the set of super-resolution enlarged and converted decoded pictures from the second resolution converter as a set of second reference pictures,
select one of the set of first reference pictures and the set of second reference pictures to create reference picture selection information to identify the set of selected reference pictures to implement a second process for encoding to create a second sequence of encoded bits based on the set of encoding target pictures and the set of selected reference pictures, and
implement a third process for encoding for the reference picture selection information to create a sequence of encoded bits of the reference picture selection information,
wherein the set of encoding target pictures, the set of first reference pictures, and the set of second reference pictures have the same value in spatial resolution.

US Pat. No. 10,218,994

WATERMARK RECOVERY USING AUDIO AND VIDEO WATERMARKING

Verance Corporation, San...

1. A method for enabling acquisition of metadata associated with a multimedia content based on detection of a video watermark from the multimedia content, the method comprising:obtaining, at a watermark extractor that is implemented at least partially in hardware, one or more blocks of sample values representing image pixels in a video frame of the multimedia content, each block including one or more rows of pixel values and one or more columns of pixel values; and
using the watermark extractor to extract one or more video watermarks from the one or more blocks, including:
for each block:
(a) determining a weighted sum of the pixel values in the block produced by multiplying each pixel value with a particular weight coefficient and summing the result together, wherein the particular weight coefficients for each block are selected to at least partially compensate for degradation of video watermark or watermarks in each block due to impairments caused by transmission or processing of the multimedia content;
(b) comparing the weighted sum of the pixel values to one or more predetermined threshold values;
(c) upon a determination that the weighted sum falls within a first range of the one or more predetermined threshold values, identifying a detected watermark symbol having a first value; and
(d) upon a determination that the weighted sum falls within a second range of the one or more predetermined threshold values, identifying a detected watermark symbol having a second value;
repeating operations (a) through (d) for a plurality of the one or more blocks to obtain a plurality of the detected watermark symbol values;
determining whether or not the plurality of the detected watermark symbol values form a valid watermark payload; and
upon a determination that a valid watermark payload has been detected, acquiring the metadata associated with the multimedia content based on the valid watermark payload.
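
Steps (a) through (d) amount to a weighted inner product followed by range tests. The sketch below is illustrative only: the weights and the threshold ranges are assumptions, and a sum falling outside both ranges yields no detected symbol.

```python
def detect_symbol(block, weights, lo_range, hi_range):
    """Weighted sum of a block's pixel values, thresholded into one of
    two watermark symbol values (0 or 1), or None if neither range hits."""
    s = sum(p * w for p, w in zip(block, weights))
    if lo_range[0] <= s < lo_range[1]:
        return 0
    if hi_range[0] <= s < hi_range[1]:
        return 1
    return None  # no watermark symbol detected in this block
```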

US Pat. No. 10,218,993

VIDEO ENCODING METHOD, VIDEO DECODING METHOD AND APPARATUS USING SAME

LG ELECTRONICS INC., Seo...

1. A video decoding apparatus, comprising:a decoder configured to receive a bitstream including information on a slice header and information on substreams for a current slice segment, to obtain entry point information for the substreams from the slice header, and to decode the substreams based on the entry point information to reconstruct a picture;
a memory configured to store the reconstructed picture,
wherein the decoder comprises:
an entropy decoding module configured to derive prediction information and residual information on a block of a current substream;
a prediction module configured to derive prediction samples on the block based on the prediction information;
an inverse transform module configured to derive residual samples on the block, wherein the residual samples are derived based on the residual information;
a reconstructed block generating unit configured to generate reconstructed samples to generate the reconstructed picture based on the prediction samples and the residual samples,
wherein the picture includes multiple largest coding units (LCUs),
wherein a number of the substreams is equal to a number of LCU rows in the current slice segment in the picture,
wherein the entry point information includes number information indicating a number of entry point offsets, and
wherein the number of the substreams is derived based on the number information in the slice header.

US Pat. No. 10,218,992

ENCODING, TRANSMISSION AND DECODING OF COMBINED HIGH MOTION AND HIGH FIDELITY CONTENT

Cisco Technology, Inc., ...

1. A device comprising:at least one processor; and
at least one memory having computer-readable instructions, which when executed by the at least one processor, cause the at least one processor to:
receive an encoded frame;
determine whether the encoded frame includes at least one region having high fidelity content; and
upon determining that the encoded frame includes at least one region having high fidelity content,
perform a first decoding process,
perform a second decoding process for decoding the at least one region having high fidelity content,
display a previous version of the high fidelity content on a display based on the first decoding process and while the second decoding process is being performed, and
display a decoded version of the at least one region having the high fidelity content on the display when performing the second decoding process is complete.

US Pat. No. 10,218,991

IMAGE ENCODING APPARATUS, METHOD OF IMAGE ENCODING, AND RECORDING MEDIUM, IMAGE DECODING APPARATUS, METHOD OF IMAGE DECODING, AND RECORDING MEDIUM

Canon Kabushiki Kaisha, ...

1. An image decoding apparatus capable of decoding a bit stream including data obtained by encoding an image including a tile, the tile including a plurality of block rows, the image decoding apparatus comprising:a number-of-blocks acquiring unit configured to acquire, from the bit stream, information indicating a number of blocks in a height direction in the tile;
an entry point offset acquiring unit configured to acquire, from the bit stream, an entry point offset indicating a size of data corresponding to a block row included in the tile;
a flag acquiring unit configured to acquire, from the bit stream, a flag indicating whether specific decoding processing is performed; and
a decoding unit configured to decode the image including the tile based on the information acquired by the number-of-blocks acquiring unit and the entry point offset acquired by the entry point offset acquiring unit, in a case where the image includes a plurality of tiles and the flag indicates the specific decoding processing is performed,
wherein the specific decoding processing includes referring to information updated in decoding of a predetermined-numbered block in a first block row, in decoding of a first block in a second block row subsequent to the first block row.

US Pat. No. 10,218,990

VIDEO ENCODING FOR SOCIAL MEDIA

Avago Technologies Intern...

1. A device for encoding and sharing media for social networks, comprising:a sharing engine comprising a buffer, and a network interface installed within a housing of the device;
wherein the sharing engine is configured to:
receive a first portion of a media stream, and
write a subset of the received first portion of the media stream to the buffer; and
wherein the network interface is configured to, responsive to receipt of a capture command:
retrieve a second portion of the media stream from the buffer of the sharing engine,
trim the beginning and end of the retrieved second portion of the media stream to independently decodable frames, and
transmit the retrieved second portion of the media stream via a network to a second device.

US Pat. No. 10,218,989

IMPLICIT SIGNALING OF SCALABILITY DIMENSION IDENTIFIER INFORMATION IN A PARAMETER SET

Dolby International AB, ...

1. An electronic device comprising:a decoder for decoding a coded video sequence, the decoder comprising one or more processing devices configured to:
receive a video syntax set that includes information applicable to the coded video sequence,
determine, based on a flag included in the video syntax, that a scalability dimension identifier for the coded video sequence is implicitly signaled, wherein the flag is indicative of either implicit or explicit signaling of the scalability dimension identifier,
wherein the scalability dimension identifier specifies a scalability dimension of a particular layer of the coded video sequence, the scalability dimension being one of multiple types, including: a spatial type and a quality type,
derive the scalability dimension identifier from a network abstraction layer (NAL) unit header in response to determining that the scalability dimension identifier is implicitly signaled,
decode an enhancement layer based on the scalability dimension, and
generate a decoded video sequence based on, in part, the enhancement layer.

US Pat. No. 10,218,988

METHOD AND SYSTEM FOR INTERPOLATING BASE AND DELTA VALUES OF ASSOCIATED TILES IN AN IMAGE

Nvidia Corporation, Sant...

1. A non-transitory tangible-computer-readable medium having computer-executable instructions for performing a method of image decompression, said method comprising:accessing compressed image data representing an image, wherein said image comprises a plurality of tiles comprising a plurality of pixels, and wherein further said compressed image data comprises a base value, a delta value and a plurality of indices for each tile of said plurality of tiles;
decompressing said compressed image data by performing:
identifying a pixel in an image;
identifying one or more tiles associated with said pixel;
determining an interpolated base for said pixel by interpolating base values of said one or more tiles;
determining an interpolated delta for said pixel by interpolating delta values of said one or more tiles;
determining an index for said pixel based on said plurality of indices; and
determining a color value for said pixel based on said interpolated base, said interpolated delta, and said index.
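
The decompression path in this claim reconstructs a pixel as an interpolated base plus an index-scaled interpolated delta. A minimal sketch, assuming scalar per-tile base/delta values and precomputed interpolation weights for the tiles touching the pixel:

```python
def pixel_color(base_vals, delta_vals, weights, index):
    """Interpolate the base and delta values of the tiles associated with
    a pixel, then reconstruct color = interpolated base + index * delta."""
    base = sum(b * w for b, w in zip(base_vals, weights))
    delta = sum(d * w for d, w in zip(delta_vals, weights))
    return base + index * delta
```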

US Pat. No. 10,218,987

METHODS, SYSTEMS, AND COMPUTER READABLE MEDIA FOR PERFORMING IMAGE COMPRESSION

THE UNIVERSITY OF NORTH C...

1. A method for performing image compression, the method comprising:identifying a canonical image set from a plurality of images uploaded to or existing on a cloud computing environment and/or a storage environment;
computing an image representation for each image in the canonical image set;
receiving a first image;
identifying, using the image representations for the canonical image set, one or more reference images that are visually similar to the first image, wherein identifying the one or more reference images includes: computing a first image representation for the first image, compressing the first image representation using a binarizing process, and performing, using the first image representation, a k-nearest neighbor(s) (KNN) search over the image representations for the canonical image set, wherein each of the image representations includes a GIST descriptor represented as a binarized string; and
compressing the first image using the one or more reference images.
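
The KNN search over binarized GIST descriptors reduces to nearest neighbors under Hamming distance, which for bitstrings packed into integers is a population count of the XOR. A hypothetical sketch (the integer packing and the record layout are assumptions):

```python
def knn_hamming(query_bits, canonical, k=2):
    """Return the k canonical images whose binarized descriptors are
    closest to query_bits under Hamming distance."""
    ranked = sorted(canonical,
                    key=lambda c: bin(query_bits ^ c["bits"]).count("1"))
    return ranked[:k]
```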

US Pat. No. 10,218,986

FRAME ACCURATE SPLICING

Google LLC, Mountain Vie...

1. A computer implemented method comprising:receiving, by a computing system, first compressed video content;
receiving, by the computing system, second compressed video content;
identifying, by the computing system, a splice point for the first compressed video content;
identifying a particular frame in the first compressed video content that precedes the splice point;
determining that the particular frame depends on information included in a subsequent frame of the first compressed video content that is after the splice point;
altering, by the computing system and in response to determining that the particular frame depends on information included in the subsequent frame, time stamp information of the subsequent frame, wherein altering the time stamp information of the subsequent frame comprises:
reading a presentation time stamp value associated with the subsequent frame;
subtracting a particular value from the presentation time stamp value; and
storing the resulting value of subtracting the particular value from the presentation time stamp value as a new presentation time stamp for the subsequent frame; and
transmitting, by the computing system and to a video presentation system, the particular frame, the subsequent frame along with the altered time stamp information, and at least a portion of the second compressed video content;
wherein the particular value is between 5 ms and 150 ms.
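
The time-stamp alteration above is plain arithmetic: read the frame's presentation time stamp, subtract a particular value that the claim bounds between 5 ms and 150 ms, and store the result as the new PTS. A trivial sketch:

```python
def retime_pts(pts_ms, shift_ms):
    """Move a frame's presentation time stamp earlier by shift_ms,
    enforcing the claimed 5-150 ms bound on the subtracted value."""
    assert 5 <= shift_ms <= 150, "particular value must be between 5 ms and 150 ms"
    return pts_ms - shift_ms
```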

US Pat. No. 10,218,985

INTRA-FRAME DEPTH MAP BLOCK ENCODING AND DECODING METHODS, AND APPARATUS

Huawei Technologies Co., ...

1. An intra-frame depth map block encoding method, comprising:acquiring a depth map block to be encoded;
when a depth modeling mode (DMM) is applied to a recursive quadtree (RQT) or simplified depth coding (SDC) to encode the depth map block, separately detecting the depth map block by using a DMM1 mode and a DMM4 mode in the DMM, to obtain a rate-distortion result of the depth map block in the DMM1 mode and a rate-distortion result of the depth map block in the DMM4 mode; and
determining that a DMM with a smallest rate-distortion result in the DMM1 and the DMM4 is a DMM used during encoding, applying the used mode to the RQT or the SDC to encode the depth map block, and writing the used DMM to a bitstream.

US Pat. No. 10,218,984

IMAGE CODING METHOD AND DEVICE FOR BUFFER MANAGEMENT OF DECODER, AND IMAGE DECODING METHOD AND DEVICE

SAMSUNG ELECTRONICS CO., ...

1. An apparatus for encoding an image, the apparatus comprising:an encoder configured to encode an image frame by performing motion prediction using a reference frame,
to output a first syntax indicating a maximum size of a buffer required to decode the image frame by a decoder, a second syntax indicating the number of image frames required to be reordered, and a third syntax indicating latency information, and to generate a bitstream by adding the first syntax, the second syntax and the third syntax to a mandatory sequence parameter set,
wherein the number of frames required to be reordered is determined based on an encoding order of the image frame, an encoding order of the reference frame referred to by the image frame, a display order of the image frame, and a display order of the reference frame,
wherein the latency information indicates a largest difference between the encoding order and the display order,
wherein the maximum size of the buffer storing decoded picture is determined based on the first syntax,
wherein, whether to output the decoded picture stored in the buffer is determined based on the second syntax and the third syntax by increasing a latency parameter count of the decoded picture stored in the buffer by one whenever a picture included in an image sequence is decoded,
the decoded picture is outputted from the buffer when the latency parameter count of the decoded picture is equal to the latency information.

US Pat. No. 10,218,983

ADAPTING MODE DECISIONS IN VIDEO ENCODER

Apple Inc., Cupertino, C...

1. An encoding pipeline configured to encode image data, comprising:mode decision circuitry configured to determine a frame prediction mode, the mode decision circuitry comprising:
distortion measurement circuitry configured to select a distortion measurement calculation based at least in part on operational parameters of a display device and the image data, wherein the distortion measurement calculation comprises a higher-cost calculation when the encoding pipeline is capable of encoding the image data at or near real time, or comprises a low-cost calculation when the encoding pipeline is not capable of encoding the image data at or near real time and wherein whether the encoding pipeline is capable of encoding the image data at or near real time depends at least in part on the operational parameters; and
mode selection circuitry configured to:
determine rate distortion cost metrics associated with an inter-frame prediction mode and an intra-frame prediction mode using the distortion measurement calculation; and
select between the inter-frame prediction mode and the intra-frame prediction mode based at least in part on the rate distortion cost metrics.

US Pat. No. 10,218,982

VIDEO CODING AND DECODING METHODS AND VIDEO CODING AND DECODING DEVICES USING ADAPTIVE LOOP FILTERING

SAMSUNG ELECTRONICS CO., ...

1. A method for decoding video implemented by a processor, the method comprising:obtaining, by the processor, a size information of a maximum coding unit and information whether loop filtering for compensating pixel value is to be performed, from a bitstream;
determining, by the processor, the maximum coding unit by splitting a picture, based on the size information of the maximum coding unit;
reconstructing, by the processor, encoded image data of the maximum coding unit;
determining, by the processor, a direction of an edge of the reconstructed image data of the maximum coding unit; and
performing, by the processor, the loop filtering on deblocking filtered data of the reconstructed image data of the maximum coding unit, based on the information of whether the loop filtering is to be performed,
wherein the loop filtering is performed according to the direction of the edge,
wherein a coding unit among at least one coding unit in the maximum coding unit includes at least one prediction unit to perform prediction on the coding unit, and
wherein the coding unit is split into at least one transformation unit independently from the at least one prediction unit.

US Pat. No. 10,218,981

CLIP GENERATION BASED ON MULTIPLE ENCODINGS OF A MEDIA STREAM

WOWZA MEDIA SYSTEMS, LLC,...

1. A method comprising:receiving a media stream;
generating a first encoded version of the media stream and a second encoded version of the media stream, wherein the first encoded version is associated with a first key frame interval and the second encoded version is associated with a second key frame interval that is greater than the first key frame interval, wherein the first key frame interval and the second key frame interval correspond to a spacing between individual key frames of the first encoded version and the second encoded version, respectively, the individual key frames comprising consecutive intracoded frames that are each decodable without referencing other frames;
receiving, from a destination device at a first time, a request to generate a media clip, wherein the request identifies a start point of the media clip, the start point corresponding to a second time in the media stream that precedes the first time;
after receiving the request and in response to a determination that the start point indicated by the request identifies a particular frame of the second encoded version that is decodable using one or more other frames of the second encoded version, generating the media clip based on a first sequence of frames of the first encoded version and a second sequence of frames of the second encoded version, the media clip having a shorter duration than the media stream, wherein the first sequence begins at a first frame corresponding to the start point and ends at a second frame corresponding to a transition point, and wherein the second sequence begins at a third frame following the transition point and ends at a fourth frame corresponding to a stop point of the media clip; and
sending at least one of the media clip or a link to the media clip to the destination device.
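The hybrid clip assembly in this claim can be sketched as: serve frames from the short-key-frame-interval encoding from the start point up to the next key frame of the long-interval encoding (the transition point), then switch to the long-interval encoding. Frame numbering and the key-frame test are simplifying assumptions.

```python
# Hypothetical sketch of building a clip from two encodings of one stream.

def build_clip(start, stop, long_kfi):
    """Return (source, frame) pairs for the clip; key frames of the
    long-interval encoding are assumed to fall at multiples of long_kfi."""
    if start % long_kfi == 0:
        # start point is already independently decodable in the long-interval encoding
        return [("v2", f) for f in range(start, stop + 1)]
    transition = ((start // long_kfi) + 1) * long_kfi   # next long-interval key frame
    first = [("v1", f) for f in range(start, transition)]    # fine-grained encoding
    second = [("v2", f) for f in range(transition, stop + 1)]
    return first + second

clip = build_clip(start=5, stop=10, long_kfi=8)
assert clip == [("v1", 5), ("v1", 6), ("v1", 7),
                ("v2", 8), ("v2", 9), ("v2", 10)]
```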

US Pat. No. 10,218,979

ENTROPY CODING STATE SEGMENTATION AND RETENTION

Cisco Technology, Inc., ...

1. A method comprising:storing, in an entropy coding state library, entropy coding states for regions of video frames of a sequence of video frames, on completion of coding of those regions;
selecting, from the entropy coding state library, the stored entropy coding states for regions of a prior video frame in the sequence of video frames based on a similarity of one or more properties of the prior video frame to properties of a current video frame; and
based on the stored entropy coding states for the regions of the prior video frame, deriving entropy coding initialization states for corresponding regions of the current video frame.
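The store-and-select scheme above can be sketched as a small library keyed by frame properties; the similarity measure (count of matching property values) and all names are assumptions for the sketch.

```python
# Minimal sketch of an entropy-coding state library selected by frame similarity.

class EntropyStateLibrary:
    def __init__(self):
        self.entries = []   # list of (frame_properties, {region: state})

    def store(self, frame_properties, region_states):
        """Record the entropy coding states of a completed frame's regions."""
        self.entries.append((dict(frame_properties), dict(region_states)))

    def init_states(self, current_properties):
        """Derive initialization states for the current frame from the stored
        frame whose properties differ from it in the fewest places."""
        if not self.entries:
            return {}
        def similarity(entry):
            props, _ = entry
            return -sum(props.get(k) != v for k, v in current_properties.items())
        _, best_states = max(self.entries, key=similarity)
        return dict(best_states)

lib = EntropyStateLibrary()
lib.store({"type": "P", "qp": 30}, {0: "stateA", 1: "stateB"})
lib.store({"type": "I", "qp": 24}, {0: "stateC", 1: "stateD"})
assert lib.init_states({"type": "P", "qp": 30}) == {0: "stateA", 1: "stateB"}
```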

US Pat. No. 10,218,978

DATA PROCESSING SYSTEMS

Arm Limited, Cambridge (...

1. A method of encoding frames of video data, the method comprising:when a new frame is to be encoded:
for a sub-region of a set of plural sub-regions that the new frame is divided into, selecting one of: (i) performing an encoding operation for the sub-region, wherein a motion estimation operation and a frequency transform operation are performed for the sub-region; (ii) performing only part of the encoding operation for the sub-region, wherein the motion estimation operation is omitted and the frequency transform operation is performed for the sub-region; and (iii) omitting the motion estimation operation and the frequency transform operation for the sub-region, by:
determining whether the sub-region has changed from a previous frame; and
controlling at least a part of the encoding operation for the new frame on the basis of the determination by:
performing the motion estimation operation and the frequency transform operation for the sub-region when the sub-region is determined to have changed from the previous frame; and
omitting the motion estimation operation for the sub-region when the sub-region is determined to be unchanged from the previous frame;
wherein the method further comprises when the sub-region is determined to be unchanged from the previous frame, determining whether the frequency transform operation should be performed or can be omitted for the sub-region by determining whether it can be inferred that frequency coefficients output by the frequency transform operation for encoding the sub-region would be zero; and
when the sub-region is determined to be unchanged from the previous frame and when it can be inferred that frequency coefficients output by the frequency transform operation for encoding the sub-region would be non-zero, performing the frequency transform operation and omitting the motion estimation operation for the sub-region; and
when the sub-region is determined to be unchanged from the previous frame and when it can be inferred that the frequency coefficients output by the frequency transform operation for encoding the sub-region would be zero, omitting the frequency transform operation and omitting the motion estimation operation for the sub-region.
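The three-way decision above reduces to a small table: changed sub-regions get both operations; unchanged sub-regions skip motion estimation, and additionally skip the transform when its coefficients can be inferred to be zero. A sketch, with the change and zero-coefficient tests left as boolean inputs:

```python
# Sketch of the per-sub-region encode decision described in the claim.

def encode_decision(changed, coefficients_would_be_zero):
    """Return which operations to run for one sub-region of the new frame."""
    if changed:
        # full encoding operation
        return {"motion_estimation": True, "frequency_transform": True}
    if coefficients_would_be_zero:
        # unchanged and provably all-zero coefficients: skip both
        return {"motion_estimation": False, "frequency_transform": False}
    # unchanged but coefficients may be non-zero: transform only
    return {"motion_estimation": False, "frequency_transform": True}

assert encode_decision(True, False) == {"motion_estimation": True, "frequency_transform": True}
assert encode_decision(False, True) == {"motion_estimation": False, "frequency_transform": False}
assert encode_decision(False, False) == {"motion_estimation": False, "frequency_transform": True}
```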

US Pat. No. 10,218,977

METHOD AND SYSTEM OF TRANSFORM BLOCK PROCESSING ACCORDING TO QUANTIZATION MATRIX IN VIDEO CODING

HFI INNOVATION INC., Zhu...

1. A method for processing transform blocks according to quantization matrices in a video coding system, the method comprising:obtaining an initial quantization matrix having a first width and a first height;
obtaining a derived quantization matrix having a second width and a second height, wherein the second width is different from the second height, and the derived quantization matrix is derived from the initial quantization matrix;
receiving a transform block having a block size, where the transform block is associated with a picture; and
selecting the initial quantization matrix or the derived quantization matrix for processing transform coefficients of the transform block according to the block size.
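A possible reading of this claim, sketched below: derive a rectangular quantization matrix from the square initial matrix by nearest-neighbour resampling, then pick the initial or derived matrix according to the transform block's dimensions. The resampling rule and the square-vs-rectangular selection test are assumptions for illustration.

```python
# Hypothetical derivation and selection of quantization matrices by block size.

def derive_matrix(initial, width, height):
    """Resample a quantization matrix to width x height (nearest neighbour)."""
    h0, w0 = len(initial), len(initial[0])
    return [[initial[r * h0 // height][c * w0 // width] for c in range(width)]
            for r in range(height)]

initial = [[16, 18], [20, 24]]                       # 2x2 initial matrix
derived = derive_matrix(initial, width=4, height=2)  # 4x2 derived matrix
assert derived == [[16, 16, 18, 18], [20, 20, 24, 24]]

def select_matrix(block_w, block_h):
    """Square blocks use the initial matrix, rectangular ones the derived."""
    return initial if block_w == block_h else derived

assert select_matrix(2, 2) is initial
assert select_matrix(4, 2) is derived
```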

US Pat. No. 10,218,976

QUANTIZATION MATRICES FOR COMPRESSION OF VIDEO

MatrixView, Inc., Sunnyv...

1. A tangible, non-transitory, machine-readable medium storing instructions that when executed by one or more processors effectuate operations comprising:obtaining, with one or more processors, video data comprising a sequence of frames each comprising a plurality of blocks of pixel values of respective regions of pixels in respective frames;

for a first frame in the sequence of frames, for a first block in the first frame, forming, with one or more processors, a first transform matrix by determining a first matrix of frequency-domain transform coefficients based on the pixel values in the first block, the frequency-domain transform coefficients being coefficients of a discrete cosine transform or an asymmetric discrete sine transform of the pixel values of the first block;
for a second block in the first frame, forming, with one or more processors, a second transform matrix;
selecting, with one or more processors, a first predetermined quantization matrix from among a discrete predetermined set of predetermined quantization matrices;
obtaining, with one or more processors, a first modified quantization matrix that comprises a first set of values based on corresponding values in the first predetermined quantization matrix and a second set of values that are different from corresponding values in the first predetermined quantization matrix;
quantizing, with one or more processors, the first transform matrix with the modified quantization matrix to form a first quantized transform matrix;
quantizing, with one or more processors, the second transform matrix;
serializing, with one or more processors, the first quantized transform matrix to form a first sequence of values;
serializing, with one or more processors, the second transform matrix to form a second sequence of values;
compressing, with one or more processors, the first and second sequences of values with entropy coding to produce compressed video data;
forming, with one or more processors, a video bitstream that includes the compressed video data and a header associated with the first frame or segment of the first frame, the header containing a parameter that instructs a decoder to decode the first block with the predetermined quantization matrix and not with the modified quantization matrix; and
storing, with one or more processors, the compressed video bitstream in memory or transmitting the compressed video bitstream on a network,
wherein the compressed video bitstream is compressed by a greater amount than is provided for by one or more quantization parameters in one or more headers of the compressed video bitstream and is decodable by a decoder with access to the first predetermined quantization matrix but not the modified quantization matrix.
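The quantization step used throughout this claim is element-wise division of the transform matrix by the quantization matrix. A bare sketch (plain rounding; production codecs add rounding offsets and fixed-point scaling):

```python
# Sketch of quantizing a transform-coefficient matrix with a quantization matrix.

def quantize(transform, qmatrix):
    """Divide each transform coefficient by the co-located matrix entry."""
    return [[round(t / q) for t, q in zip(trow, qrow)]
            for trow, qrow in zip(transform, qmatrix)]

transform = [[80, 33], [18, 7]]
qmatrix   = [[16, 16], [20, 24]]
assert quantize(transform, qmatrix) == [[5, 2], [1, 0]]
```

Larger quantization-matrix entries discard more of the corresponding frequency band, which is how the modified matrix in the claim achieves extra compression beyond what the signaled parameters imply.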

US Pat. No. 10,218,975

TRANSFORM PRECISION MANIPULATION IN VIDEO CODING

Qualcomm Incorporated, S...

1. A method of decoding encoded video data, the method comprising:determining that a rectangular transform unit (TU) comprises a number of pixel rows denoted by a first integer value ‘K’ and a number of pixel columns denoted by a second integer value ‘L,’ wherein K, when converted to a binary format, represents a binary value equal to a binary equivalent of an integer value ‘m’ left shifted by one (1), and wherein L, when converted to the binary format, represents a binary value equal to a binary equivalent of an integer value ‘n’ left shifted by one (1);
determining that a sum of n and m is an odd number;
based on the sum of n and m being the odd number, adding a delta quantization parameter (delta QP) value to a quantization parameter (QP) value for the rectangular TU to obtain a modified QP value for the rectangular TU; and
dequantizing the rectangular TU using the modified QP value.
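Assuming the usual reading that the TU dimensions are powers of two (K = 2**m, L = 2**n), the claim's adjustment can be sketched as below. The delta QP of 3 (roughly a factor of sqrt(2) in quantization step size, which is what an odd m + n leaves unnormalized) is an illustrative value, not taken from the patent.

```python
# Sketch of the rectangular-TU QP adjustment for odd m + n.

def modified_qp(k_rows, l_cols, qp, delta_qp=3):
    """Adjust QP when log2(rows) + log2(cols) is odd (power-of-two sizes assumed)."""
    m = k_rows.bit_length() - 1   # k_rows == 1 << m
    n = l_cols.bit_length() - 1   # l_cols == 1 << n
    if (m + n) % 2 == 1:
        return qp + delta_qp
    return qp

assert modified_qp(8, 4, qp=30) == 33    # 2**3 x 2**2 -> m + n = 5, odd
assert modified_qp(8, 8, qp=30) == 30    # m + n = 6, even: QP unchanged
```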

US Pat. No. 10,218,974

RESIDUAL TRANSFORMATION AND INVERSE TRANSFORMATION IN VIDEO CODING SYSTEMS AND METHODS

RealNetworks, Inc., Seat...

1. A video-encoder-device-implemented method of encoding an unencoded video frame to generate an encoded bit-stream representative of the unencoded video frame, the encoded bit-stream including at least a frame header and a video data payload, the video-encoder-device-implemented method comprising:determining a maximum coding-block size for the unencoded video frame, said maximum coding-block size being defined by a maximum horizontal coding-block-dimension and a maximum vertical coding-block-dimension;
determining a maximum-transform-block-size for the unencoded video frame, said maximum-transform-block-size being defined by a maximum horizontal prediction-block-dimension and a maximum vertical prediction-block-dimension;
encoding the unencoded video frame, thereby generating the video data payload of the encoded bit-stream;
generating the frame header of the encoded bit-stream, the frame header including a maximum coding-block size flag and a maximum-transform-block-size flag; and
wherein, said maximum coding-block size flag is set to zero unless said maximum horizontal coding-block-dimension and said maximum vertical coding-block-dimension both equal sixty-four pixels and said maximum-transform-block-size flag is set to zero unless said maximum horizontal prediction-block-dimension and said maximum vertical prediction-block-dimension are both greater than sixteen pixels.
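The two flag conditions in the wherein clause translate directly into code. A sketch with illustrative names:

```python
# Sketch of the frame-header flag rules from the claim.

def frame_header_flags(max_cb_w, max_cb_h, max_tb_w, max_tb_h):
    """Flags are zero unless their respective size conditions hold."""
    max_cb_flag = 1 if (max_cb_w == 64 and max_cb_h == 64) else 0
    max_tb_flag = 1 if (max_tb_w > 16 and max_tb_h > 16) else 0
    return {"max_coding_block_size_flag": max_cb_flag,
            "max_transform_block_size_flag": max_tb_flag}

assert frame_header_flags(64, 64, 32, 32) == {
    "max_coding_block_size_flag": 1, "max_transform_block_size_flag": 1}
assert frame_header_flags(32, 32, 16, 16) == {
    "max_coding_block_size_flag": 0, "max_transform_block_size_flag": 0}
```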

US Pat. No. 10,218,973

SCALABLE VIDEO CODING USING SUBBLOCK-BASED CODING OF TRANSFORM COEFFICIENT BLOCKS IN THE ENHANCEMENT LAYER

GE Video Compression, LLC...

1. A scalable video decoder comprising:a first block-based decoding unit configured to decode, using a processor, a base layer residual signal of a base layer signal from a coded data stream; and
a second block-based decoding unit configured to reconstruct, using the processor, an enhancement layer signal, the second block-based decoding unit configured for decoding, using the processor, a transform coefficient block of transform coefficients representing an enhancement layer signal from the coded data stream, by:
selecting, for the transform coefficient block representing the enhancement layer signal, a subblock subdivision among a set of possible subblock subdivisions based on information including a spectral decomposition of a portion of the base layer residual signal or the base layer signal, wherein the transform coefficient block is subdivided into subblocks according to the selected subblock subdivision such that a dimension of at least one of the subblocks is longer along a first axis direction that is transverse to a second axis direction along which a spectral energy distribution of the spectral decomposition is narrower, and
traversing positions of the transform coefficients in units of the subblocks such that all positions within one subblock are traversed before proceeding to a next subblock in a subblock order defined among the subblocks,
wherein for a current subblock being traversed, the second block-based decoding unit is configured for:
decoding from the data stream a syntax element indicating as to whether the current subblock comprises any significant transform coefficient,
if the syntax element indicates that the current subblock does not comprise any significant transform coefficient, setting the transform coefficients within the current subblock equal to zero, and
if the syntax element indicates that the current subblock comprises any significant transform coefficient, decoding from the data stream syntax elements indicating levels of the transform coefficients within the current subblock.
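The subblock traversal with a per-subblock significance flag can be sketched as follows; the flat syntax-element stream and subblock-size list are simplifying assumptions (a real decoder parses context-coded bins).

```python
# Sketch of subblock-wise coefficient decoding with a significance flag.

def decode_subblocks(stream, subblocks):
    """stream: iterator of syntax elements; subblocks: coefficient counts,
    one per subblock in traversal order."""
    coeffs = []
    for size in subblocks:
        significant = next(stream)          # per-subblock significance flag
        if not significant:
            coeffs.extend([0] * size)       # all coefficients inferred zero
        else:
            coeffs.extend(next(stream) for _ in range(size))  # coded levels
    return coeffs

stream = iter([1, 3, -1, 0, 5, 0])
assert decode_subblocks(stream, [4, 4]) == [3, -1, 0, 5, 0, 0, 0, 0]
```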

US Pat. No. 10,218,972

APPARATUS FOR ENCODING AND DECODING IMAGE BY SKIP ENCODING AND METHOD FOR SAME

Electronics and Telecommu...

1. An image decoding method performed by a decoding apparatus, comprising:determining whether to perform filtering on signals to be used in generating intra prediction signals of decoding object signals, the signals being reconstructed earlier than the decoding object signals, the signals being adjacent to the decoding object signals;
determining a strength of a filter used in the filtering based on a flag transmitted from an encoding apparatus in response to the determination that the filtering is performed;
performing the filtering on the signals based on the determined strength of the filter; and
generating the intra prediction signals of the decoding object signals using the filtered signals,
wherein the determination of whether to perform the filtering is based on a block size of the decoding object signals and an intra prediction mode of the decoding object signals.

US Pat. No. 10,218,971

ADAPTIVE UPSAMPLING FOR MULTI-LAYER VIDEO CODING

VID SCALE, Inc., Wilming...

1. A method for communicating video data, the method comprising:selecting an upsampling filter for a video sequence to create enhancement layer pictures by:
applying a default upsampling filter to a base layer picture to obtain a processed base layer picture;
identifying blocks of the processed base layer picture that are selected for interlayer prediction of an enhancement layer frame; and
selecting one of the default upsampling filter and the plurality of candidate upsampling filters based on a comparison of block distortion measurements on the identified blocks associated with each of the default upsampling filter and the plurality of candidate upsampling filters;
encoding upsampling filter information, wherein the upsampling filter information comprises a plurality of coefficients of the selected upsampling filter; and
sending the encoded upsampling filter information and the enhancement layer pictures in an output video bitstream.

US Pat. No. 10,218,970

RESAMPLING FILTERS FOR SCALABLE VIDEO CODING WITH PHASE OFFSET ADJUSTMENT AND SIGNALING OF SAME

ARRIS Enterprises LLC, S...

1. A system for scalable video coding, comprising:a first coding layer comprising modules for coding video with a base resolution;
a second coding layer comprising modules for coding video with an enhanced resolution having a higher resolution than a base resolution;
an upsampling unit receiving input video signals from the first coding layer and providing an output signal to the second coding layer after an upsampling process, wherein the upsampling unit output signal enables more efficient coding in the second coding layer, wherein the upsampling unit comprises:
a first module for selecting input samples from the input video signals in the first coding layer;
a second module providing selection of a plurality of filters each having a different phase index for processing the selected input samples; and
a third module including the plurality of filters, the third module for filtering the selected input samples with the selected filters, the third module providing the output signal from the upsampling unit,
wherein the first coding layer is downsampled from the second coding layer,
wherein a different phase offset is generated for each of the phase indices, wherein each of the different phase offsets is generated with a selected phase offset that is used to index a mapping table to an offset that can provide a phase shift in addition to phase rounding that is added to the selected phase offset to generate the different phase offset used to select an appropriate one of the plurality of filters used in the upsampling process to provide the output signal from the upsampling unit, and
wherein signaling of the mapping table to obtain the added phase offset occurs at a picture parameter set (PPS) level.

US Pat. No. 10,218,969

IMAGE PROCESSING DEVICE AND METHOD USING ADJUSTED MOTION VECTOR ACCURACY BETWEEN SUB-PIXELS OF REFERENCE FRAMES

Sony Corporation, Tokyo ...

1. An image processing device comprising:at least one central processing unit (CPU) configured to
perform motion compensation of integer pixel or sub-pixel accuracy for each of reference frames L0 and L1;
apply weighted addition to arithmetic operation results of the motion compensation, wherein the arithmetic operation results are respectively adjusted by multiplying each arithmetic operation result having lower accuracy by an adjustment amount that is determined based on each respective combination of horizontal accuracy and vertical accuracy between the reference frames L0 and L1, including each of the integer pixel or sub-pixel accuracy for horizontal components between each of the reference frames L0 and L1 and the integer pixel or sub-pixel accuracy for vertical components between each of the reference frames L0 and L1; and
apply a rounding process to an arithmetic operation result of the weighted addition,
wherein the adjustment amount is further determined based on a similarity of the accuracy of the horizontal and vertical components between the reference frames L0 and L1 in each respective combination, and
wherein the adjustment amount is determined according to a table indicating a respective adjustment value for each respective combination of the horizontal and vertical components between the reference frames L0 and L1.

US Pat. No. 10,218,968

GAZE-CONTINGENT DISPLAY TECHNIQUE

1. A method for enabling a user of a display screen device to experience an enhanced spatial perception of plenoptic content, the method comprising:a tracking step for tracking a gaze fixation of a user; and
a timing step for timing over a set interval an accumulated time in which the user's gaze fixation rests on each of a plurality of at least two depth planes which have been associated with a plurality of at least one plenoptic image; and
a refocusing step for refocusing a display relating to the plurality of at least one plenoptic image.

US Pat. No. 10,218,966

METHOD FOR COLLECTING IMAGE DATA FOR PRODUCING IMMERSIVE VIDEO AND METHOD FOR VIEWING A SPACE ON THE BASIS OF THE IMAGE DATA

PARALLAXTER, Brussels (B...

1. A method for collecting image data destined for producing an immersive video, which method comprises a setting up of a first set of at least n (n>1) scanners, each being provided for producing scanning beams, which method also comprises the scanning of a predetermined space by each of the scanners of said first set of scanners by means of scanning beams for producing the image data of said space, which image data are stored in a memory, characterized in that a zone of viewpoints is determined by delimiting a volume from which a user of the immersive video will be able to see said space and to perform with his head a movement, in particular a translation movement, inside the zone of viewpoints, a second set of m (m>1) source points located at the ends of the zone of viewpoints being thereafter determined, which setting up of said first set of at least n scanners being realized by placing at each of said source points each time one of said scanners of said first set, said scanning of said space being realized by means of said scanners placed at said source points and by scanning step by step said space according to a succession of on the one hand azimuth angles and on the other hand elevation angles each located in a range predetermined by said zone of viewpoints, which production of image data is realized by collecting for each produced scanning beam the scanning beam reflected each time by a touched point situated within said space and touched by the concerned scanning beam and by determining at each step and on the basis of the reflected scanning beam a distance (d) between the touched point and the scanner having produced the concerned scanning beam as well as a color parameter of said touched point, said data being stored in the memory in the form of a matrix structured according to the azimuth and elevation angles.

US Pat. No. 10,218,965

MULTIPLE CAMERA PHOTOGRAMMETRY SYSTEM

University of Massachuset...

1. A system comprising:a user holdable mounting fixture having multiple attachment points, wherein the user holdable mounting fixture comprises three support structures configured to couple to a backpack frame;
a plurality of mounting arms coupled to the mounting fixture via the attachment points, the mounting arms being user configurable to support cameras at multiple perspective points about an object to be imaged; and
a trigger coupled to provide a command to the cameras to substantially simultaneously capture an image of the object from the multiple perspective points without moving the mounting fixture.

US Pat. No. 10,218,963

SCANNING PROJECTORS AND IMAGE CAPTURE MODULES FOR 3D MAPPING

APPLE INC., Cupertino, C...

1. Apparatus for mapping, comprising:a radiation source, which is configured to emit a beam of radiation;
a detector and optics, which define a sensing area of the detector;
a scanning mirror assembly, which is configured to receive and scan the emitted beam over a selected angular range within a region of interest while scanning the sensing area over the selected angular range in synchronization with the scanned beam from the radiation source; and
a processor, which is configured to process signals output by the detector in order to construct a three-dimensional (3D) map of an object in the region of interest.

US Pat. No. 10,218,962

SYSTEMS AND METHOD OF HIGH RESOLUTION THREE-DIMENSIONAL IMAGING

TETRAVUE, INC., Vista, C...

1. A three-dimensional imaging system, comprising:an illumination subsystem having a light source configured to emit a light pulse that does not pass through any modulator in the illumination subsystem and has a divergence sufficient to irradiate a scene;
a sensor subsystem comprising:
a receiving lens having a predetermined image plane and a predetermined pupil plane, the receiving lens configured to receive a portion of the light pulse reflected or scattered by the scene, thereby outputting a received light pulse portion having a duration;
a modulator, located along an optical axis of the sensor subsystem behind the receiving lens, configured to modulate as a function of time an intensity of the received light pulse portion to form a modulated received light pulse portion, wherein the modulator is not located in the image plane or pupil plane of the receiving lens;
a first imaging sensor array, in optical communication with the modulator, configured to generate a first image based on the modulated received light pulse portion; and
a second imaging sensor array, in optical communication with the modulator, configured to generate a second image based on the modulated received light pulse portion; and
a processor subsystem configured to obtain a three-dimensional image based on the first and second images.

US Pat. No. 10,218,961

CALIBRATION METHOD, CALIBRATION DEVICE, AND COMPUTER PROGRAM PRODUCT

RICOH COMPANY, LIMITED, ...

1. A calibration method for a stereo camera including a first camera and a second camera located in a position apart from the first camera by a predetermined distance, and configured to calculate distance information, the calibration method comprising:calculating a first correction parameter based on a photographic image acquired by photographing an object with the first camera without interposing a first transparent body, and a photographic image acquired by photographing the object through the first transparent body with the first camera;
calculating a second correction parameter based on a photographic image acquired by photographing the object with the second camera without interposing the first transparent body, and a photographic image acquired by photographing the object through the first transparent body with the second camera, in a state where a positional relation between the first camera and the first transparent body is the same as in calculating the first correction parameter, and the first camera and the second camera are apart from each other by the predetermined distance; and
calculating a third correction parameter for correcting a relative positional deviation in a parallax, the relative positional deviation affecting calculation of the distance information, in a state where the first camera and the second camera are apart from each other by the predetermined distance, based on a first corrected image obtained by correcting, with the first correction parameter, a photographic image acquired by photographing an object through a second transparent body with the first camera, and a second corrected image obtained by correcting, with the second correction parameter, a photographic image acquired by photographing the object through the second transparent body with the second camera.

US Pat. No. 10,218,960

STEREOSCOPIC IMAGING APPARATUS

HITACHI AUTOMOTIVE SYSTEM...

1. A stereoscopic imaging apparatus, comprising:an image acquisition unit configured to acquire a first image and a second image different from the first image in exposure time;
a brightness correction unit configured to correct brightness of one of the acquired first and second images;
a parallax calculation unit configured to calculate parallax from one of the images corrected by the brightness correction unit and the other image and to output a parallax image and parallax information;
a combination image generation unit configured to combine the acquired first and second images together to generate a wide dynamic range image based on selected image regions of the first image and second image and the parallax information, and to output the generated wide dynamic range image;
an image selector unit configured to select any of the first image, the second image different from the first image in exposure time, the parallax image, and the wide dynamic range image and to output the selected image to an image processor; and
an image determination unit configured to receive the first image, the second image different from the first image in exposure time, and the wide dynamic range image and to select one of the received images based on requirements of an imaging process to be performed by the image processor and to output the selection of one of the first image, the second image different from the first image in exposure time, and the wide dynamic range image to the image selector unit, wherein
the image determination unit is configured to determine exposure times of imaging devices such that a bright image and a dark image can be captured, based on outside factor information input from an outside factor unit, and to output the determined exposure times to the imaging devices.

US Pat. No. 10,218,959

METHOD FOR TRANSMITTING AND RECEIVING STEREO INFORMATION ABOUT A VIEWED SPACE

1. A method for obtaining, transmitting, receiving and displaying stereo information, the method executable in a system, the system having:a central video camera;
two side video cameras,
the two side video cameras being symmetrically positioned with respect to the central video camera,
optical axes of the two side video cameras and the central video camera being parallel to each other and being aligned in one plane,
the two side video cameras and the central video camera, each having a photo sensitive video matrix, being synchronized by a line synchronization signal for simultaneous scanning of the viewed space;
a computing unit operatively coupled to the central video camera and the two side cameras;
the method comprising:
capturing, by the central video camera, a central 2D video of a viewed space;
simultaneously with the capturing of the central 2D video, reading, by the two side video cameras and the central video camera, from each photo sensitive matrix, signals of images of a line of the viewed space,
the reading being performed with a horizontal scan speed, to obtain two side line signals and a central line signal for analysis and for detection of conjugated signals;
detecting the conjugated signals by comparing, by the computing unit, the two side line signals and the central line signal, the conjugated signals being detected as being located symmetrically in the two side line signals and being at equal intervals from a central signal, and levels of the conjugated signals being equal to a level of the central signal;
measuring, by the computing unit, with the horizontal scan speed, the equal intervals and determining immediate temporal parallaxes between the central signal and each of the conjugated signals;
generating, by the computing unit, an output video of the central video camera having the central 2D video synchronized with the immediate temporal parallaxes; and
shifting of every point in each line of the central 2D video by a linear value to reconstruct and display stereo frames on a stereo display, the linear value being determined by a receiver's speed of horizontal scan and the immediate temporal parallaxes.

US Pat. No. 10,218,958

IMAGE PROCESSING APPARATUS AND IMAGE PROCESSING METHOD

Sony Corporation, Tokyo ...

1. An image processing apparatus comprising:circuitry configured to:
generate a coded bitstream having at least a first color image of a viewpoint and a parallax related image of the viewpoint and viewpoint generation information; and
transmit the coded bitstream, wherein the viewpoint generation information comprises:
information for generating a second color image of an additional viewpoint other than the viewpoint,
at least image-capturing information of the viewpoint, and
at least one of:
information indicating a minimum value and a maximum value of world coordinate value at a position in the parallax related image, and
information indicating a minimum value and a maximum value of parallax, in world coordinates, at a position in a depth direction in the parallax related image, and information for identifying a color image of a base point,
wherein:
 when the parallax related image is a depth image, the circuitry is further configured to generate, as the viewpoint generation information, the information indicating the minimum value and the maximum value of world coordinate value at the position in the depth image, and
 when the parallax related image is a parallax image, the circuitry is configured to generate, as the viewpoint generation information, the information indicating the minimum value and the maximum value of parallax, in world coordinates, at the position in the depth direction in the parallax image, and the information for identifying the color image of the base point.
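The two branches of the claim (depth image vs. parallax image) amount to recording different min/max metadata as viewpoint generation information. A minimal sketch, assuming a plain 2D list as the image and hypothetical field names:

```python
def viewpoint_generation_info(image, kind):
    """For a depth image, record min/max world-coordinate values; for a
    parallax image, record min/max parallax. `kind` is 'depth' or
    'parallax' (hypothetical naming for this sketch)."""
    flat = [v for row in image for v in row]
    lo, hi = min(flat), max(flat)
    if kind == 'depth':
        return {'min_world_z': lo, 'max_world_z': hi}
    elif kind == 'parallax':
        return {'min_parallax': lo, 'max_parallax': hi}
    raise ValueError(kind)
```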

US Pat. No. 10,218,957

METHOD OF SUB-PU SYNTAX SIGNALING AND ILLUMINATION COMPENSATION FOR 3D AND MULTI-VIEW VIDEO CODING

HFI INNOVATION INC., Zhu...

1. A method for three-dimensional or multi-view video encoding or decoding of video data, the method comprising: receiving input data associated with a current PU (prediction unit);
signaling or parsing a first syntax element associated with a texture sub-PU size only for texture video data, wherein the first syntax element corresponds to IVMP (inter-view motion prediction);
signaling or parsing a second syntax element associated with a depth sub-PU size only for depth video data, wherein the second syntax element corresponds to MPI (motion parameter inheritance);
if the current PU is a texture PU:
locating reference texture sub-PUs in a reference view corresponding to texture sub-PUs partitioned from the current PU according to the texture sub-PU size;
identifying first motion information associated with the reference texture sub-PUs; and
encoding or decoding the texture sub-PUs according to texture multi-candidate motion prediction including IVMP (inter-view motion prediction) using the first motion information;
if the current PU is a depth PU:
locating co-located texture sub-PUs in the reference view corresponding to depth sub-PUs partitioned from the current PU according to the depth sub-PU size;
identifying second motion information associated with the co-located texture sub-PUs; and
encoding or decoding the depth sub-PUs according to depth multi-candidate motion prediction including MPI (motion parameter inheritance) using the second motion information.
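The sub-PU partitioning that both the texture and depth branches rely on can be sketched as a simple tiling of the PU into blocks of the signaled sub-PU size (the coordinate convention and ordering here are illustrative assumptions):

```python
def partition_sub_pus(pu_x, pu_y, pu_w, pu_h, sub_size):
    """Partition a PU at (pu_x, pu_y) of size pu_w x pu_h into sub-PUs of
    sub_size x sub_size; returns top-left corners in raster order."""
    return [(pu_x + dx, pu_y + dy)
            for dy in range(0, pu_h, sub_size)
            for dx in range(0, pu_w, sub_size)]
```

Each returned corner is where the codec would locate a reference (or co-located) sub-PU and fetch its motion information.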

US Pat. No. 10,218,956

METHOD AND APPARATUS FOR GENERATING A DEPTH CUE

TELEFONAKTIEBOLAGET LM ER...

1. A method of generating a depth cue for three dimensional video content, comprising: (a) receiving video content comprising a plurality of frames;
(b) detecting, in the received video content, three dimensional video content that will appear in observer space when displayed, and identifying from the detected content one or more objects having a width greater than a threshold value;
(c) identifying a reference projection parameter;
(d) estimating a location of a shadow that would be generated by the one or more identified objects as a consequence of a light source emitting light according to the reference projection parameter; and
(e) projecting light content imitating a shadow to the estimated location to coincide with display of the three dimensional video content,
wherein step (b) comprises identifying three dimensional video content according to a sign of its associated disparity value.
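Step (b) can be sketched as a filter over detected objects, assuming the common convention that negative disparity places content in observer space (in front of the screen); the dictionary keys are hypothetical:

```python
def observer_space_objects(objects, width_threshold):
    """Keep objects that will appear in observer space (negative disparity,
    an assumed sign convention) and are wider than the threshold."""
    return [o for o in objects
            if o['disparity'] < 0 and o['width'] > width_threshold]
```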

US Pat. No. 10,218,955

MOTION BLUR COMPENSATION

1. A method for compensating for motion blur when performing a 3D scanning of at least a part of an object by means of a 3D scanner, where the motion blur occurs because the scanner and the object are moved relative to each other while the scanning is performed, and where the motion blur compensation comprises: acquiring a sequence of focus plane images along an optical axis while moving a plane of focus along the optical axis such that each of the focus plane images is focused on a respective plane of focus, wherein only portions of the object in the respective focus plane are in focus in each of the focus plane images;
generating a 3D surface image using the in-focus regions of each focus plane image together with data identifying the respective focus plane;
determining whether there is a relative motion between the scanner and the object during the acquisition of a sequence of focus plane images so that each image in the sequence acquires data from a different portion of the object;
if a relative motion is determined, performing a motion compensation based on the determined motion to align the focus plane images with the part of the object.
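The claim's use of in-focus regions across the focus-plane stack resembles classic focus stacking: each pixel is assigned the focus plane in which it is sharpest. A minimal sketch, with the stored value standing in for a real per-pixel contrast measure (an assumption):

```python
def focus_stack_depth(stack):
    """For each pixel, return the index of the focus-plane image in which
    it scores highest; `stack` is a list of equal-sized 2D lists whose
    values act as the focus measure in this sketch."""
    h, w = len(stack[0]), len(stack[0][0])
    return [[max(range(len(stack)), key=lambda k: stack[k][y][x])
             for x in range(w)] for y in range(h)]
```

The resulting index map, combined with the known focus-plane positions, is what yields the 3D surface; motion between acquisitions would shift which plane wins at each pixel, which is why the subsequent motion-compensation step realigns the stack.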

US Pat. No. 10,218,954

VIDEO TO DATA

CELLULAR SOUTH, INC., Ri...

1. A method to generate video data from a video comprising: generating audio files and image files from the video;
distributing the audio files and the image files across a plurality of processors and processing the audio files and the image files in parallel;
converting audio files associated with the video to text;
identifying an object in the image files;
determining a contextual topic from the image files;
assigning a probability of accuracy to the identified object based on the contextual topic;
converting the image files associated with the video to video data, wherein the video data comprises the object, the probability, and the contextual topic;
cross-referencing the text and the video data with the video to determine contextual topics;
generating a contextual text, an image, or an animation based on the determined contextual topics; and
generating a content-rich video based on the generated text, image, or animation.
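The step that assigns a probability of accuracy to an identified object based on the contextual topic could be as simple as a lookup against topic-conditioned priors. A hedged sketch with entirely hypothetical data and a hypothetical fallback probability:

```python
def score_object(detected, topic, topic_priors, fallback=0.1):
    """Probability that `detected` is accurate given the contextual topic,
    looked up from topic-conditioned priors (illustration only)."""
    return topic_priors.get(topic, {}).get(detected, fallback)
```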

US Pat. No. 10,218,953

IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND STORAGE MEDIUM

Canon Kabushiki Kaisha, ...

1. An image processing apparatus comprising: at least one processor or circuit configured to perform the operations of the following units:
a generation unit configured to generate, from an input image, a plurality of hierarchical images having different frequency bands;
a gain calculation unit configured to calculate, for each of the hierarchical images, a gain based on a luminance value for each image area by using a tone conversion curve that is set according to the corresponding frequency band, wherein the tone conversion curve assigns tone with priority to different luminance ranges according to the frequency bands of the hierarchical images;
a determination unit configured to determine a combined gain by combining gains that are set for the plurality of hierarchical images;
a conversion unit configured to perform tone conversion on the input image by using the combined gain determined by the determination unit; and
a tone conversion curve generation unit configured to generate, based on the input image, a plurality of tone conversion curves including a first tone conversion curve and a second tone conversion curve that assigns tone with priority to a different luminance range from the first tone conversion curve, and generate a tone conversion curve for each of the plurality of hierarchical images by weighting and adding the plurality of tone conversion curves, wherein the tone conversion curve generation unit performs the adding by varying weights of the first tone conversion curve and the second tone conversion curve according to the frequency bands of the hierarchical images.
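The tone conversion curve generation unit's weighted addition of two curves, and the gain derived from a curve, can be sketched as follows (the single blend weight per frequency band and the gain definition as output/input are assumptions of this sketch):

```python
def band_tone_curve(curve_lo, curve_hi, w_lo):
    """Per-band tone curve formed by weighting and adding two curves that
    prioritize different luminance ranges; w_lo weights the first curve."""
    return [w_lo * a + (1.0 - w_lo) * b for a, b in zip(curve_lo, curve_hi)]

def gain_from_curve(curve, luminance):
    """Gain at a luminance value: curve output divided by input
    (assumed definition; unity gain at black avoids dividing by zero)."""
    return curve[luminance] / luminance if luminance else 1.0
```

Varying `w_lo` per hierarchical image reproduces the claim's idea that each frequency band gets its own mixture of the two base curves.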

US Pat. No. 10,218,951

MEMS SCAN CONTROLLED KEYSTONE AND DISTORTION CORRECTION

Microvision, Inc., Redmo...

1. A MEMS scanned beam projector, comprising: a light source to emit a light beam;
a scanning platform to redirect the light beam impinging on the platform; and
a display controller to control the light source and the scanning platform to cause the scanning platform to scan the light beam in a vertical direction and a horizontal direction in a scan pattern to project an image onto a projection surface;
wherein the display controller is configured to compensate for horizontal stretching in the projected image due to geometric distortion by modifying a start position and an end position of video path interpolation as a function of vertical scan position to reduce the horizontal stretching of the projected image, and wherein the display controller is further configured to adjust an interpolation rate to correct for image distortion in the projected image.
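Modifying the start and end position of video-path interpolation as a function of vertical scan position can be sketched with a linear keystone model (the linear inset profile is an assumption; a real display controller could use any monotone function of scan position):

```python
def interp_bounds(v, height, width, max_inset):
    """Start and end of the interpolation span for scanline v: the span
    narrows linearly toward the stretched end of the keystone, countering
    horizontal stretching. Units are pixels; max_inset is the largest
    per-side inset."""
    inset = max_inset * v // (height - 1)  # 0 at top, max_inset at bottom
    return inset, width - inset
```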

US Pat. No. 10,218,949

INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND PROGRAM

SONY CORPORATION, Tokyo ...

1. An information processing apparatus comprising: a color correction information setting section configured to use a captured image captured by a calibrated imaging section imaging a projected image projected to a projection plane by a calibrated projection section in order to set color correction information as information for correcting a color of each pixel of the projection section,
wherein the color correction information setting section is implemented via at least one processor.

US Pat. No. 10,218,948

IMAGE DISPLAYING SYSTEM, CONTROLLING METHOD OF IMAGE DISPLAYING SYSTEM, AND STORAGE MEDIUM

CANON KABUSHIKI KAISHA, ...

1. An image displaying system comprising: a first projector configured to project an image to a screen;
a second projector configured to project an image to the screen;
a memory storing instructions; and
at least one hardware processor configured to implement the instructions stored in the memory and execute:
a specifying task that specifies an overlap region on the screen where a first projection region on which an image is projected by the first projector and a second projection region on which an image is projected by the second projector overlap each other;
a measuring task that measures, based on a photographed image of the specified overlap region on the screen photographed in a situation where the first projector is projecting a predetermined image that is not uniform in brightness, and the second projector is projecting no image or only a uniform image that is uniform in brightness onto the overlap region, brightness of each of measuring positions in the specified overlap region in the situation, wherein the predetermined image projected by the first projector has been corrected by first luminance correction for reducing brightness of an image to be projected from the first projector at different degrees for different positions in the specified overlap region;
a setting task that sets, based on the brightness measured by the measuring task in the situation where the first projector is projecting the predetermined image that is not uniform in brightness and the second projector is projecting no image or only the uniform image that is uniform in brightness onto the overlap region, a parameter used for second luminance correction for reducing brightness of an image to be projected from the second projector at different degrees for different positions in the specified overlap region, to lessen a degree of the second luminance correction for a specific position, from among the measuring positions, in a first case where the measured brightness of the specific position is a first value, to less than the degree of the second luminance correction for the specific position, in a second case where the measured brightness of the specific position is a second value that is greater than the first value;
a projecting task that causes the second projector to project a second image onto the screen together with the first projector projecting a first image onto the screen, wherein the first image is an image corrected by the first luminance correction and the second image is an image corrected by the second luminance correction according to the parameter set in the setting task; and
a determining task that determines, based on another photographed image of the specified overlap region photographed in another situation where the second projector is projecting another predetermined image that is not uniform in brightness and the first projector is projecting no image or only a uniform image that is uniform in brightness onto the overlap region, a display characteristic of the second projector,
wherein the setting task sets the parameter used by the second projector to perform the second luminance correction according to both the brightness measured by the measuring task and the display characteristic determined by the determining task.
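One plausible reading of the setting task is that the degree of the second projector's luminance correction at a position scales with the brightness measured there from the first projector's corrected image: a darker measurement yields a lesser degree of correction. A linear sketch (the linear model and the brightness scale are assumptions):

```python
def second_correction_degree(measured, max_brightness):
    """Degree of the second luminance correction at a measuring position,
    growing with the brightness measured from the first projector's
    corrected image (0.0 = no reduction, 1.0 = maximum reduction)."""
    return measured / max_brightness
```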

US Pat. No. 10,218,947

COMPENSATION FOR OVERLAPPING SCAN LINES IN A SCANNING-BEAM DISPLAY SYSTEM

Prysm, Inc., San Jose, C...

1. A display system comprising: a display screen;
a plurality of subsystems each including
a light source to generate at least one beam, the at least one beam including an excitation beam that carries image information, an optical energy carried by the excitation beam controllable by one or more scaling factors,
a beam scanning module to receive the at least one beam and to direct the at least one beam onto a display region of the display screen, the beam scanning module configured to scan the at least one beam along a scanning direction across at least a portion of the display region, and
a servo feedback detector positioned to receive feedback light of one or more of the at least one beam scanning on the display region, to detect a position of the at least one beam on the display screen from the feedback light, and to produce a monitor signal indicative of the position of the at least one beam on the display region,
wherein two adjacent subsystems of the plurality of subsystems are configured such that in operation a first area scanned by one or more excitation beams of a first subsystem of the adjacent subsystems overlaps with a second area scanned by one or more excitation beams of a second subsystem of the adjacent subsystems in an overlap region; and
a control system coupled to the plurality of the subsystems and configured to
determine a range of the overlap region between the adjacent subsystems based on the monitor signals from the servo feedback detectors of the adjacent subsystems, and
determine the scaling factors for the excitation beams for the overlap region of the adjacent subsystems.
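A simple choice of scaling factors in the overlap region is a linear crossfade, so the summed optical energy of the two excitation beams stays roughly uniform across the seam; the claim leaves the exact factors to the control system, so this is only an illustration:

```python
def overlap_scaling(x, x0, x1):
    """Scaling factors for the two subsystems' excitation beams at
    position x inside the overlap region [x0, x1]: a linear crossfade
    whose factors always sum to one."""
    t = (x - x0) / (x1 - x0)
    return 1.0 - t, t  # (first subsystem, second subsystem)
```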

US Pat. No. 10,218,946

HIGH SPECTRUM CAMERA

SONY INTERACTIVE ENTERTAI...

1. Apparatus comprising: at least one beam divider to receive white light and output separate color components of the white light to illuminate an object with a single one of the color components at a time, such that the object is illuminated with a first color component and no other color components at a first time and a second color component and no other components at a second time;
at least one black and white imager configured for receiving, from the object, reflections of the separate color components of the white light; and
at least one wavelength reference receiver (WRR) receiving from the beam divider the separate color components of the white light, such that the WRR receives the first color component and no other color components at the first time and the second color component and no other components at the second time, such that information from the WRR can be correlated with pixel information from the black and white imager.

US Pat. No. 10,218,944

METHODS AND DEVICES FOR INCREASING THE SPECTRAL RESPONSE OF AN IMAGE SENSOR

SEMICONDUCTOR COMPONENTS ...

1. An imaging device, comprising: a pixel array; and
a color filter array situated on the pixel array, wherein the color filter array comprises multiple groups of filters; and
wherein at least one group comprises:
a first color and a second color; wherein the second color is selected from the group consisting of transparent and white;
a majority of filters having the first color; and
at least one filter having the second color.
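A group satisfying the claim needs a majority of filters in the first color and at least one transparent/white filter. One illustrative 2x2 layout (placing the single white filter in the corner is an arbitrary choice for this sketch):

```python
def make_filter_group(first_color, size=2):
    """Build one size x size filter group: all filters in the first color
    except a single 'W' (transparent/white) filter in the corner."""
    group = [[first_color] * size for _ in range(size)]
    group[0][0] = 'W'
    return group
```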

US Pat. No. 10,218,943

METHOD AND APPARATUS FOR TRIGGERING MULTIPLE DATA RECORDING DEVICES

L3 Technologies, Inc., N...

1. A method of controlling multiple mobile vision camera systems for first responders, including body cameras and vehicle cameras, the body cameras and the vehicle cameras being capable of both recording and non-recording modes, the method comprising the steps of: responsive to whether a body camera and/or a vehicle camera are in recording or non-recording mode, sending a triggering signal to other nearby body cameras and/or vehicle cameras to trigger the other cameras to also begin operating in recording mode, wherein the triggering signal is communicated in daisy chain fashion from one of the body camera or the vehicle camera to the other nearby body camera(s) and/or the other nearby vehicle camera(s) via Bluetooth® wireless communication.
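The daisy-chain propagation described in the claim behaves like a graph traversal: each newly triggered camera relays the signal to its own neighbors until every reachable camera is recording. A sketch with a hypothetical adjacency map standing in for Bluetooth radio range:

```python
def propagate_trigger(start, nearby):
    """Return the set of cameras recording after a daisy-chain trigger
    starting at `start`; `nearby` maps camera id -> ids in radio range."""
    recording, frontier = {start}, [start]
    while frontier:
        cam = frontier.pop()
        for other in nearby.get(cam, []):
            if other not in recording:
                recording.add(other)
                frontier.append(other)
    return recording
```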

US Pat. No. 10,218,942

POSITIONABLE CAMERA

Lenovo (Singapore) Pte. L...

1. A device comprising: a processor;
memory operatively coupled to the processor;
a display operatively coupled to the processor;
a camera that comprises circuitry that operatively couples to the processor and that comprises a mount that positions the camera in a deployed state;
a first housing;
a second housing; and
a hinge assembly that operatively couples the first and second housings and wherein the first and second housings comprise a mounting surface that cooperates with the mount of the camera in the deployed state and a camera socket that receives the camera in an undeployed state.

US Pat. No. 10,218,941

SYSTEMS AND METHODS FOR COORDINATED COLLECTION OF STREET-LEVEL IMAGE DATA

Lyft, Inc., San Francisc...

1. A computer-implemented method comprising: identifying, by a server computer system, a provider computing device for use in capturing street-level imagery, wherein the provider computing device controls a camera positioned to capture street-level imagery near a vehicle associated with the provider device;
determining, by the server computer system, a configuration for the provider computing device, wherein the configuration is configured to control the provider computing device to capture the street-level imagery, wherein the provider computing device is one of a set of provider computing devices that are collectively configured to meet a collection objective, and wherein determining the configuration comprises:
receiving capability data from the provider computing device associated with the vehicle and capability data from at least one alternate provider computing device associated with an alternate vehicle;
selecting the configuration based on the capability data from the provider computing device and the capability data from the alternate provider computing device; and
sending, by the server computer system, the configuration to the provider computing device, wherein the provider computing device captures street-level image data in response to receiving the configuration.
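Selecting a configuration from the capability data of the provider device and its alternates could, for example, assign roles by comparing capabilities across the set. The field names and the role scheme below are hypothetical:

```python
def select_configuration(own, alternates):
    """Pick a capture configuration for one provider device given its own
    capability data and that of alternate devices: the most capable device
    takes the primary role (ties favor the device itself)."""
    best = max([own] + alternates, key=lambda c: c['max_resolution'])
    role = 'primary' if best is own else 'fill-in'
    return {'role': role, 'resolution': own['max_resolution']}
```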

US Pat. No. 10,218,939

METHODS AND SYSTEMS FOR EMPLOYING VIRTUAL SUPPORT REPRESENTATIVES IN CONNECTION WITH MULTI-PANE VIDEO COMMUNICATIONS

POPIO IP HOLDINGS, LLC, ...

1. A method comprising: establishing a first connection between a mobile device and a support terminal;
providing a virtual support representative that automatically conducts a video chat with the mobile device via the first connection;
determining by the virtual support representative, automatically and without user intervention, a display element to provide to the mobile device;
providing the display element to the mobile device via a second connection during the video chat, wherein the display element causes the mobile device to display the video chat in a first pane and the display element in a second pane;
maintaining a tasks transcript of actions performed by the virtual support representative during the video chat;
determining to transfer the video chat from the virtual support representative to a support representative user based on applying a workflow, a framework, or an input constraint to user input received from the mobile device;
transferring, from the support terminal, the video chat from the virtual support representative to the support representative user via the first connection; and
providing the tasks transcript to the support representative user upon transferring the video chat from the virtual support representative to the support representative user.

US Pat. No. 10,218,936

RANDOMLY ACCESSIBLE VISUAL INFORMATION RECORDING MEDIUM AND RECORDING METHOD, AND REPRODUCING DEVICE AND REPRODUCING METHOD

MITSUBISHI DENKI KABUSHIK...

1. A non-transitory recording medium for a playback device, said recording medium containing video data comprising a plurality of video units each of which includes an intra coded I-picture, a predictive coded P-picture including a group of blocks predicted from one picture and a bi-directionally-predictive coded B-picture including a group of blocks predicted from two pictures, said medium comprising: a video data recording area for storing said video data comprising said video units, at least one of said video units including an access point P-picture coded by motion compensation prediction using an I-picture located at the beginning of said video unit including said access point P-picture or a selected one of preceding P-pictures,
wherein said access point P-picture is a P-picture to be decoded for video reproduction,
wherein any P-pictures, other than access point P-pictures, and any B-pictures following said access point P-picture in the same video unit are coded without referring to any picture preceding said access point P-picture;
wherein said video data recording area contains an attribute information including a picture type information having a first value indicating each access point P-picture included in said video unit, and including a picture type having a second value different from the first value indicating each P picture included in said video unit that is not an access point P-picture but which is necessary to decode the next access point P-picture,
wherein said attribute information is included in supplemental enhancement information (SEI) according to MPEG4-AVC,
wherein said access point P-picture indicated by the first value is decoded using said I-picture and said P-pictures indicated by the second value, and
wherein a playback device accesses said attribute information and identifies pictures which are necessary for decoding said access point P-picture.
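The decode-dependency rule in the claim (an access point P-picture needs the unit's leading I-picture plus every preceding picture flagged with the second attribute value) can be sketched as a scan over per-picture attribute values; list indices stand in for decode order, and the specific value encodings are hypothetical:

```python
def pictures_needed(attributes, target):
    """Indices of pictures a player must decode before the access point
    P-picture at `target`: index 0 (the unit's leading I-picture) plus
    every earlier picture whose attribute carries the second value (2)."""
    needed = [0]  # leading I-picture of the video unit
    for i, t in enumerate(attributes[:target]):
        if t == 2:
            needed.append(i)
    return needed
```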

US Pat. No. 10,218,935

AUTOMATED RUN-TIME ADJUSTMENT

Cable Television Laborato...

1. A runtime adjustment system configured to adjust a recording time of a media program having a start time and an end time, comprising: a runtime detector having a first input for receiving, from a video source providing the media program, a digital signal facilitating transmission of the media program, wherein the digital signal includes video data and runtime adjustment data specific to the media program, and wherein the runtime detector is configured to monitor the digital signal and detect the runtime adjustment data and having a first output for transmitting a result based on the runtime adjustment data; and
a runtime adjuster having a second input for receiving the result based on the runtime adjustment data from the first output of the runtime detector and having a second output configured to control the recording time as a function of the result based on the runtime adjustment data of a media program recording device equipped to record the media program between the start time and the end time.

US Pat. No. 10,218,934

VEHICLE ENTERTAINMENT TABLE UNIT AND CRADLE

VOXX INTERNATIONAL CORPOR...

1. A vehicle entertainment system, comprising: a tablet unit comprising a display and a touch screen input device disposed on a front surface, a first electrical connection, a first mounting mechanism, and a wireless receiver, wherein the wireless receiver is configured to receive media data from a wireless network and the touch screen input device is configured to receive input from a user; and
a cradle disposed in a vehicle headrest and comprising a second electrical connection and a second mounting mechanism, wherein the tablet unit is electrically connected to the cradle via the first and second electrical connections, and is physically coupled to the cradle via the first and second mounting mechanisms upon mounting the tablet unit into the cradle,
wherein the second mounting mechanism comprises a latch member and a release button, and the latch member is actuated by the release button to permit removal of the tablet unit from the cradle,
wherein the latch member is coupled to the first mounting mechanism and the first electrical connection is electrically coupled to the second electrical connection upon mounting the tablet unit into the cradle,
wherein the front surface of the tablet unit is substantially flush with the outer surface of the vehicle headrest upon mounting the tablet unit into the cradle.

US Pat. No. 10,218,933

CONNECTING STRUCTURE, ELECTRICAL DEVICE AND TELEVISION APPARATUS

FUNAI ELECTRIC CO., LTD.,...

1. A display device comprising: a chassis made of electrically insulating material;
a fastening member; and
a cap member made of electrically insulating material, and including an inner side surface that defines a cavity,
the chassis including a housing side wall that extends in a first direction perpendicular to an inside surface of the chassis and has an inner side surface and an outer side surface that face away relative to each other in a second direction intersecting with the first direction, the inner side surface of the housing side wall defining an interior housing space in which the fastening member is disposed, and
the cap member being attached to the chassis to cover the housing side wall and the fastening member, the inner side surface of the cap member at least partially facing with the outer side surface of the housing side wall in the second direction.

US Pat. No. 10,218,932

LIGHT SOCKET CAMERAS

SkyBell Technologies, Inc...

1. A lighting device, comprising: a housing having a light socket;
a light coupled to the housing, wherein the light is configured to emit visible light;
a communication module coupled to the housing;
a camera coupled to the housing and communicatively coupled to the communication module;
a cone-shaped mirror detachably coupled to the housing whereby a tip of the cone-shaped mirror faces towards the camera; and
a microphone coupled to the housing and communicatively coupled to the communication module, wherein the microphone receives an audible instruction to initiate an event with an appliance remotely located with respect to the lighting device, wherein the communication module initiates the event with the appliance, and wherein the audible instruction comprises an identification of the appliance.

US Pat. No. 10,218,931

BROADCAST METHOD AND SYSTEM

Level 3 Communications, L...

1. A method for airing a broadcast signal over a broadcast network comprising: receiving a broadcast signal at a transmission relay circuit of a demarcation and equipment cabinet;
determining the broadcast signal type with a broadcast signal sensing and discerning circuit of the transmission relay circuit;
reconfiguring a signal processing circuit of the transmission relay circuit when the configuration of the signal processing circuit does not support transmission of the determined broadcast signal type,
wherein reconfiguring the signal processing circuit comprises:
signaling the determined broadcast signal type;
identifying a pair of connectors of the signal processing circuit that service the determined broadcast signal type;
using a jumper cable with the pair of signal processing circuit connectors that serve the determined broadcast signal type;
selecting a relay, wherein the relay is communicatively coupled with the signal processing circuit configured for processing the determined broadcast signal type;
activating the relay to connect the received broadcast signal to the signal processing circuit configured for processing the determined broadcast signal type;
issuing one or more alerts when a change of state of the transmission relay circuit is detected;
logging detected changes of state of the transmission relay circuit;
generating the received broadcast signal with a broadcast signal generation circuit of the transmission relay circuit; and
injecting the generated broadcast signal into the signal processing circuit.

US Pat. No. 10,218,929

IMAGING DEVICE

PANASONIC INTELLECTUAL PR...

1. An imaging device comprising: a pixel including:
a photoelectric converter including a first electrode, a second electrode, and a photoelectric conversion layer between the first electrode and the second electrode, the photoelectric converter generating signal charge,
a charge storage region coupled to the first electrode, the charge storage region accumulating the signal charge, and
a transistor having a gate coupled to the charge storage region, the transistor outputting a signal according to an amount of the signal charge accumulated in the charge storage region; and
first voltage supply circuitry configured to supply a first voltage that is positive and a second voltage that is less than the first voltage, wherein
the first voltage supply circuitry supplies the first voltage to the second electrode in a first period when the charge storage region accumulates the signal charge, and
the first voltage supply circuitry supplies the second voltage to the second electrode in a second period different from the first period.

US Pat. No. 10,218,928

IMAGE CAPTURING APPARATUS AND MOBILE TELEPHONE

Canon Kabushiki Kaisha, ...

1. An image sensor in which a first semiconductor chip and a second semiconductor chip are stacked on each other, the first semiconductor chip including:
a pixel portion including a plurality of pixels each of which includes a photoelectric conversion portion, a charge-voltage conversion portion which converts charges generated in the photoelectric conversion portion to voltage, a transfer portion which transfers the charges from the photoelectric conversion portion to the charge-voltage conversion portion, and a reset portion which resets the charges, wherein the plurality of pixels includes a plurality of first pixels for generating a captured image and a plurality of second pixels for controlling an image capturing operation;
a first driving circuit which supplies first driving signals for driving the reset portion and the transfer portion to the plurality of first pixels; and
a plurality of first driving signal lines each of which transmits a corresponding one of the first driving signals, and
the second semiconductor chip including:
a second driving circuit which supplies second driving signals for driving the reset portion and the transfer portion to the plurality of second pixels independently of driving of the plurality of first pixels; and
a plurality of second driving signal lines each of which transmits a corresponding one of the second driving signals,
wherein the number of the plurality of second pixels is smaller than that of the plurality of first pixels, and
wherein both the first and second driving signal lines are arranged with respect to each pixel row of the pixel portion.

US Pat. No. 10,218,927

IMAGE SENSOR HAVING SHARED PIXEL STRUCTURE WITH SYMMETRICAL SHAPE RESET TRANSISTORS AND ASYMMETRICAL SHAPE DRIVER TRANSISTORS

SK hynix Inc., Icheon-si...

1. An image sensor comprising: a pixel array in which a plurality of pixel blocks are arranged,
each of the plurality of pixel blocks comprising:
a light receiving section including a plurality of unit pixels which share a floating diffusion;
a first driving section disposed at one side of the light receiving section and including a reset transistor; and
a second driving section disposed adjacent to the first driving section and including a driver transistor,
wherein the plurality of pixel blocks include a first pixel block and a second pixel block which is adjacent to the first pixel block, and, with respect to a boundary where the first pixel block and the second pixel block adjoin each other, the first driving section of the first pixel block has a shape symmetrical to the first driving section of the second pixel block and the second driving section of the first pixel block has a shape asymmetrical to the second driving section of the second pixel block.

US Pat. No. 10,218,926

DEVICE, SYSTEM AND METHOD FOR CROSS-TALK REDUCTION IN VISUAL SENSOR SYSTEMS

CHRISTIE DIGITAL SYSTEMS ...

1. A system comprising: a display device configured to provide first images, viewable by a first visual sensor system, and second images, viewable by a second visual sensor system, the first images and the second images having common features which align when the first images and the second images are provided concurrently, the first images comprising wavelengths viewable by the second visual sensor system; and,
a controller configured to:
determine a second visual sensor system intensity component of the first images using a response curve of the second visual sensor system; and,
reduce intensity of the second images provided at the display device by the second visual sensor system intensity component of the first images, at least when the first images and the second images are concurrently provided,
wherein the first images comprise one or more of blue images, green images and red images, and the second images comprise one or more of the red images and infrared images,
the controller further configured to determine the second visual sensor system intensity component of the first images using the response curve of the second visual sensor system by:
multiplying the response curve by each spectral radiance curve of one or more of: the first images; and a light source used to form the first images; and
summing results of each multiplication.
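The intensity determination recited in this claim (multiply the second sensor's response curve by each spectral radiance curve, sum the results, then reduce the second images' intensity by that amount) can be sketched numerically. This is an illustrative reading only, not the patented implementation; the sampled curves, function names, and values are all assumptions.

```python
# Sketch: predict the cross-talk a second visual sensor would register
# from the first images, then reduce the second images' drive intensity
# by that amount. Curves are hypothetical wavelength-sampled lists.

def crosstalk_component(response_curve, radiance_curves):
    """Sum, over all radiance curves, the response-weighted radiance."""
    total = 0.0
    for radiance in radiance_curves:
        # multiply the response curve by the spectral radiance curve
        # sample-by-sample, then sum the products
        total += sum(r * s for r, s in zip(response_curve, radiance))
    return total

def corrected_intensity(second_intensity, response_curve, radiance_curves):
    """Reduce the second images' intensity by the predicted cross-talk."""
    return max(0.0, second_intensity
               - crosstalk_component(response_curve, radiance_curves))

# Hypothetical IR-sensor response sampled at three wavelengths, and the
# spectral radiance of one red "first image" at the same wavelengths.
ir_response = [0.0, 0.2, 0.9]
red_radiance = [[0.5, 0.5, 0.1]]
print(corrected_intensity(1.0, ir_response, red_radiance))
```

With these made-up samples the cross-talk component is 0.19, so the second image's unit intensity is reduced to about 0.81.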

US Pat. No. 10,218,925

METHOD AND APPARATUS FOR CORRECTING LENS DISTORTION

HUAWEI TECHNOLOGIES CO., ...

1. A method, comprising: performing a first correction of radial lens distortion in image data acquired from a lens in a horizontal direction, before the image data is written into a dynamic memory;
writing the image data into the dynamic memory after performing the first correction; and
performing a second correction in the dynamic memory of the radial lens distortion in the image data written into the dynamic memory in the vertical direction using a column length selected according to a degree of radial distortion of the image data, wherein the column length is a sum of pixels that can be read consecutively in a refresh cycle of the dynamic memory.
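The two-pass structure of this claim (horizontal correction before the write to dynamic memory, vertical correction afterwards in column-length chunks) can be sketched as below. The toy circular-shift "corrections" stand in for the real radial remap, which the claim does not specify; all names and the shift amount are assumptions.

```python
# Sketch: pass 1 corrects each row before it is written to a buffer;
# pass 2 corrects each column, reading it back in chunks of
# `column_length` consecutive pixels, mimicking consecutive reads
# within one DRAM refresh cycle.

def correct_row(row, shift):
    # toy horizontal correction: circular shift in place of a radial remap
    return row[shift:] + row[:shift]

def correct_column(col, shift):
    # toy vertical correction, applied after the buffer is filled
    return col[shift:] + col[:shift]

def two_pass_correct(image, column_length, shift=1):
    # pass 1: horizontal correction before writing into the buffer
    buffer = [correct_row(list(row), shift) for row in image]
    height, width = len(buffer), len(buffer[0])
    # pass 2: vertical correction, column by column
    for x in range(width):
        col = [buffer[y][x] for y in range(height)]
        col = correct_column(col, shift)
        # write back in runs of `column_length` consecutive pixels
        for start in range(0, height, column_length):
            for y in range(start, min(start + column_length, height)):
                buffer[y][x] = col[y]
    return buffer
```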

US Pat. No. 10,218,924

LOW NOISE CMOS IMAGE SENSOR BY STACK ARCHITECTURE

OmniVision Technologies, ...

1. A pixel circuit for use in a high dynamic range (HDR) image sensor, comprising: a photodiode disposed in a first semiconductor wafer, the photodiode adapted to photogenerate charge carriers in response to incident light during a single exposure of a single image capture of the HDR image sensor;
a floating diffusion disposed in the first semiconductor wafer and coupled to receive the charge carriers photogenerated in the photodiode;
a transfer transistor disposed in the first semiconductor wafer and coupled between the photodiode and the floating diffusion, wherein the transfer transistor is adapted to be switched on to transfer the charge carriers photogenerated in the photodiode to the floating diffusion;
an in-pixel capacitor disposed in a second semiconductor wafer, wherein the first semiconductor wafer is stacked with and coupled to the second semiconductor wafer; and
a dual floating diffusion (DFD) transistor disposed in the first semiconductor wafer and coupled between the floating diffusion and the in-pixel capacitor, wherein the DFD transistor is coupled to be enabled or disabled in response to a DFD signal such that the in-pixel capacitor is selectively coupled to the floating diffusion through the DFD transistor in response to the DFD signal, wherein the floating diffusion is set to low conversion gain in response to the in-pixel capacitor being coupled to the floating diffusion, and wherein the floating diffusion is set to high conversion gain in response to the in-pixel capacitor being decoupled from the floating diffusion.

US Pat. No. 10,218,923

METHODS AND APPARATUS FOR PIXEL BINNING AND READOUT

SEMICONDUCTOR COMPONENTS ...

1. An imaging apparatus capable of identifying a predetermined feature and producing an output image, comprising: a pixel array, comprising a plurality of pixels arranged to form a plurality of rows;
an image signal processor coupled to the pixel array and configured to:
receive pixel data from the plurality of pixels; and
determine:
a region of interest according to the predetermined feature, wherein the region of interest corresponds to a first group of consecutive rows from the plurality of rows; and
a region of non-interest comprising a plurality of remaining rows from the plurality of rows;
a readout circuit coupled to the pixel array and configured to:
facilitate combining portions of the pixel data from the region of non-interest to form a plurality of second groups;
readout each row of the first group according to a first readout rate; and
readout each of the second groups according to a second readout rate;
wherein the first readout rate is substantially equal to the second readout rate.
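The readout scheme in this claim (rows in the region of interest read individually, remaining rows combined into groups) can be sketched as follows. This is an illustrative reading, not the patented readout circuit; the simple element-wise averaging used for the binning, and all names, are assumptions.

```python
# Sketch: ROI-aware binning readout. Rows with index in
# [roi_start, roi_end) pass through at full resolution; other rows are
# averaged together element-wise in groups of `bin_factor`.

def binned_readout(rows, roi_start, roi_end, bin_factor):
    out = []
    pending = []  # non-ROI rows waiting to be combined

    def flush():
        if pending:
            # combine pending non-ROI rows element-wise (toy binning)
            combined = [sum(v) / len(pending) for v in zip(*pending)]
            out.append(combined)
            pending.clear()

    for i, row in enumerate(rows):
        if roi_start <= i < roi_end:
            flush()                 # close any partial non-ROI group
            out.append(list(row))   # full-resolution readout inside ROI
        else:
            pending.append(row)
            if len(pending) == bin_factor:
                flush()
    flush()
    return out
```

For a six-row array with rows 2 and 3 as the region of interest and a bin factor of 2, the four non-ROI rows collapse into two binned rows while the ROI rows are read unchanged.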

US Pat. No. 10,218,922

SOLID-STATE IMAGING DEVICE

OLYMPUS CORPORATION, Tok...

1. A solid-state imaging device comprising: a first semiconductor substrate to which light is incident;
a second semiconductor substrate that is stacked on a surface of the first semiconductor substrate, the surface being opposite with respect to a surface on which the light is incident to the first semiconductor substrate;
n first photoelectric conversion devices that are periodically arranged in the first semiconductor substrate, the n first photoelectric conversion devices generating first electric charge signals by performing photoelectric conversion of the incident light;
n first reading circuits arranged in correspondence with each of the n first photoelectric conversion devices in the first semiconductor substrate, each of the n first reading circuits accumulating the first electric charge signal generated by a corresponding one of the n first photoelectric conversion devices, and each of the n first reading circuits outputting a signal voltage corresponding to the accumulated first electric charge signal as a first pixel signal;
a driving circuit that outputs the first pixel signal by sequentially driving each of the n first reading circuits;
m second photoelectric conversion devices that are periodically arranged in one of the first semiconductor substrate and the second semiconductor substrate, the m second photoelectric conversion devices generating second electric charge signals by performing photoelectric conversion of the incident light; and
m second reading circuits that sequentially output a second pixel signal indicating a change in the second electric charge signal, the second electric charge signal being generated by a corresponding second photoelectric conversion device among the m second photoelectric conversion devices,
wherein each of the m second reading circuits includes:
a detection circuit that detects a temporal change of the second electric charge signal generated by the corresponding one of the second photoelectric conversion devices and the detection circuit outputs an event signal indicating a direction of a change when a change exceeding a predetermined threshold is detected; and
a pixel signal generating circuit that is arranged in the second semiconductor substrate and the pixel signal generating circuit outputs the second pixel signal, the second pixel signal being generated by adding address information indicating a position at which the corresponding one of the second photoelectric conversion devices is arranged to the event signal,
wherein n is a natural number equal to 2 or more than 2, and
wherein m is a natural number equal to 2 or more than 2.

US Pat. No. 10,218,920

IMAGE PROCESSING APPARATUS AND CONTROL METHOD FOR GENERATING AN IMAGE BY VIEWPOINT INFORMATION

Canon Kabushiki Kaisha, ...

1. An image processing apparatus comprising: a processor; and
a memory storing one or more programs configured to be executed by the processor, the one or more programs including instructions for:
acquiring virtual viewpoint information indicating a virtual viewpoint;
generating a virtual viewpoint image based on both of a plurality of captured images captured by multiple cameras from a plurality of directions and the virtual viewpoint information acquired in the acquiring, wherein, according to the virtual viewpoint information acquired in the acquiring, an inclination correction process for correcting an inclination based on the virtual viewpoint information is executed to generate the virtual viewpoint image to be output; and
outputting the generated virtual viewpoint image.

US Pat. No. 10,218,919

IMAGE PICKUP SYSTEM, IMAGE PROCESSING METHOD, AND RECORDING MEDIUM

OLYMPUS CORPORATION, Tok...

1. An image pickup system in which an interchangeable lens is detachably attached to a camera main body, the image pickup system comprising: an image pickup circuit provided in the camera main body and configured to pick up, on an image pickup plane on which a plurality of pixels are arrayed, an optical image formed by the interchangeable lens and output picked-up image data with a predetermined sampling frequency;
a resolution changing circuit configured to generate, on the basis of the picked-up image data corresponding to a partial region in a screen obtained from the image pickup circuit, image data with resolution higher than resolution of the picked-up image data; and
a control circuit configured to determine, on the basis of an MTF characteristic value corresponding to the partial region among a plurality of MTF characteristic values corresponding to a plurality of regions in the screen of the interchangeable lens and the predetermined sampling frequency, an upper limit of the resolution of the image data generated by the resolution changing circuit.

US Pat. No. 10,218,918

IMAGE PROCESSING DEVICE AND METHOD WITH IMAGE PROCESS EFFECT SELECTION BASED UPON FLOW VECTOR PATTERNS

Sony Corporation, Tokyo ...

1. An image processing device comprising: a flow vector detection unit configured to detect flow vectors of pixels in an input image; and
an effect selection unit configured to select a process of effect to the input image, on the basis of a pattern of the flow vectors,
wherein the pattern of the flow vectors indicates a presence or absence of a vanishing point.

US Pat. No. 10,218,917

METHOD AND APPARATUS TO CREATE AN EOTF FUNCTION FOR A UNIVERSAL CODE MAPPING FOR AN HDR IMAGE, METHOD AND PROCESS TO USE THESE IMAGES

KONINKLIJKE PHILIPS N.V.,...

1. A tangible processor readable storage medium that is not a transitory propagating wave or signal having processor readable program code for operating on a processor for performing a method of constructing a code allocation function for allocating pixel colors having pixel luminances to luma codes encoding such pixel luminances, the method comprising acts of: constructing a luma code mapping from at least two partial functions by determining a code allocation function applied to a linear luminance of a pixel to obtain a luma code value, the constructing comprising acts of:
mapping the luma code to provide a non-linear mapping of pixel linear luminances to luma values,
defining a non-linear invertible mapping of an entire luminance range of a linear luminance input value to an entire luma range of a first output luma value using a first partial function of the at least two partial functions, and
defining a non-linear invertible mapping of an entire luma range of an input luma value being the first output luma value to an entire luma range of a second output luma value using a second partial function to be consecutively applied to the luma value from the first partial function of the at least two partial functions.
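The construction in this claim (a code allocation built by composing two invertible partial functions, the first mapping the full linear-luminance range to a full luma range, the second remapping that luma range) can be sketched as a simple composition. The power-law curves below are placeholders chosen only to be non-linear and invertible on [0, 1]; they are not the claimed EOTF, and the code count is an assumption.

```python
# Sketch: a luma code allocation as the composition of two invertible
# partial functions, quantized to integer codes.

def partial_one(lum):
    # first partial function: maps the entire linear-luminance range
    # [0, 1] to the entire luma range [0, 1] (toy gamma)
    return lum ** 0.5

def partial_two(luma):
    # second partial function: consecutively applied, remaps the entire
    # luma range [0, 1] to itself (another toy invertible curve)
    return luma ** 0.75

def allocate_luma_code(linear_luminance, n_codes=1024):
    # overall code allocation = quantized composition of the partials
    v = partial_two(partial_one(linear_luminance))
    return round(v * (n_codes - 1))
```

Because both partials are monotonic and span their full ranges, the composition is itself invertible before quantization, which is the property the claim relies on.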

US Pat. No. 10,218,916

CAMERA WITH LED ILLUMINATION

GOOGLE LLC, Mountain Vie...

1. A camera, comprising: a camera lens configured to capture visual data of a field of view;
a plurality of light sources configured to illuminate the field of view; and
a bypass circuit coupled to the plurality of light sources and configured to bypass a subset of the plurality of light sources, wherein the bypass circuit is configured to select one of a plurality of light source subsets, and at least two of the plurality of light source subsets include distinct light source members configured to illuminate different regions of the field of view of the camera;
wherein:
the camera includes a first mode and a second mode;
in the first mode, the plurality of light sources are electrically coupled to form a string and driven by a boosted drive voltage; and
in the second mode, the one of the plurality of light source subsets is selected and driven by a regular drive voltage that is lower than the boosted drive voltage.

US Pat. No. 10,218,915

COMPACT MULTI-ZONE INFRARED LASER ILLUMINATOR

TRILUMINA CORP., Albuque...

1. An infrared illumination system, comprising: a plurality of infrared illumination sources including a first infrared illumination source and a second infrared illumination source, the first infrared illumination source configured to provide illumination to a first zone of a plurality of zones, the second infrared illumination source configured to provide illumination to a second zone of a plurality of zones, the first zone being different than the second zone, and each of the plurality of zones corresponding to at least part of an angular portion of a field of view of an image sensor, wherein the plurality of infrared illumination sources comprise one or more arrays of a plurality of vertical-cavity surface-emitting lasers (VCSELs);
a plurality of microlenses, each microlens among the plurality of microlenses corresponding to each VCSEL among the plurality of VCSELs, wherein each microlens directs illumination from a corresponding VCSEL to at least one of the plurality of separate zones; and
an image processor in communication with the plurality of infrared illumination sources and the image sensor, and configured to define an area of interest in the field of view of the image sensor and separately control each of the plurality of infrared illumination sources to provide an adjustable illumination power to at least one of the plurality of separate zones and alter an illumination of the area of interest, in response to image data indicative of an illumination of one or more areas in the field of view of the image sensor.

US Pat. No. 10,218,913

HDR/WDR IMAGE TIME STAMPS FOR SENSOR FUSION

Qualcomm Incorporated, S...

1. An apparatus to determine timestamp information, the apparatus comprising: an image sensor configured to capture a plurality of sub-frames of a scene, wherein each sub-frame comprises an image of the scene captured using an exposure time that is different from at least one other exposure time of at least one other sub-frame of the plurality of sub-frames; and
at least one processor coupled to the image sensor and configured to:
receive, from the image sensor, for each of the plurality of sub-frames, sub-pixel image data corresponding to a first portion of an image frame;
determine composite image data corresponding to the first portion of the image frame based on values of the received sub-pixel image data for the plurality of sub-frames and by selecting, from the plurality of sub-frames a particular sub-frame to be associated with the composite image data based on luminosity values of the sub-pixel image data of the plurality of sub-frames;
identify an indicator based on the sub-frames corresponding to the received sub-pixel image data used to determine the composite image data; and
determine timestamp information, based on the identified indicator, wherein the timestamp information corresponds to the composite image data based upon timing information for the plurality of sub-frames.
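The selection step in this claim (pick a particular sub-frame based on luminosity values and timestamp the composite from that sub-frame's timing) can be sketched per pixel portion. The "brightest non-saturated sub-frame wins" rule below is one plausible reading, not necessarily the claimed rule, and the saturation level and names are assumptions.

```python
# Sketch: for one portion of an HDR frame composed from several
# differently-exposed sub-frames, select the sub-frame whose value
# dominates the composite (here: brightest non-saturated) and use its
# capture time as the timestamp for that portion.

SATURATION = 255  # assumed 8-bit saturation level

def compose_with_timestamp(sub_pixels, sub_times):
    """sub_pixels: luminosity per sub-frame; sub_times: capture time per
    sub-frame. Returns (composite_value, timestamp)."""
    candidates = [(v, i) for i, v in enumerate(sub_pixels) if v < SATURATION]
    if not candidates:
        # all sub-frames saturated: fall back to the last (assumed
        # shortest-exposure) sub-frame
        v, i = sub_pixels[-1], len(sub_pixels) - 1
    else:
        v, i = max(candidates)  # brightest non-saturated sub-frame
    return v, sub_times[i]
```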

US Pat. No. 10,218,912

INFORMATION PROCESSING APPARATUS, METHOD OF CONTROLLING IMAGE DISPLAY, AND STORAGE MEDIUM

CANON KABUSHIKI KAISHA, ...

1. An information processing apparatus, comprising: a display controller configured to exert control such that an image is displayed on a display screen in one of a plurality of display modes, the plurality of display modes including a first display mode and a second display mode; and
a first comparison unit configured to compare, when the first display mode is switched to the second display mode, a display size of the image that is displayed on the display screen in the first display mode and a given display size that is associated with the second display mode,
wherein the display controller is configured to exert control such that the image is displayed on the display screen in the second display mode in either the given display size that is associated with the second display mode or the display size of the image that is displayed in the first display mode, based on a result of the comparison by the first comparison unit.

US Pat. No. 10,218,911

MOBILE DEVICE, OPERATING METHOD OF MOBILE DEVICE, AND NON-TRANSITORY COMPUTER READABLE STORAGE MEDIUM

HTC Corporation, Taoyuan...

1. A method comprising: capturing a preview image;
displaying the preview image;
detecting a photograph in the preview image;
in response to the photograph being detected in the preview image, searching a video file corresponding to the photograph in a database; and
in response to a video file corresponding to the photograph being searched, playing a video of the searched video file over at least a part of the displayed preview image,
wherein the operation of playing the searched video file over at least a part of the displayed preview image comprises:
calculating a corresponding relationship between vertexes of the photograph and vertexes of the video of the searched video file; and
playing the video with a shape and a size changed according to the corresponding relationship between the vertexes of the photograph and the vertexes of the video of the searched video file at a position of the photograph in the preview image.

US Pat. No. 10,218,909

CAMERA DEVICE, METHOD FOR CAMERA DEVICE, AND NON-TRANSITORY COMPUTER READABLE STORAGE MEDIUM

HTC Corporation, Taoyuan...

1. A method applied to a camera device comprising: acquiring an angular velocity signal;
calculating an angular displacement according to a high frequency portion and a low frequency portion of the angular velocity signal;
generating a compensation value according to the angular displacement;
adjusting the compensation value according to a frequency corresponding to the angular velocity signal to generate an adjusted compensation value, wherein the adjusted compensation value is determined by using a first adjusting filter, the first adjusting filter corresponds to a first function of the angular displacement, and an order of the first function is less than 1; and
controlling an optical image stabilization (OIS) system to align an optical axis of a camera of the camera device according to the adjusted compensation value.
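The displacement calculation in this claim (angular displacement computed from high- and low-frequency portions of the angular velocity signal) can be sketched with a one-pole frequency split followed by integration. The filter constant, the split method, and all names are assumptions; the claim's adjusting filters are not modeled here.

```python
# Sketch: split gyro samples into low- and high-frequency portions with
# a one-pole low-pass filter, integrate each portion over time, and sum
# them to get the angular displacement fed to the OIS compensation.

def angular_displacement(omega_samples, dt, alpha=0.9):
    low = 0.0
    disp_low = disp_high = 0.0
    for w in omega_samples:
        low = alpha * low + (1 - alpha) * w  # low-frequency portion
        high = w - low                       # high-frequency remainder
        disp_low += low * dt                 # integrate each portion
        disp_high += high * dt
    return disp_low + disp_high
```

Keeping the two integrals separate is what lets a real implementation weight or filter the portions differently before combining them; here they are summed unchanged, so a constant 1.0 rad/s over 1 s yields 1.0 rad.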

US Pat. No. 10,218,908

IMAGE PROCESSING APPARATUS CAPABLE OF PERFORMING IMAGE SHAKE CORRECTION, IMAGE PICKUP APPARATUS, AND CONTROL METHOD

Canon Kabushiki Kaisha, ...

1. An image processing apparatus includes at least one processor or circuit configured to perform the operations of the following units: a temporary storage unit configured to temporarily store a video image including a (N-M)-th frame;
an output control unit configured to output a video image of frames up to the N-th frame and later than the (N-M)-th frame, to a first signal path, and output a video image of the (N-M)-th frame that is temporarily stored in the temporary storage unit to a second signal path;
a first correction amount calculating unit configured to calculate a first correction amount used for correcting an image shake relating to the video image of the (N-M)-th frame based on the video image of the frames up to the N-th frame and later than the (N-M)-th frame, output to the first signal path;
a first image shake correction unit configured to correct the image shake relating to the video image of the (N-M)-th frame by cutting out a predetermined region from the video image of the (N-M)-th frame based on the first correction amount; and
a recording unit configured to record the (N-M)-th frame, the image shake of which is corrected by the first image shake correction unit;
wherein the first signal path is connected to a display unit configured to display a video image for confirmation on a main body of the image processing apparatus.

US Pat. No. 10,218,907

IMAGE PROCESSING APPARATUS AND CONTROL METHOD DETECTION AND CORRECTION OF ANGULAR MOVEMENT DUE TO SHAKING

Canon Kabushiki Kaisha, ...

1. An image processing apparatus comprising: at least one processor or circuit configured to perform the operations of the following units:
an acquisition unit configured to acquire an angular velocity of panning of an imaging device detected by an angular velocity sensor and a motion vector of an object detected from a plurality of image data successively imaged by an imaging element;
a decision unit configured to decide a value of a frame rate when the image data is acquired from the imaging element used to detect the motion vector; and
a calculation unit configured to calculate an angular velocity of the object with respect to the imaging device from the angular velocity of the panning and the motion vector of the object,
wherein the decision unit decides the value of the frame rate corresponding to the angular velocity of the panning and the acquisition unit acquires the motion vector of the object detected from the plurality of the image data imaged at the frame rate decided by the decision unit.

US Pat. No. 10,218,906

CAMERA DEVICE AND METHOD FOR CAMERA DEVICE

HTC Corporation, Taoyuan...

1. A method applied to a camera device comprising: receiving one or all of an angular velocity signal and an acceleration signal;
selecting one of predetermined motion modes according to the one or all of the angular velocity signal and the acceleration signal;
configuring one or more of an exposure time of a camera, an auto white balance (AWB) configuration of the camera, and an auto exposure (AE) configuration of the camera according to the selected motion mode;
configuring an auto focus (AF) configuration of the camera, wherein if magnitudes of one or more of vectors in the angular velocity signal or the acceleration signal are lower than a predetermined threshold, an AF speed of the AF configuration is configured to be a fast value, and if the magnitudes of the one or more of the vectors in the angular velocity signal or the acceleration signal are greater than the predetermined threshold, the AF speed is configured to be a medium value; and
capturing an image or recording a video according to the one or more of the exposure time of the camera, the AF configuration of the camera, the AWB configuration of the camera, and the AE configuration of the camera;
wherein the predetermined motion modes comprise a walk mode and a rotate mode, a first AF speed is determined according to the angular velocity signal or the acceleration signal of the walk mode and a second AF speed is determined according to the angular velocity signal or the acceleration signal of the rotate mode, and the first AF speed is different from the second AF speed.

US Pat. No. 10,218,905

SLIDING ACCESSORY LENS MOUNT WITH AUTOMATIC MODE ADJUSTMENT

Nokia Technologies Oy, E...

1. A device comprising a camera, the camera comprising: an objective;
an image capture assembly;
an attachment indicator configured to detect attaching of a sliding accessory lens mount to the device;
an optical mode detector configured to receive an optical mode indication from an optical mode indicator of the sliding accessory lens mount such that one of different optical elements is linearly moved to a co-operating position with the objective;
a touch detection surface, wherein the device is configured to detect a detection object of the optical mode indicator proximate the touch detection surface, and wherein the device is configured to use the touch detection surface for detection of a position of the mode indicator; and
a processor configured to automatically determine one or more imaging parameters based on the optical mode indication and to correspondingly control the operation of one or more of the objective and the image capture assembly.

US Pat. No. 10,218,904

WIDE FIELD OF VIEW CAMERA FOR INTEGRATION WITH A MOBILE DEVICE

ESSENTIAL PRODUCTS, INC.,...

1. An imaging device, comprising: an array of lenses corresponding to photo sensors disposed around a substrate,
wherein a first subset of the array of lenses includes wide-angle lenses and a second subset of the array of lenses include standard-angle lenses; and
a connection mechanism to transfer data associated with images captured by the photo sensors to cause a processor to receive any of the captured images and create a wide view image of an environment around the imaging device,
wherein the captured images include a distorted image and a standard image, and
wherein creating the wide view image includes merging pixels of the distorted image and the standard image.

US Pat. No. 10,218,903

DIGITAL 3D/360 DEGREE CAMERA SYSTEM

1. A method for exporting digital images in a digital camera system comprising a plurality of digital cameras, the method comprising: storing digital image data and embedded metadata in an electronic file;
generating, based on a calibration process that exposes pixels of the plurality of digital cameras to distinct coordinate points, a pixel vector map, wherein the pixel vector map includes a collection of data that identifies a geometry of each of the plurality of digital cameras in the digital camera system;
storing, based on the generated pixel vector map, pixel vector map data in the electronic file, wherein the file includes pixel vector map data describing the digital camera system; and
exporting the file via a communication interface to an external device, wherein the exporting includes delivering the image data and the pixel vector map data for processing, and wherein the exporting is based on a request received by the digital camera system from an external processing system and a determination that no more images are to be captured.

US Pat. No. 10,218,902

APPARATUS AND METHOD FOR SETTING CAMERA

Samsung Electronics Co., ...

1. A method for controlling an electronic device, the method comprising: detecting environmental information associated with the electronic device using a sensor, wherein the electronic device comprises the sensor, a first image sensor, and a second image sensor;
changing first setting information of the first image sensor based on the environmental information;
detecting a user's viewpoint;
selecting one of the first image sensor or the second image sensor based on the user's viewpoint; and
changing the first setting information or a second setting information of the second image sensor based on the selected image sensor,
wherein the detecting of the user's viewpoint comprises:
receiving orientation information from a head mounted display (HMD); and
determining the user's viewpoint based on the orientation information.

US Pat. No. 10,218,901

PICTURE COMPOSITION ADJUSTMENT

INTERNATIONAL BUSINESS MA...

1. A computer-implemented method comprising: determining, based on a predefined composition rule, whether a picture composition of a first object and a second object needs to be adjusted; wherein the determining comprises:
determining whether the first object overlaps with the second object;
in response to determining that the first object overlaps with the second object, determining whether a ratio of areas of a first region of the first object and a total region of the picture composition is less than a ratio threshold; and
in response to determining that the ratio is less than the ratio threshold, determining that the picture composition needs to be adjusted;
in response to determining that the picture composition needs to be adjusted, determining an adjusting pattern based on the predefined composition rule; and
providing the adjusting pattern to a user, to instruct the user to adjust the picture composition based on the adjusting pattern.
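The determination recited in this claim (adjust only when the objects overlap and the first object's area ratio falls below a threshold) can be sketched with axis-aligned rectangles. The rectangle model, the threshold value, and all names are assumptions for illustration, not the patented rule.

```python
# Sketch: the composition needs adjusting when the two objects overlap
# and the first object occupies less than `ratio_threshold` of the
# total picture area. Rectangles are (x, y, width, height).

def overlaps(a, b):
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    # standard axis-aligned rectangle intersection test
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def needs_adjustment(obj1, obj2, picture_area, ratio_threshold=0.1):
    if not overlaps(obj1, obj2):
        return False            # no overlap: composition is acceptable
    area1 = obj1[2] * obj1[3]   # area of the first object's region
    return area1 / picture_area < ratio_threshold
```

A small object overlapped by another triggers an adjustment; the same overlap with a large first object does not, since its area ratio already exceeds the threshold.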

US Pat. No. 10,218,900

DOCUMENT REORIENTATION PROCESSING

NCR Corporation, Atlanta...

1. A method, comprising: capturing, by a device, document images for a document;
identifying four edges for the document from the document images;
obtaining a camera preview image when a camera of the device is in a camera preview mode;
resolving an optimal orientation of the device for capturing an optimal image of the document based on the four edges and the camera preview image; and
activating the device to capture the optimal image for the document in the optimal orientation, including displaying on a display of the device an indication of the optimal orientation of the device;
wherein displaying includes presenting a guiding rectangle corresponding to the optimal orientation of the device in a screen on the display of the device and superimposed over the document images appearing in the display;
wherein presenting further includes presenting a graphical illustration within the screen that illustrates moving the device from a current orientation to the optimal orientation and identifying when particular edges of a particular document image instance are aligned with a top-leftmost corner of the guiding rectangle and when a center of the particular document image instance corresponds to a calculated center for the four edges.

US Pat. No. 10,218,899

CONTROL METHOD IN IMAGE CAPTURE SYSTEM, CONTROL APPARATUS AND A NON-TRANSITORY COMPUTER-READABLE STORAGE MEDIUM

Canon Kabushiki Kaisha, ...

1. A control method in an image capture system having a first image capture apparatus and a second image capture apparatus, the method comprising: analyzing a captured image captured by the first image capture apparatus;
deciding whether a likelihood of a capturing target that is an analysis result of the captured image is smaller than a predetermined threshold; and
in a case where the likelihood is smaller than the predetermined threshold, controlling an imaging area of the second image capture apparatus so that the imaging area of the second image capture apparatus becomes wider than in a case where the likelihood is not smaller than the predetermined threshold until the capturing target is found in a captured image captured by the second image capture apparatus,
wherein the controlling of the imaging area until the capturing target is found comprises: (a) making the imaging area wider, (b) determining whether or not the capturing target is found in the captured image captured by the second image capture apparatus using characteristic information on the capturing target, and (c) in response to the determination that the capturing target is not found in the captured image captured by the second image capture apparatus, repeating making the imaging area wider.
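The (a)-(b)-(c) loop in this claim (widen the second camera's imaging area, check for the target, repeat until found) can be sketched as follows. The zoom step, the upper bound, and the capture/detection callables are all hypothetical stand-ins; the claim does not specify them.

```python
# Sketch: when the first camera's detection likelihood falls below a
# threshold, repeatedly widen the second camera's imaging area until
# the capturing target is found in its captured image.

def search_with_second_camera(capture_fn, target_found_fn,
                              initial_width, step=1.5, max_width=100.0):
    width = initial_width
    while width <= max_width:
        image = capture_fn(width)       # (a) capture with a wider area
        if target_found_fn(image):      # (b) check for the target using
            return width                #     its characteristic information
        width *= step                   # (c) not found: widen again
    return None                         # gave up at the hardware limit
```

Starting at width 2.0 with a 1.5x step, a target that only becomes detectable at width 10 is found on the fifth capture, at width 10.125.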

US Pat. No. 10,218,898

AUTOMATED GROUP PHOTOGRAPH COMPOSITION

International Business Ma...

1. A computer-implemented method for the automatic composition of group photograph framings as a function of relationship data, comprising executing on a computer processor: identifying a person appearing to a user via a viewfinder of a camera within a photographic image framing for acquisition of image data by the camera;
determining a geographic location of an additional person who is related to the person identified within the photographic image framing, wherein the additional person does not appear to the user within the photographic image framing, and the determined geographic location of the additional person is within a specified proximity range to a geographic location of the person identified within the photographic image framing; and
in response to determining that a relationship of the additional person to the person identified within the image framing indicates that the additional person should be included within photographic images of the identified person, recommending that the additional person be added to the photographic image framing prior to acquisition of image data by the camera from the photographic image framing.

US Pat. No. 10,218,896

FOCUS ADJUSTMENT DEVICE, FOCUS ADJUSTMENT METHOD, AND NON-TRANSITORY STORAGE MEDIUM STORING FOCUS ADJUSTMENT PROGRAM

Olympus Corporation, Tok...

1. A focus adjustment device which includes an imager to receive a light flux passing through an imaging lens including a focus lens and then generate an image signal and which performs focus adjustment on the basis of the image signal, the focus adjustment device comprising:
a direction judgment unit which calculates an evaluation value based on an image signal of a focus detection region set in a region of the imager where the light flux is received, thereby judging a drive direction of the focus lens to be in focus based on a difference of an evaluation value of a different position of the focus lens; and
a control unit which controls a focus adjustment operation on the basis of the drive direction judged by the direction judgment unit,
wherein the control unit causes the direction judgment unit to repeatedly judge the drive direction, and after the focus lens is slightly driven in a first direction judged on the basis of a first evaluation value and then the focus lens is slightly driven in a second direction different from the first direction on the basis of a subsequently calculated second evaluation value, when a drive amount of the focus lens in the second direction which is calculated based on the second evaluation value does not exceed a predetermined drive amount, the control unit forbids the slight driving of the focus lens in the first direction even though a drive direction judged on the basis of a further subsequently calculated third evaluation value is the first direction and the slight driving of the focus lens in the second direction is continuously performed,
wherein the control unit does not forbid the slight driving of the focus lens in the first direction, when a drive amount of the focus lens in the second direction which is calculated based on the second evaluation value exceeds the predetermined drive amount, and causes the direction judgment unit to repeatedly judge the drive direction by slightly driving the focus lens in the first direction, and
wherein the control unit determines that the focus lens is in focus when a sum of a number of times when the drive direction of the slight driving of the focus lens is changed from the first direction to the second direction and a number of the times when the drive direction is changed from the second direction to the first direction becomes a predetermined value.
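Two pieces of the claimed logic lend themselves to a short sketch: judging the drive direction from successive evaluation values, and declaring focus once the number of direction reversals reaches a predetermined value. This is a simplified illustration; the function names and the sign convention are assumptions, and the claim's forbid/allow rules for small drive amounts are omitted.

```python
def judge_direction(ev_prev, ev_curr):
    """Drive toward a higher evaluation value (contrast): +1 if the
    evaluation value rose, -1 if it fell."""
    return 1 if ev_curr >= ev_prev else -1

def count_reversals(directions):
    """In-focus heuristic from the claim: count how many times the drive
    direction of the slight driving changes (first-to-second plus
    second-to-first direction changes)."""
    return sum(1 for a, b in zip(directions, directions[1:]) if a != b)
```

With a predetermined value of, say, 3, a drive history of `[1, 1, -1, 1, -1]` would be judged in focus, since it contains three reversals.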

US Pat. No. 10,218,895

ENHANCED FIELD OF VIEW TO AUGMENT THREE-DIMENSIONAL (3D) SENSORY SPACE FOR FREE-SPACE GESTURE INTERPRETATION

Leap Motion, Inc., San F...

1. A rim mounted space imaging apparatus, mounted in a rim of a display that has a vertical axis, comprising:
a camera mounted in a rim of a display with optical axis facing within 20 degrees of tangential to a vertical axis of the display;
at least one Fresnel prismatic element that redirects the optical axis of the camera, giving the camera a field of view that covers at least 45 to 80 degrees from tangential to the vertical axis of the display; and
a camera controller coupled to the camera that compensates for redirection by the Fresnel prismatic element and determines a position of at least one control object within the field of view of the camera.

US Pat. No. 10,218,894

IMAGE CAPTURING APPARATUS, IMAGE CAPTURING METHOD, AND STORAGE MEDIUM

Canon Kabushiki Kaisha, ...

1. An image capturing apparatus comprising:
an image sensor including a plurality of pixels, each pixel including a plurality of photoelectric conversion units that generate focus detection signals from light flux that has passed through different regions in an exit pupil in an optical system; and
an image capturing unit configured to continuously capture a plurality of images by using the image sensor, the image capturing unit being configured to acquire a signal in a first acquisition mode or a second acquisition mode for each pixel, the first acquisition mode being a mode in which an image signal obtained by adding the focus detection signals of the plurality of photoelectric conversion units is acquired, and the second acquisition mode being a mode in which the focus detection signals are acquired in addition to the image signal,
wherein the image capturing unit is configured to alternately capture a recording image and a focus detection image having a smaller number of pixels than the recording image, apply all pixels to the first acquisition mode when capturing the recording image, and apply at least a part of the pixels to the second acquisition mode when capturing the focus detection image.

US Pat. No. 10,218,893

IMAGE CAPTURING SYSTEM FOR SHAPE MEASUREMENT OF STRUCTURE, METHOD OF CAPTURING IMAGE OF STRUCTURE FOR SHAPE MEASUREMENT OF STRUCTURE, ON-BOARD CONTROL DEVICE, REMOTE CONTROL DEVICE, PROGRAM, AND STORAGE MEDIUM

Mitsubishi Electric Corpo...

1. An image capturing system for shape measurement of a structure, the image capturing system comprising:
an image capturing device configured to capture an image of a target structure to measure a shape of the target structure;
an air vehicle having the image capturing device mounted thereon, the air vehicle being configured to fly and be unmoved in air;
a distance measurement device mounted on the air vehicle and configured to measure a distance between the air vehicle and the target structure;
an image capturing scenario storage configured to store an image capturing scenario, the image capturing scenario including:
a plurality of image capturing points at each of which the air vehicle is unmoved in air with a distance from the target structure being maintained when capturing the image of the target structure to measure the shape of the target structure with the target structure being unmoved, and
a flight route set in accordance with a positional relation between the target structure and each of the image capturing points or coordinates of each of the image capturing points such that the air vehicle having the image capturing device mounted thereon and configured to capture the image of the target structure flies via the image capturing points sequentially;
an on-board control device mounted on the air vehicle, the on-board control device including:
an image capturing controller configured to control the image capturing device in accordance with the image capturing scenario, and
a flight controller configured to control the air vehicle in accordance with the image capturing scenario based on the distance measured by the distance measurement device; and
a remote control device including:
a scenario creator configured to create the image capturing scenario based on the image capturing points, and
a scenario transferor configured to transfer the image capturing scenario created by the scenario creator to the on-board control device to store the image capturing scenario in the image capturing scenario storage,
the scenario creator configured to:
check whether or not a path connecting a first image capturing point to a second image capturing point in a straight line meets the target structure, the first image capturing point and the second image capturing point being different image capturing points,
when the path does not meet the structure, create the flight route including the path connecting the first image capturing point to the second image capturing point in the straight line, and
when the path meets the structure, create the flight route including a path avoiding the target structure in flight between the first image capturing point and the second image capturing point.
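The scenario creator's path check can be sketched as a segment-versus-bounding-box test: does the straight line from the first image capturing point to the second meet the structure? This is an illustrative 2-D sketch under assumptions (the structure modeled as an axis-aligned box, a single detour waypoint), not the patented route planner; it uses parametric slab clipping.

```python
def segment_hits_box(p, q, box):
    """True if the straight path p->q (2-D points) intersects the
    axis-aligned box (xmin, ymin, xmax, ymax) modeling the structure."""
    xmin, ymin, xmax, ymax = box
    t0, t1 = 0.0, 1.0
    for s, d, lo, hi in ((p[0], q[0] - p[0], xmin, xmax),
                         (p[1], q[1] - p[1], ymin, ymax)):
        if d == 0:
            if s < lo or s > hi:
                return False            # parallel to this slab and outside it
        else:
            a, b = (lo - s) / d, (hi - s) / d
            if a > b:
                a, b = b, a
            t0, t1 = max(t0, a), min(t1, b)
            if t0 > t1:
                return False            # slab intervals do not overlap
    return True

def plan_route(p, q, box, detour):
    """Straight path when it does not meet the structure; otherwise a
    path via a detour waypoint avoiding the structure."""
    return [p, q] if not segment_hits_box(p, q, box) else [p, detour, q]
```

For instance, a path from (0, 0) to (10, 10) through a structure occupying (4, 4)–(6, 6) would be routed via a detour, while (0, 0) to (10, 0) would fly straight.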

US Pat. No. 10,218,891

COMMUNICATION APPARATUS, CONTROL METHOD FOR THE SAME, AND STORAGE MEDIUM FOR PRIORITY IMAGE TRANSFER

Canon Kabushiki Kaisha, ...

1. A communication apparatus, comprising:
a communication unit capable of communicating with an external apparatus;
a display unit configured to display an image;
an operation unit configured to accept a user operation; and
a designation unit configured to, in response to a first operation of pressing both a first operation member and a second operation member being performed during display of an image, designate the image being displayed as an image to be transferred, which is to be transferred to the external apparatus,
wherein in response to a second operation that is different from the first operation and uses a plurality of operation members being performed, the designation unit designates the image being displayed as an image to be priority transferred, which is to be transferred with greater priority than the image to be transferred, and
the plurality of operation members to be used in the second operation include at least one of the first operation member and the second operation member.

US Pat. No. 10,218,890

DEVICE FOR INHIBITING OPERATION OF AN IMAGE RECORDING APPARATUS

KHALIFA UNIVERSITY OF SCI...

1. A device for attachment to an image recording apparatus, the device comprising:
a blocker, attachable to said image recording apparatus, for inhibiting said apparatus from recording an image when attached thereto;
a transducer, connected to the blocker, for detecting a change of position of the blocker between an attached position, wherein said apparatus is inhibited, and another position; and
a controller, connected to the transducer, for storing the position of the blocker when attached to said image recording apparatus and indicating if the blocker has changed position after it has been attached;
wherein the blocker is a sticker for sticking over one or more image recording apparatus lenses or sensors.

US Pat. No. 10,218,889

SYSTEMS AND METHODS FOR TRANSMITTING AND RECEIVING ARRAY CAMERA IMAGE DATA

FotoNation Limited, (IE)...

1. A method of transmitting image data, comprising:
capturing image data using a first set of active cameras in an array of cameras;
generating a first line of image data by multiplexing at least a portion of the image data captured by the first set of active cameras using a predetermined process, wherein the predetermined process is selected from a plurality of predetermined processes for multiplexing captured image data;
generating a first set of additional data containing information identifying the cameras in the array of cameras that form the first set of active cameras and information indicating the predetermined process used to multiplex at least the portion of the image data;
transmitting the first set of additional data and the first line of image data;
capturing image data using a second set of active cameras in the array of cameras, wherein the second set of active cameras is different from the first set of active cameras;
generating a second line of image data by multiplexing at least a portion of the image data captured by the second set of active cameras;
generating a second set of additional data containing information identifying the cameras in the array of cameras that form the second set of active cameras; and
transmitting the second set of additional data and the second line of image data.
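One way to picture a transmitted unit is a line of multiplexed pixels preceded by additional data naming the active cameras and (for the first set) the multiplexing process. The sketch below is an assumption about the format, not the patent's wire layout; the field names and the round-robin interleave are illustrative.

```python
def make_packet(active_ids, process_id, pixel_rows):
    """Build one (additional data, line of image data) pair: interleave
    one pixel from each active camera's row in round-robin order, and
    record which cameras produced it and how they were multiplexed."""
    line = [px for group in zip(*pixel_rows) for px in group]   # multiplex
    header = {"cameras": list(active_ids), "process": process_id}
    return header, line
```

Interleaving rows `[1, 2]` and `[9, 8]` from cameras 0 and 2 yields the line `[1, 9, 2, 8]` alongside a header identifying those cameras.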

US Pat. No. 10,218,888

APPARATUS AND METHOD FOR PROVIDING A WIRELESS, PORTABLE, AND/OR HANDHELD, DEVICE WITH SAFETY FEATURES

1. An apparatus, comprising:
a case;
a plurality of cameras, wherein each camera of the plurality of cameras is configured to record or obtain a view, a picture, or a video, wherein each camera of the plurality of cameras is positioned on, or is housed by, the case;
a plurality of microphones, wherein each microphone of the plurality of microphones is configured to record or obtain audio, wherein each microphone of the plurality of microphones is positioned on, or is housed by, the case;
at least one speaker;
a processor, wherein the processor is specially programmed to control a plurality of operations of the apparatus;
a display, wherein the display displays the view, the picture, or the video, obtained by at least one camera of the plurality of cameras;
a touchscreen keyboard, wherein the touchscreen keyboard is displayed within a portion of the display;
a global positioning system, wherein the global positioning system determines a position or a location of the apparatus; and
a collision avoidance sensor, wherein the collision avoidance sensor detects an object, a structure, or an individual,
wherein the processor is specially programmed to automatically detect a texting operational mode, an e-mail operational mode, a game or gaming operational mode, or a speakerphone operational mode, of operation of the apparatus and, wherein, in response to detecting the texting operational mode, the e-mail operational mode, the game or gaming operational mode, or the speakerphone operational mode, of operation of the apparatus, the processor activates at least one camera of the plurality of cameras and activates the collision avoidance sensor, and further wherein the processor activates at least a portion of the display to display a view in front of, or an anticipated travel path of movement of, the apparatus, and further wherein the display simultaneously displays the view in front of, or the anticipated travel path of movement of, the apparatus and information for providing the continued use of the texting operational mode, the e-mail operational mode, the game or gaming operational mode, or the speakerphone operational mode, of operation of the apparatus, and further wherein the processor is specially programmed to automatically deactivate the at least one camera of the plurality of cameras and the collision avoidance sensor upon detecting a completion of the texting operational mode, the e-mail operational mode, the game or gaming operational mode, or the speakerphone operational mode, of operation of the apparatus, wherein the apparatus displays the view obtained by the at least one camera of the plurality of cameras, on or via the display, from a first time when the at least one camera of the plurality of cameras is activated until a second time when the at least one camera of the plurality of cameras is deactivated upon the detecting of the completion of the texting operational mode, the e-mail operational mode, the game or gaming operational mode, or the speakerphone operational mode, of operation of the apparatus,
wherein the apparatus stores information regarding a safe area of travel, and further wherein the global positioning system determines a position or location of the apparatus, wherein the processor determines whether or not the apparatus is located at a location outside of the safe area of travel, and wherein, if the apparatus is determined to be outside of the safe area of travel, the processor is specially programmed to activate the at least one camera of the plurality of cameras or programmed to activate a second camera of the plurality of cameras, to record a picture or video at the location of the apparatus, wherein the processor generates a notification message containing the picture or the video or containing a link to the picture or the video, and further wherein the apparatus transmits the notification message to a communication device associated with an authorized individual or law enforcement personnel.

US Pat. No. 10,218,887

PHOTOGRAPHY SYSTEM

Chronext Service Germany ...

1. Photography system comprising:
a rotating unit configured to rotate around a first axis,
an illumination unit,
a wristwatch holder, the wristwatch holder being mounted on the rotating unit,
a camera unit, and
a control unit,
wherein the control unit is configured to control the camera unit and the rotating unit such that a series of pictures of a wristwatch mounted on the wristwatch holder can be generated automatically, the pictures of the series of pictures showing the wristwatch in different angles with respect to the camera unit, and wherein the wristwatch holder comprises an arch-shaped portion adapted for mounting a wristwatch such that a backside of a watchcase of the wristwatch is exposed,
further comprising a chamber, wherein an interior of the chamber comprises the wristwatch holder;
wherein the illumination unit comprises at least one light dispersive element, wherein the at least one light dispersive element is attached to one or more inner sidewalls of the chamber, to the inner top surface of the chamber and/or to the inner bottom surface of the chamber.

US Pat. No. 10,218,886

PORTABLE DEVICE, COMMUNICATION SYSTEM, COMMUNICATION CONNECTION METHOD, AND COMPUTER-READABLE RECORDING MEDIUM

Olympus Corporation, Tok...

1. A portable device comprising:
a first communication unit configured to communicatively connect with an external communication device by using any one of a plurality of communication channels through which notification of a network identifier has been given;
a communication state determination unit configured to determine, out of the plurality of communication channels, a communication channel not used by other devices, or a communication channel used by a relatively small number of devices in comparison with other communication channels, as a communication channel in a preferable communication state;
a mode switching unit configured to switch a communication mode of the portable device between: a stable communication mode for establishing communication connection with the communication device by using a communication channel in the preferable communication state in the plurality of communication channels in accordance with a determination result obtained by the communication state determination unit; and a connection priority communication mode for establishing communication connection with the communication device by using a predetermined communication channel in the plurality of communication channels without determination by the communication state determination unit;
a communication controller configured to start communication connection with the communication device via the first communication unit; and
a second communication unit configured to communicatively connect with the communication device by a communication system different from a communication system of the first communication unit, wherein
the mode switching unit is configured to switch the communication mode of the portable device to the connection priority communication mode when a start instruction of the portable device is received from the communication device via the second communication unit, and
the communication controller is configured to start communication connection with the communication device via the first communication unit in the connection priority communication mode selected by the mode switching unit after the portable device is started in accordance with the start instruction.
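The two claimed modes reduce to a channel-selection policy: in the stable communication mode pick the least-used advertised channel, and in the connection priority communication mode skip the determination and take a predetermined channel. The sketch below is an illustrative reading; the function name, the usage map, and treating the first listed channel as the predetermined one are assumptions.

```python
def pick_channel(channels, usage, stable):
    """Select a channel per the claimed modes: stable mode returns the
    channel used by the fewest other devices; connection priority mode
    returns a predetermined channel without examining usage."""
    if not stable:
        return channels[0]                         # predetermined channel
    return min(channels, key=lambda c: usage.get(c, 0))
```

With channels 1, 6, and 11 and observed usage `{1: 3, 6: 0, 11: 2}`, stable mode selects channel 6, while connection priority mode takes channel 1 immediately.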

US Pat. No. 10,218,885

THROWABLE CAMERAS AND NETWORK FOR OPERATING THE SAME

1. An image capture apparatus comprising:
a housing suitable for being thrown into an airborne trajectory;
a first camera positioned in the housing with at least a partial view to the exterior environment;
a transmitter for transmitting a first image captured by the first camera;
a processing unit positioned in the housing having inputs electrically connected to the first camera and the transmitter; wherein
the processing unit identifies a feature of a ground-based subject of interest within the first image.

US Pat. No. 10,218,884

INFRARED VIDEO DISPLAY EYEWEAR

SEIKO EPSON CORPORATION, ...

1. A display system for viewing video images of objects comprising:
a wearable display including a transparent display that is positioned in a user's field of vision when the wearable display is worn;
a stereoscopic video camera device including at least two cameras that each capture video images of the surrounding environment; and
a projection system mounted to the wearable display that receives the video image from the stereoscopic camera device, and projects (i) a first video image onto a transparent left eye viewport portion of the transparent display, the first video image overlapping the user's left eye field of vision and (ii) a second video image onto a transparent right eye viewport portion of the transparent display, the second video image overlapping the user's right eye field of vision;
a belt that is disposed around a hard hat worn by the user;
a bracket attachment that is fastened to the belt; and
a support device that is rotatably fixed to the bracket attachment,
wherein the stereoscopic video camera device is attached to one end of the support device.

US Pat. No. 10,218,883

DIGITAL IMAGING AND ANALYSIS SYSTEM

THE BOARD OF REGENTS OF T...

1. A digital imaging, environmental sensing and analysis system, comprising:
a camera and data logger system encased by a weatherproof housing for easy deployment and maintenance and protection of the camera and data logger system from harsh conditions, which is associated with a memory to which imagery acquired by said digital camera is saved, wherein the digital camera is configured to be customized and pre-programmed, and wherein said imagery is subject to custom visualization;
a plurality of sensors electronically associated with the camera and data logger system, wherein data is stored and environmental thresholds from one or more of the sensors linked to the data logger are programmed to trigger the camera system to permit the camera and data logger system to acquire said imagery and image a same image footprint of said imagery in RGB, HSV, l*a*b* color spaces; and
wherein selectable regions of interest with respect to said imagery are saved in said memory.

US Pat. No. 10,218,882

FEEDBACK FOR OBJECT POSE TRACKER

Microsoft Technology Lice...

1. A computing device comprising:
a processor; and
a storage device comprising instructions, which when executed by the processor, configure the processor to:
receive data captured by at least one capture device, the data depicting a first object in an environment;
track, using a tracker, a real-world position and orientation of the first object in a tracking volume using the captured data and wherein the tracker is configured to track a real-world position of a second object in the environment; and
compute and output feedback about performance of the tracker, where the feedback encourages a user to move the first object for improved tracking of the object by the tracker and wherein the feedback is computed by applying an offset or a non-linear mapping to pose parameters of the first object and the second object.

US Pat. No. 10,218,881

IMAGING APPARATUS AND IMAGE PROCESSING APPARATUS

SONY CORPORATION, Tokyo ...

1. An imaging apparatus, comprising:
an imager configured to generate RAW data by imaging of a subject; and
circuitry configured to:
acquire first metadata generated automatically when the RAW data is generated,
process the RAW data to generate a first image signal for displaying a first image on a display,
cause the display to display selectable information to enable the user to select a setting of the imaging apparatus as setting information,
acquire second metadata including the setting information selected by the user after the RAW data is generated,
generate a second image signal for displaying a second image corresponding to the RAW data and based on the second metadata, and
output the RAW data, the first metadata and the second metadata.

US Pat. No. 10,218,880

METHOD FOR ASSISTED IMAGE IMPROVEMENT

1. A method for image processing, the method executed at least in part by a computer system and comprising:
a) acquiring a digital image as a collection of image pixel data;
b) for at least a first color channel, executing a first intra-channel analysis by calculating one or more statistical values that characterize the distribution of digital image values over a range;
c) reshaping each color channel of the acquired digital image by:
(i) expanding the range of values within the channel;
(ii) identifying at least first and second non-overlapping sub-ranges of the expanded range of values, wherein a boundary between the sub-ranges is defined according to the calculated statistical value from the at least the first color channel;
(iii) re-distributing values by expanding the first sub-range and compressing the second sub-range;
d) for at least one of the reshaped color channels, executing a second intra-channel analysis according to the re-distributed sub-ranges of values for the reshaped color channel and modifying at least brightness data for the at least one reshaped color channel, according to an analysis of a population of images, to form a color balanced image;
and
e) displaying the color balanced image.
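Steps b) and c) can be sketched for a single channel. This is a hedged illustration only: the claim does not fix which statistic defines the boundary or how far the sub-ranges move, so the choices below (channel mean as the statistic, a 60/40 split of the output range) are assumptions.

```python
def reshape_channel(values, out_max=255):
    """Expand a channel to [0, out_max], place a boundary at the channel
    mean (the intra-channel statistic), then stretch the sub-range below
    the boundary and compress the sub-range above it."""
    lo, hi = min(values), max(values)
    span = hi - lo or 1
    expanded = [(v - lo) * out_max / span for v in values]   # (i) expand range
    boundary = sum(expanded) / len(expanded)                 # statistical value
    split = 0.6 * out_max                                    # boundary's new position
    out = []
    for v in expanded:
        if v <= boundary:                                    # (iii) expand low sub-range
            out.append(v / boundary * split if boundary else 0.0)
        else:                                                # compress high sub-range
            out.append(split + (v - boundary) / (out_max - boundary)
                       * (out_max - split))
    return out
```

Applied to `[0, 100, 200]`, the channel first expands to `[0, 127.5, 255]`, and the mean value 127.5 is then pushed up to 153, brightening the lower sub-range.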

US Pat. No. 10,218,879

PRINT DATA GENERATOR, PRINTER, METHOD, AND COMPUTER-READABLE MEDIUM FOR GENERATING PRINT DATA IN THE HSV COLOR SPACE REDUCING COLORANTS USED

Brother Kogyo Kabushiki K...

5. A printer comprising:
a printing head configured to perform printing based on print data; and
a print data generator configured to perform a print data generating process to generate the print data from original full-color data, the print data being expressed with a smaller number of colors including a reference color and an emphasis color, than the original full-color data, the emphasis color being a chromatic color different from the reference color, the print data generating process comprising:
setting one of pixels forming the original full-color data as a target pixel;
converting RGB data representing a color of the target pixel into HSV data;
calculating a difference value between a hue value of the target pixel in the HSV data and a hue value of the emphasis color;
determining whether the hue value of the target pixel is within a specific hue range for the emphasis color, based on the calculated difference value;
in response to determining that the hue value of the target pixel is within the specific hue range for the emphasis color, setting a density value of the reference color of the target pixel based on a value value of the target pixel in the HSV data, and setting a density value of the emphasis color of the target pixel based on a saturation value of the target pixel in the HSV data and the calculated difference value; and
in response to determining that the hue value of the target pixel is out of the specific hue range for the emphasis color, setting the density value of the reference color of the target pixel based on a luminance value of the target pixel derived from the RGB data, and setting the density value of the emphasis color of the target pixel to zero.
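The per-pixel rule of claim 5 can be sketched end to end: convert RGB to HSV, test the hue against the emphasis color's range, and derive the two density values. This is an illustrative reading under stated assumptions: the 0–255 density scaling, the linear hue-distance weighting, and the Rec. 601 luminance formula are my choices, not the patent's.

```python
import colorsys

def print_densities(rgb, emphasis_hue, hue_range):
    """Return (reference density, emphasis density) for one target pixel.
    Inside the emphasis hue range: reference from V, emphasis from S
    weighted by hue distance. Outside it: reference from luminance,
    emphasis set to zero."""
    r, g, b = (c / 255.0 for c in rgb)
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    diff = min(abs(h - emphasis_hue), 1.0 - abs(h - emphasis_hue))  # hue wraps
    if diff <= hue_range:                        # within the specific hue range
        ref = round((1.0 - v) * 255)             # darker pixel -> more reference ink
        emp = round(s * ((1.0 - diff / hue_range) if hue_range else 1.0) * 255)
    else:                                        # out of the specific hue range
        lum = 0.299 * r + 0.587 * g + 0.114 * b  # luminance derived from RGB data
        ref = round((1.0 - lum) * 255)
        emp = 0
    return ref, emp
```

A saturated red pixel with red as the emphasis color gets full emphasis density and no reference density, while a blue pixel outside the hue range prints with reference colorant only.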

US Pat. No. 10,218,878

METHOD FOR CORRECTING DENSITY IRREGULARITY, PRINTING APPARATUS, AND IMAGING MODULE

Seiko Epson Corporation, ...

1. A method for correcting density irregularity comprising:
acquiring imaging data by imaging, using an imaging device mounted in a printing apparatus, a correcting pattern for correcting density irregularity, which contains a first pattern with first density and a second pattern with second density different from the first density; and
correcting the density irregularity based on the imaging data,
wherein the acquiring of the imaging data includes
acquiring first division data by imaging a first region of the correcting pattern with the imaging device in a first stopped position relative to the correcting pattern,
acquiring second division data by imaging a second region of the correcting pattern with the imaging device in a second stopped position relative to the correcting pattern, and
synthesizing the first division data and the second division data to acquire the imaging data.
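The acquire-and-correct flow can be sketched as: stitch the two division scans into one set of imaging data, then derive a per-position correction from it. The gain-toward-the-mean correction below is an assumption for illustration; the claim only requires that the irregularity be corrected based on the synthesized imaging data.

```python
def correct_density(first_division, second_division):
    """Synthesize the two division data sets captured at the two stopped
    positions, then return a multiplicative gain per position that
    flattens the measured density toward the pattern mean."""
    data = first_division + second_division          # synthesized imaging data
    mean = sum(data) / len(data)
    return [mean / d if d else 1.0 for d in data]    # per-position correction gains
```

For measured densities `[90, 100]` and `[110, 100]`, the gains boost the light column and attenuate the dark one so every corrected position lands on the mean of 100.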

US Pat. No. 10,218,877

IMAGE PROCESSING APPARATUS, AND IMAGE PROCESSING METHOD

KYOCERA Document Solution...

1. An image processing apparatus comprising:
an image acquiring portion configured to acquire an image for each page of a plurality of documents composing a booklet;
a pair generating portion configured to arrange a plurality of images acquired by the image acquiring portion in order of pages of the booklet, and to generate pairs from the arranged images, each of the pairs being a pair of pages adjacent to each other in a spread state of the booklet;
a first determination portion configured to determine whether or not a drawn image is present in a region having a predetermined width and including a boundary portion between two images in each of the pairs generated by the pair generating portion;
a second determination portion configured to determine whether or not the two images are drawn images of letters, and when determining that the two images are drawn images of letters, determine whether or not the two images show, by successive letters of the two images, a string of letters that compose one word, then when determining that the two images show a string of letters that compose one word, determine that there is drawing continuity of the drawn images of letters between the two images, and when determining that the two images do not show a string of letters that compose one word, determine that there is no drawing continuity of drawn images of letters between the two images;
a third determination portion configured to determine whether or not the two images have to be combined to each other, based on a determination result of the first determination portion and a determination result of the second determination portion; and
an image combining portion configured to combine the two images, when the third determination portion determines that the two images have to be combined to each other, wherein
the third determination portion determines that the two images have to be combined to each other, when it is determined by the first determination portion that a drawn image is not present in the region and it is determined by the second determination portion that there is the drawing continuity of drawn images of letters between the two images, and
the third determination portion determines that the two images do not have to be combined to each other, when it is determined by the first determination portion that a drawn image is not present in the region and it is determined by the second determination portion that there is no drawing continuity of drawn images of letters between the two images.
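The combine decision reduces to two tests on a pair of facing pages: the boundary region must be clear of drawn images, and the successive letters across the boundary must compose one word. The sketch below is illustrative; using a lexicon lookup to stand in for the claimed one-word determination is an assumption.

```python
def should_combine(left_letters, right_letters, boundary_clear, lexicon):
    """Third-determination sketch: combine the two page images only when
    the boundary region has no drawn image AND the letters spanning the
    boundary form a single word (drawing continuity)."""
    if not boundary_clear:                             # first determination failed
        return False
    return (left_letters + right_letters) in lexicon   # second determination
```

So a spread splitting "booklet" into "book" | "let" with a clear boundary is combined, while unrelated letters on either side are not.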

US Pat. No. 10,218,876

INFORMATION PROCESSING APPARATUS, METHOD OF CONTROLLING THE SAME, AND STORAGE MEDIUM

CANON KABUSHIKI KAISHA, ...

6. A method of controlling an information processing apparatus having a first wireless LAN interface that performs first wireless communication which is relayed through an access point and a second wireless LAN interface that performs second wireless communication which is not relayed through an access point, wherein the second wireless LAN interface is configured to perform the second wireless communication using a same frequency channel as a frequency channel of the first wireless communication in a state where the first wireless LAN interface is performing the first wireless communication, the method comprising the steps of:
determining whether or not the first wireless LAN interface is connected to the access point;
allowing the second wireless LAN interface to start the second wireless communication in a case where the first wireless LAN interface is determined to be connected to the access point; and
inhibiting the second wireless LAN interface from starting the second wireless communication in a case where the first wireless LAN interface is not determined to be connected to the access point.