US Pat. No. 10,219,124

TERMINATING AN INCOMING CONNECTION REQUEST AND ACTIVE CALL MOVEMENT

1. A method of telecommunication comprising:providing functionality via a primary communication device of a user to associate a secondary communication device with the user, the primary communication device having a processor and non-transitory memory;
defining a condition in which a communication action is to automatically occur that involves the secondary communication device based on a location identifier, the location identifier comprising one of: (i) a connection between the primary communication device and an access point, (ii) a location of the primary communication device, (iii) a Bluetooth connection between the primary communication device and the secondary communication device, and (iv) a direct wireless communication connection between the primary communication device and the secondary communication device;
automatically performing the communication action upon a determination that the condition is met based on the location identifier, the communication action comprising at least one of:
(a) terminating an incoming connection request to the primary device by forwarding the incoming connection request to the secondary communication device, and
(b) moving an existing connection from the primary communication device to the secondary communication device.
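The following Python sketch illustrates the claimed flow under stated assumptions: the two trigger flags, class names, and action strings are hypothetical and cover only two of the four location identifiers listed in the claim. When the location-based condition is met, an incoming request is forwarded to the secondary device and an active call is moved to it:

    # Illustrative sketch only; device/connection names are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class DeviceState:
        connected_to_home_ap: bool      # (i) primary device joined a known access point
        bluetooth_linked: bool          # (iii) Bluetooth link to the secondary device

    def handle_event(state: DeviceState, incoming_call: bool, active_call: bool) -> str:
        """Return the communication action to perform when the condition is met."""
        condition_met = state.connected_to_home_ap or state.bluetooth_linked
        if not condition_met:
            return "no-action"
        if incoming_call:
            return "forward-to-secondary"   # action (a): forward the incoming request
        if active_call:
            return "move-to-secondary"      # action (b): move the existing connection
        return "no-action"

    print(handle_event(DeviceState(True, False), incoming_call=True, active_call=False))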

US Pat. No. 10,219,123

PERIODIC AMBIENT WAVEFORM ANALYSIS FOR DYNAMIC DEVICE CONFIGURATION

Facebook, Inc., Menlo Pa...

1. A method comprising:by a computing system, generating a waveform fingerprint based on captured ambient audio data;
by the computing system, calculating a location of the computing system;
by the computing system, sending the generated waveform fingerprint and the location to a server;
by the computing system, receiving instructions from the server to adjust one or more device settings of an output device of the computing system, the instructions being based at least in part on identifying one or more audio fingerprints that match the generated waveform fingerprint and correlating one or more of the identified audio fingerprints to a physical environment of the computing system; and
by the computing system, adjusting one or more of the device settings of the output device of the computing system in accordance with the received instructions.
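As a rough illustration of the claimed loop, the Python sketch below builds a crude waveform fingerprint (dominant FFT bin per frame), matches it against stored fingerprints, and looks up output-device settings for the matched environment. The fingerprinting scheme, settings table, and names are invented for illustration and are not Facebook's implementation:

    import numpy as np

    def waveform_fingerprint(samples, frame=1024):
        frames = [samples[i:i + frame] for i in range(0, len(samples) - frame, frame)]
        # one dominant-frequency bin per frame serves as a crude fingerprint
        return np.array([np.argmax(np.abs(np.fft.rfft(f))) for f in frames])

    def match_environment(fp, stored):
        """stored: {environment_name: fingerprint}; pick the closest known fingerprint."""
        def distance(a, b):
            n = min(len(a), len(b))
            return np.mean(a[:n] != b[:n])
        return min(stored, key=lambda name: distance(fp, stored[name]))

    SETTINGS = {"noisy_street": {"volume": 0.9}, "quiet_office": {"volume": 0.3}}

    rng = np.random.default_rng(0)
    ambient = rng.normal(size=16000)                 # stand-in for captured ambient audio
    fp = waveform_fingerprint(ambient)
    stored = {"noisy_street": fp, "quiet_office": fp[::-1]}
    env = match_environment(fp, stored)
    print(env, SETTINGS[env])                        # settings the server might return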

US Pat. No. 10,219,122

STATE-BASED ELECTRONIC MESSAGE MANAGEMENT SYSTEMS AND CONTROLLERS

The Travelers Indemnity C...

1. A state-based message processing system comprising:a processing device;
an electronic communications network transceiver device in communication with the processing device; and
a memory device in communication with the processing device, the memory device storing instructions configured in a set of programmatically-distinct modules, the modules comprising:
(i) a message processing module,
(ii) a message intent analysis module,
(iii) a conversation management module, and
(iv) a conversation state analysis module, wherein the modules, when executed by the processing device, direct the processing device to: receive as input, into the message intent analysis module, data indicative of at least one intent rule for recognizing a message intent;
receive as input, into the conversation management module, data indicative of at least one conversation rule for generating an outgoing message;
receive as input, into the conversation state analysis module, data indicative of at least one conversation state rule for identifying a conversation state;
receive as input, into the message processing module and from a remote user device associated with a user, a text message from the user;
identify, by the message intent analysis module and based on (i) the text message and (ii) the at least one intent rule, a message intent associated with the text message;
generate, by the conversation management module and based on (i) the text message and (ii) the at least one conversation rule, an outgoing message to the user to transmit to the remote user device;
associate, by the conversation management module, the text message and the outgoing message with a conversation identifier that identifies a conversation;
identify, by the conversation state analysis module, a current state of the conversation based on the at least one conversation state rule and at least one of the following: (i) the message intent, (ii) the text message, and (iii) the outgoing message;
output, by the message processing module via the electronic communications network transceiver device, the outgoing message to the user; and
store, by the conversation state analysis module, an indication of the current state in association with the conversation.
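A minimal Python sketch of the four-module split follows; the rule formats, the example rules, and the function names are assumptions made for illustration only:

    import re

    INTENT_RULES = [(r"\bclaim\b", "file_claim"), (r"\bbalance\b", "check_balance")]
    CONVERSATION_RULES = {"file_claim": "Please describe the incident.",
                          "check_balance": "Your balance is available in the portal."}
    STATE_RULES = {"file_claim": "awaiting_incident_details",
                   "check_balance": "closed"}

    def message_intent(text):                   # message intent analysis module
        return next((i for pat, i in INTENT_RULES if re.search(pat, text, re.I)), "unknown")

    def outgoing_message(intent):               # conversation management module
        return CONVERSATION_RULES.get(intent, "Sorry, could you rephrase that?")

    def conversation_state(intent):             # conversation state analysis module
        return STATE_RULES.get(intent, "open")

    def process(text, conversation_id, store):  # message processing module
        intent = message_intent(text)
        reply = outgoing_message(intent)
        store[conversation_id] = conversation_state(intent)   # persist current state
        return reply

    store = {}
    print(process("I need to file a claim for my car", "conv-1", store), store)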

US Pat. No. 10,219,121

METHOD AND APPARATUS FOR TRANSMITTING AND RECEIVING MESSAGE USING CUSTOMIZED TAG

SAMSUNG ELECTRONICS CO., ...

1. A message receiving device, comprising: a message receiving unit configured to receive, using a communication network, a first message including a first customized tag from a message transmitting device, the first message comprising a plurality of items for selecting by the message receiving device;a control unit configured to control the message receiving device based on a control operation described in the first customized tag, the first customized tag being generated by the message transmitting device and the control operation being performed by displaying, at the message receiving device, the plurality of items included in the first message received from the message transmitting device based on the first customized tag and receiving a user input indicating a selection of at least one item from among the displayed plurality of items, the user input being made via the message receiving device by a user;
a message generating unit which generates a second message including a second customized tag comprising at least one value corresponding to the at least one item selected by the user at the message receiving device; and
a message transmitting unit which transmits the second message including the second customized tag to the message transmitting device,
wherein the first customized tag comprises at least one of (i) a device identifier of the message receiving device and (ii) a password as an attribute.

US Pat. No. 10,219,120

SYSTEM AND METHOD FOR SEAMLESS PUSH-TO-TALK SERVICES

Mutualink, Inc., Walling...

1. A system, comprising:a memory;
one or more processors coupled to the memory, wherein the one or more processors are configured to:
receive a telephone call from a computing device operated by a user, wherein the telephone call comprises an automatic number identification (ANI) identifying the computing device, wherein the computing device initiates the telephone call after determining that a voice quality associated with a data channel satisfies a threshold, wherein one or more talk groups are associated with the ANI;
request the one or more talk groups associated with the ANI;
receive instructions corresponding to the one or more talk groups associated with the ANI;
based on the instructions, set up a routing path to a voice communication session corresponding to a talk group of the one or more talk groups; and
route voice communications via the voice communication session to the talk group of the one or more talk groups to provide the user with push-to-talk (PTT) services.

US Pat. No. 10,219,119

FACILITATING USER INTERACTIONS BASED ON PROXIMITY

GROUPON, INC., Chicago, ...

1. A computer-implemented method for providing functionality based on location-based virtual groups of users of mobile devices, the method comprising:creating, by one or more programmed computing systems, a temporary location-based virtual group of users of mobile devices by
receiving information from a user to define the virtual group, the received information including an anchor point with a geographic location around which a geographic area of the virtual group is centered, termination criteria indicating when the virtual group will end, and user interaction rules that specify types of actions that users who are part of the virtual group may take;
receiving a request from a user to join the virtual group, the user having a mobile communication-capable device that provides information regarding a current geographic location of the user; and
determining whether to admit the user to the virtual group based at least in part on the current geographic location of the user being within the geographic area of the virtual group;
automatically providing, by one of the one or more programmed computing systems, functionality to users of the virtual group in accordance with the user interaction rules of the virtual group based at least in part on enabling communications between the mobile communication-capable devices of the users of the virtual group; and
creating, by one of the one or more programmed computing systems, a residual permission group allowing the users of the virtual group to communicate with each other even after termination of the virtual group.

US Pat. No. 10,219,118

METHOD AND TRANSMISSION NODE FOR PROVIDING DATA PACKETS TO A PLURALITY OF RECEIVERS USING NETWORK CODING

KONINKLIJKE KPN N.V., Ro...

1. A method for wirelessly providing a number of data packets to a plurality of receivers in a cell of a transmission node of a cellular telecommunications system, the method comprising:storing a number of network coded data packets at the transmission node;
cyclically transmitting the stored network coded data packets from the transmission node to the plurality of receivers;
wherein a number of transmitted network coded data packets in a cycle is at least equal to the number of data packets to be provided to each receiver of the plurality of receivers and wherein each network coded data packet is a linear combination of two or more data packets to be provided to each receiver.
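The sketch below illustrates the coding scheme in miniature over GF(2): each stored coded packet is an XOR combination of two or more original packets, the node cycles through at least as many coded packets as there are originals, and a receiver that collects K linearly independent combinations decodes them by Gaussian elimination. Packet sizes and the coefficient matrix are illustrative assumptions:

    import numpy as np

    rng = np.random.default_rng(1)
    K, PKT_LEN = 4, 8                                   # original packets and packet size
    originals = rng.integers(0, 256, size=(K, PKT_LEN), dtype=np.uint8)

    def encode(coeff_row):
        """XOR together the original packets selected by a GF(2) coefficient row."""
        out = np.zeros(PKT_LEN, dtype=np.uint8)
        for j, c in enumerate(coeff_row):
            if c:
                out ^= originals[j]
        return out

    coeffs = np.array([[1, 1, 0, 0],
                       [0, 1, 1, 0],
                       [0, 0, 1, 1],
                       [1, 1, 1, 0],
                       [1, 0, 1, 1],
                       [0, 1, 1, 1]], dtype=np.uint8)   # every row mixes >= 2 packets
    coded = [encode(row) for row in coeffs]             # the node cycles through these

    def decode(rx_coeffs, rx_payloads):
        """Gaussian elimination over GF(2); returns the K originals if rank == K."""
        A = np.array(rx_coeffs, dtype=np.uint8)
        B = np.array(rx_payloads, dtype=np.uint8)
        row = 0
        for col in range(K):
            pivot = next((r for r in range(row, len(A)) if A[r, col]), None)
            if pivot is None:
                return None                             # not yet enough independent packets
            A[[row, pivot]], B[[row, pivot]] = A[[pivot, row]], B[[pivot, row]]
            for r in range(len(A)):
                if r != row and A[r, col]:
                    A[r] ^= A[row]
                    B[r] ^= B[row]
            row += 1
        return B[:K]

    rx_idx = [0, 1, 2, 3]                        # combinations one receiver happened to hear
    recovered = decode(coeffs[rx_idx], [coded[i] for i in rx_idx])
    print(np.array_equal(recovered, originals))  # True: K independent packets suffice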

US Pat. No. 10,219,117

SYSTEMS AND METHODS FOR RADIO ACCESS INTERFACES

CalAmp Corp., Irvine, CA...

1. A vehicle telematics device, comprising:a processor; and
a communications interface coupled to the processor;
a memory coupled to the processor;
wherein, in response to one or more instructions stored in the memory, the vehicle telematics device:
communicates with a mobile communications device by using the communications interface;
provides data to the mobile communications device to generate provided data, where the provided data is to be transmitted to the remote server system; and
provides command data to the mobile communications device, wherein the command data instructs the mobile communication device to:
process the provided data to generate formatted data based on the command data;
wait for the remote server system to be available to the mobile communications device; and
transmit the formatted data to the remote server system by using at least one communication protocol based on the command data.

US Pat. No. 10,219,116

DETECTION OF MOBILE DEVICE LOCATION WITHIN VEHICLE USING VEHICLE BASED DATA AND MOBILE DEVICE BASED DATA

Allstate Insurance Compan...

1. A location analysis computing device comprising:a processing unit comprising at least one processor; and
a memory unit storing computer-executable instructions, which when executed by the processing unit, cause the location analysis computing device to:
receive first mobile device sensor data collected by mobile device accelerometers of a mobile device located within a vehicle, the first mobile device sensor data including first-axis accelerometer data, second-axis accelerometer data, and third-axis accelerometer data;
translate the first mobile device sensor data into X-axis accelerometer data, Y-axis accelerometer data, and Z-axis accelerometer data, resulting in translated first mobile device sensor data;
detect a first occurrence of an event in the translated first mobile device sensor data, wherein detecting the first occurrence of the event comprises determining that a first change in magnitude of the Z-axis accelerometer data exceeds a first predetermined threshold;
calculate a first occurrence vector comprising a first occurrence magnitude and a first occurrence angle based on the detected first occurrence of the event;
detect a second occurrence of the event in the translated first mobile device sensor data;
calculate, based on the detected second occurrence of the event, a second occurrence vector comprising a second occurrence magnitude and a second occurrence angle;
compare the calculated first occurrence vector and the calculated second occurrence vector; and
determine, based on the comparison, a position of the mobile device within the vehicle.
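The Python sketch below walks through the claimed steps with simulated data; the event threshold, the use of the lateral X/Y components for the occurrence vector, and the driver/passenger decision rule are invented assumptions, not Allstate's calibration:

    import numpy as np

    def detect_events(z, threshold=2.0):
        """Indices where the change in Z-axis acceleration exceeds the threshold."""
        return np.where(np.diff(z) > threshold)[0] + 1

    def occurrence_vector(x, y, idx):
        """Magnitude and angle of the lateral (X/Y) acceleration at an event."""
        vec = np.array([x[idx], y[idx]])
        return np.linalg.norm(vec), np.arctan2(vec[1], vec[0])

    def infer_position(v1, v2):
        (_, a1), (_, a2) = v1, v2
        return "driver side" if (a1 + a2) / 2 > 0 else "passenger side"   # invented rule

    # two simulated bumps with slightly different lateral signatures
    x = np.zeros(200)
    y = np.zeros(200)
    z = np.zeros(200)
    z[50], z[150] = 5.0, 4.0
    x[50], y[50], x[150], y[150] = 0.5, 1.2, 0.4, 0.9

    events = detect_events(z)                            # first and second occurrences
    v1 = occurrence_vector(x, y, events[0])
    v2 = occurrence_vector(x, y, events[1])
    print(infer_position(v1, v2))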

US Pat. No. 10,219,115

FACILITATION OF MOBILE DEVICE GEOLOCATION

1. A method, comprising:estimating, by a wireless network device of a wireless network comprising a processor, a probability density of a random variable associated with grouping location data, according to a geographic coordinate system, wherein the location data is representative of previous locations of a mobile device of mobile devices, wherein the estimating results in an estimated number of the mobile devices;
based on the estimated number of the mobile devices being determined to exceed a defined number of the mobile devices associated with a network capacity of a transceiver of the wireless network, generating an indication that additional network capacity beyond the network capacity is requested;
identifying, by the wireless network device, a location point based on the probability density; and
based on the location point and the indication, determining, by the wireless network device, a probable location of the mobile device according to a defined probability function.
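As an illustration of estimating a probability density over grouped location data and picking a location point from it, the sketch below uses a hand-rolled Gaussian kernel density estimate; the bandwidth, grid, capacity figure, and device-count proxy are assumptions, not the patent's method:

    import numpy as np

    def kde(points, grid, bandwidth=0.01):
        """Simple Gaussian kernel density estimate over 2-D (lat, lon) samples."""
        diff = grid[:, None, :] - points[None, :, :]              # (G, N, 2)
        sq = np.sum(diff ** 2, axis=-1) / (2 * bandwidth ** 2)
        return np.exp(-sq).sum(axis=1)                            # unnormalised density

    rng = np.random.default_rng(0)
    history = rng.normal([40.71, -74.00], 0.005, size=(500, 2))   # previous device fixes

    # grid of candidate location points; the mode of the density is the estimate
    lat = np.linspace(40.69, 40.73, 40)
    lon = np.linspace(-74.02, -73.98, 40)
    grid = np.array([[a, b] for a in lat for b in lon])
    density = kde(history, grid)
    location_point = grid[np.argmax(density)]

    # assumed proxy for the estimated device count: fixes near the density mode
    estimated_devices = int(np.sum(np.linalg.norm(history - location_point, axis=1) < 0.005))
    CAPACITY = 200                                  # assumed transceiver capacity
    needs_more_capacity = estimated_devices > CAPACITY
    print(location_point, estimated_devices, needs_more_capacity)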

US Pat. No. 10,219,114

MONITORING A STATUS OF A DISCONNECTED DEVICE BY A MOBILE DEVICE AND AN AUDIO ANALYSIS SYSTEM IN AN INFRASTRUCTURE

International Business Ma...

1. A computer program product for monitoring an operation status of a disconnected device by a mobile device and an audio analysis system in an infrastructure, wherein the mobile device has connectivity to the infrastructure and the disconnected device has no connectivity to the infrastructure, the computer program product comprising a computer readable storage medium having program code embodied therewith, the program code executable to:receive from the user, by the mobile device, a selection of the disconnected device in a list of disconnected devices;
determine, by the mobile device, a location of the mobile device;
determine, by the mobile device, whether the mobile device is in proximity to a predefined location of the disconnected device;
invoke, by the mobile device, passive listening of the mobile device to a sound generated by the disconnected device, in response to determining that the mobile device is in proximity to the predefined location of the disconnected device;
determine, by the mobile device, whether the sound can be detected by the mobile device;
stream, by the mobile device, audio with information of the location of the mobile device to the audio analysis system, in response to determining that the sound can be detected by the mobile device, wherein the audio is recorded during the passive listening; and
receive, by the mobile device, a notification of the operation status of the disconnected device.

US Pat. No. 10,219,113

IN-VEHICLE WIRELESS COMMUNICATION MANAGEMENT SYSTEM AND METHOD OF CONTROLLING SAME

HYUNDAI MOTOR COMPANY, S...

1. A method for managing wireless communication between vehicles, where each of the vehicles is equipped with a near field wireless communication router, the method comprising:acquiring, with a first router included in a first vehicle, power information from at least one wireless device located inside the first vehicle and from a second vehicle having a second router;
determining whether interference occurs in near field wireless communication based on the power information;
when it is determined that the interference occurs, determining a location of a source of the interference to be either an interior of the first vehicle or an exterior of the first vehicle; and
depending on the location of the source of the interference, requesting power adjustment to either the at least one wireless device or the second router such that the interference is mitigated,
wherein determining the location of the source of the interference comprises:
determining a received signal strength indicator (RSSI) of any wireless device of a plurality of wireless devices;
when the RSSI is not problematic, determining that the location of the source of the interference is the exterior of the first vehicle; and
when the RSSI is problematic, determining that the location of the source of the interference is the interior of the first vehicle.

US Pat. No. 10,219,112

DETERMINING DELIVERY AREAS FOR AERIAL VEHICLES

Amazon Technologies, Inc....

1. A delivery system comprising:a plurality of unmanned aerial vehicles, wherein the plurality of unmanned aerial vehicles comprises a first unmanned aerial vehicle and a second unmanned aerial vehicle;
at least one memory component; and
at least one computer processor,
wherein the at least one computer processor is configured to at least:
receive, from the first unmanned aerial vehicle, a first set of coordinates of the first unmanned aerial vehicle during a first delivery of at least a first item to a location at a first time;
identify a geolocation corresponding to the location;
determine a first level of uncertainty associated with at least one of the first unmanned aerial vehicle at the first time or the first set of coordinates;
determine a first geoscan comprising a first Gaussian distribution having a first mean location at the first set of coordinates and the first level of uncertainty;
define a first region at the location based at least in part on the first geoscan and the geolocation, wherein the first region comprises a second Gaussian distribution having the first mean location at the first set of coordinates and the first level of uncertainty;
store information regarding the first region in at least one data store;
receive a request for a second delivery of at least a second item to the location;
identify at least one attribute of at least one of the second delivery or the second item based at least in part on the request;
determine that the first region is suitable for the second delivery based at least in part on the at least one attribute;
determine a path from an origin to the first region; and
transmit, to a second unmanned aerial vehicle, an instruction to deliver at least the second item from the origin to the first region along the path.
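A minimal sketch of the geoscan idea follows: a delivery region is kept as a Gaussian mean plus a level of uncertainty, a suitability test compares that uncertainty against the clearance the second delivery needs, and a path is computed from an origin to the region. The symmetric-Gaussian model, the suitability rule, and all numbers are assumptions:

    from dataclasses import dataclass

    @dataclass
    class Region:
        mean_lat: float
        mean_lon: float
        sigma_m: float            # one-sigma positional uncertainty in metres

    def suitable(region: Region, required_clearance_m: float, k: float = 2.0) -> bool:
        """Treat k-sigma of the Gaussian as the delivery footprint and require it
        to fit inside the clearance the second item needs."""
        return k * region.sigma_m <= required_clearance_m

    def straight_path(origin, region: Region, steps: int = 5):
        """A simple straight-line path from the origin to the region's mean."""
        lat0, lon0 = origin
        return [(lat0 + (region.mean_lat - lat0) * t / steps,
                 lon0 + (region.mean_lon - lon0) * t / steps) for t in range(steps + 1)]

    first_region = Region(47.6062, -122.3321, sigma_m=1.5)     # from the first delivery
    print(suitable(first_region, required_clearance_m=5.0))
    print(straight_path((47.60, -122.33), first_region))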

US Pat. No. 10,219,111

VISITATION TRACKING SYSTEM

Snap Inc., Santa Monica,...

1. A method comprising:retrieving location data from a client device;
identifying a geo-cell from among a set of geo-cells based on the location data from the client device;
accessing a database that comprises location identifiers of one or more physical locations within the geo-cell, each location identified by the location identifiers associated with media content, the media content including a first media object associated with a first location within the geo-cell;
determining a user density of at least the first location located within the geo-cell, the user density based on a number of requests to access the first media object associated with the first location;
ranking the first location among the location identifiers within the database based on the user density of the first location; and
loading the first media object at the client device based on the ranking of the first location among the location identifiers.
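The ranking step reduces to counting access requests per location and ordering locations by that user density, as in the short sketch below; the request log and identifiers are hypothetical stand-ins for the database described in the claim:

    from collections import Counter

    access_requests = [("loc-1", "media-1"), ("loc-2", "media-7"),
                       ("loc-1", "media-1"), ("loc-1", "media-1")]   # (location, media object)

    def rank_locations(requests):
        """User density per location = number of requests for its media object."""
        density = Counter(loc for loc, _ in requests)
        return [loc for loc, _ in density.most_common()]

    ranking = rank_locations(access_requests)
    print(ranking)        # the client loads media for the top-ranked location first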

US Pat. No. 10,219,110

SYSTEM TO TRACK ENGAGEMENT OF MEDIA ITEMS

Snap Inc., Santa Monica,...

1. A system comprising:a memory; and
at least one hardware processor coupled to the memory and comprising an engagement tracking application that causes the system to perform operations comprising:
delivering an ephemeral message to a client device, the ephemeral message including a media item associated with a physical location;
receiving an access request for the media item from the client device;
detecting an exposure of the client device to the media item, based on the receiving the access request;
tracking user actions of the client device, in response to the detecting the exposure of the client device to the media item;
detecting the client device at the physical location associated with the media item, based on the tracking the user actions of the client device; and
calculating an engagement score of the media item based on the tracked user actions, in response to the detecting the client device at the physical location associated with the media item.

US Pat. No. 10,219,109

METHOD, SYSTEM AND DEVICE FOR ENABLING AN OBJECT TO ACCESS A THIRD PARTY ASSET

1. A computer implemented method of providing access to a third party asset or a service provided by said third party asset, the method comprising the steps of:obtaining location and time data for each of a plurality of mobile electronic devices, each of said plurality of mobile electronic devices being associated with a respective one of a plurality of users, said plurality of users comprising a group of associated users;
comparing location and time data obtained for at least one of said plurality of mobile electronic devices to location and time data obtained for at least one other of said plurality of mobile electronic devices to determine that said at least two mobile electronic devices remain in close proximity to each other for a predetermined period of time and over a predetermined distance to thereby infer that at least two users of said group of associated users have remained in close proximity for said predetermined period of time and over said predetermined distance;
determining from said comparison of location and time data for said at least two mobile electronic devices that said at least two mobile electronic devices have reached a location on or within a boundary of a geo-fence associated with a third party asset;
in response to a determination that said at least two mobile electronic devices have reached a location on or within a boundary of said geo-fence, communicating data to a system or device of the third party associated with the geo-fence, said communicated data providing data relating to an identity and/or an attribute of one or more users from the group of associated users to enable the third party system or device to provide said one or more users of the group of associated users with access to the third party asset or a service provided by said third party asset.

US Pat. No. 10,219,108

USE OF POSITIONING REFERENCE SIGNAL CONFIGURATION AS INDICATION OF OPERATIONAL STATE OF A CELL

Sprint Spectrum L.P., Ov...

1. A method for using a positioning reference signal (PRS) to communicate an operational state of a cell, wherein a base station broadcasts the PRS in the cell to facilitate mobile device location determination, the method comprising:determining by the base station the operational state of the cell;
based on the determined operational state of the cell, selecting by the base station a PRS frequency-shift from a plurality of PRS frequency-shifts; and
broadcasting by the base station the PRS with the selected PRS frequency-shift as an indication of the operational state of the cell,
whereby the broadcast PRS serves a dual purpose of facilitating mobile device location determination and providing the indication of the operational state of the cell.

US Pat. No. 10,219,107

TRACKING DEVICE LOCATION IDENTIFICATION

Tile, Inc., San Mateo, C...

1. A method for determining a last known location of a tracking device, the method comprising:receiving, by a tracking server from a first mobile device, a disconnection event indicating that a first tracking device has disconnected from the first mobile device, the disconnection event comprising an identifier of the first tracking device, a location of the first mobile device when the first tracking device disconnects from the first mobile device, and a first timestamp representative of a time of the disconnection;
receiving, by the tracking server, a plurality of location updates from one or more mobile devices of a plurality of mobile devices other than the first mobile device, each location update in response to a corresponding mobile device detecting the first tracking device and comprising a location of the corresponding mobile device when the corresponding mobile device detected the tracking device, a measure of accuracy of the location, the identifier of the first tracking device, and a timestamp representative of a time of the detection;
selecting, by the tracking server, one or more location updates of the plurality of location updates based at least in part on the timestamp of the disconnection event and the timestamps of the one or more location updates;
determining, based on the selected location updates and the disconnection event, a last known location for the first tracking device; and
storing, by the tracking system, the last known location in association with the first tracking device.
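A compact sketch of the selection and determination steps follows; the claim only requires selecting updates by timestamp, so the recency/accuracy tie-break used here is an illustrative assumption:

    from dataclasses import dataclass

    @dataclass
    class Update:
        lat: float
        lon: float
        accuracy_m: float
        timestamp: float

    def last_known_location(disconnect: Update, updates: list[Update]):
        # keep only updates reported after the disconnection event
        later = [u for u in updates if u.timestamp > disconnect.timestamp]
        if not later:
            return (disconnect.lat, disconnect.lon)
        # among those, prefer the most recent; break ties with the best accuracy
        best = max(later, key=lambda u: (u.timestamp, -u.accuracy_m))
        return (best.lat, best.lon)

    disc = Update(37.56, -122.32, 10.0, timestamp=1000.0)
    sightings = [Update(37.57, -122.31, 30.0, 1010.0), Update(37.58, -122.30, 8.0, 1050.0)]
    print(last_known_location(disc, sightings))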

US Pat. No. 10,219,106

SECURE BLE BROADCAST SYSTEM FOR LOCATION BASED SERVICE

Hong Kong Applied Science...

1. An apparatus comprising:a memory configured to store a mapping data structure received from a server, the mapping data structure associated with a time period and includes a plurality of encoded beacon identifiers (IDs), each encoded beacon ID of the plurality of encoded beacon IDs mapped to a corresponding beacon device identifier,
a receiver configured to receive an encoded beacon ID from a beacon device associated with a location based service, the encoded beacon ID generated based on a beacon device identifier of the beacon device and a time value; and
a processor coupled to the memory and the receiver, the processor configured to:
in response to a determination that the encoded beacon ID is included in the mapping data structure, identify, based on the mapping data structure, a beacon identifier that corresponds to the encoded beacon ID; and
generate a beacon application identified based on the beacon identifier, the beacon application corresponds to the location based service.

US Pat. No. 10,219,105

APPARATUS AND METHOD FOR DISTANCE-BASED OPTION DATA OBJECT FILTERING AND MODIFICATION

Groupon, Inc., Chicago, ...

1. An apparatus comprising at least one processor and at least one memory coupled to the processor, wherein the processor is configured to at least:determine a triangulation location of a user;
determine a predefined geographic area that encompasses the triangulation location of the user;
receive a predetermined distance parameter associated with the predefined geographic area;
receive a set of option data objects associated with the user;
extract a set of option data object parameters from each option data object within the set of option data objects, wherein the option data object information comprises:
an identification of a location associated with the option data object; and
a weighted value associated with the option data object;
calculate, based at least in part on the location associated with the option data object, the triangulation location of the user, and the predefined geographic area, a distance associated with the option data object;
determine whether the distance associated with the option data object exceeds the predetermined distance parameter; and
in response to determining whether the distance associated with the option data object exceeds the predetermined distance parameter, calculate an updated weighted value associated with the option data object.
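The sketch below shows one way the distance test and weight update could fit together; the haversine distance and the specific reweighting factors are illustrative choices, not Groupon's rules:

    import math

    def haversine_km(lat1, lon1, lat2, lon2):
        r = 6371.0
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
        a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
        return 2 * r * math.asin(math.sqrt(a))

    def reweight(option, user_latlon, max_distance_km):
        d = haversine_km(*user_latlon, option["lat"], option["lon"])
        weight = option["weight"]
        # recalculate the weighted value depending on whether the distance
        # exceeds the predetermined distance parameter
        return weight * 0.5 if d > max_distance_km else weight * 1.1

    option = {"lat": 41.88, "lon": -87.63, "weight": 1.0}
    print(reweight(option, (41.97, -87.90), max_distance_km=10.0))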

US Pat. No. 10,219,104

SYSTEM AND METHOD FOR COORDINATING MEETINGS BETWEEN USERS OF A MOBILE COMMUNICATION NETWORK

1. A system comprising:one or more servers accessible from a network via one or more network interfaces;
one or more databases communicatively coupled to the one or more servers, the one or more servers configured to:
identify a first location corresponding to a current location of a first mobile device associated with a first end user of a plurality of end users;
identify a category based at least in part on detected user input as entered in real-time by the first end user into the first mobile device, wherein the category:
(a) relates to a type of location and is identified without a selection of the first end user;
(b) is identified based in part on parsing the user input received via the first mobile device to identify one or more keywords; and
(c) is identified based at least in part on the one or more keywords;
identify a chain of businesses as a preference of the first end user;
identify first information that is associated with the first end user, the first information comprising one or more of:
profile information relating to the first end user;
interest information relating to interests of the first end user;
historical information relating to past activities of the first end user; and/or
preference information corresponding to one or more businesses associated with the first end user;
select a second location for proposal for the first end user, the second location corresponding to a first business, wherein the selecting is based at least in part on the first location, the category, identifying the first business as associated with the chain of businesses, and one or more of:
the interest information relating to interests of the first end user;
the historical information relating to past activities of the first end user; and/or
the preference information corresponding to one or more businesses associated with the first end user; and
transmit an indication of the second location for proposal to the first mobile device.

US Pat. No. 10,219,103

POWER-EFFICIENT LOCATION ESTIMATION

Google LLC, Mountain Vie...

1. A method comprising:scanning, by a wireless computing device, a frequency set of frequencies used by a first group of base stations;
determining a first location of the wireless computing device;
selecting, by the wireless computing device, a frequency subset of the frequency set, wherein the frequency subset excludes at least one frequency in the frequency set, wherein a second group of base stations use the respective frequencies of the frequency subset, and wherein the second group of base stations is a subset of the first group of base stations;
estimating, by the wireless computing device, a second location of the wireless computing device based on information relating to one or more base stations in the second group of base stations;
determining, by the wireless computing device, whether the first location and the second location are within a threshold distance of one another;
after determining that the first location and the second location are within the threshold distance of one another, the wireless computing device performing an additional frequency scan of the frequency subset of the frequency set;
after performing the additional frequency scan, the wireless computing device determining that a third group of base stations use the respective frequencies of the frequency subset, wherein the third group of base stations is different from the second group of base stations; and
based on the third group of base stations being different from the second group of base stations, the wireless computing device estimating a third location of the wireless computing device.

US Pat. No. 10,219,102

METHOD FOR RECOGNIZING LOCATION AND ELECTRONIC DEVICE IMPLEMENTING THE SAME

Samsung Electronics Co., ...

1. An electronic device comprising:a communication module comprising communication circuitry configured to receive access point (AP) information from each of a plurality of AP devices;
a memory; and
a processor,
wherein the processor is configured to:
obtain, from the AP information, a strength value of a signal and a unique value of the AP device that transmits the signal,
process the AP information based on the strength value of the signal and the unique value of the AP device,
determine whether the received AP information satisfies a condition for performing a predefined function for controlling an external electronic device with regard to a previously stored AP list stored in the memory, and
perform the predefined function if the condition is satisfied.

US Pat. No. 10,219,101

INFORMATION DELIVERY SYSTEM FOR SENDING REMINDER TIMES BASED ON EVENT AND TRAVEL TIMES

Sony Corporation, Tokyo ...

1. An advertisement delivery server comprising:at least one circuitry configured to
receive past location information and date and time information of at least one mobile communication terminal, the past location information indicating at least one past location of the at least one mobile communication terminal, the at least one past location being different from a current location of the at least one mobile communication terminal, the date and time information indicating a past date and time at which the at least one mobile communication terminal was present at the at least one past location;
store advertisement data and delivery condition data, wherein
the advertisement data includes text and/or image data, and
the delivery condition data includes
a keyword to determine a target user of the advertisement data,
an advertisement serving period of the advertisement data,
an advertisement delivery range from an advertisement delivery point, and
user attribute information, the user attribute information including an age of a user, a gender of a user and/or an interest of a user;
select a mobile communication terminal to which the advertisement data is to be delivered based on the delivery condition data; and
deliver the advertisement data to the selected mobile communication terminal.

US Pat. No. 10,219,100

DETERMINING PROXIMITY FOR DEVICES INTERACTING WITH MEDIA DEVICES

1. A media device, comprising:a controller in electrical communication with systems including
a data storage system including configuration data associated with configuring the media device,
a radio frequency (RF) system including at least one RF antenna configured to be selectively electrically de-tunable, the RF antenna electrically coupled with a plurality of RF transceivers that communicate using different wireless protocols,
an audio/video (A/V) system including at least one loudspeaker electrically coupled with a power amplifier and at least one microphone electrically coupled with a preamplifier,
wherein the RF system is configured to detect a RF signature, a RF signal strength, or both of one or more other wireless devices including a wireless activity monitoring and reporting device configured to be worn by a user, the data comprises alarm data set by the user for execution by the wireless activity monitoring and reporting device, and the controller commandeers execution of the alarm using the alarm data only when the wireless activity monitoring and reporting device is within a first proximity distance of the media device, the RF system configured to electrically de-tune the RF antenna to determine proximity, location, or both of the one or more other wireless devices, and
wherein the controller is configured to process the RF signature, the RF signal strength, or both, to determine the proximity, the location, or both of the one or more other wireless devices relative to the media device, the controller being configured to process the RF signature including establishing a wireless communications link with the one or more other wireless devices using the RF system, the controller being further configured to harvest data from the one or more other wireless devices using the wireless communications link.

US Pat. No. 10,219,099

SYSTEM FOR COMPENSATION OF PRESENTATION BY WIRELESS DEVICES

AMAZON TECHNOLOGIES, INC....

1. A first audio device comprising:a first wireless local area network (WLAN) interface;
a first Bluetooth interface coupled to a first wireless speaker;
a first memory storing first computer-executable instructions;
a first hardware processor to execute the first computer-executable instructions to:
determine an expected number of samples of audio data per unit time;
send, at a first time, first audio data to the first wireless speaker, wherein the first audio data is sampled at a first sample rate;
determine, for a plurality of time intervals between the first time and a second time, an actual number of samples of the first audio data sent per unit time to the first wireless speaker;
determine, for a portion of the plurality of time intervals, a first set of residuals by calculating a difference between the expected number of samples per unit time and the actual number of samples sent per unit time for a particular one of the portion of the plurality of time intervals;
determine a first regression line using a linear regression of the first set of residuals;
determine, based on a slope of the first regression line, a drift value;
retrieve, based on the drift value, a skew value from previously stored data, wherein the skew value is indicative of an initial delay between sending audio data to the first wireless speaker and output of corresponding audio by the first wireless speaker;
determine an adjusted number of samples per unit time by summing an additive inverse of the drift value to the expected number of samples per unit time; and
send, at a third time to the first wireless speaker, second audio data that includes a presentation delay with a length of the skew value and wherein the second audio data is sent at the adjusted number of samples per unit time.
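The drift estimate in the claim is a first-order regression over per-interval sample-count residuals; the numpy sketch below reproduces that arithmetic with invented numbers and an assumed skew lookup table:

    import numpy as np

    EXPECTED = 48000                       # expected samples per second
    intervals = np.arange(60)              # one-second intervals between t1 and t2
    rng = np.random.default_rng(0)
    # actual samples sent: a slow clock drift plus measurement noise
    actual = EXPECTED - 0.8 * intervals + rng.normal(0, 5, size=intervals.size)

    residuals = EXPECTED - actual                          # expected minus actual per interval
    slope, intercept = np.polyfit(intervals, residuals, 1) # first-order regression line
    drift = slope                                          # accumulating error per interval

    SKEW_TABLE = {True: 0.150, False: 0.050}               # assumed stored skew values (s)
    skew = SKEW_TABLE[drift > 0.5]

    adjusted_rate = EXPECTED + (-drift)                    # sum the additive inverse of drift
    print(round(drift, 2), skew, round(adjusted_rate, 2))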

US Pat. No. 10,219,098

LOCATION ESTIMATION OF ACTIVE SPEAKER

GM GLOBAL TECHNOLOGY OPER...

1. A method of performing an estimation of a location of an active speaker in real time, the method comprising:designating any one microphone of an array of microphones as a reference microphone;
storing a relative transfer function (RTF) for each microphone of the array of microphones other than the reference microphone associated with each potential location among potential locations as a set of stored RTFs;
obtaining a voice sample of the active speaker and obtaining a speaker RTF for each microphone of the array of microphones other than the reference microphone;
performing an RTF projection of the speaker RTF for each microphone on the set of stored RTFs; and
determining, using a processor, one of the potential locations as the location of the active speaker based on the performing the RTF projection, wherein the obtaining the speaker RTF for each microphone of the array of microphones other than the reference microphone includes computing, for each of the potential locations, a ratio of an acoustic transfer function of the voice sample at the microphone to an acoustic transfer function of the voice sample at the reference microphone.
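As a sketch of the RTF-projection step, the code below scores a measured speaker RTF against stored per-location RTF vectors with a normalised inner product and picks the best-scoring location; the array geometry, the stored RTFs, and the use of this particular projection metric are assumptions:

    import numpy as np

    rng = np.random.default_rng(3)
    N_LOC, N_MIC, N_FREQ = 8, 4, 129          # candidate locations, mics, frequency bins

    # stored relative transfer functions: one complex vector per location,
    # stacking mics 2..M relative to the reference microphone (mic 1)
    stored_rtfs = (rng.normal(size=(N_LOC, (N_MIC - 1) * N_FREQ))
                   + 1j * rng.normal(size=(N_LOC, (N_MIC - 1) * N_FREQ)))

    # speaker RTF obtained from the voice sample: here, location 5 plus noise
    speaker_rtf = stored_rtfs[5] + 0.1 * rng.normal(size=stored_rtfs.shape[1])

    def projection_score(a, b):
        """Magnitude of the normalised inner product between two RTF vectors."""
        return np.abs(np.vdot(a, b)) / (np.linalg.norm(a) * np.linalg.norm(b))

    scores = [projection_score(speaker_rtf, r) for r in stored_rtfs]
    print("estimated location index:", int(np.argmax(scores)))   # expect 5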

US Pat. No. 10,219,097

METHOD AND APPARATUSES FOR IMPLEMENTING A HEAD TRACKING HEADSET

Nokia Technologies Oy, E...

1. An apparatus comprising at least one processor and at least one memory including computer code for one or more programs, the at least one memory and the computer code configured to with the at least one processor cause the apparatus to:receive at least one of position and orientation of a first head-mounted device in relation to a first user device, wherein the at least one of the position and orientation received is used to train a model using machine learning;
detect input data indicative of a change in the at least one of the position and orientation of the first head-mounted device in relation to the first user device;
determine at least one signal quality parameter based on the input data, wherein the at least one signal quality parameter is measured between the first head-mounted device and the first user device; and
determine a filter pair corresponding with a direction to which a spatial audio signal is rendered based at least in part on the at least one signal quality parameter and the model so as to control spatial audio signal reproduction to take effect the change in the at least one of the position and orientation of the first head-mounted device during rendering of the spatial audio signal.

US Pat. No. 10,219,096

SEAT-OPTIMIZED REPRODUCTION OF ENTERTAINMENT FOR AUTONOMOUS DRIVING

Bayerische Motoren Werke ...

1. A method for routing signal components of individual channels of a multichannel audio signal to multiple loudspeakers in a vehicle having at least one front seat position in a front audio zone of the vehicle and at least one rear seat position in a rear audio zone of the vehicle, wherein the rear audio zone is arranged behind the front audio zone in a forward direction of the vehicle, wherein, referenced to the forward direction, the vehicle has at least one left front loudspeaker and one right front loudspeaker that are arranged in front of the front seat position; one left center loudspeaker and one right center loudspeaker that are arranged in front of the rear seat position; and a central front loudspeaker that is arranged centrally in front of the front seat position; the method comprising:routing the signal components of the individual channels of the multichannel audio signal to the loudspeakers, so that the multichannel audio signal is reproduced stereophonically in both the front and the rear audio zones, wherein when the front seat position is directed in the forward direction of the vehicle, a left channel is routed to the left front loudspeaker and the left center loudspeaker, a right channel is routed to the right front loudspeaker and the right center loudspeaker, a center channel is routed to the central front loudspeaker, a left surround channel is routed to the left center loudspeaker, and a right surround channel is routed to the right center loudspeaker; and
sensing whether the front seat position has been rotated in a backward direction of the vehicle;
wherein, when the front seat position is rotated in the backward direction of the vehicle, the left channel is routed only to the left center loudspeaker, the right channel is routed only to the right center loudspeaker, the left surround channel is routed to the left front loudspeaker and the right surround channel is routed to the right front loudspeaker, and routing of the center channel to the central front loudspeaker is interrupted.
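The routing logic amounts to two channel-to-loudspeaker tables selected by the sensed seat orientation, as in the sketch below; the channel and speaker names follow the claim, while the dictionary representation is merely illustrative:

    FORWARD = {
        "L":  ["front_left", "center_left"],
        "R":  ["front_right", "center_right"],
        "C":  ["front_center"],
        "Ls": ["center_left"],
        "Rs": ["center_right"],
    }
    ROTATED_BACKWARD = {
        "L":  ["center_left"],           # left channel only to the left center speaker
        "R":  ["center_right"],
        "C":  [],                        # center-channel routing interrupted
        "Ls": ["front_left"],            # surround channels swap to the front speakers
        "Rs": ["front_right"],
    }

    def route(channel: str, seat_rotated_backward: bool) -> list[str]:
        table = ROTATED_BACKWARD if seat_rotated_backward else FORWARD
        return table[channel]

    print(route("L", seat_rotated_backward=False))
    print(route("Ls", seat_rotated_backward=True))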

US Pat. No. 10,219,095

USER EXPERIENCE LOCALIZING BINAURAL SOUND DURING A TELEPHONE CALL

1. A method that improves a user experience during a video call between a first user and a second user when the first user holds a handheld portable electronic device (HPED) less than one meter away from a face of the first user, the method comprising:displaying, on a display of the HPED and while the first user holds the HPED less than one meter away from the face of the first user, the second user engaged in the video call with the first user; and
improving the user experience of the first user during the video call by convolving, with a processor, a voice of the second user with far-field head related transfer functions (HRTFs) to a location behind the HPED relative to the face of the first user while the HPED is a near-field distance from the face of the first user.

US Pat. No. 10,219,094

ACOUSTIC DETECTION OF AUDIO SOURCES TO FACILITATE REPRODUCTION OF SPATIAL AUDIO SPACES

1. An apparatus comprising:a housing;
a plurality of transducers disposed in the housing and configured to emit audible acoustic signals into a region external to the housing, the region including one or more audio sources;
a plurality of acoustic probe transducers configured to emit ultrasonic signals, at least a subset of the acoustic probe transducers each is configured to emit a unique ultrasonic signal;
a plurality of acoustic sensors configured to sense received ultrasonic signals reflected from the one or more audio sources;
a controller configured to determine a position of at least one audio source of the one or more audio sources; and
a signal modulator configured to generate the unique ultrasonic signal; and
a driver configured to maintain an acoustic probe transducer at an approximate maximum displacement during a shift from a first characteristic to a second characteristic.

US Pat. No. 10,219,093

MONO-SPATIAL AUDIO PROCESSING TO PROVIDE SPATIAL MESSAGING

1. A method comprising:receiving data representing a message to present acoustically at a loudspeaker;
determining whether an audio signal is in communication with the loudspeaker, including determining no audio signal is in communication with the loudspeaker, and generating a reference audio signal;
determining a type of the message associated with the message;
modulating spatially a message audio signal for the message as a function of the type of message to form a spatially-modulated message audio signal;
forming a mono-spatial audio signal based on the audio signal and the spatially-modulated message audio signal, a mono-spatial audio space overlay being used to form the mono-spatial audio signal after the mono-spatial audio space overlay is generated and configured to simulate an originating location, direction, or distance associated with the mono-spatial audio signal; and
transmitting the mono-spatial audio signal to the loudspeaker.

US Pat. No. 10,219,092

SPATIAL RENDERING OF A MESSAGE

Nokia Technologies Oy, E...

1. A method comprising:determining that a message comprising audio and/or visual content, from a first user in a virtual three-dimensional space, is targeted to a first sub-set of users in the virtual three-dimensional space, wherein the message is not targeted to a second sub-set of users in the virtual three-dimensional space;
determining a first audio and/or visual object and a second audio and/or visual object, wherein the first audio and/or visual object comprises the audio and/or visual content of the message and one or more first values for one or more first spatial rendering parameters, and the second audio and/or visual object comprises the audio and/or visual content of the message and one or more second values for one or more second spatial rendering parameters;
causing the first audio and/or visual object to be rendered to the first sub-set of users based on the first values such that the audio and/or visual content is rendered at a first position in the virtual three-dimensional space; and
causing the second audio and/or visual object to be rendered to the second sub-set of users based on the one or more second values such that the audio and/or visual content is rendered at a different, second position in the virtual three-dimensional space.

US Pat. No. 10,219,091

DYNAMICALLY CHANGING MASTER AUDIO PLAYBACK DEVICE

Bose Corporation, Framin...

1. A method for dynamically changing the master audio playback device of a set comprising at least two audio playback devices, wherein one audio playback device of the set is a first set master audio playback device that is configured to receive audio data from an audio data source and send the received audio data to at least one other slave audio playback device of the set, wherein the at least one other slave audio playback devices of the set is configured to receive audio data only from the master audio playback device and not from the audio data source, the method comprising:receiving at a first slave audio playback device of the set the selection of the first slave audio playback device of the set as a new recipient of audio data from the audio data source; and
in response to receiving at the first slave audio playback device of the set the selection of the first slave audio playback device of the set as the new recipient of audio data from the audio data source, designating the first slave audio playback device of the set as a new set master audio playback device; and
designating the first set master audio playback device as a new slave audio playback device;
wherein the new set master audio playback device is configured to receive audio data from the audio data source and send the received audio data to the new slave audio playback device, wherein the new slave audio playback device is configured to receive audio data only from the new set master audio playback device and not from the audio data source.

US Pat. No. 10,219,090

METHOD AND DETECTOR OF LOUDSPEAKER DIAPHRAGM EXCURSION

Analog Devices Global, H...

1. A method of detecting diaphragm excursion of an electrodynamic loudspeaker, comprising steps of:receiving a digital audio signal having a first sample rate,
using an up-sampler and modulator circuit, up-sampling the digital audio signal to a greater second sample rate and adding a high-frequency probe signal to the up-sampled digital audio signal to generate a pulse-modulated composite drive signal, wherein the probe signal has a frequency that exceeds a Nyquist frequency of the received digital audio signal,
applying the pulse-modulated composite drive signal to a voice coil of the electrodynamic speaker through an output amplifier,
detecting a composite drive signal current flowing through the voice coil in response to the application of the pulse-modulated composite drive signal,
detecting a modulation level of a probe signal current from the composite drive signal current, and
identifying an excursion of a diaphragm of the loudspeaker based on the detected modulation level of the probe signal current.
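The sketch below simulates the detection chain: a probe tone above the input's Nyquist rate rides on the drive signal, the (simulated) excursion modulates the probe current, and the modulation level is recovered by mixing the sensed current back down and averaging. The probe frequency, the excursion-to-modulation model, and the demodulator are illustrative assumptions, not the patent's circuit:

    import numpy as np

    FS_IN, UP = 48_000, 4
    fs = FS_IN * UP                            # up-sampled rate
    f_probe = 40_000                           # probe above the input Nyquist (24 kHz)
    t = np.arange(fs) / fs                     # one second of samples

    audio = 0.5 * np.sin(2 * np.pi * 100 * t)          # low-frequency drive signal
    excursion = np.sin(2 * np.pi * 100 * t)            # pretend excursion follows it
    probe_gain = 1.0 - 0.3 * np.abs(excursion)         # excursion modulates probe current
    current = audio + 0.05 * probe_gain * np.sin(2 * np.pi * f_probe * t)

    # recover the probe-current modulation: mix down to DC and low-pass average
    mixed = current * np.sin(2 * np.pi * f_probe * t)
    window = fs // 1000                                # 1 ms moving average
    envelope = np.convolve(mixed, np.ones(window) / window, mode="same")[window:-window]
    modulation_level = (envelope.max() - envelope.min()) / envelope.max()
    print("detected probe modulation level:", round(float(modulation_level), 2))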

US Pat. No. 10,219,089

HEARING LOSS COMPENSATION APPARATUS AND METHOD USING 3D EQUAL LOUDNESS CONTOUR

Samsung Electronics Co., ...

1. A hearing loss compensation apparatus, comprising:a sound direction detection device configured to detect, using a microphone, a sound generation direction from which a sound is generated; and
a sound compensation device configured to amplify the sound based on hearing characteristics of a user corresponding to the sound generation direction and comprising an equal loudness contour of the user, in response to detecting a difference between the equal loudness contour of the user and a reference equal loudness contour,
wherein the equal loudness contour of the user is determined by mapping hearing thresholds corresponding to azimuths and frequencies,
wherein the equal loudness contour of the user exists in 3D space comprising a first axis corresponding to the azimuths, a second axis corresponding to the frequencies, and a third axis corresponding to the hearing thresholds,
wherein the sound compensation device is further configured to determine the hearing characteristics of the user by comparing hearing abilities of the user at the azimuths and average hearing abilities of other users at the respective azimuths, and amplify the sound based on a characteristic, among the hearing characteristics of the user, that corresponds to an azimuth corresponding to the sound generation direction, and
wherein the other users share a common demographic characteristic with the user.

US Pat. No. 10,219,088

PHOTOACTIVE SELF-CLEANING HEARING ASSISTANCE DEVICE

Starkey Laboratories, Inc...

1. A hearing assistance device comprising: a housing; a transducer within the housing; a sound port extending through the housing; a barrier layer covering the sound port; photoactive nanoparticles disposed on or in the housing or the barrier layer, wherein the photoactive nanoparticles provide a localized surface plasmon resonance effect when illuminated with light; and a light source within the housing in optical communication with the photoactive nanoparticles.

US Pat. No. 10,219,087

HEARING AID THAT CAN BE INTRODUCED INTO THE AUDITORY CANAL AND HEARING AID SYSTEM

Eberhard Karls Universita...

1. A hearing aid configured to be inserted into an ear canal of a patient, comprising an actuator that effects a mechanical stimulation of the tympanic membrane, wherein the actuator comprises an inner surface associated with the tympanic membrane and an outer surface associated with the ear canal, the actuator being configured as an areal disk actuator, whose deformation stimulates the tympanic membrane by areal deformation; wherein the hearing aid further comprises at least one first receiver for energy signals; wherein the at least one first receiver comprises at least one optoelectronic sensor that converts light energy to electrical energy; wherein the at least one first receiver comprises an areal array of optoelectronic sensors.

US Pat. No. 10,219,086

MOBILE WIRELESS CONTROLLER FOR A HEARING AID

AN DIRECT B.V., Rotterda...

1. A method for configuring a mobile user device as a wireless controller for a hearing aid, said mobile user device comprising:a communication module for wireless communication with said hearing aid, said communication module comprising a radio interface, an antenna and an antenna controller; and,
a computer-readable memory storing a software application for remotely controlling said hearing aid on the basis of said communication module; said communication module being connectable to a configuration module on a computer or a server for configuring said communication module on the basis of hearing aid setting information;
said method comprising:
sending authentication information to said configuration module;
receiving hearing aid setting information from said configuration module if said configuration module determines on the basis of said authentication information that access to said hearing aid setting information by said communication module is allowed; and,
configuring at least part of said communication module and/or said hearing aid on the basis of said hearing aid setting information.

US Pat. No. 10,219,085

ANTENNA UNIT

1. A hearing aid device comprising:a housing configured to be worn at an ear of a person, the housing comprising a top part and respective, opposite, first and second sides,
an antenna unit arranged in the housing, the antenna unit comprising:
a radiating antenna structure,
a structure forming a ground for the radiating antenna structure,
a feed arranged between the radiating antenna structure and the structure forming the ground,
and an additional element that forms an extended ground plane, the additional element being arranged at a distance from the radiating antenna structure, the additional element comprises a first part arranged at the first side of the housing and a second part arranged at the second side of the housing, the extended ground plane being electrically connected to the structure forming the ground, and
a communication unit connected with the radiating antenna structure for reception and/or transmission of data over a wireless link to an external unit via the radiating antenna structure.

US Pat. No. 10,219,084

ACOUSTIC OUTPUT DEVICE WITH ANTENNA

1. An apparatus, comprising:a first portion configured to be arranged behind an ear of a user and to provide a signal to a second portion, the ear having an ear canal;
the second portion configured to be arranged at the ear or in the ear canal of the user and to provide acoustic output to the user;
an antenna for wireless communication, the antenna comprising an electrically conducting element; and
a coupling element configured for coupling the first portion and the second portion, the coupling element comprising the electrically conducting element, wherein the electrically conducting element is at least a part of the antenna that is configured for electromagnetic signal emission and/or electromagnetic signal reception.

US Pat. No. 10,219,083

METHOD OF LOCALIZING A SOUND SOURCE, A HEARING DEVICE, AND A HEARING SYSTEM

1. A hearing system comprising a multitude of M microphones, where M is larger than or equal to two, adapted for being located on a user and for picking up sound from the environment and to provide M corresponding electric input signals rm(n), m=1, . . . , M, n representing time, the environment sound at a given microphone comprising a mixture of a target sound signal propagated via an acoustic propagation channel from a location of a target sound source and possible noise signals vm(n) as present at the location of the microphone in question;
a transceiver configured to receive a wirelessly transmitted version of the target sound signal and providing an essentially noise-free target signal s(n);
a signal processor connected to said number of microphones and to said wireless transceiver,
the signal processor being configured to estimate a direction-of-arrival of the target sound signal relative to the user based on
a signal model for a received sound signal rm at microphone m (m=1, . . . , M) through the acoustic propagation channel from the target sound source to the mth microphone when worn by the user, wherein the mth acoustic propagation channel subjects the essentially noise-free target signal s(n) to an attenuation αm and a delay Dm;
a maximum likelihood methodology;
relative transfer functions dm representing direction-dependent filtering effects of the head and torso of the user in the form of direction-dependent acoustic transfer functions from each of M−1 of said M microphones (m=1, . . . , M, m≠j) to a reference microphone (m=j) among said M microphones,wherein said attenuation αm is assumed to be independent of frequency whereas said delay Dm is assumed to be frequency dependent.
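
The signal model recited in this claim lends itself to a small numerical illustration. The Python sketch below is not the patented estimator: it assumes a two-microphone setup, integer-sample delays, white Gaussian noise, and a least-squares search standing in for the maximum-likelihood step; candidate "directions" are parameterized simply as per-microphone delay pairs, and all names and constants are chosen for illustration only.

    # Minimal sketch of the model r_m(n) = alpha_m * s(n - D_m) + v_m(n) and a
    # maximum-likelihood-style search over candidate per-microphone delays.
    import numpy as np

    rng = np.random.default_rng(0)
    N = 2000
    s = rng.standard_normal(N)                  # essentially noise-free target s(n)

    def propagate(s, alpha, delay):
        """Apply a frequency-independent attenuation and an integer-sample delay."""
        r = np.zeros_like(s)
        r[delay:] = alpha * s[:len(s) - delay]
        return r

    # True propagation channels for M = 2 microphones
    true_delays = [3, 7]                        # samples
    alphas = [0.9, 0.8]
    r = [propagate(s, a, d) + 0.05 * rng.standard_normal(N)
         for a, d in zip(alphas, true_delays)]

    # Candidate "directions", here simply pairs of per-microphone delays
    candidates = [(d0, d1) for d0 in range(0, 10) for d1 in range(0, 10)]

    def neg_log_likelihood(delays):
        # Under white Gaussian noise, ML reduces to least squares over the model fit
        nll = 0.0
        for m, d in enumerate(delays):
            x = propagate(s, 1.0, d)
            alpha_hat = np.dot(x, r[m]) / np.dot(x, x)   # closed-form ML estimate of alpha_m
            nll += np.sum((r[m] - alpha_hat * x) ** 2)
        return nll

    best = min(candidates, key=neg_log_likelihood)
    print("estimated per-microphone delays:", best)       # expected: (3, 7)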

US Pat. No. 10,219,082

METHOD OF OPERATING A HEARING AID SYSTEM AND A HEARING AID SYSTEM

1. A method of operating a hearing aid system comprising the steps of:generating a first electrical signal representing noise;
filtering said first electrical signal in a filter bank hereby providing a plurality of frequency band signals,
determining a reference level for a frequency band signal;
estimating a signal level for said frequency band signal;
calculating a level difference between the estimated signal level and the reference level for said frequency band signal;
determining a gain value to be applied to said frequency band signal based on said level difference and an expansion factor;
applying said gain value to said frequency band signal hereby providing a processed frequency band signal;
providing a plurality of processed frequency band signals;
summing said plurality of processed frequency band signals into an output signal; and
presenting said output signal to the output transducer of a hearing aid.
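
The per-band expansion described in this claim can be sketched in a few lines. The following is only an illustration, assuming a toy three-band filter bank built from FFT bins, dB level estimates, and a simple downward-expander gain rule gain_dB = (expansion_factor − 1) · min(level − reference, 0); the exact gain law, reference level, and band split are assumptions, not the claimed method.

    # Minimal per-band downward-expansion sketch on a noise signal.
    import numpy as np

    n = 1024
    rng = np.random.default_rng(1)
    noise = rng.standard_normal(n)                     # first electrical signal representing noise

    # crude "filter bank": split the spectrum into 3 contiguous bands
    spectrum = np.fft.rfft(noise)
    bands = np.array_split(np.arange(len(spectrum)), 3)

    reference_level_db = -10.0
    expansion_factor = 2.0
    processed = np.zeros_like(spectrum)

    for band_bins in bands:
        band = spectrum[band_bins]
        level_db = 10 * np.log10(np.mean(np.abs(band) ** 2) + 1e-12)  # estimated signal level
        level_diff = level_db - reference_level_db                     # level difference
        # downward expansion: attenuate only when the band falls below the reference
        gain_db = (expansion_factor - 1.0) * min(level_diff, 0.0)
        processed[band_bins] = band * 10 ** (gain_db / 20)             # apply gain per band

    output = np.fft.irfft(processed, n)                # sum of processed frequency band signals
    print("output RMS:", np.sqrt(np.mean(output ** 2)))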

US Pat. No. 10,219,081

CONFIGURATION OF HEARING PROSTHESIS SOUND PROCESSOR BASED ON CONTROL SIGNAL CHARACTERIZATION OF AUDIO

Cochlear Limited, Macqua...

1. A method comprising:receiving by a hearing prosthesis, from a device external to the hearing prosthesis, both audio content outputted by the device external to the hearing prosthesis and configurable prosthesis data outputted by the device external to the hearing prosthesis, the configurable prosthesis data being data usable by the prosthesis to configure itself to operate in a given manner;
responsive to receipt of the configuration data, the hearing prosthesis automatically configuring a sound processor of the hearing prosthesis in a manner based at least in part on the configurable prosthesis data.

US Pat. No. 10,219,080

ELECTRICAL-ACOUSTIC TRANSFORMATION DEVICE

GOERTEK, INC., Shandong ...

1. An electrical-acoustic transformation device, comprising:a vibration system and a magnetic circuit system with a magnetic gap;
wherein the vibration system includes: a diaphragm, a voice coil provided below the diaphragm and suspending in the magnetic gap, a piezoelectric plate provided on one side of the diaphragm, a first frequency division circuit connected to the voice coil, a second frequency division circuit connected to the piezoelectric plate, and a spider fixed to the voice coil, the spider including a first conductive line and a second conductive line formed on a surface thereof; and
the first frequency division circuit performs frequency division on an externally input first audio signal and outputs the same to the voice coil, with the externally input first audio signal being input to the first frequency division circuit via the first conductive line; and the second frequency division circuit performs frequency division on an externally input second audio signal to obtain a high frequency signal to drive the piezoelectric plate, with the externally input second audio signal being input to the second frequency division circuit via the second conductive line.

US Pat. No. 10,219,079

DISPLAY DEVICE FOR GENERATING SOUND BY VIBRATING PANEL

LG Display Co., Ltd., Se...

1. A display device, comprising: a display panel configured to emit light; a support structure at a rear of the display panel; a sound generation actuator supported by the support structure and configured to vibrate the display panel to generate sound; and a cap member surrounding the sound generation actuator and secured to the support structure at an area of the support structure, the area being near the sound generation actuator and wherein the sound generation actuator includes a lower plate, a magnet disposed on the lower plate, a center pole disposed on the central region of the lower plate, a bobbin disposed to surround the center pole, and a coil wound around the bobbin.

US Pat. No. 10,219,078

LOUDSPEAKER MODULE

Goertek Inc., Weifang (C...

1. A loudspeaker module, comprising:an independent housing, the independent housing encloses a sealed cavity, and the independent housing is provided with an opening communicating the cavity with an exterior of the independent housing;
a loudspeaker unit, the loudspeaker unit comprises a vibration system, a magnetic circuit system and a casing and a front cover that receive the vibration system and the magnetic circuit system, a sidewall of the casing is provided with a rear sound aperture, acoustic wave generated by the loudspeaker unit above the vibration system directly radiates to the exterior, and acoustic wave generated by the loudspeaker unit below the vibration system radiates to a side via the rear sound aperture; and
the loudspeaker unit and the independent housing combine, so that the rear sound aperture and the opening are sealingly engaged and communicate, and the cavity of the independent housing forms a rear acoustic cavity of the loudspeaker module; wherein a height of the independent housing and a height of the loudspeaker unit are equal, and an upper surface and a lower surface of the independent housing are respectively flush with an upper surface and a lower surface of the loudspeaker unit.

US Pat. No. 10,219,077

DISPLAY DEVICE

SAMSUNG DISPLAY CO., LTD....

1. A display device including a folding region and a rigid region, the display device comprising:a foldable display panel module to fold in the folding region;
a folding sensor to sense a folding state of the foldable display panel module;
a support on the foldable display panel module in the rigid region;
a vibrator on the foldable display panel module in the folding region; and
a vibration controller to control a vibration operation of the vibrator based on the folding state.

US Pat. No. 10,219,076

AUDIO SIGNAL PROCESSING DEVICE, AUDIO SIGNAL PROCESSING METHOD, AND STORAGE MEDIUM

CANON KABUSHIKI KAISHA, ...

1. An audio signal processing apparatus comprising:at least one hardware processor; and
a memory which stores instructions executable by the at least one hardware processor to cause the audio signal processing apparatus to perform at least:
acquiring audio data generated by collecting a sound in a sound collection target space;
determining a priority of at least one of a plurality of areas in the sound collection target space; and
outputting, based on the audio data acquired in the acquiring, first processed data obtained by first predetermined signal processing for sound of the one or more areas selected based on the priority determined in the determining; and
outputting, based on the audio data acquired in the acquiring, second processed data obtained by second predetermined signal processing for sound of areas including an area different from the one or more areas selected based on the priority determined in the determining, after outputting the first processed data obtained by the first predetermined signal processing for sound of the one or more areas selected based on the priority determined in the determining.
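
A small sketch can make the ordering in this claim concrete. The code below is purely illustrative, assuming per-area audio stored as arrays, a numeric priority per area, and the two "predetermined signal processing" steps stubbed out as gain scaling; it only shows selected high-priority areas being processed and output first, followed by the remaining areas.

    # Minimal priority-ordered area output sketch.
    import numpy as np

    def first_processing(x):   # stand-in for the first predetermined signal processing
        return 1.5 * x

    def second_processing(x):  # stand-in for the second predetermined signal processing
        return 0.5 * x

    area_audio = {"stage": np.ones(4), "aisle": np.ones(4), "lobby": np.ones(4)}
    priority = {"stage": 0.9, "aisle": 0.4, "lobby": 0.1}

    selected = [a for a in area_audio if priority[a] >= 0.5]          # areas selected by priority
    remaining = [a for a in area_audio if a not in selected]

    outputs = []
    for area in selected:                                             # first processed data, output first
        outputs.append(("first", area, first_processing(area_audio[area])))
    for area in remaining:                                            # second processed data, output after
        outputs.append(("second", area, second_processing(area_audio[area])))

    for kind, area, data in outputs:
        print(kind, area, data)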

US Pat. No. 10,219,075

METHOD AND SYSTEM FOR SPEAKER ARRAY ASSEMBLY AND POROUS DISPLAY DEVICE

International Business Ma...

1. A device, comprising:an acoustically-permeable display, comprising:
a first layer defining a first plurality of pores, each of the first plurality of pores being configured to permit the passage of sound through the first layer;
a second layer defining a second plurality of pores, each of the second plurality of pores being configured to permit the passage of sound through the second layer, wherein the second layer overlays the first layer, defining a gap between the first layer and the second layer, wherein each pore of the first plurality of pores is unaligned with respect to each pore of the second plurality of pores such that light may not pass directly through both a pore of the first plurality of pores and a pore of the second plurality of pores;
a plurality of speakers arranged in an array, each of the plurality of speakers being positioned and oriented to direct sound through at least one portion of the acoustically-permeable display different from at least one other speaker of the plurality of speakers;
a controller configured to identify a portion of the acoustically-permeable display according to a sensor signal representing contour of a user's ear in contact with the acoustically-permeable display, and determine which speaker of the plurality of speakers is positioned and oriented to direct sound through the identified portion of the acoustically-permeable display;
wherein the controller is further configured to transmit an audio signal to the speaker positioned and oriented to direct sound through the portion of the acoustically-permeable display in contact with the user's ear; and
wherein the controller is further configured to predict, from the identified contour, the location of the user's interior ear with respect to the acoustically-permeable display.

US Pat. No. 10,219,074

LOUDSPEAKER PROTECTION SYSTEMS AND METHODS

Cirrus Logic, Inc., Aust...

1. A method of thermal protection of a voice coil of a loudspeaker comprising:determining an estimate of a temperature of the voice coil,
determining an estimate of a rate of change of the temperature of the voice coil,
determining an estimate of power dissipation in the voice coil, and
generating a gain control signal for modulating a gain applied to an input signal in a signal path between an input terminal and said loudspeaker,
wherein the gain control signal is generated by:
based on the estimate of the rate of change of the temperature of the voice coil, setting a threshold power value,
based on the threshold power value and on the estimate of the temperature of the voice coil, setting an allowed power value, and
based on the allowed power value and the estimate of power dissipation in the voice coil, generating the gain control signal for modulating the gain applied to the input signal in the signal path between the input terminal and said loudspeaker.
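
The chain of estimates in this claim (temperature, rate of change, threshold power, allowed power, gain control) can be illustrated with a toy loop. The sketch below is not Cirrus Logic's implementation: the first-order thermal model, all constants, and the specific back-off rule are assumptions chosen only to show the flow of quantities.

    # Minimal voice-coil thermal protection sketch with assumed constants.
    import numpy as np

    R_COIL = 4.0        # voice coil resistance in ohms (assumed)
    R_TH = 20.0         # thermal resistance in K/W (assumed)
    TAU = 2.0           # thermal time constant in seconds (assumed)
    T_AMBIENT = 25.0    # degrees C
    T_MAX = 100.0       # protection target temperature (assumed)
    DT = 0.01           # update period in seconds

    def protect(input_signal):
        temp, gain = T_AMBIENT, 1.0
        for x in input_signal:
            v = gain * x
            power = v ** 2 / R_COIL                           # estimated power dissipation in the coil
            rate = (power * R_TH - (temp - T_AMBIENT)) / TAU  # estimated rate of change of temperature
            temp += rate * DT                                 # estimated voice-coil temperature
            # the rate of change sets a threshold power value ...
            threshold_power = max(0.0, (T_MAX - T_AMBIENT) / R_TH - rate * TAU / R_TH)
            # ... and the threshold plus the temperature estimate set the allowed power
            allowed_power = threshold_power * max(0.0, (T_MAX - temp) / (T_MAX - T_AMBIENT))
            # gain control signal: back off when estimated power exceeds the allowed power
            if allowed_power <= 0.0:
                gain *= 0.9
            elif power > allowed_power:
                gain *= np.sqrt(allowed_power / power)
            else:
                gain = min(1.0, gain * 1.01)                  # slow recovery
        return gain, temp

    drive = 10.0 * np.sin(2 * np.pi * 5.0 * np.arange(0.0, 10.0, DT))
    final_gain, final_temp = protect(drive)
    print(f"final gain {final_gain:.2f}, estimated coil temperature {final_temp:.1f} C")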

US Pat. No. 10,219,073

VEHICLE AUDIO SYSTEM

FORD GLOBAL TECHNOLOGIES,...

1. A two-wire communication system comprising:a first audio circuit;
a second audio circuit electrically connected to the first audio circuit via a two-wire bi-directional multi-node communication bus, wherein the two-wire bi-directional multi-node communication bus connects the first audio circuit to the second audio circuit wherein an audio signal, an enable signal, and a clip detect signal are transmitted over the bus;
a first two-wire communication circuit having an audio enable input, a clip detect output, and a first two-wire communication chip electrically connected to the bus, wherein the audio enable input and the clip detect output are electrically coupled to the first audio circuit; and
a second two-wire communication circuit having an audio enable output, a clip detect input, and a second two-wire communication chip electrically connected to the first two-wire communication chip via the bus, wherein the audio enable output and the clip detect input are electrically coupled to the second audio circuit.

US Pat. No. 10,219,072

DUAL MICROPHONE NEAR FIELD VOICE ENHANCEMENT

Panasonic Automotive Syst...

1. A dual microphone near field voice enhancement arrangement in a motor vehicle, the arrangement comprising:a seat including a headrest, the headrest having two opposite lateral sides;
two microphones, each said microphone being mounted on a respective one of the two opposite lateral sides of the headrest, each said microphone being configured to produce a respective microphone signal indicative of sounds within a passenger compartment of the motor vehicle; and
an electronic processor communicatively coupled to the microphones and configured to:
receive the microphone signals;
calculate a time delay between the microphone signals;
use the calculated time delay to estimate amplitudes of the microphone signals;
apply a respective time delay to each of the microphone signals based on the calculated time delay to produce two time-aligned signals;
apply a respective gain to each of the time-aligned microphone signals based on the estimated amplitudes to produce two time-aligned and gain corrected signals; and
sum the time-aligned and gain corrected signals.
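
The processing chain in this claim (delay estimation, time alignment, gain correction, summation) maps directly onto a short script. The sketch below is illustrative only: it assumes an integer-sample delay found by cross-correlation, amplitudes estimated from RMS, and a simulated talker who is closer to one of the two headrest microphones.

    # Minimal dual-microphone align-and-sum sketch.
    import numpy as np

    rng = np.random.default_rng(2)
    n = 4000
    speech = rng.standard_normal(n)

    # simulate the two headrest microphones: mic 2 hears the talker later and quieter
    mic1 = speech + 0.02 * rng.standard_normal(n)
    mic2 = np.concatenate([np.zeros(5), 0.6 * speech[:-5]]) + 0.02 * rng.standard_normal(n)

    # calculate the time delay between the microphone signals (cross-correlation peak)
    corr = np.correlate(mic2, mic1, mode="full")
    delay = int(np.argmax(corr)) - (n - 1)                 # positive: mic2 lags mic1
    print("estimated delay (samples):", delay)

    # apply a respective time delay to each signal to time-align them
    if delay > 0:
        a1, a2 = mic1[:-delay], mic2[delay:]
    elif delay < 0:
        a1, a2 = mic1[-delay:], mic2[:delay]
    else:
        a1, a2 = mic1, mic2

    # use amplitude estimates to gain-correct, then sum the aligned signals
    g1 = 1.0 / (np.sqrt(np.mean(a1 ** 2)) + 1e-12)
    g2 = 1.0 / (np.sqrt(np.mean(a2 ** 2)) + 1e-12)
    enhanced = g1 * a1 + g2 * a2
    print("enhanced RMS:", np.sqrt(np.mean(enhanced ** 2)))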

US Pat. No. 10,219,071

SYSTEMS AND METHODS FOR BANDLIMITING ANTI-NOISE IN PERSONAL AUDIO DEVICES HAVING ADAPTIVE NOISE CANCELLATION

Cirrus Logic, Inc., Aust...

9. A method comprising:receiving a reference microphone signal indicative of ambient audio sounds at the acoustic output of a transducer;
receiving an error microphone signal indicative of an acoustic output of the transducer and the ambient audio sounds at the acoustic output of the transducer;
generating an anti-noise signal from filtering the reference microphone signal with an adaptive filter to reduce the presence of the ambient audio sounds heard by a listener and shaping a response of the adaptive filter in conformity with the error microphone signal and the reference microphone signal by adapting the response of the adaptive filter to minimize the ambient audio sounds present in the error microphone signal;
further adjusting the response of the adaptive filter by combining injected noise with the reference microphone signal;
receiving the injected noise by a copy of the adaptive filter so that the response of the copy of the adaptive filter is controlled by the adaptive filter adapting to cancel a combination of the ambient audio sounds and the injected noise; and
controlling the response of the adaptive filter with the coefficients adapted in the copy of the adaptive filter, whereby the injected noise is not present in the anti-noise signal;
wherein each of a sample rate of the copy of the adaptive filter and a rate of adapting of the adaptive filter is significantly less than a sample rate of the adaptive filter and the sample rate of the copy of the adaptive filter is significantly less than the rate of adapting of the adaptive filter.
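
The roles named in this claim (an adapting copy that sees reference plus injected noise, and an output filter that produces the anti-noise from the reference alone using the copied coefficients) can be illustrated with a generic shadow-filter arrangement. The sketch below is not Cirrus Logic's algorithm: it assumes a unity secondary path, plain LMS, and independent white injected noise, and it ignores the multi-rate aspects of the claim entirely.

    # Minimal shadow-filter ANC sketch: injected noise drives adaptation but never
    # reaches the anti-noise signal, because the output filter runs on the
    # reference only with the copied coefficients.
    import numpy as np

    rng = np.random.default_rng(3)
    n_taps, n = 8, 30000
    primary_path = np.array([0.5, -0.3, 0.2, 0.1, 0.05, 0.0, 0.0, 0.0])  # assumed acoustic path

    ambient = rng.standard_normal(n)            # reference microphone signal
    injected = 0.1 * rng.standard_normal(n)     # injected noise (never sent to the speaker)
    w_copy = np.zeros(n_taps)                   # coefficients adapted in the copy
    mu = 0.002

    for i in range(n_taps, n):
        x_ref = ambient[i - n_taps + 1: i + 1][::-1]          # reference regressor
        anti_noise = -np.dot(w_copy, x_ref)                    # output filter: reference only
        d = np.dot(primary_path, x_ref)                        # ambient noise at the error mic
        error = d + anti_noise                                 # error microphone signal
        # the copy adapts on reference + injected noise; on average this is the
        # same gradient as plain LMS because the injected noise is independent
        u = x_ref + injected[i - n_taps + 1: i + 1][::-1]
        w_copy += mu * error * u

    print("copied coefficients:", np.round(w_copy, 2))
    print("primary path:       ", np.round(primary_path, 2))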

US Pat. No. 10,219,070

ACOUSTIC SYSTEM AND METHOD

1. An acoustic system, comprising:a coconut endocarp, said endocarp being a round shape,
said endocarp further comprising one opening for installation of a loudspeaker, wherein the opening is formed by two individual cuts,
the first cut being a circular cutout performed as the coconut endocarp rotates about the endocarp's longitudinal axis, the first cut forming a first flat surface exposing fibers of an endocarp membrane,
the second cut being a flat cross sectional cut made adjacent to said first cut, the second cut forming an adjacent flat edge at an angle relative to the first flat surface,
the two cuts creating a direct contact with fibers of the endocarp membrane, the cuts combining to maximize a surface area of exposed fibers of the endocarp,
wherein a loudspeaker is positioned flush against the endocarp such that parasitic acoustic resonances generated from one or more sides of the loudspeaker are directed to a dampening channel for a dissipation of undesired frequencies.

US Pat. No. 10,219,068

HEADSET WITH MAJOR AND MINOR ADJUSTMENTS

Voyetra Turtle Beach, Inc...

1. An audio headset, the headset comprising:a headband;
a headband endcap at each end of the headband;
a headband slide coupled to each headband endcap;
ear cups operatively coupled to the headband slides, each ear cup comprising a guide for restricting movement of a cross-bar element of a corresponding headband slide away from the ear cup while allowing vertical movement of the cross-bar with respect to the ear cup; and
a second headband located only above the headband slides, said second headband comprising a flexible band coupled to the headband endcaps and said second headband not in contact with the headband slides, wherein an adjustment of force on a user of the headset is enabled by actuation of at least one headband slide in a vertical direction, the actuation of the headband slide limited by a corresponding cross-bar element and guide.

US Pat. No. 10,219,067

AUTO-CALIBRATING NOISE CANCELING HEADPHONE

Harman International Indu...

1. A sound system comprising:a headphone including a transducer and at least two microphones disposed over the transducer and adapted to receive sound radiated therefrom;
an equalization filter adapted to equalize an audio input signal based on at least one predetermined coefficient; and
a loop filter circuit including a leaky integrator circuit adapted to generate a filtered audio signal based on the equalized audio input signal and a feedback signal indicative of sound received by the at least two microphones, and to provide the filtered audio signal to the transducer; and
a switch adapted to switch between a first position, in which the equalization filter is connected to an audio source for receiving a first audio input signal, and a second position, in which the equalization filter is adapted to receive a second audio input signal;
a controller programmed to:
control the switch to be arranged in the second position in response to a user command;
generate the second audio input signal which is indicative of a test signal;
receive a second feedback signal indicative of a test sound received by the at least two microphones;
calibrate the headphone by updating the at least one predetermined coefficient of the equalization filter based on the second feedback signal; and
control the switch to be arranged in the first position in response to the at least one predetermined coefficient being updated.
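
The calibration loop in this claim (switch to a generated test signal, measure the microphone feedback, update the equalizer coefficient, switch back) can be mimicked with a toy single-coefficient example. The sketch below is illustrative only: a single gain stands in for the equalization filter, and the "test sound" measurement is simulated with a made-up transducer gain.

    # Minimal auto-calibration sketch with a single EQ coefficient.
    import numpy as np

    TRANSDUCER_GAIN = 0.7        # assumed, unknown to the calibration routine
    TARGET_LEVEL = 1.0

    def play_and_measure(test_signal, eq_coefficient):
        # second feedback signal: what the microphones over the transducer pick up
        return TRANSDUCER_GAIN * eq_coefficient * test_signal

    def calibrate(eq_coefficient):
        # switch in the second position: feed an internally generated test signal
        test_signal = np.sin(2 * np.pi * np.arange(512) / 64)
        feedback = play_and_measure(test_signal, eq_coefficient)
        measured = np.sqrt(np.mean(feedback ** 2))
        desired = TARGET_LEVEL * np.sqrt(np.mean(test_signal ** 2))
        eq_coefficient *= desired / measured                  # update the predetermined coefficient
        return eq_coefficient, "first position"               # switch back for normal audio

    coeff, switch_state = calibrate(1.0)
    print(f"updated EQ coefficient: {coeff:.3f} (switch returned to {switch_state})")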

US Pat. No. 10,219,066

INTERCHANGEABLE WEARING MODES FOR A HEADSET

Plantronics, Inc., Santa...

1. A head-mountable sound delivery device, comprising:a mounting element configured to support the apparatus on a user's head;
a speaker capsule with an external surface having an upwardly-facing elongate groove defined therein;
a C-shaped retention element moveably coupled to the mounting element, into which the speaker capsule may be snapped such that the C-shaped retention element is located in the groove defined in the speaker capsule, thereby to prevent movement of the speaker capsule relative to the C-shaped retention element when the two are engaged, the C-shaped retention element comprising an inner surface, a first free end, and a second free end and a retention formation at each of the first and second free ends that in use engage corresponding retention formations in the speaker capsule.

US Pat. No. 10,219,065

TELECOIL ADAPTER

Otojoy LLC, Santa Barbar...

1. An apparatus comprising:an adapter including a telecoil, the adapter including an audio plug for audio signals received wirelessly via the telecoil to be communicated to and from the adapter via an electrical connection to an external device, and the adapter further including at least one of the following: an audio jack to physically receive an external audio plug in which the audio jack of the adapter is to receive the audio signals from the audio plug of the adapter via the electrical connection to the external device and/or headphones physically part of the adapter and electrically connected to receive the audio signals from the audio plug of the adapter, wherein the telecoil is incorporated into the adapter and the adapter is integrated into a cable electrically connected to the headphones at one end and including the audio plug at the other end; and
a mobile device including a storage medium, wherein the mobile device further comprises at least one of the following: a smart phone, a tablet, a laptop, a personal digital assistant, or a wearable computing device, the storage medium having stored thereon instructions executable by a computing device to process the audio signals, the audio signals to comprise electrical signals, the electrical signals to be induced in the telecoil by an electromagnetic (EM) field, the EM field to be generated by an external hearing loop, the electrical signals to be induced to further be amplified, and the instructions further executable to generate a quality rating for a hearing loop system associated with the electrical signals, the quality rating to be based at least in part on the electrical signals.

US Pat. No. 10,219,064

TRI-MICRO LOW FREQUENCY FILTER TRI-EAR BUD TIPS AND HORN BOOST WITH RATCHET EAR BUD LOCK

Acouva, Inc., San Franci...

1. A system for improving use of an in-ear utility device, the system comprising:Tri-Ear Buds adapted for a connection to an in-ear main trunk support extending from a solid portion of the in-ear utility device, wherein the Tri-Ear Buds are configured to reside in a user's ear canal within a first bend of the ear canal, wherein the Tri-Ear Buds comprise an end configured to reside in the user's ear canal at a distance less than 16 millimeters from the entrance of the user's ear canal;
a ratchet ear bud lock adapted to physically associate with the Tri-Ear Buds to facilitate the connection between the Tri-Ear Buds and the in-ear main trunk support, wherein the ratchet ear bud lock comprises locking features configured for a removal force adjustable from at least one of 0.25 Lbs/0.5 Lbs/0.75 Lbs/1.25 Lbs/1.5 Lbs/2.25 Lbs/2.5 Lbs, by way of decreasing jaws of the locking features; and
a horn boost component adapted to physically associate with the in-ear utility device, and wherein the horn boost component is configured to facilitate an acoustic horn effect.

US Pat. No. 10,219,063

IN-EAR WIRELESS DEVICE WITH BONE CONDUCTION MIC COMMUNICATION

Acouva, Inc., San Franci...

1. A wireless in-ear utility device, comprising:a housing comprising an oval shaped trunk configured to reside in a user's ear canal within the first bend of the ear canal, the housing comprising a proximal end configured to reside in the user's ear canal at a distance less than 16 millimeters from the entrance of the user's ear canal;
a microphone port located on an external surface of the housing and configured to receive first ambient external sounds from the low/mid/high frequencies (50 Hz to 10,000 kHz); a microphone located within the housing configured to receive the first ambient external sounds via the microphone port, wherein the received first ambient external sounds comprise sounds from the low/mid/high frequencies (50 Hz to 10,000 kHz);
a bone conduction microphone configured to detect resident frequencies to facilitate user voice recognition; a communications module located within the housing and configured for wireless communications, wherein the communication module receives second ambient external sounds from a second in-ear utility device located in the user's second ear, wherein the second ambient external sounds from the second in-ear utility device comprise sounds from the low/mid/high frequencies (50 Hz to 10,000 kHz);
and a processing system located within the housing, wherein the processing system is configured to identify the user based on a frequency profile shape of the user's voice and at least one of the first ambient external sounds and the second ambient external sounds, and to process the first and the second ambient external sounds based on the frequency profile shape of the user's voice.

US Pat. No. 10,219,062

WIRELESS AUDIO OUTPUT DEVICES

Apple Inc., Cupertino, C...

1. A method comprising:determining that a first audio output device is not wirelessly communicatively coupled to a second audio output device;
detecting a user action associated with a housing configured to store the first audio output device and the second audio output device;
in response to detecting the user action:
allowing the first audio output device to become discoverable by a source communication device, and
detecting a pairing request to wirelessly communicatively couple the first audio output device to the source communication device within a threshold period of time from detecting the user action; and
in response to detecting the pairing request within the threshold period of time:
causing one or more wireless link keys stored on the first audio output device and the second audio output device to be erased, and
wirelessly communicatively coupling the first audio output device with the second audio output device.
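
The flow in this claim can be mirrored by a short state sketch. All class and method names below are hypothetical, and timing uses time.monotonic(); the code only illustrates the sequence: a user action on the storage housing makes the first device discoverable, and a pairing request arriving within the threshold window erases stored link keys and re-couples the two devices.

    # Minimal pairing-flow sketch with hypothetical names.
    import time

    PAIRING_WINDOW_S = 30.0   # assumed threshold period

    class AudioOutputDevice:
        def __init__(self, name):
            self.name = name
            self.link_keys = {"old-source": "k1"}   # previously stored wireless link keys
            self.coupled_to = None
            self.discoverable = False

    def on_case_button_press(first, second):
        """User action on the housing that stores both devices."""
        if first.coupled_to is not second:          # devices not wirelessly coupled
            first.discoverable = True
            return time.monotonic()                 # start of the threshold window
        return None

    def on_pairing_request(first, second, window_start):
        if window_start is None:
            return False
        if time.monotonic() - window_start > PAIRING_WINDOW_S:
            return False                            # request arrived too late
        first.link_keys.clear()                     # erase stored link keys
        second.link_keys.clear()
        first.coupled_to, second.coupled_to = second, first   # re-couple the pair
        return True

    left, right = AudioOutputDevice("left"), AudioOutputDevice("right")
    t0 = on_case_button_press(left, right)
    print("paired:", on_pairing_request(left, right, t0), "keys left:", left.link_keys)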

US Pat. No. 10,219,061

LIGHT AND LOUDSPEAKER DRIVER DEVICE

Native Design Limited, (...

1. A combined light and loudspeaker driver device comprising:a loudspeaker driver having a loudspeaker diaphragm with an opening formed around a central longitudinal axis of the device, the central longitudinal axis defining a forward and a rearward direction of the device;
a housing for supporting the loudspeaker driver,
a light source positioned radially inwardly of the opening of the loudspeaker diaphragm, with respect to the central longitudinal axis and configured to direct light forward and away from the device;
a heat removal element comprising a heat sink having at least an axially central part formed rearwardly of the housing along the central longitudinal axis of the device, and a heat removal column extending from the axially central part of the heat sink in the forward direction along the central longitudinal axis of the device, the light source being mounted at the forward end of the heat removal column; and
a ring radiator tweeter positioned radially inwardly of the opening in the loudspeaker diaphragm and radially outwardly of the light source, with respect to the longitudinal axis.

US Pat. No. 10,219,060

HELMET-WORN DEVICE FOR ELECTRONIC COMMUNICATIONS DURING HIGH MOTION ACTIVITY

HEARSHOT INC., Toronto (...

1. An assembly for transmitting vibrations to a helmet worn by a user, the device comprising:an element adapted to adhere to an outer surface of the helmet;
an assembly which connects with the element and comprises
a bottom housing having a floor which passes through the element, teeth near the floor which engage with the element, and a sidewall extending upwardly around the perimeter of the bottom housing;
a top housing having an outer sidewall that fits outside of the sidewall of the bottom housing and also having a plurality of apertures in a top surface;
a PCB which sits within the top housing,
a pressure transducer placed atop the floor and in electrical connection with the PCB; and
a mechanical user interface placed above the PCB and having at least one button which extends upwardly and through one of the apertures on the top surface of the top housing.

US Pat. No. 10,219,059

SMART PASSENGER SERVICE UNIT

1. A passenger service unit for an aircraft cabin, comprising:an oxygen supply module comprising an oxygen canister and a plurality of oxygen masks;
a lighting module comprising a plurality of LED reading light units disposed on a single contiguous flexible printed circuit board;
at least one mini-speaker comprising a horn element, wherein a first mini-speaker of the at least one mini-speaker is integrated with a first LED reading light unit of the plurality of LED reading light units, an LED for illuminating the first LED reading light unit is at least partially disposed in the horn element of the first mini-speaker, and sound waves from the first mini-speaker travel adjacent to the LED, wherein the horn element of the first mini-speaker is shaped and positioned with respect to the first LED reading light unit such that a geometric plane passes through the first LED reading light unit and a circular cross-section of the horn element of the first mini-speaker; and
control circuitry for controlling the oxygen supply module, the lighting module, and the at least one mini-speaker, wherein the control circuitry is connected to
a power converter for converting an external power supply to voltage usable by the control circuitry, and
a single communications interface for communicating with an external management computing system.

US Pat. No. 10,219,058

ELECTRONIC DEVICE HAVING L-SHAPED SOUND CHANNEL AND METHOD FOR MANUFACTURING THE SAME

Chiun Mai Communication S...

1. An electronic device, comprising:a display;
a cover, attached to the display and defining a notch;
a frame, comprising:
a bottom wall, comprising a first surface and a second surface opposite to the first surface, wherein the bottom wall defines a through hole, and the first surface defines a first recess corresponding to the through hole; and
a side wall, extending from a peripheral edge of the bottom wall;
a sound assembly, comprising:
a sound output hole, defined and surrounded by the notch and the side wall;
a sound channel, formed in the frame and comprising a first channel and a second channel, wherein the first channel is formed by horizontally cutting from an inner surface of the first recess towards the side wall, the second channel is formed by vertically cutting from an upper surface of the side wall towards the bottom wall, and the second channel communicates with the first channel to form an L-shaped sound channel communicating with the sound output hole;
a sealing member, positioned on the first recess and sealing the first recess; and
a sound generating module, positioned on the second surface and corresponding to the through hole, wherein sound generated by the sound generating module is transmitted to the sound output hole through the through hole and the L-shaped sound channel.

US Pat. No. 10,219,057

AUDIO MODULE FOR AN ELECTRONIC DEVICE

Apple Inc., Cupertino, C...

1. An audio module for an electronic device, the audio module comprising:a driver assembly comprising:
a diaphragm defining a speaker plane; and
a voice coil attached to the diaphragm and positioned adjacent one or more magnets; and
an enclosure surrounding the driver assembly and defining:
a front volume positioned on a first side of the speaker plane and coupled to a sound port;
a back volume positioned on the first side of the speaker plane and on a second side of the speaker plane; and
a resonant cavity coupled to the front volume via a resonant cavity port and separated from the back volume by a resonant cavity cover.

US Pat. No. 10,219,056

WATERPROOF CASE

CATALYST LIFESTYLE LIMITE...

1. A protective case for an electronic device comprising:a main housing;
a lid;
the main housing and lid removably joined to define an air and water tight volume receiving an electronic device, said electronic device including a switch;wherein the main housing includes a slot formed therein proximate the switch of the electronic device and a toggle rotatively positioned within the slot, the toggle including a C-shaped contact portion and the switch positioned in the C-shaped contact portion and actuated by rotation of the toggle.

US Pat. No. 10,219,055

LOUDSPEAKER MODULE

GOERTEK INC., Weifang (C...

1. A loudspeaker module, comprising: a module housing, wherein a loudspeaker unit is accommodated in the module housing, the loudspeaker unit comprises a unit housing and a unit front cover combined with each other, and a vibration system and a magnetic circuit system are accommodated in a space defined by the unit housing and the unit front cover, wherein an end surface of at least one sidewall of the unit housing is provided with an ultrasonic surface ultrasonically welded with the module housing, and the module housing is provided with an ultrasonic line which is provided at a position corresponding to the ultrasonic surface and is combined with the ultrasonic surface by ultrasonic welding, and wherein the loudspeaker unit is disposed adjacent to an edge of one side of the module housing, a sidewall of the unit housing is exposed to the outside of the module housing, and the ultrasonic surface is provided on the sidewall of the unit housing exposed to the outside of the module housing.

US Pat. No. 10,219,054

PROTECTIVE MEMBER FOR ACOUSTIC COMPONENT AND WATERPROOF CASE

NITTO DENKO CORPORATION, ...

1. A protective member for an acoustic component, the protective member comprising a sound-transmissive sheet that consists essentially of an elastomer,wherein no lamination of another layer is formed on any major surface of the sound-transmissive sheet, or a lamination of another layer is formed only on an edge region of any major surface of the sound-transmissive sheet; and
wherein the elastomer has a type A hardness in a range from 20 to 80 as measured according to JIS K 6253, and the sound-transmissive sheet has a thickness of 10 to 150 μm.

US Pat. No. 10,219,053

FIBER-TO-COAX CONVERSION UNIT AND METHOD OF USING SAME

Viavi Solutions, Inc., S...

1. An apparatus, comprising:a housing,
an optical network unit positioned in the housing, the optical network unit configured to convert a fiber optical signal into an electrical signal suitable for transmission via an Ethernet cable, and
an adaptor positioned in the housing and connected to the optical network unit, the adaptor configured to convert the electrical signal into an Ethernet-based RF signal, the adaptor including a port configured to receive a coaxial cable connector.

US Pat. No. 10,219,052

AGILE RESOURCE ON DEMAND SERVICE PROVISIONING IN SOFTWARE DEFINED OPTICAL NETWORKS

FUTUREWEI TECHNOLOGIES, I...

1. A method performed by a controller in signal communication with a reconfigurable optical add-drop multiplexer (ROADM) controlling a first link in an optical network portion of a communications network, the method comprising:receiving, by the controller, a first request for a first connection, the first link being in a first route in the communications network for the first connection;
sending, by the controller, first commands to the ROADM to allocate first bandwidth to the first link for at least the first connection;
receiving, by the controller, a second request for a second connection, the first link being in a second route in the communications network for the second connection; and
sending, by the controller, second commands to the ROADM to allocate second bandwidth to the first link for at least the first and second connections.
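
The controller-side bookkeeping described in this claim can be sketched without any real SDN or ROADM API. The code below is a dictionary-based illustration with made-up link capacities in GHz of spectrum; the "command" is a stubbed print, and all names are assumptions.

    # Minimal controller sketch: successive connection requests that route over
    # the first link each increase the bandwidth commanded on that link.

    LINK_CAPACITY_GHZ = 4800.0   # assumed usable spectrum on a link

    class Controller:
        def __init__(self):
            self.allocated = {}                       # link -> total allocated bandwidth

        def send_commands(self, link, bandwidth):
            # stand-in for commanding the ROADM; here we just record the allocation
            print(f"ROADM command: allocate {bandwidth} GHz on {link}")

        def handle_request(self, connection, route, bandwidth):
            for link in route:
                total = self.allocated.get(link, 0.0) + bandwidth
                if total > LINK_CAPACITY_GHZ:
                    raise RuntimeError(f"{link} has insufficient spectrum for {connection}")
                self.allocated[link] = total
                self.send_commands(link, total)       # allocate for all connections on the link

    ctrl = Controller()
    ctrl.handle_request("conn-1", ["link-A"], 100.0)          # first request over the first link
    ctrl.handle_request("conn-2", ["link-A", "link-B"], 50.0) # second request also over the first link
    print(ctrl.allocated)                                      # {'link-A': 150.0, 'link-B': 50.0}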

US Pat. No. 10,219,051

COMMUNICATION PLATFORM WITH FLEXIBLE PHOTONICS PAYLOAD

1. An apparatus, comprising:a spacecraft; and
a payload positioned on the spacecraft, the payload is configured to receive a wireless communication and to select a subset of sub-bands of a first optical signal representing the wireless communication, the selecting being such that bandwidth and center frequency of the sub-bands are independently programmable for each sub-band, the payload is configured to send content of the selected subset of sub-bands to one or more entities off of the spacecraft.

US Pat. No. 10,219,050

VIRTUAL LINE CARDS IN A DISAGGREGATED OPTICAL TRANSPORT NETWORK SWITCHING SYSTEM

FUJITSU LIMITED, Kawasak...

1. An optical transport networking switching system comprising:an Ethernet fabric including a number M of Ethernet switches, each of the M Ethernet switches having a number N of Ethernet switch ports, each of the N Ethernet switch ports having a number P of Ethernet switch sub-ports, wherein a variable i having a value ranging from 1 to M denotes the ith Ethernet switch corresponding to one of the M Ethernet switches, a variable j having a value ranging from 1 to N denotes the jth Ethernet switch port corresponding to one of the N Ethernet switch ports, and a variable k having a value ranging from 1 to P denotes the kth Ethernet switch sub-port corresponding to one of the P Ethernet switch sub-ports, and wherein N, M, and P are greater than one; and
a plug-in universal (PIU) module having M PIU ports, wherein the ith PIU port of the M PIU ports corresponds to the ith Ethernet switch,
wherein the optical transport networking switching system switches optical data units through the Ethernet fabric using the PIU modules and a virtual switch fabric associated with the PIU modules.

US Pat. No. 10,219,049

OPTICAL RECEPTION APPARATUS, OPTICAL TRANSMISSION APPARATUS, OPTICAL COMMUNICATION SYSTEM, AND SKEW ADJUSTING METHOD

FUJITSU LIMITED, Kawasak...

1. An optical reception apparatus comprising:a memory; and
a processor coupled to the memory, wherein the processor executes a process comprising:
receiving an optical signal including a plurality of first pilot symbols obtained by modulating values of bits in a predetermined bit pattern by an optical transmission apparatus by a BPSK method in an IQ complex plane, and converting the received optical signal into an electrical signal;
performing suppression processing to suppress fluctuations in amplitude of the electrical signal;
extracting the first pilot symbols from the electrical signal having been subjected to the suppression processing;
first calculating a ratio of an amplitude component to a phase component of each of the first pilot symbols extracted by the extracting; and
transmitting information relating to skew adjustment based on the ratio of the amplitude component to the phase component calculated by the first calculating for each of a plurality of different control values for skew to the optical transmission apparatus.
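
The feedback loop in this claim (report a per-pilot amplitude-to-phase ratio for each candidate skew control value) can be illustrated very roughly. In the sketch below, skew is crudely modeled as a residual phase rotation of the BPSK pilot constellation, and the "ratio of the amplitude component to the phase component" is taken as the mean of |I|/|Q|; real IQ skew does not behave exactly like this, and every constant is an assumption. The point is only to show the ratio-based search over control values.

    # Minimal ratio-based skew-search sketch (toy model, not real skew physics).
    import numpy as np

    rng = np.random.default_rng(4)
    pilots = rng.choice([-1.0, 1.0], size=256)        # BPSK pilot symbols (known bit pattern)
    true_skew_ps = 3.0                                # transmitter skew to be compensated (assumed)
    K = 0.2                                           # residual-skew-to-phase factor (assumed)

    def receive(control_ps):
        residual = true_skew_ps - control_ps
        noise = 0.02 * (rng.standard_normal(256) + 1j * rng.standard_normal(256))
        return pilots * np.exp(1j * K * residual) + noise

    def ratio_metric(rx):
        # ratio of the amplitude (in-phase) component to the phase (quadrature) component
        return float(np.mean(np.abs(rx.real) / (np.abs(rx.imag) + 1e-9)))

    candidates_ps = np.arange(0.0, 6.5, 0.5)          # plurality of different control values for skew
    metrics = {c: ratio_metric(receive(c)) for c in candidates_ps}
    best = max(metrics, key=metrics.get)
    print("control value chosen from the reported ratios:", best, "ps")   # expected near 3.0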

US Pat. No. 10,219,048

METHOD AND SYSTEM FOR GENERATING REFERENCES TO RELATED VIDEO

ARRIS Enterprises LLC, S...

1. A method of generating references to related videos, comprising the steps of:comparing a keyword-context pair for a primary video to a plurality of keyword-context pairings, wherein:
the keyword-context pair for the primary video comprises: a keyword comprising one or more words identified within closed caption text of the primary video, and a context of the keyword, the context comprising program metadata of the primary video,
the plurality of keyword-context pairings is provided in a knowledge base that is a stored database separate from the primary video, the knowledge base comprising:
a pre-determined listing of a plurality of known keywords, each keyword comprising no express identification of, and no direct reference to, another video,
a plurality of known contexts, and
a plurality of pre-determined rules,
the plurality of keyword-context pairings stored in the knowledge base pairs each one of the known keywords with one or more of the known contexts,
each one of the keyword-context pairings stored in the knowledge base is associated with a corresponding one of the pre-determined rules stored in the knowledge base, and
each one of the pre-determined rules comprises one or more actions that, when performed, identify a reference video from the associated keyword-context pairing, the rules being pre-determined using semantic matching that is based on a contextual meaning of the keyword in the known context of the associated keyword-content pairing, to deduce a reference to another video;
based on the comparing, determining a match of the keyword-context pair with a matching one of the keyword-context pairings in the listing, wherein the keyword-context pair comprises: the keyword identified from the primary video, and the context of the keyword;
taking the one or more actions specified by the rule in the listing associated with the matching one of the keyword-context pairings in the listing;
obtaining, from a result of the one or more actions, information identifying a reference video related to the primary video; and
creating an annotation comprising program metadata of the reference video related to the primary video.
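
The knowledge-base lookup in this claim maps naturally onto a dictionary keyed by keyword-context pairings. The sketch below is purely illustrative: the rules are small functions returning a reference-video identifier, and every keyword, context, and video name is hypothetical.

    # Minimal keyword-context knowledge-base sketch.

    def rule_super_bowl(keyword, context):
        # action: look up a related game broadcast for this sports context
        return {"reference_video": "superbowl-2018-full-game", "reason": f"{keyword} in {context}"}

    def rule_moon_landing(keyword, context):
        return {"reference_video": "apollo-11-documentary", "reason": f"{keyword} in {context}"}

    knowledge_base = {                       # keyword-context pairings -> pre-determined rules
        ("super bowl", "sports talk show"): rule_super_bowl,
        ("moon landing", "history documentary"): rule_moon_landing,
    }

    def annotate(primary_keyword, primary_context):
        rule = knowledge_base.get((primary_keyword, primary_context))
        if rule is None:
            return None                                          # no matching pairing
        result = rule(primary_keyword, primary_context)          # take the rule's actions
        return {"annotation": result["reference_video"], "why": result["reason"]}

    # keyword pulled from closed captions, context from program metadata
    print(annotate("super bowl", "sports talk show"))
    print(annotate("super bowl", "cooking show"))                # no pairing -> None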

US Pat. No. 10,219,047

MEDIA CONTENT MATCHING USING CONTEXTUAL INFORMATION

GOOGLE LLC, Mountain Vie...

1. A method, comprising:determining whether a similarity value between characteristics of a portion of a probe media content item and characteristics of a portion of a reference media content of a reference media content item exceeds a threshold, wherein the threshold depends on the characteristics of the portion of the probe media content item and the characteristics of the portion of the reference media content;
in response to determining that the similarity value exceeds the threshold, determining that the probe media content item includes the portion of reference media content of the reference media content item;
upon determining that the probe media content item includes the portion of the reference media content, receiving a pair of media content items comprising the probe media content item and the reference media content item;
receiving metadata associated with the pair, the metadata associated with the pair providing information about the probe media content item, the reference media content item, and the portion of reference media content reused in the probe media content;
classifying the pair into a reuse group of a plurality of reuse groups based on the metadata associated with the pair, each of the plurality of reuse groups associated with a corresponding amount of reuse different from other reuse groups, the corresponding amount of reuse used for determining whether or not the probe media content item is to be flagged for removal;
comparing a first amount of the portion of reference media content to a second amount of reuse associated with the reuse group into which the pair is classified; and
responsive to the first amount of the portion of reference media content being greater than the second amount of reuse associated with the reuse group, flagging the probe media content item for removal.
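
The decision chain in this claim (threshold check, reuse-group classification, amount comparison, flagging) can be condensed into a small sketch. Everything below is an assumption for illustration: the fingerprint similarity is reduced to a single number, the thresholds are hand-picked, and the reuse groups are defined by allowed seconds of reuse.

    # Minimal reuse-group flagging sketch.

    REUSE_GROUPS = {              # group -> allowed amount of reuse (seconds, assumed)
        "commentary": 120.0,
        "music_clip": 15.0,
        "full_upload": 0.0,
    }

    def classify_pair(metadata):
        # toy classification based on metadata about the probe/reference pair
        if metadata.get("probe_has_voiceover"):
            return "commentary"
        if metadata.get("reference_type") == "music":
            return "music_clip"
        return "full_upload"

    def evaluate(similarity, threshold, reused_seconds, metadata):
        if similarity <= threshold:
            return "no match"                                     # portion not considered reused
        group = classify_pair(metadata)                           # reuse group for the pair
        allowed = REUSE_GROUPS[group]
        if reused_seconds > allowed:                              # compare amounts of reuse
            return f"flag for removal (group={group}, reused={reused_seconds}s > {allowed}s)"
        return f"keep (group={group}, reused={reused_seconds}s <= {allowed}s)"

    print(evaluate(0.92, 0.8, 40.0, {"probe_has_voiceover": True}))
    print(evaluate(0.92, 0.8, 40.0, {"reference_type": "music"}))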

US Pat. No. 10,219,046

DISTRIBUTION MANAGEMENT FOR MEDIA CONTENT ASSETS

PREMIERE DIGITAL SERVICES...

1. A computer-implemented method for distributing media content comprising:acquiring, into a computer database, title information for a media asset;
maintaining in the computer database, distribution requirements for one or more retailers and one or more territories, wherein the distribution requirements comprise one or more file requirements for one or more files required to distribute the media asset;
selecting one or more desired retailers from the one or more retailers;
based on the title information and the one or more desired retailers, automatically selecting one or more territories with distribution requirements that match the title information, wherein the title information comprises an original spoken language of the media asset;
displaying the one or more file requirements for the selected one or more desired retailers and the selected one or more territories, wherein the displaying the file requirements comprises interactively indicating whether one or more of the one or more files is available, is not available, or should be automatically created including displaying a cost associated with a creation of the one or more files;
creating an order, wherein the creating is based on the title information, the selected one or more desired retailers, the selected one or more territories, and the one or more file requirements, wherein the title information comprises one or more languages of the one or more files that are to be included in the order, and wherein the automatically selecting the one or more territories comprises selecting the one or more territories with language requirements that match the one or more languages of the one or more files that are to be included in the order;
receiving the one or more files identified by the one or more file requirements; and
based on the order, automatically submitting the one or more received files for distribution of the media asset via the selected one or more retailers in the one or more territories.

US Pat. No. 10,219,045

SERVER, IMAGE PROVIDING APPARATUS, AND IMAGE PROVIDING SYSTEM COMPRISING SAME

LG ELECTRONICS INC., Seo...

1. A server comprising: a memory to store a personal server list and network information of an image providing apparatus corresponding to the personal server list;an interface to receive a connection request from a terminal in response to a Web address input to the terminal;
a processor electrically coupled to the memory and the interface, and configured to: perform a control operation to transmit information for connection to a personal server to the terminal according to a connection request;
perform a control operation, when login information is received from the terminal, to transmit personal server list information corresponding to the login information to the terminal;
perform a control operation, when the terminal makes a request for information corresponding to a specific personal server list of the personal server list, to transmit, to the terminal, network information of an image providing apparatus corresponding to a corresponding personal server,
wherein a corresponding personal server stores thumbnails for a shared content list and a recommended content list,
wherein the network information comprises public IP information and private IP information of the image providing apparatus,
wherein the private IP information changes whenever the image providing apparatus is turned on, and wherein the interface connects to the image providing apparatus to receive new network information of the image providing apparatus whenever the image providing apparatus is turned on; and
perform a control operation to update the memory with the new network information.

US Pat. No. 10,219,044

ELECTRONIC PROGRAMMING GUIDE WITH SELECTABLE CATEGORIES

Intel Corporation, Santa...

1. A computer system capable of being used in association with a television and a remote control, the computer system comprising:a wireless communication interface capable of permitting, when the computer system is in operation, wireless communication between the computer system and a remote control;
a processor to execute program instructions that, when executed, result in performance of operations comprising:
displaying, on the television, a graphical user interface comprising an electronic programming guide that is capable of presenting, in response to user input, user selectable icons and related information associated, at least in part, with video content items that are capable of being selected, via the graphical user interface, for viewing on the television, the video content items to be received, at least in part, by the computer system via Internet;
the icons comprising video content category icons, a search icon, and a chat icon;
the category icons being associated with one or more respective selectable video content subcategory icons whose selection results in display of one or more associated selectable video content item icons;
the search icon being to facilitate keyword searching for available video content items, the available video content items including at least one video content item associated with currently ongoing video content;
the chat icon being capable of being selected, after the at least one video content item has been selected for display, to permit chatting related to the at least one video content item;
wherein:
the computer system is capable, when the computer system is in the operation, of receiving user-entered information indicating that a user of the computer system has decided that certain video content is to be categorized as being in favorite video content categorization as defined by the user;
the favorite video content categorization is to be associated with another icon that, when selected by the user when the computer system is in the operation, results in displaying, via the graphical user interface, of favorite video content item icons that have been categorized as being in the favorite video content categorization;
the favorite video content item icons are configurable to include at least one favorite video content item icon and at least one other favorite video content item icon;
the at least one favorite video content item icon is associated with other video content that is not yet available for viewing; and
the at least one other favorite video content item icon is associated with broadcast news video content that is currently available for viewing;
wherein the at least one favorite video content item icon and the at least one other favorite video content item icon are configurable to be displayed in separate categorizations.

US Pat. No. 10,219,043

SHARING VIDEO CONTENT FROM A SET TOP BOX THROUGH A MOBILE PHONE

11. A device comprising:a processing system including a processor of a mobile phone; and
a memory that stores executable instructions that, when executed by the processing system, facilitate performance of operations comprising:
receiving a first signal during presentation of video content at a display device coupled to a media processor, the first signal indicating that sharing of a portion of the video content is to be performed;
receiving a second signal initiating a message via a messaging client of the mobile phone;
obtaining the portion of the video content from a cache accessible to the media processor;
receiving a third signal representing an endpoint of a video clip from the portion of the video content;
editing the video clip in accordance with user input at the mobile phone;
converting a format of the video clip, thereby producing a converted video clip and enabling presentation of the converted video clip at a recipient device; and
transmitting the converted video clip to the recipient device via the messaging client,
wherein the media processor and the mobile phone comprise a natively integrated device, the converted video clip accordingly being produced and transmitted without installation of an application on the mobile phone being required, and wherein the media processor and the mobile phone are mutually authenticated.

US Pat. No. 10,219,042

METHOD AND APPARATUS FOR MANAGING PERSONAL CONTENT

1. A method, comprising:obtaining, by a system including a processor, first personal content associated with a first mobile communication device, wherein the first personal content is generated based on sensory information obtained by the first mobile communication device and a second mobile communications device, wherein the first mobile communication device obtains a portion of the sensory information from the second mobile communication device in response to a broadcast of a wireless signal by the first mobile communication device representing a notice to obtain additional sensory information as the portion of the sensory information, wherein the sensory information is associated with an environment of the first mobile communication device, and wherein the sensory information comprises images of the environment;
obtaining, by the system, image recognition information associated with an object;
performing, by the system, image recognition on the first personal content using the image recognition information to detect the object being present in a first image of the first personal content;
obtaining, by the system, second personal content associated with the first mobile communication device;
performing, by the system, additional image recognition on the second personal content using the image recognition information to detect the object being present in a second image of the second personal content; and
generating, by the system, combined media content based on the object being present in both the first personal content and the second personal content.

US Pat. No. 10,219,041

SYSTEMS AND METHODS FOR TRANSMITTING MEDIA ASSOCIATED WITH A MEASURE OF QUALITY BASED ON LEVEL OF GAME PLAY IN AN INTERACTIVE VIDEO GAMING ENVIRONMENT

ROVI GUIDES, INC., San J...

1. A method for providing supplemental content in 3D interactive video gaming environments, the method comprising:generating, by processing circuitry, a display of a 3D interactive video gaming environment;
identifying a subset of a plurality of videos, stored on a storage device, the plurality of videos having been provided by a plurality of other users;
selecting, by the processing circuitry, a video of the subset of the plurality of videos for display;
determining a position between a user and a background object in the 3D interactive video gaming environment; and
generating for display the video at the position.

US Pat. No. 10,219,040

VIDEO FRAME BOOKMARKING USER INTERFACE COMPONENT

THE DIRECTV GROUP, INC., ...

7. A system for bookmarking a frame of media content comprising:(a) a computer comprising a processor;
(b) a media player executing on the computer via the processor; and
(c) a user interface (UI) component that controls playback of the media content in the media player, wherein the UI component comprises:
(1) a circular progress bar;
(2) a scaled circular keyframe within the circular progress bar, wherein the scaled circular keyframe comprises a preview of the frame of the media content located at a time within the media content that is identified by a progress marker;
(3) a bookmark indicator that is displayed on the circular progress bar, wherein the bookmark indicator reflects a location in the media content where the frame is located, and wherein the frame is identified by a user, and wherein the bookmark indicator is different from the progress marker; and
(4) a UI feature for:
(i) accepting user input identifying the frame within the media content;
(ii) accepting user input associating the frame with a bookmark;
(iii) accepting user input selecting the bookmark indicator; and
(iv) in response to the user input selecting the bookmark indicator, playing the media content from the frame identified by the bookmark indicator.

US Pat. No. 10,219,039

METHODS AND APPARATUS TO ASSIGN VIEWERS TO MEDIA METER DATA

The Nielsen Company (US),...

1. A method to impute panelist household viewing behavior, the method comprising:monitoring, via a first meter, a first media presentation device in a tuning household associated with tuning panelists, the first meter structured to collect first media identification data indicative of first media to which the first media presentation device is tuned without collecting person identifying information indicative of which of the tuning panelists are exposed to the first media;
monitoring, via second meters, second media presentation devices in a set of viewing households associated with viewing panelists, the second meters structured to (i) collect second media identification data indicative of respective second media to which the second media presentation devices are tuned and (ii) collect person identifying information indicative of which of the viewing panelists are exposed to the respective second media;
transmitting, via a network, first data from the first meter, the first data including the first media identification data, the first data not including the person identifying information associated with the tuning panelists;
transmitting, via the network, second data from the second meters, the second data including the second media identification data and the person identifying information associated with the viewing panelists;
calculating, by executing an instruction with a processor, first viewing probabilities for the tuning panelists during a first set of time periods, the first viewing probabilities calculated based on the first data;
identifying a plurality of candidate viewing households from among the set of viewing households based on a similarity of household characteristics with the tuning household;
calculating, by executing an instruction with the processor, second viewing probabilities for the viewing panelists in the plurality of candidate viewing households during a second set of time periods, the second viewing probabilities calculated based on the second data;
identifying, by executing an instruction with the processor, a matching one of the plurality of candidate viewing households that matches the tuning household based on an absolute difference value between an average value of the first viewing probabilities and respective ones of average values of the second viewing probabilities; and
imputing, by executing an instruction with the processor, ones of tuning minutes of the tuning household as viewing minutes for ones of the tuning panelists when the matching one of the plurality of candidate viewing households exhibits viewing activity during one of the second set of time periods that matches one of the first set of time periods, the tuning minutes indicative of when the first media presentation device was tuned to the first media, the viewing minutes indicative of when the ones of the tuning panelists were exposed to the first media to which the first media presentation device was tuned.
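
The matching and imputation steps of this claim can be illustrated with hard-coded toy data. In the sketch below, the viewing probabilities are small arrays with one value per time period; the candidate household whose average probability has the smallest absolute difference from the tuning household's average is chosen, and tuning minutes are imputed as viewing minutes only for periods in which that household showed viewing activity. All numbers are made up.

    # Minimal household-matching and minute-imputation sketch.
    import numpy as np

    tuning_probs = np.array([0.2, 0.6, 0.7, 0.1])           # first viewing probabilities (tuning household)
    candidates = {                                           # second viewing probabilities per candidate household
        "HH-A": np.array([0.1, 0.2, 0.3, 0.1]),
        "HH-B": np.array([0.3, 0.5, 0.6, 0.2]),
    }
    tuning_minutes = np.array([30, 45, 60, 15])              # minutes tuned in each time period
    candidate_viewing = {"HH-A": [False, True, False, False],
                         "HH-B": [False, True, True, False]} # observed viewing activity per period

    target = tuning_probs.mean()
    diffs = {hh: abs(probs.mean() - target) for hh, probs in candidates.items()}
    match = min(diffs, key=diffs.get)                        # smallest absolute difference of averages
    print("matching household:", match)

    imputed = [minutes if viewed else 0
               for minutes, viewed in zip(tuning_minutes, candidate_viewing[match])]
    print("imputed viewing minutes per period:", imputed)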

US Pat. No. 10,219,038

HANDLING DISRUPTION IN CONTENT STREAMS RECEIVED AT A PLAYER FROM A CONTENT RETRANSMITTER

SLING MEDIA PVT LTD, Ban...

1. A method for handling disruption in content streams received at a player device from a content retransmitter, the method comprising:the player device monitoring bandwidth of a communication connection utilized to receive a first portion of content at the player device from the content retransmitter wherein the first portion of the content is encoded at a first resolution level; and
the player device signaling the content retransmitter to encode a second portion of the content at a second resolution level when the player device determines that the bandwidth of the communication connection has decreased below a threshold value, wherein the player device determines that the bandwidth of the communication connection has decreased below the threshold value by measuring a number of frames of the first portion of the content dropped by the player device during at least one of decoding the first portion of the content or rendering the first portion of the content.
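
Illustrative note (not part of the claim): the player's decision logic amounts to counting frames it dropped while decoding or rendering and signaling the retransmitter when that count crosses a threshold. A minimal sketch follows; the threshold value and the signaling hook are assumptions for illustration only.

    # Player-side check: infer a bandwidth drop from dropped frames and ask the
    # content retransmitter to encode the next portion at a lower resolution.

    DROPPED_FRAME_THRESHOLD = 12   # frames dropped per monitoring window (assumed)

    def check_and_signal(dropped_frames, signal_retransmitter):
        """dropped_frames: frames dropped during decode/render of the first portion.
        signal_retransmitter: callable that requests a new resolution level."""
        if dropped_frames > DROPPED_FRAME_THRESHOLD:
            signal_retransmitter(resolution_level="lower")
            return True
        return False

    # Usage with a stand-in signaling function:
    check_and_signal(20, lambda resolution_level: print("request", resolution_level))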

US Pat. No. 10,219,036

DYNAMICALLY ADJUSTING VIDEO MERCHANDISING TO REFLECT USER PREFERENCES

NETFLIX, INC., Los Gatos...

1. A method, comprising:displaying a plurality of still images associated with a plurality of video assets within a display, wherein the plurality of still images are amenable to interaction;
detecting that a user has selected a first video asset included in the plurality of video assets based on at least one interaction with a first still image included in the plurality of still images and corresponding to the first video asset;
determining that the user has continued interest in the first video asset;
in response to the user having continued interest:
determining a target scene included in the first video asset for the user;
displaying a video of the target scene within the display; and
continuing to display the plurality of still images within the display concurrently with displaying the video of the target scene, wherein the plurality of still images remains amenable to interaction;
determining that the user has continued interest in the first video asset after displaying the video of the target scene; and
in response to the user having continued interest after displaying the video of the target scene and prior to a request from the user to play back the first video asset:
determining a starting scene included in the first video asset; and
playing back the first video asset from the starting scene within the display.

US Pat. No. 10,219,035

SYSTEM AND METHOD FOR PROVIDING A TELEVISION NETWORK CUSTOMIZED FOR AN END USER

1. A system for treating an individual at an end user location coping with neurodegeneration, comprising:a first signal feed including content including personal imagery or video comprising persons, places, or things that are personal to the life of the specific individual;
a second signal feed including visual or audiovisual content; and
a control device configured to:
access the first and second signal feeds;
select specific content from the first signal feed;
select specific content from the second signal feed;
combine the selected content from the first signal feed and the selected content from the second signal feed to generate an output feed; and
present the generated output feed with an audiovisual display of the combined selected content from each of the first and second signal feed at the end user location to treat the neurodegeneration of the specific individual by refreshing a memory of the specific individual with the personal imagery or video provided from the first signal feed and visual or audiovisual content provided from the second signal feed;
electronically detect an event at the end user location via a monitoring device; and
select specific content from the first signal feed responsive to the detected event at the end user location and modify the output feed to include the selected specific content.

US Pat. No. 10,219,034

METHODS AND APPARATUS TO DETECT SPILLOVER IN AN AUDIENCE MONITORING SYSTEM

THE NIELSEN COMPANY (US),...

1. An apparatus comprising:an audio sample selector to select first audio samples of a first audio signal received by a first microphone, the first audio signal associated with media;
an offset selector to select first offset samples corresponding to a second audio signal received by a second microphone, the second audio signal associated with media, the first offset samples offset from the first audio samples by a first offset value;
a cluster analyzer to, if a first count of occurrences of the first offset value satisfies a count threshold, accept the first offset value into a cluster;
a weighted averager to calculate a weighted average of a second offset value and the first offset value in the cluster;
a direction determiner to determine an origin direction of the media based on the weighted average; and
a code reader to, when the origin direction is within a threshold angle, monitor audio codes embedded in the media.
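
Illustrative note (not part of the claim): one reasonable reading of the clustering and direction steps is sketched below in Python. The count threshold, the angle threshold, and the linear mapping from sample offset to angle are assumptions, not values from the patent.

    # Rough sketch: offsets between the two microphone signals that recur often
    # enough are accepted into a cluster, a weighted average offset is formed,
    # and embedded codes are monitored only when the implied origin direction
    # falls within a threshold angle.

    from collections import Counter

    COUNT_THRESHOLD = 5          # occurrences needed to accept an offset (assumed)
    ANGLE_THRESHOLD_DEG = 30.0   # acceptable origin-direction window (assumed)

    def accept_offsets(offset_observations):
        counts = Counter(offset_observations)
        return {off: n for off, n in counts.items() if n >= COUNT_THRESHOLD}

    def weighted_average(cluster):
        total = sum(cluster.values())
        return sum(off * n for off, n in cluster.items()) / total

    def origin_direction_deg(avg_offset_samples, samples_per_degree=2.0):
        # Assumed linear mapping from sample offset to angle, for illustration.
        return avg_offset_samples / samples_per_degree

    offsets = [4, 4, 4, 4, 4, 5, 5, 5, 5, 5, 9]
    cluster = accept_offsets(offsets)                       # {4: 5, 5: 5}
    direction = origin_direction_deg(weighted_average(cluster))
    monitor_codes = abs(direction) <= ANGLE_THRESHOLD_DEG   # True here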

US Pat. No. 10,219,033

METHOD AND APPARATUS OF MANAGING VISUAL CONTENT

SNELL ADVANCED MEDIA LIMI...

1. A method of managing visual content, comprising the steps of:receiving a stream of video fingerprints, derived in a fingerprint generator by an irreversible data reduction process from respective temporal regions within a particular visual content stream, at a fingerprint processor that is physically separate from the fingerprint generator via a communication network; and
processing said fingerprints in the fingerprint processor to generate metadata which is not directly encoded in the fingerprints; wherein said processing includes:
windowing the stream of video fingerprints with a time window,
deriving frequencies of occurrence of particular fingerprint values or ranges of fingerprint values within each time window by converting each set of particular fingerprint values to a histogram,
determining statistical moments or entropy values of said frequencies of occurrence,
comparing said statistical moments or entropy values with expected values for particular types of content,
generating metadata representing the type of the visual content, and
providing the metadata to a control system for managing video content distribution.
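
Illustrative note (not part of the claim): the windowed-statistics idea can be sketched as bucketing fingerprint values into a histogram, computing an entropy value, and comparing it with expected values per content type. The bucket count and the expected-value table below are assumptions for illustration.

    # Sketch of windowed fingerprint statistics: histogram, entropy, and a
    # nearest-expected-value classification into a content type.

    import math
    from collections import Counter

    def histogram(fingerprints, buckets=16, max_value=256):
        counts = Counter(int(f * buckets / max_value) for f in fingerprints)
        total = len(fingerprints)
        return {b: counts.get(b, 0) / total for b in range(buckets)}

    def entropy(hist):
        return -sum(p * math.log2(p) for p in hist.values() if p > 0)

    def classify(window_entropy, expected={"static card": 1.0, "live action": 3.5}):
        # Assumed rule: pick the content type with the closest expected entropy.
        return min(expected, key=lambda t: abs(expected[t] - window_entropy))

    window = [10, 12, 11, 200, 10, 13, 12, 11]          # fingerprint values in one window
    metadata = {"content_type": classify(entropy(histogram(window)))}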

US Pat. No. 10,219,032

TUNING BEHAVIOR ENHANCEMENT

ARRIS Enterprises LLC, S...

1. A method for processing content received from a communications network, the method comprising:in one or more processors of a Customer Premises Equipment (CPE) device communicatively coupled to a communications network:
receiving from the communications network a signal containing a program;
processing the received signal to display the program;
receiving a request to tune to other content;
determining whether the request was generated from a user input device or from a module of the CPE device; and
differentiating, based on a result of the determining, between a user-initiated command to tune to other content and a non-user initiated command to tune to other content, the differentiating comprising:
responsively to the request being generated from a user input device, stopping the processing of the signal containing the program while obtaining the other content, and during a delay after the user-initiated command to tune to other content has been received, displaying one of a mute to black and a mute to still; and
responsively to the request being generated from a module of the CPE device, during a delay after the non-user initiated command to tune to other content has been received, continuing to process the signal containing the program for display of the program while obtaining the other content, and continuing to display the program during the delay.
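
Illustrative note (not part of the claim): the differentiation reduces to a conditional on the request source. The sketch below uses hypothetical Display and Tuner stubs purely to make the branch runnable; none of the names come from the patent.

    # User-initiated tune: stop the current program and mute to black (or still)
    # during the delay. Module-initiated tune: keep showing the current program.

    class Display:
        def show_mute(self, mode): print("mute to", mode)
        def keep_showing_program(self): print("program continues during delay")

    class Tuner:
        def stop_current_program(self): print("stop processing current signal")
        def acquire_other_content(self): print("obtaining other content")

    def handle_tune_request(source, display, tuner):
        """source: 'user_input' or 'cpe_module' (assumed labels)."""
        if source == "user_input":
            tuner.stop_current_program()
            display.show_mute("black")          # or a mute to still, per the claim
        else:
            display.keep_showing_program()      # non-user tune keeps the program up
        tuner.acquire_other_content()

    handle_tune_request("user_input", Display(), Tuner())
    handle_tune_request("cpe_module", Display(), Tuner())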

US Pat. No. 10,219,031

WIRELESS VIDEO/AUDIO SIGNAL TRANSMITTER/RECEIVER

Untethered Technology, LL...

1. A system for wirelessly mirroring video from a mobile device to a display screen, the system comprising:a transmitter device comprising:
a communications connector configured to electronically connect to a standard communications port on a mobile device;
a first one or more video and audio signal processors electronically connected to the communications connector and preconfigured to provide communications with a second one or more video and audio signal processors, the first one or more video and audio signal processors configured to:
receive a video and audio signal from the mobile device via the communications connector;
generate, based on the video and audio signal received from the mobile device, an HDMI video and audio signal; and
generate, based on the HDMI video and audio signal, a wireless network transmission signal;
a first antenna; and
a first RF transceiver electronically connected to the first one or more video and audio signal processors and to the first antenna, the first RF transceiver configured to communicate the wireless network transmission signal wirelessly to a second RF transceiver via the first antenna and without retransmission by additional wireless networking devices; and
a receiver device preconfigured for operation with the transmitter device, the receiver device comprising:
a second antenna;
an HDMI output connector configured to electronically connect to an HDMI input port on a display screen;
the second RF transceiver electronically connected to the second one or more video and audio signal processors and to the second antenna, the second RF transceiver configured to receive the wireless network transmission signal from the first RF transceiver via the second antenna and communicate the wireless network transmission signal to the second one or more video and audio signal processors; and
the second one or more video and audio signal processors electronically connected to the HDMI output connector and preconfigured to provide communications with the first one or more video and audio signal processors, the second one or more video and audio signal processors configured to:
receive the wireless network transmission signal from the second RF transceiver;
generate, based on the wireless network transmission signal, the HDMI video and audio signal; and
output the HDMI video and audio signal to the display screen via the HDMI output connector.

US Pat. No. 10,219,030

MULTI-INTERFACE STREAMING MEDIA SYSTEM

Roku, Inc., Los Gatos, C...

1. A system, comprising:an audio/visual device; and
a media device for accessing streamed data and operatively coupled to the audio/visual device, wherein the media device is configured to
detect a type of audio/visual interface that is utilized by the media device via an audio/visual connector of the media device,
detect whether the media device is properly connected to an external power source via a power connector and a removable power cord, wherein the removable power cord is operatively coupled to the power connector, and
determine whether additional power is required to fully operate the media device based at least on the type of audio/visual interface that is utilized by the media device, wherein the type of audio/visual interface includes a first type of audio/visual interface capable of fully operating the media device without the additional power and a second type of audio/visual interface not capable of fully operating the media device without the additional power.

US Pat. No. 10,219,029

DETERMINING ONLINE CONTENT INSERTION POINTS IN AN ONLINE PUBLICATION

Google LLC, Mountain Vie...

1. A computer-implemented method for determining online content insertion points in an online publication, comprising:receiving, by a break point identifying (“BPI”) computer device in communication with a memory device, a candidate online publication that includes a plurality of audio segments;
determining, by the BPI computer device, a threshold proportional to a total length of the candidate online publication;
comparing, by the BPI computer device, a portion of each of the plurality of audio segments to a plurality of reference audio segments to identify a number of the plurality of audio segments that match one of the plurality of reference audio segments;
determining, by the BPI computer device and responsive to the number of the plurality of audio segments that match one of the plurality of reference audio segments being above a second threshold, a plurality of break candidates within the candidate online publication;
determining, by the BPI computer device, a first aggregate time for a first break candidate of the plurality of break candidates, the first aggregate time comprising a duration between the first break candidate and a first prior break candidate;
determining, by the BPI computer device, the first aggregate time for the first break candidate is less than the threshold proportional to the total length of the candidate online publication;
excluding, responsive to the first aggregate time for the first break candidate being less than the threshold proportional to the total length of the candidate online publication, the first break candidate as a content insertion point within the candidate online publication, wherein the content insertion point represents a point in the candidate online publication for presenting online content;
determining, by the BPI computer device, a second aggregate time for a second break candidate, the second aggregate time comprising a time between the second break candidate and a second prior break candidate;
determining, by the BPI computer device, the second aggregate time for the second break candidate is greater than the threshold;
selecting, responsive to the second aggregate time for the second break candidate being greater than the threshold, the second break candidate as the content insertion point within the candidate online publication; and
storing the content insertion point in association with the candidate online publication in the memory device.
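
Illustrative note (not part of the claim): the aggregate-time test can be sketched as a filter over sorted break candidates, keeping only those far enough from the prior kept break, with the threshold proportional to total length. The 5% proportion and the "measure from the prior kept break" reading are assumptions for illustration.

    # Keep a break candidate as a content insertion point only if the time since
    # the previous kept break meets a threshold proportional to total length.

    def select_insertion_points(break_candidates, total_length_s, proportion=0.05):
        """break_candidates: sorted timestamps (seconds) of candidate breaks."""
        threshold = proportion * total_length_s
        selected, previous = [], 0.0
        for candidate in break_candidates:
            aggregate_time = candidate - previous      # time since prior break
            if aggregate_time >= threshold:
                selected.append(candidate)
                previous = candidate
            # else: exclude the candidate and keep measuring from the prior break
        return selected

    points = select_insertion_points([30, 50, 200, 210, 600], total_length_s=1800)
    # threshold = 90 s, so 200 and 600 are kept; 30, 50, and 210 are excluded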

US Pat. No. 10,219,028

DISPLAY, DISPLAY DEVICE, PLAYER, PLAYING DEVICE, AND PLAYING DISPLAY SYSTEM

BOE Technology Group Co.,...

1. A display, comprising:at least two first display interfaces, a decoder, and at least two data channels, wherein the at least two first display interfaces have a one-to-one correspondence with the at least two data channels, and a correspondence between the at least two first display interfaces and the at least two data channels is determined by the decoder;
wherein, when each of the at least two first display interfaces is connected, via connecting lines, to a corresponding one of at least two second display interfaces of a player, in a one-to-one connection to transmit at least two first image data streams from the at least two second display interfaces of the player to the at least two first display interfaces, the at least two first display interfaces receive the at least two first image data streams, transmit the received at least two first image data streams to the decoder, each first image data stream of the at least two first image data streams including at least one start data frame and a second image data stream, the start data frame carrying a data channel identifier, wherein the data channel identifier indicates a data channel, among the at least two data channels, corresponding to the first image data stream;
the decoder receives the at least two first image data streams transmitted from the at least two first display interfaces, obtains each second image data stream and data channel identifier corresponding to each second image data stream according to the at least one start data frame included in each of the at least two first image data streams, determines the at least two data channels corresponding to the at least two display interfaces according to the data channel identifiers, and transmits each second image data stream to the corresponding data channel; and
each data channel receives a corresponding second image data stream, and outputs the corresponding second image data stream.

US Pat. No. 10,219,027

SYSTEM FOR PROVIDING MUSIC CONTENT TO A USER

Music Choice, Horsham, P...

1. A method for providing an enhanced television service to a user of a user device in communication with a television system, the method comprising:receiving information indicating that the user desires to consume a selected programmed linear video channel; and
in response to receiving the information, displaying on a display device of the user device a user interface screen, wherein the user interface screen comprises:
i) a first display area for displaying scheduled video content transmitted by the television system on the selected programmed linear channel and in accordance with a video content schedule for the selected programmed linear video channel,
ii) a second display area for displaying a group of graphic images;
displaying, in the first display area, first scheduled video content transmitted by the television system on the selected programmed linear channel in accordance with the video content schedule for the selected programmed linear video channel;
while displaying the first scheduled video content: 1) displaying in the second display area a first group of at least four graphic images, wherein the first group of graphic images is displayed in the second display area in a grid pattern having at least two rows and two columns and 2) displaying an artist list for enabling the user to select an artist, wherein the artist list comprises a set of at least two artist images including a first artist image and a second artist image, where each artist image included in the set identifies an artist, and wherein displaying the artist list comprises displaying the first artist image so that the first artist image is not obscured, and further wherein each graphic image included in the first group of graphic images is associated with a different music video associated with the artist identified by the first artist image;
after the first scheduled video content has ended: 1) automatically displaying, in the first display area, second scheduled video content transmitted by the television system on the selected programmed linear channel in accordance with the video content schedule for the selected programmed linear video channel; 2) automatically adding to the displayed artist list a third artist image such that the third artist image is not obscured; and 3) automatically displaying in the second display area a second group of at least four graphic images, wherein the second group of graphic images is displayed in the second display area in a grid pattern having at least two rows and two columns and each graphic image included in the second group of graphic images is associated with a different music video associated with the artist identified by the third artist image;
while displaying the second scheduled video content in the first display area and the second group of graphic images in the second display area, receiving a user input indicating that the user has selected one of the graphic images included in the second group of graphic images; and
after receiving the user input, causing the music video associated with the selected graphic image to be streamed on-demand to the user device.

US Pat. No. 10,219,026

MOBILE TERMINAL AND METHOD FOR PLAYBACK OF A MULTI-VIEW VIDEO

LG ELECTRONICS INC., Seo...

18. A method of controlling a mobile terminal, the method comprising the steps of:displaying a first frame corresponding to a first playback angle at a first playback time point of a multi-view video and a progress bar corresponding to the multi-view video;
in response to a touch input applied to the progress bar, moving a time indicator displayed at a first position corresponding to the first playback time point to a second position corresponding to a second playback time point of the progress bar;
displaying a first thumbnail image corresponding to the first playback angle at the second playback time point while maintaining display of the first frame, as the time indicator is moved from the first position to the second position;
in response to a touch input applied to the first thumbnail, changing the first thumbnail image into a second thumbnail image to indicate that the first playback angle is increased to a second playback angle at the second playback time point; and
in response to a touch input applied to the second thumbnail image, replacing the first frame with a different frame of the multi-view video and displaying the different frame, the different frame corresponding to the second playback angle indicated by the second thumbnail image at the second playback time point.

US Pat. No. 10,219,025

VIDEO DISTRIBUTION DEVICE, VIDEO DISTRIBUTION METHOD, AND PROGRAM

DWANGO CO., LTD., Chuo-K...

1. A video distribution device comprising:a transmission range determinator that receives information indicating a display position for a video gallery display screen from a terminal device, the transmission range determinator determining, as a transmission range for video gallery data in which display images of video data items are arranged, a range at approximately a center of which the display position indicated by the received information is positioned and which has a size greater than a possible display range of the terminal device; and
a video gallery display screen generator that generates the video gallery display screen in which the display images of the video data items included in the transmission range determined by the transmission range determinator are arranged according to an arrangement of the display images of the video data items which is defined by the video gallery data, wherein the video gallery display screen generator distributes the generated video gallery display screen to the terminal device;
wherein among the display images of the video data items in the video gallery display screen, an outermost image included in the display range of the terminal device in the video gallery display screen is displayed in a manner such that part of the outermost image is cut by an outer periphery of the display range; and
in the video gallery display screen, the display images of the video data items have the same size and are arranged in a vertical direction and a horizontal direction.

US Pat. No. 10,219,024

TRANSMISSION APPARATUS, METAFILE TRANSMISSION METHOD, RECEPTION APPARATUS, AND RECEPTION PROCESSING METHOD

SATURN LICENSING LLC, Ne...

1. A transmission apparatus, comprising:processing circuitry configured to
store, in a memory, first acquisition information used for a first client terminal to acquire a first determined number of data streams of first content that are to be delivered by a delivery server via a network, second acquisition information used for the first client terminal and a second client terminal to acquire a second determined number of data streams of a second content that are to be delivered by the delivery server via the network, a first metafile, and a second metafile,
wherein the first metafile includes either first presentation control information to control presentation of the first content or first reference information to refer a first file that includes the first presentation control information, and the second metafile includes either second presentation control information to control presentation of the second content or second reference information to refer a second file that includes the second presentation control information,
wherein the first presentation control information includes first presentation time information that designates a start time at which the first content is to be reproduced, and the second presentation control information includes second presentation time information that designates a start time at which the second content is to be reproduced;
transmit, to the first client terminal via the network, the stored first metafile and the stored second metafile based on a first transmission request transmitted from the first client terminal via the network; and
transmit, to the second client terminal via the network, the stored second metafile based on a second transmission request transmitted from the second client terminal via the network, wherein
the first client terminal stops reproducing the second content in response to receiving a notification from the second client terminal.

US Pat. No. 10,219,023

SEMICONDUCTOR DEVICE, VIDEO DISPLAY SYSTEM, AND METHOD OF OUTPUTTING VIDEO SIGNAL

LAPIS SEMICONDUCTOR CO., ...

1. A semiconductor device comprising:a first selecting processor configured to select one of a first video signal and a second video signal according to a first selection signal;
a selection signal generating processor configured to generate the first selection signal;
a second selecting processor configured to select another one of the first video signal and the second video signal according to a second selection signal; and
a scaling processor configured to scale a size of a video of the another one of the first video signal and the second video signal to a size of a display device,
wherein said first selecting processor is configured to output the one of the first video signal and the second video signal in synchronization with a synchronization signal accompanied with the one of the first video signal and the second video signal,
said second selecting processor is configured to output the another one of the first video signal and the second video signal in synchronization with a synchronization signal accompanied with the another one of the first video signal and the second video signal to the scaling processor,
said scaling processor is configured to output the another one of the first video signal and the second video signal to the first selecting processor, said scaling processor includes a second setting processor configured to store a setting value for the scaling processor to scale the video, said scaling processor is configured to supply a second status signal indicating that the scaling processor changes the setting value to the second selecting processor when the scaling processor changes the setting value to be stored in the second setting processor, and
said second selecting processor is configured to output the another one of the first video signal and the second video signal in synchronization with the synchronization signal accompanied with the another one of the first video signal and the second video signal after the second selecting processor detects that the scaling processor completes changing the setting value to be stored in the second setting processor according to the second status signal.

US Pat. No. 10,219,022

METHOD AND SYSTEM FOR SHARING TELEVISION (TV) PROGRAM INFORMATION BETWEEN SET-TOP-BOXES (STBS)

Wipro Limited, Bangalore...

1. A method of sharing television (TV) program information, the method implemented by one or more set-top-boxes (STBs) and comprising:obtaining program-specific-information of TV content;
converting, by a Text-to-Speech (TTS) converter, the program-specific information into a voice message;
establishing, using a first Subscriber Identity Module (SIM), a voice call to another STB;
transmitting, from the first SIM, (i) the voice message associated with the program-specific-information to a second SIM associated with the another STB over the voice call using a first modulation scheme, (ii) user interaction-data over the voice call using a second modulation scheme and (iii) one or more control commands over the voice call using a third modulation scheme; and
multiplexing the voice message associated with the program-specific information, control commands, and the user-interaction-data over the voice call.

US Pat. No. 10,219,021

METHOD AND APPARATUS FOR PROCESSING COMMANDS DIRECTED TO A MEDIA CENTER

1. A method, comprising:associating, by a processing system comprising a processor, a first gesture of a first object of a plurality of objects based on images from image data with a first command for controlling a media center, wherein the image data is captured by a plurality of image sensors;
associating, by the processing system, a second gesture of a second object of the plurality of objects based on the images from the image data with a second command for controlling the media center, wherein the second gesture is based on images detected by the plurality of image sensors;
modifying, by the processing system, the first command to a modified command according to a characteristic of the first gesture;
determining, by the processing system, a conflict between the first command and the second command;
presenting, by the processing system, a notification indicating the conflict and requesting a resolution to the conflict;
determining, by the processing system, if a response to the notification indicates the resolution is to perform the first command or the second command; and
responsive to a first determination that the response indicates the resolution is to perform the first command, processing, by the processing system, the modified command to control the media center responsive to determining the response indicates the resolution is to perform the first command.

US Pat. No. 10,219,020

PORTABLE TERMINAL, INFORMATION PROCESSING APPARATUS, CONTENT DISPLAY SYSTEM AND CONTENT DISPLAY METHOD

Maxell, Ltd., Kyoto (JP)...

1. A display apparatus for the display of video content acquired from a television broadcast and for the display of video content acquired via the internet, comprising:a digital broadcast receiver configured to receive a digital broadcast signal;
a signal separator for de-multiplexing the digital broadcast signal into video data and audio data;
a video processor configured to convert the format of the video data;
an audio processor configured to convert the format of the audio data;
a network communication module configured to communicate over the internet;
a radio receiver for receiving data from an external mobile terminal;
an infra-red (IR) receiver for receiving commands from a remote controller;
a display panel; and
a system controller configured to control the display apparatus to:
receive a first video content via the received broadcast signal;
display the first video content on the display panel;
receive an identifier for identifying a second video content from the external mobile terminal;
acquire the second video content via the internet using said identifier;
display the second video content on the display panel;
terminate display of the first video content prior to display of the second video content;
execute operation instructions received from the external mobile terminal via the radio receiver while the second video content is being displayed; and
execute commands received from the remote controller, wherein said remote controller is a different device than the external mobile terminal.

US Pat. No. 10,219,019

PORTABLE TERMINAL, INFORMATION PROCESSING APPARATUS, CONTENT DISPLAY SYSTEM AND CONTENT DISPLAY METHOD

Maxell, Ltd., Kyoto (JP)...

1. A video apparatus for outputting video content acquired from a television broadcast and for outputting video content acquired via the internet, comprising:a digital broadcast receiver configured to receive a digital broadcast signal;
a signal separator for de-multiplexing the digital broadcast signal into video data and audio data;
a video processor configured to convert the format of the video data;
an audio processor configured to convert the format of the audio data;
a network communication module configured to communicate over the internet;
a radio receiver for receiving data from an external mobile terminal;
an infra-red (IR) receiver for receiving commands from a remote controller; and
a system controller configured to control the video apparatus to:
receive a first video content via the received broadcast signal;
output first video signals representing the first video content;
receive an identifier for identifying a second video content from the external mobile terminal;
acquire the second video content via the internet using said identifier;
output second video signals representing the second video content;
terminate output of the first video signals prior to outputting the second video signals;
execute operation instructions received from the external mobile terminal via the radio receiver while the second video signals are being outputted; and
execute commands received from the remote controller, wherein said remote controller is a different device than the external mobile terminal.

US Pat. No. 10,219,018

METHOD OF CONTROLLING DISPLAY DEVICE FOR PROVIDING CONTENT AND DISPLAY DEVICE PERFORMING THE SAME

SAMSUNG ELECTRONICS CO., ...

1. A method of controlling a first device, the method comprising:transmitting identification information of a first user from the first device to a server;
obtaining, from the server, identification information of a plurality of second users that are related to the first user and content information corresponding to the plurality of second users, the content information comprising information of content that is being displayed on a plurality of second devices respectively corresponding to the plurality of second users;
displaying, on the first device, a user interface (UI) for selecting content that corresponds to the content information; and
in response to selecting content using the UI, playing, on the first device, the selected content,
wherein the UI comprises a plurality of objects, each object comprising identification information of one of the plurality of second users and content information corresponding to the one of the plurality of second users, and the plurality of objects are displayed within the UI in order of closeness between the first user and each of the plurality of second users; and
wherein the order of closeness is determined based on comparing a viewing rate referring to a time during which content has been viewed by each of the plurality of second users as compared to a total time of the content.
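
Illustrative note (not part of the claim): the closeness ordering can be read as sorting the second users by viewing rate (time viewed divided by total content time). The sketch below uses hypothetical field names for illustration only.

    # Order UI objects by viewing rate, highest first.

    def closeness_order(second_users):
        """second_users: list of dicts with 'user_id', 'viewed_s', 'total_s' (assumed)."""
        def viewing_rate(u):
            return u["viewed_s"] / u["total_s"] if u["total_s"] else 0.0
        return sorted(second_users, key=viewing_rate, reverse=True)

    users = [
        {"user_id": "b", "viewed_s": 300, "total_s": 3600},
        {"user_id": "a", "viewed_s": 1800, "total_s": 3600},
    ]
    ordered = closeness_order(users)   # 'a' first: higher viewing rate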

US Pat. No. 10,219,017

APPARATUS AND METHODS FOR MULTICAST DELIVERY OF CONTENT IN A CONTENT DELIVERY NETWORK

Time Warner Cable Enterpr...

1. A method of providing packetized content using a managed webserver to a client device from a content server, said method comprising:receiving a request for said packetized content from said client device;
determining whether said requested packetized content is to be provided via a multicast to a group of devices; and
when it is determined that said requested packetized content is to be provided via said multicast, causing, by the managed webserver, another client device to assign a persistent transmission control protocol (TCP) port that enables direct bidirectional communication between said client device and said content server for receiving said requested packetized content, and thereafter providing said client device with an instruction to cause said client device to query said another client device with a request to open said persistent TCP port between said client device and said content server previously assigned by said another client device, the TCP port being configured to provide said packetized content, said another client device being configured to, in response to said query:
join a multicast group for receiving said requested packetized content; and
enable said content server to provide said requested packetized content as a unicast stream to said client device via said persistent TCP port.

US Pat. No. 10,219,016

EXCLUDING SPECIFIC APPLICATION TRAFFIC FROM CUSTOMER CONSUMPTION DATA

Time Warner Cable Enterpr...

1. A method comprising:receiving first data packets over a communication link from a first source, the first data packets destined for delivery to a communication device operated by a user in a network environment, the first data packets assigned delivery information to facilitate conveyance of the first data packets over the communication link to the communication device;
examining the delivery information assigned to the first data packets to control delivery of the first data packets, the delivery information indicating that the first data packets are received from the first source; and
in response to detecting that the first data packets are received from the first source and that communications from the first source are to be excluded from a data delivery count representing an amount of data conveyed to the communication device over the communication link on behalf of the user, communicating the first data packets over a data flow that is not counted in the data delivery count assigned to the user.
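
Illustrative note (not part of the claim): the accounting rule reduces to skipping the usage counter for packets whose delivery information identifies an excluded source. The sketch below uses an assumed packet shape and an assumed example source; neither comes from the patent.

    # Packets from an excluded source are still delivered but not counted toward
    # the user's data delivery count.

    EXCLUDED_SOURCES = {"speed_test_server"}   # assumed example of a first source

    def account_packet(packet, usage_counts):
        """packet: dict with 'user', 'source', 'bytes'. usage_counts: {user: bytes}."""
        if packet["source"] in EXCLUDED_SOURCES:
            return                               # delivered, but not counted
        usage_counts[packet["user"]] = usage_counts.get(packet["user"], 0) + packet["bytes"]

    counts = {}
    account_packet({"user": "u1", "source": "speed_test_server", "bytes": 5000}, counts)
    account_packet({"user": "u1", "source": "cdn", "bytes": 1500}, counts)
    # counts == {"u1": 1500}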

US Pat. No. 10,219,015

OFFERING ITEMS IDENTIFIED IN A MEDIA STREAM

Amazon Technologies, Inc....

1. A device comprising:at least one physical processor; and
one or more memory devices to store computer instructions that, when executed by the at least one physical processor, cause the at least one physical processor to:
receive, from a content provider, a content stream, the content stream including content depicting an item;
provide the content stream to a display device associated with a user;
detect the item within the content stream;
analyze one or more visual elements of one or more images of the content stream with respect to a catalog of items to identify one or more characteristics of the item;
analyze one or more audio elements of the content stream to identify at least one of the one or more characteristics of the item, the one or more audio elements different than part of an advertisement;
determine an identification of a first catalog item that corresponds to the item within the catalog of items based at least in part on the one or more characteristics of the item;
determine an identification of a secondary catalog item that is similar to the item based at least in part on the one or more characteristics of the item, the secondary catalog item having at least one additional different functionality than the item;
receive a request regarding the first catalog item, the request regarding the first catalog item including first audio input from the user, the first audio input corresponding to a transaction phrase token spoken by the user;
receive identification information of the user, wherein the identification information of the user includes the transaction phrase token that is assigned a transaction rule to automatically approve purchase requests for items that belong to an item type for a certain account and to automatically approve shipment of the first catalog item to a first shipping address associated with the certain account;
transmit, to one or more item offering services, the identification of the first catalog item, the identification of the secondary catalog item, and the identification information of the user; and
receive a second request regarding the secondary catalog item, the second request including second audio input from a second user, the second audio input corresponding to a second transaction phrase token spoken by the second user, and wherein the second transaction phrase token is usable to authorize a purchase of the secondary catalog item without user access to billing information associated with a second account corresponding to the second transaction phrase token and to automatically approve shipment of the secondary catalog item to a second shipping address associated with the second account.

US Pat. No. 10,219,013

METHOD AND APPARATUS FOR REDUCING DATA BANDWIDTH BETWEEN A CLOUD SERVER AND A THIN CLIENT

SINGAPORE UNIVERSITY OF T...

1. A method for reducing data bandwidth between a cloud server and a thin client comprising:rendering a base layer image/video stream at the thin client,
rendering a high quality layer image/video stream at the cloud server, the high quality layer image/video stream having a higher quality than the base layer image/video stream,
rendering a duplicate of the base layer image/video stream at the cloud server,
generating an enhancement layer image/video stream from the high quality layer image/video stream and the duplicate of the base layer image/video stream at the cloud server,
transmitting the enhancement layer image/video stream from the cloud server to the thin client,
displaying a composite layer image/video stream on the thin client, the composite layer image/video stream being based on the base layer image/video stream and the enhancement layer image/video stream;
wherein rendering the base layer image/video stream comprises using one or more rendering techniques and wherein rendering parameters for the one or more rendering techniques are determined by minimizing information content of the enhancement layer, while satisfying a constraint that the rendering of the base layer image/video stream can be achieved with computation capability of the thin client.
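
Illustrative note (not part of the claim): the layered idea is that the server renders both a high-quality frame and a duplicate of the client's base frame, transmits only their difference (the enhancement layer), and the thin client composites base plus enhancement. The per-pixel arithmetic below is a simplification standing in for whatever codec would actually be used.

    # Enhancement layer = high-quality frame minus duplicated base frame;
    # composite = base frame plus enhancement layer.

    def enhancement_layer(high_quality, base_duplicate):
        return [h - b for h, b in zip(high_quality, base_duplicate)]

    def composite(base, enhancement):
        return [b + e for b, e in zip(base, enhancement)]

    high = [120, 130, 140]          # server-rendered high-quality pixels
    base = [118, 131, 137]          # base layer rendered on both server and client
    assert composite(base, enhancement_layer(high, base)) == high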

US Pat. No. 10,219,012

METHOD AND APPARATUS FOR TRANSCEIVING DATA FOR MULTIMEDIA TRANSMISSION SYSTEM

Samsung Electronics Co., ...

1. A method for receiving media content in a multimedia system, the method comprising:receiving one or more multimedia data packets generated based on a data unit, the data unit being fragmented into at least one sub data unit, each multimedia data packet including a packet header and a payload; and
decoding the one or more multimedia data packets to recover the media content,
wherein each of the one or more multimedia data packets comprises type information indicating whether a respective payload data included in a payload of a given multimedia data packet comprises either a metadata of the data unit or a data element derived from the at least one sub data unit,
wherein the type information comprises a first value indicating that the respective payload data comprises the metadata of the data unit if the respective payload data comprises the metadata of the data unit, and
wherein the type information comprises a second value indicating that the respective payload data comprises the data element derived from the at least one sub data unit if the respective payload data comprises the data element derived from the at least one sub data unit.
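
Illustrative note (not part of the claim): a hypothetical packet layout showing the two type values is sketched below. The one-byte type field and the 0x00/0x01 encoding are assumptions; the claim only requires that the two values exist.

    # Parse a toy packet whose first byte distinguishes metadata of the data
    # unit from a data element derived from a sub data unit.

    TYPE_METADATA = 0x00
    TYPE_SUB_DATA_ELEMENT = 0x01

    def parse_packet(packet: bytes):
        type_info, payload = packet[0], packet[1:]
        if type_info == TYPE_METADATA:
            return ("metadata", payload)
        if type_info == TYPE_SUB_DATA_ELEMENT:
            return ("sub_data_element", payload)
        raise ValueError("unknown type information")

    kind, payload = parse_packet(bytes([TYPE_SUB_DATA_ELEMENT]) + b"fragment-bytes")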

US Pat. No. 10,219,011

TERMINAL DEVICE AND INFORMATION PROVIDING METHOD THEREOF

SAMSUNG ELECTRONICS CO., ...

1. A terminal device comprising:a communication interface;
a display; and
a processor configured to:
control the display to output a moving picture;
extract fingerprints from frames of the moving picture while the moving picture is output;
detect an object from a frame of the moving picture while the moving picture is output;
control the communication interface to transmit, to a server, a request comprising a fingerprint extracted from a currently output frame, to query information corresponding to the fingerprint comprised in the request;
in response to transmitting the request to the server, receive, from the server, the information corresponding to the fingerprint comprised in the transmitted request; and
control the display to output the received information,
wherein the processor is further configured to control the communication interface to regularly transmit the request to the server at a predetermined time interval while a target object is not detected,
wherein the processor is further configured to control the communication interface to transmit the request to the server immediately when the target object is detected regardless of the predetermined time interval, and
wherein the processor is further configured to stop transmitting the request to the server while the same target object is continuously detected.
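
Illustrative note (not part of the claim): the request scheduling combines a periodic timer with object-detection triggers. The sketch below shows one reading; the interval, the send() hook, and the frame callback are assumptions for illustration.

    # Send a fingerprint query at a fixed interval while no target object is
    # detected, send immediately when a new target object appears, and stay
    # quiet while the same target object remains detected.

    INTERVAL_S = 10.0   # assumed periodic query interval

    class RequestScheduler:
        def __init__(self, send):
            self.send = send
            self.last_sent = -INTERVAL_S
            self.active_object = None

        def on_frame(self, now, fingerprint, detected_object=None):
            if detected_object is not None:
                if detected_object != self.active_object:   # new target object
                    self.active_object = detected_object
                    self.last_sent = now
                    self.send(fingerprint)                   # immediate query
                return                                       # same object: no query
            self.active_object = None
            if now - self.last_sent >= INTERVAL_S:           # periodic query
                self.last_sent = now
                self.send(fingerprint)

    sched = RequestScheduler(send=lambda fp: print("query server with", fp))
    sched.on_frame(0.0, "fp0")                               # periodic send
    sched.on_frame(5.0, "fp1", detected_object="logo")       # immediate send
    sched.on_frame(6.0, "fp2", detected_object="logo")       # suppressed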

US Pat. No. 10,219,010

SELECTIVE MEDIA PLAYING METHOD AND APPARATUS ACCORDING TO LIVE STREAMING AND RECORDED STREAMING

HANWHA TECHWIN CO., LTD.,...

1. A media streaming apparatus for playing media on a web browser, comprising at least one processor to implement:a receiving unit configured to receive media data by using a communication protocol which supports web services, the media data being generated by a media service apparatus;
a first media restoring unit configured to decode the media data by a first decoder written in a script which can be parsed by the web browser;
a second media restoring unit configured to decode the media data by a second decoder embedded in the web browser; and
an output unit configured to output the media data decoded by at least one of the first media restoring unit and the second media restoring unit,
wherein the media data is decoded by the at least one of the first media restoring unit and the second media restoring unit based on a streaming mode,
wherein the media data is decoded by the first media restoring unit when the streaming mode is a live streaming mode, and the media data is decoded by the at least one of the first media restoring unit and the second media restoring unit when the streaming mode is a recorded streaming mode.

US Pat. No. 10,219,009

LIVE INTERACTIVE VIDEO STREAMING USING ONE OR MORE CAMERA DEVICES

Twitter, Inc., San Franc...

1. A computing device comprising:at least one processor; and
a non-transitory computer-readable medium having executable instructions that when executed by the at least one processor are configured to execute an interactive streaming application, the interactive streaming application configured to:
join a live broadcast of an event that is shared by an interactive video broadcasting service executing on a server computer;
receive a first video stream of the live broadcast, the first video stream having video captured from a camera device configured as a first video source;
display the video of the first video stream on a display screen of the computing device;
trigger display of a first icon and a second icon on the display screen during a course of the live broadcast, the first icon representing a first user-provided engagement provided by a first viewing device, the second icon representing a second user-provided engagement provided by a second viewing device, the first user-provided engagement being associated with a first timestamp in the first video stream such that the display of the first icon is triggered at a time indicated by the first timestamp, the second user-provided engagement being associated with a second timestamp in the first video stream such that the display of the second icon is triggered at a time indicated by the second timestamp,
wherein the first icon is removed from the display screen when a predetermined interval elapses after the time indicated by the first timestamp, and the second icon is removed from the display when a predetermined interval elapses after the time indicated by the second timestamp;
receive a second video stream of the live broadcast, the second video stream having panoramic video captured from a panoramic video capturing device configured as a second video source;
display a portion of the panoramic video according to a first viewing angle on the display screen;
receive a change to the first viewing angle of the panoramic video; and
display another portion of the panoramic video according to a second viewing angle, the second viewing angle providing a different perspective of the panoramic video than what was provided by the first viewing angle.

US Pat. No. 10,219,008

APPARATUS AND METHOD FOR AGGREGATING VIDEO STREAMS INTO COMPOSITE MEDIA CONTENT

1. A system, comprising:a processing system including a processor; and
a memory that stores executable instructions that, when executed by the processing system, facilitate performance of operations, comprising:
obtaining a live video stream from each of a plurality of communication devices resulting in a plurality of live video streams, the plurality of live video streams being associated with a common event;
detecting a presentation capability of a device;
determining that a group of the plurality of live video streams are from a same perspective of the common event;
identifying a user associated with a communication device, wherein the plurality of communication devices comprise the communication device;
providing additional bandwidth to the communication device based on the communication device providing a first live video stream, wherein the plurality of live video streams comprises the first live video stream;
selecting one of the group of the plurality of live video streams that are from the same perspective of the common event according to the presentation capability of the device;
aggregating a first portion of the plurality of live video streams to generate a composite video stream for presenting a selectable viewing of the common event, wherein the first portion of the plurality of live video streams includes the one of the group of the plurality of live video streams;
sending the composite video stream to the device for presentation of the composite video stream of the common event at the device, wherein the sending of the composite video stream comprises transmitting the composite video stream to a social media server, wherein the social media server shares the composite video stream with social media members;
providing a graphical user interface to the device, wherein the graphical user interface is presented by the device with the presentation of the composite video stream of the common event, wherein the graphical user interface includes a touchscreen to receive first user-generated input through contact with the touchscreen and a gesture, and wherein the graphical user interface enables adjustment of a viewing of the common event;
receiving first user-generated input from the device, wherein the first user-generated input comprises a request to adjust the presentation of the common event by providing a selection of a moving object;
adjusting the composite video stream according to the first user-generated input to generate a first adjusted composite video stream, wherein each image of the adjusted composite video stream includes a selected moving object within the common event;
providing the first adjusted composite video stream to the device for presentation of adjusted composite video stream of the common event at the device;
receiving second user-generated input from the device, wherein the second user-generated input comprises a first gesture with the touchscreen of the graphical user interface that indicates a magnification of the selected moving object on a separate screen, and wherein the second user-generated input comprises a change in location of the device;
adjusting the first adjusted composite video stream to generate a second adjusted composite video stream and a third adjusted composite video stream, wherein the second adjusted composite video stream is adjusted according to the change in location and the third adjusted composite video stream is adjusted according to the magnification of the selected moving object; and
providing the second adjusted composite video stream and the third adjusted composite video stream to the device for presentation of the second adjusted composite video stream at a same time as presentation of the third adjusted composite video stream.

US Pat. No. 10,219,007

METHOD AND DEVICE FOR SIGNALING IN A BITSTREAM A PICTURE/VIDEO FORMAT OF AN LDR PICTURE AND A PICTURE/VIDEO FORMAT OF A DECODED HDR PICTURE OBTAINED FROM SAID LDR PICTURE AND AN ILLUMINATION PICTURE

INTERDIGITAL VC HOLDINGS,...

1. A method for signaling, in a bitstream representing a LDR picture obtained from an HDR picture, both a picture/video format of a decoded version of said LDR picture, denoted an output LDR format, and a picture/video format of a decoded version of said HDR picture, denoted an output HDR format, the method comprising encoding in the bitstream a first syntax element defining the output LDR format,wherein it further comprises encoding in the bitstream a second syntax element which is distinct from the first syntax element and which defines the output HDR format.

US Pat. No. 10,219,006

JCTVC-L0226: VPS AND VPS_EXTENSION UPDATES

SONY CORPORATION, Tokyo ...

1. A method, comprising:in a device configured to receive a bit stream of a video:
decoding, by a decoder, the bit stream based on a video parameter set (VPS) syntax structure, wherein a byte-alignment syntax is under a condition of a VPS extension flag in the VPS syntax structure,
wherein the byte-alignment syntax is associated with a byte-alignment;
determining a value of the VPS extension flag;
executing the byte-alignment based on the value of the VPS extension flag that is equal to one; and
executing a VPS extension function based on the byte-alignment.

US Pat. No. 10,219,005

SYSTEM AND METHOD FOR REAL-TIME COMPRESSION OF DATA FRAMES

HCL Technologies Italy S....

1. A method for real-time compression of a data frame, the method comprises:receiving, by a processor, a data frame, wherein the data frame comprises a set of symbols, wherein the length of each symbol is m bits;
identifying, by the processor, a frequency associated with each symbol, from the set of symbols, wherein for each symbol, the frequency corresponds to a number of occurrence of the symbol in the data frame;
sorting, by the processor, the set of symbols to generate a sorted set of symbols, based on descending order of frequency associated with each symbol from the set of symbols;
computing, by the processor, a compression gain associated with each predefined case type, from a set of predefined case types, wherein each predefined case type corresponds to a number of bits (C) used for representing first (2^C-1) symbols from the sorted set of symbols;
selecting, by the processor, a target predefined case type, from the set of predefined case types, based on comparison of the compression gain associated with each predefined case type, wherein the target predefined case type corresponds to C_t bits;
assigning, by the processor, C_t bits compressed code to the first (2^C_t-1) symbols, from the sorted set of symbols, and (m+C_t) bits code to the remaining symbols from the sorted list of symbols; and
generating, by the processor, a compressed frame, wherein the compressed frame comprises a header and a sequence of compressed symbols, wherein the sequence of compressed symbols is generated based on the bit code assigned to each symbol, and wherein the header represents the target predefined case type, and the first (2^C_t-1) symbols.
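
Illustrative note (not part of the claim): the case-type selection can be sketched directly from the arithmetic above: for a candidate code length C, the first (2^C-1) most frequent m-bit symbols cost C bits each and the rest cost (m+C) bits, so the gain follows from the frequency table. Header size is ignored in this sketch and the example values are arbitrary.

    # Estimate the compression gain for each candidate case type and pick the best.

    from collections import Counter

    def best_case_type(symbols, m, case_types=(1, 2, 3, 4)):
        freq = Counter(symbols)
        ordered = [count for _, count in freq.most_common()]   # descending frequency
        total = len(symbols)
        gains = {}
        for c in case_types:
            short = sum(ordered[: (1 << c) - 1])               # symbols coded in C bits
            long_ = total - short                              # symbols coded in m+C bits
            compressed_bits = short * c + long_ * (m + c)
            gains[c] = total * m - compressed_bits             # bits saved (may be negative)
        return max(gains, key=gains.get), gains

    target_case_type, gains = best_case_type([7, 7, 7, 7, 3, 3, 9, 1], m=8)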

US Pat. No. 10,219,004

VIDEO ENCODING AND DECODING METHOD AND APPARATUS USING THE SAME

Electronics and Telecommu...

1. A method for video decoding that supports multi-layer videos, the method comprising:analyzing a first layer dependency on a current layer based on a video parameter set (VPS) extension;
analyzing a second layer dependency on a current slice in the current layer based on information encoded in a slice unit, wherein the analyzing the second layer dependency on the current slice comprises determining, for the current slice, whether to use the first layer dependency of the VPS extension or the second layer dependency of the slice unit;
constructing a reference picture list for the current slice based on either one or both of the first layer dependency on the current layer and the second layer dependency on the current slice;
predicting a current block included in the current slice by using at least one reference picture included in the reference picture list to generate a prediction block;
generating a residual block of the current block; and
reconstructing the current block by using the prediction block and the residual block,
wherein the generating the residual block comprises entropy-decoding a bitstream to generate a quantized transformed coefficient,
wherein the reference picture list comprises a temporal reference picture belonging to a same layer as the current slice and an inter-layer reference picture belonging to a different layer from the current slice, and
wherein the inter-layer reference picture has a same picture order count (POC) value as the current slice.

US Pat. No. 10,219,003

INTRA-FRAME PREDICTIVE CODING AND DECODING METHODS BASED ON TEMPLATE MATCHING, ARRAY SCANNING METHOD AND APPARATUS, AND APPARATUS

Huawei Technologies Co., ...

1. An intra-frame predictive coding method based on template matching, comprising:determining N predicted pixel values of a to-be-predicted unit by using a template of an ith shape, wherein the to-be-predicted unit is adjacent to the template of the ith shape, an ith predicted pixel value is determined according to the template of the ith shape, wherein i=1, 2, . . . , N, and N is an integer greater than or equal to 2; and
selecting a predicted pixel value that is among the N predicted pixel values of the to-be-predicted unit and meets a preset condition as an optimal predicted pixel value of the to-be-predicted unit, wherein the optimal predicted pixel value of the to-be-predicted unit is used for coding;
wherein the determining N predicted pixel values of a to-be-predicted unit by using a template of an ith shape comprises:
determining a predicted pixel value of a subunit sj in the to-be-predicted unit by using a template ix of the ith shape, wherein the subunit sj is a region that is in the to-be-predicted unit, and the region of the subunit sj has a same shape as the template ix, and that is adjacent to the template ix; j=1, 2, . . . , M; x=1, 2, . . . , M; M is an integer greater than or equal to 2; s1 ∪ s2 ∪ . . . ∪ sM is equal to the to-be-predicted unit; the subunits s1, s2, . . . , sM are successively farther away from an adjacent reconstructed region; and the template ix has a different size; and
wherein all predicted pixel values of the to-be-predicted unit are determined by successive iterations from a peripheral region of the to-be-predicted unit that is closest to a reconstructed region.
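
The iterative, inside-out order of the prediction can be illustrated with a deliberately simplified sketch. The code below is not the claimed method: it uses a single template shape (one row), predicts a block row by row, and treats each predicted row as reconstructed for the next step, which mirrors the successive iteration starting from the region closest to the reconstructed area.

```python
# Simplified illustration of template-matching prediction proceeding outward from
# the reconstructed region: each new row's template is the row directly above it.

def sad(a, b):
    """Sum of absolute differences between two equal-length rows."""
    return sum(abs(x - y) for x, y in zip(a, b))

def predict_block(reconstructed, block_h):
    """reconstructed: at least two equal-length rows already decoded above the block."""
    rows = list(reconstructed)
    predicted = []
    for _ in range(block_h):
        template = rows[-1]                              # row adjacent to the subunit
        best = min(range(len(rows) - 1), key=lambda i: sad(rows[i], template))
        prediction = rows[best + 1]                      # row that followed the best match
        predicted.append(prediction)
        rows.append(prediction)                          # iterate: treat as reconstructed
    return predicted

recon = [[10, 10, 12, 12], [11, 11, 13, 13], [10, 10, 12, 12]]
print(predict_block(recon, block_h=2))
```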

US Pat. No. 10,219,002

DYNAMIC FIDELITY UPDATES FOR ENCODED DISPLAYS

Intel Corporation, Santa...

1. A source device comprising:an interface configured to be coupled to a data link; and
at least one processor coupled to the interface and configured to transmit, via the interface and the data link, a plurality of frames of image data, the plurality of frames including at least one base frame and at least one partial fidelity update frame distinct from and corresponding to the at least one base frame, the at least one partial fidelity update frame being applicable to a portion of the at least one base frame and storing at least one chroma value to replace one or more chroma values of the at least one base frame.

US Pat. No. 10,219,001

INTER-LAYER PREDICTION METHOD FOR MULTI-LAYER VIDEO AND DEVICE THEREFOR

Intellectual Discovery Co...

1. An inter-layer prediction apparatus for a multi-layer video, comprising:a frame buffer configured to store a reconstructed picture in an enhancement layer and a reconstructed picture in a reference layer;
a predictor configured to
determine whether the reconstructed picture in the reference layer is present at a time corresponding to a current picture in the enhancement layer,
determine an inter-layer reference picture for the current picture, in response to the determination that the reconstructed picture is present at the time corresponding to the current picture,
generate a reference picture list for the current picture including the inter-layer reference picture and the reconstructed picture in the enhancement layer, and
generate a predicted picture of the current picture by performing inter prediction on the current picture based on the reference picture list; and
an adder configured to generate a reconstructed picture of the current picture by adding the predicted picture of the current picture and a residual picture of the current picture.

US Pat. No. 10,219,000

TIME STAMP RECOVERY AND FRAME INTERPOLATION FOR FRAME RATE SOURCE

PIXELWORKS, INC., Portla...

1. A method of performing motion vector correction in a sequence of video frames, comprising:receiving, at a processor, a sequence of video frames at a received rate lower than an original frame rate, the sequence of video frames having fewer frames than an original sequence of video frames;
identifying motion vectors for frames in the sequence of video frames;
identifying a high-low pattern of motion vector magnitudes over a period of time;
determining a location of dropped frames from the original sequence of video frames based on the high-low pattern;
generating frame interpolation phases based on the high-low pattern;
adjusting magnitudes of the motion vectors based on the high-low pattern to determine motion vectors for each of the frame interpolation phases; and
interpolating a new frame of video data at each of the frame interpolation phases.
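
As a rough illustration of the high-low analysis, assume frames were dropped so that some received frames span two original frame intervals; their motion vectors are then roughly twice as large. The sketch below classifies each measured magnitude, derives interpolation phases, and rescales the vectors per original interval; it is a simplification of the claim, using a naive midpoint threshold.

```python
# Illustrative sketch: recover dropped-frame locations from a high/low pattern of
# motion vector magnitudes and derive interpolation phases and scaled vectors.

def classify_spans(magnitudes):
    """Label each frame-to-frame magnitude as covering 1 or 2 original intervals."""
    mid = (max(magnitudes) + min(magnitudes)) / 2.0
    return [2 if m > mid else 1 for m in magnitudes]

def interpolation_phases(spans):
    """For a gap spanning k original intervals, emit k evenly spaced phases in (0, 1]."""
    return [[i / k for i in range(1, k + 1)] for k in spans]

def scaled_vectors(magnitudes, spans):
    """Adjust each magnitude to a per-original-interval value."""
    return [m / k for m, k in zip(magnitudes, spans)]

mags = [4.1, 8.0, 3.9, 8.2, 4.0]        # alternating high/low pattern
spans = classify_spans(mags)            # -> [1, 2, 1, 2, 1]; the 2s mark dropped frames
print(spans, interpolation_phases(spans), scaled_vectors(mags, spans))
```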

US Pat. No. 10,218,999

METHOD AND APPARATUS FOR IMAGE CODING/DECODING

Electronics and Telecommu...

1. An image decoding method comprising:configuring a motion vector candidate list;
modifying the motion vector candidate list based on a number of motion vector candidates in the motion vector candidate list; and
determining a prediction motion vector based on the modified motion vector candidate list,
wherein the modified motion vector candidate list comprises any one or any combination of any two or more of a spatial motion vector candidate, a temporal motion vector candidate, and a (0,0) motion vector,
wherein the configuring of the motion vector candidate list comprises
deriving the spatial motion vector candidate,
deriving the temporal motion vector candidate except when two derived spatial motion vector candidates are present and different from each other, and
adding either one or both of the derived spatial motion vector candidate and the derived temporal motion vector candidate to the motion vector candidate list, and
wherein in response to the number of motion vector candidates in the motion vector candidate list being smaller than a maximum number of motion vector candidates, the modifying of the motion vector candidate list comprises repeatedly adding a specific motion vector candidate to the motion vector candidate list until the motion vector candidate list reaches the maximum number of motion vector candidates, based on only the maximum number of motion vector candidates and the number of motion vector candidates in the motion vector candidate list,
wherein the adding either one or both of the derived spatial motion vector candidate and the derived temporal motion vector candidate to the motion vector candidate list carries out an operation of checking the same motion vector candidate only on the spatial motion vector candidates for removing the same motion vector candidate.
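
The list construction can be sketched in a few lines. The snippet below is an interpretation of the claim, not decoder source: duplicates are checked only among spatial candidates, the temporal candidate is skipped when two distinct spatial candidates already exist, and a (0,0) vector is added repeatedly until the list reaches the maximum size.

```python
def build_mv_candidate_list(spatial, temporal, max_num=2, pad=(0, 0)):
    """Sketch of the candidate-list rules described above (names are my own)."""
    candidates = []
    for mv in spatial:                       # duplicate check on spatial candidates only
        if mv not in candidates:
            candidates.append(mv)
    if len(candidates) < 2 and temporal is not None:
        candidates.append(temporal)          # temporal candidate is not de-duplicated
    while len(candidates) < max_num:         # modification step: pad with (0, 0)
        candidates.append(pad)
    return candidates[:max_num]

print(build_mv_candidate_list([(1, 2), (1, 2)], temporal=(0, 3)))  # [(1, 2), (0, 3)]
print(build_mv_candidate_list([], temporal=None))                  # [(0, 0), (0, 0)]
```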

US Pat. No. 10,218,998

METHOD AND APPARATUS FOR ENCODING/DECODING IMAGES USING A MOTION VECTOR OF A PREVIOUS BLOCK AS A MOTION VECTOR FOR THE CURRENT BLOCK

SAMSUNG ELECTRONICS CO., ...

1. An image decoding method comprising:hierarchically splitting a maximum coding unit into at least one coding unit based on split information obtained from a bitstream;
determining a current block in a coding unit among the at least one coding unit;
obtaining information regarding a prediction direction to be used to decode the current block, the information indicating one of an L0 direction, an L1 direction, and a bi-direction;
determining motion vector candidates of the current block based on a motion vector of at least one block decoded before decoding of the current block; and
determining at least one motion vector of the current block based on at least one of a motion vector candidate in the L0 direction and a motion vector candidate in the L1 direction, from among the determined motion vector candidates, according to the information regarding a prediction direction,
wherein the determining motion vector candidates of the current block comprises obtaining the motion vector candidates of the current block using a block co-located together with the current block in a temporal reference picture in the L0 direction or the L1 direction,
the image is split into a plurality of maximum coding units including the maximum coding unit,
the maximum coding unit is hierarchically split into the at least one coding unit of depths,
a coding unit of a current depth is one of square data units split from a coding unit of an upper depth,
when the split information indicates a split for the current depth, the coding unit of the current depth is split into four coding units of a lower depth, independently from neighboring coding units, and
when the split information indicates a non-split for the current depth, a coding unit of the current depth is split into one or more prediction units, and the current block is a prediction unit.

US Pat. No. 10,218,997

MOTION VECTOR CALCULATION METHOD, PICTURE CODING METHOD, PICTURE DECODING METHOD, MOTION VECTOR CALCULATION APPARATUS, AND PICTURE CODING AND DECODING APPARATUS

Velos Media, LLC, Plano,...

1. A decoding method of decoding a current block included in a current picture, the current picture being included in a coded video stream, the decoding method comprising:determining a reference picture in the coded video stream, the reference picture being included in one of (i) a first reference picture group of the current block and (ii) a second reference picture group of the current block;
selecting a reference motion vector from among one or more reference motion vectors of a reference block in the reference picture such that in situation (A) when the reference block has a first reference motion vector and a second reference motion vector that respectively correspond to the first reference picture group and the second reference picture group, (i) the first reference motion vector is selected when the reference picture is included in the second reference picture group and (ii) the second reference motion vector is selected when the reference picture is included in the first reference picture group, in situation (B) when the reference block has only one reference motion vector, the only reference motion vector is selected, and in situation (C) when the reference block has no reference motion vector, a zero reference motion vector is selected;
deriving the motion vector of the current block using the selected one reference motion vector; and
decoding the current block using the derived motion vector.
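
Situations (A) through (C) reduce to a small decision function. The sketch below uses my own labels ('first'/'second' for the reference picture groups) and None for an absent motion vector; it is only an illustration of the selection rule.

```python
def select_reference_mv(ref_picture_group, mv_first, mv_second):
    """ref_picture_group: 'first' or 'second' -- the group containing the reference picture.
    mv_first / mv_second: the reference block's MV for each group, or None if absent."""
    if mv_first is not None and mv_second is not None:      # situation (A): cross-select
        return mv_first if ref_picture_group == "second" else mv_second
    if mv_first is not None or mv_second is not None:       # situation (B): the only MV
        return mv_first if mv_first is not None else mv_second
    return (0, 0)                                           # situation (C): zero MV

print(select_reference_mv("second", (3, 1), (-2, 0)))  # (3, 1)
print(select_reference_mv("first", None, (-2, 0)))     # (-2, 0)
print(select_reference_mv("first", None, None))        # (0, 0)
```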

US Pat. No. 10,218,996

MOTION VECTOR DETECTION APPARATUS AND METHOD OF CONTROLLING MOTION VECTOR DETECTION APPARATUS

CANON KABUSHIKI KAISHA, ...

1. A motion vector detection apparatus, comprising:a processor that executes a program stored in a memory and functions as:
a detecting unit adapted to detect, for each of a plurality of areas of a base image, a motion vector relative to a reference image;
a motion vector determining unit adapted to determine, among motion vectors, motion vectors related to a moving object;
a candidate vector determining unit adapted to determine, based on a point of interest that is a position within an image and a movement direction of the moving object, one or more of the motion vectors related to the moving object as candidate vector(s); and
a calculating unit adapted to calculate a representative vector of the moving object based on the candidate vector(s),
wherein the candidate vector determining unit determines, from among the motion vectors related to the moving object, one or more motion vectors each being detected in, among the plurality of areas, an area that exists on an axis of interest that extends in a different direction from the movement directions of the moving object and passes through the point of interest, as the candidate vector(s).

US Pat. No. 10,218,995

MOVING PICTURE ENCODING SYSTEM, MOVING PICTURE ENCODING METHOD, MOVING PICTURE ENCODING PROGRAM, MOVING PICTURE DECODING SYSTEM, MOVING PICTURE DECODING METHOD, MOVING PICTURE DECODING PROGRAM, MOVING PICTURE REENCODING SYSTEM, MOVING PICTURE REENCODING M

JVC KENWOOD CORPORATION, ...

1. A moving picture encoding system comprising:a first encoder configured to work on a subsequence of a sequence of moving pictures with a standard resolution to implement a first combination of processes for an encoding and a decoding to create a first sequence of encoded bits and a set of decoded pictures with the standard resolution;
a first super-resolution enlarger configured to work on the subsequence of the sequence of moving pictures with the standard resolution to implement an interpolation of pixels with a first enlargement to create a set of super-resolution enlarged pictures with a first resolution higher than the standard resolution;
a first resolution converter configured to work on the set of super-resolution enlarged pictures to implement a process for a first resolution conversion to create a set of super-resolution enlarged and converted pictures with a standard resolution;
a second super-resolution enlarger configured to acquire the set of decoded pictures with the standard resolution from the first encoder to work on the sequence of decoded pictures to implement an interpolation of pixels with a second enlargement to create a set of super-resolution enlarged decoded pictures with a second resolution higher than the standard resolution;
a second resolution converter configured to work on the set of super-resolution enlarged decoded pictures to implement a process for a second resolution conversion to create a set of super-resolution enlarged and converted decoded pictures with a standard resolution; and
a second encoder configured to:
have the set of super-resolution enlarged and converted pictures from the first resolution converter as a set of encoding target pictures, the set of decoded pictures from the first encoder as a set of first reference pictures, and the set of super-resolution enlarged and converted decoded pictures from the second resolution converter as a set of second reference pictures,
select one of the set of first reference pictures and the set of second reference pictures to create reference picture selection information to identify the set of selected reference pictures to implement a second process for encoding to create a second sequence of encoded bits based on the set of encoding target pictures and the set of selected reference pictures, and
implement a third process for encoding for the reference picture selection information to create a sequence of encoded bits of the reference picture selection information,
wherein the set of encoding target pictures, the set of first reference pictures, and the set of second reference pictures have the same value in spatial resolution.

US Pat. No. 10,218,994

WATERMARK RECOVERY USING AUDIO AND VIDEO WATERMARKING

Verance Corporation, San...

1. A method for enabling acquisition of metadata associated with a multimedia content based on detection of a video watermark from the multimedia content, the method comprising:obtaining, at a watermark extractor that is implemented at least partially in hardware, one or more blocks of sample values representing image pixels in a video frame of the multimedia content, each block including one or more rows of pixel values and one or more columns of pixel values; and
using the watermark extractor to extract one or more video watermarks from the one or more blocks, including:
for each block:
(a) determining a weighted sum of the pixel values in the block produced by multiplying each pixel value with a particular weight coefficient and summing the result together, wherein the particular weight coefficients for each block are selected to at least partially compensate for degradation of video watermark or watermarks in each block due to impairments caused by transmission or processing of the multimedia content;
(b) comparing the weighted sum of the pixel values to one or more predetermined threshold values;
(c) upon a determination that the weighted sum falls within a first range of the one or more predetermined threshold values, identifying a detected watermark symbol having a first value; and
(d) upon a determination that the weighted sum falls within a second range of the one or more predetermined threshold values, identifying a detected watermark symbol having a second value;
repeating operations (a) through (d) for a plurality of the one or more blocks to obtain a plurality of the detected watermark symbol values;
determining whether or not the plurality of the detected watermark symbols values form a valid watermark payload; and
upon a determination that a valid watermark payload has been detected, acquiring the metadata associated with the multimedia content based on the valid watermark payload.
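
The per-block symbol decision of steps (a) through (d) can be sketched as follows. The ranges, the normalization to [0, 1], and the equal weights are assumptions made for the example; the claim allows the weight coefficients to differ per block to compensate for degradation.

```python
def detect_symbol(block, weights, low_range=(0.0, 0.5), high_range=(0.5, 1.0)):
    """Weighted sum of pixel values, then a two-range threshold test giving 0, 1, or None."""
    s = sum(w * p for w, p in zip(weights, block))          # step (a): weighted sum
    s_norm = s / (255.0 * sum(weights))                     # normalize to [0, 1] (assumed)
    if low_range[0] <= s_norm < low_range[1]:               # steps (b)-(c): first range
        return 0
    if high_range[0] <= s_norm <= high_range[1]:            # step (d): second range
        return 1
    return None

blocks = [[40, 50, 60, 45], [200, 210, 190, 205]]
weights = [1.0] * 4                                         # equal weights for brevity
symbols = [detect_symbol(b, weights) for b in blocks]
print(symbols)   # e.g. [0, 1]; a later step would test whether they form a valid payload
```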

US Pat. No. 10,218,993

VIDEO ENCODING METHOD, VIDEO DECODING METHOD AND APPARATUS USING SAME

LG ELECTRONICS INC., Seo...

1. A video decoding apparatus, comprising:a decoder configured to receive a bitstream including information on a slice header and information on substreams for a current slice segment, to obtain entry point information for the substreams from the slice header, and to decode the substreams based on the entry point information to reconstruct a picture;
a memory configured to store the reconstructed picture,
wherein the decoder comprises:
an entropy decoding module configured to derive prediction information and residual information on a block of a current substream;
a prediction module configured to derive prediction samples on the block based on the prediction information;
an inverse transform module configured to derive residual samples on the block, wherein the residual samples are derived based on the residual information;
a reconstructed block generating unit configured to generate reconstructed samples to generate the reconstructed picture based on the prediction samples and the residual samples,
wherein the picture includes multiple largest coding units (LCUs),
wherein a number of the substreams is equal to a number of LCU rows in the current slice segment in the picture,
wherein the entry point information includes number information indicating a number of entry point offsets, and
wherein the number of the substreams is derived based on the number information in the slice header.

US Pat. No. 10,218,992

ENCODING, TRANSMISSION AND DECODING OF COMBINED HIGH MOTION AND HIGH FIDELITY CONTENT

Cisco Technology, Inc., ...

1. A device comprising:at least one processor; and
at least one memory having computer-readable instructions, which when executed by the at least one processor, cause the at least one processor to:
receive an encoded frame;
determine whether the encoded frame includes at least one region having high fidelity content; and
upon determining that the encoded frame includes at least one region having high fidelity content,
perform a first decoding process,
perform a second decoding process for decoding the at least one region having high fidelity content,
display a previous version of the high fidelity content on a display based on the first decoding process and while the second decoding process is being performed, and
display a decoded version of the at least one region having the high fidelity content on the display when performing the second decoding process is complete.

US Pat. No. 10,218,991

IMAGE ENCODING APPARATUS, METHOD OF IMAGE ENCODING, AND RECORDING MEDIUM, IMAGE DECODING APPARATUS, METHOD OF IMAGE DECODING, AND RECORDING MEDIUM

Canon Kabushiki Kaisha, ...

1. An image decoding apparatus capable of decoding a bit stream including data obtained by encoding an image including a tile, the tile including a plurality of block rows, the image decoding apparatus comprising:a number-of-blocks acquiring unit configured to acquire, from the bit stream, information indicating a number of blocks in a height direction in the tile;
an entry point offset acquiring unit configured to acquire, from the bit stream, an entry point offset indicating a size of data corresponding to a block row included in the tile;
a flag acquiring unit configured to acquire, from the bit stream, a flag indicating whether specific decoding processing is performed; and
a decoding unit configured to decode the image including the tile based on the information acquired by the number-of-blocks acquiring unit and the entry point offset acquired by the entry point offset acquiring unit, in a case where the image includes a plurality of tiles and the flag indicates the specific decoding processing is performed,
wherein the specific decoding processing includes referring to information updated in decoding of a predetermined-numbered block in a first block row, in decoding of a first block in a second block row subsequent to the first block row.

US Pat. No. 10,218,990

VIDEO ENCODING FOR SOCIAL MEDIA

Avago Technologies Intern...

1. A device for encoding and sharing media for social networks, comprising:a sharing engine comprising a buffer, and a network interface installed within a housing of the device;
wherein the sharing engine is configured to:
receive a first portion of a media stream, and
write a subset of the received first portion of the media stream to the buffer; and
wherein the network interface is configured to, responsive to receipt of a capture command:
retrieve a second portion of the media stream from the buffer of the sharing engine,
trim the beginning and end of the retrieved second portion of the media stream to independently decodable frames, and
transmit the retrieved second portion of the media stream via a network to a second device.

US Pat. No. 10,218,989

IMPLICIT SIGNALING OF SCALABILITY DIMENSION IDENTIFIER INFORMATION IN A PARAMETER SET

Dolby International AB, ...

1. An electronic device comprising:a decoder for decoding a coded video sequence, the decoder comprising one or more processing devices configured to:
receive a video syntax set that includes information applicable to the coded video sequence,
determine, based on a flag included in the video syntax, that a scalability dimension identifier for the coded video sequence is implicitly signaled, wherein the flag is indicative of either implicit or explicit signaling of the scalability dimension identifier,
wherein the scalability dimension identifier specifies a scalability dimension of a particular layer of the coded video sequence, the scalability dimension being one of multiple types, including: a spatial type and a quality type,
derive the scalability dimension identifier from a network abstraction layer (NAL) unit header in response to determining that the scalability dimension identifier is implicitly signaled,
decode an enhancement layer based on the scalability dimension, and
generate a decoded video sequence based on, in part, the enhancement layer.

US Pat. No. 10,218,988

METHOD AND SYSTEM FOR INTERPOLATING BASE AND DELTA VALUES OF ASSOCIATED TILES IN AN IMAGE

Nvidia Corporation, Sant...

1. A non-transitory tangible-computer-readable medium having computer-executable instructions for performing a method of image decompression, said method comprising:accessing compressed image data representing an image, wherein said image comprises a plurality of tiles comprising a plurality of pixels, and wherein further said compressed image data comprises a base value, a delta value and a plurality of indices for each tile of said plurality of tiles;
decompressing said compressed image data by performing:
identifying a pixel in an image;
identifying one or more tiles associated with said pixel;
determining an interpolated base for said pixel by interpolating base values of said one or more tiles;
determining an interpolated delta for said pixel by interpolating delta values of said one or more tiles;
determining an index for said pixel based on said plurality of indices; and
determining a color value for said pixel based on said interpolated base, said interpolated delta, and said index.
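
A short sketch of the per-pixel reconstruction is given below. It assumes axis-aligned square tiles, uniform weighting over the tiles touching the pixel, and a reconstruction of the form base + index * delta; none of these specifics are stated in the claim.

```python
def decode_pixel(px, py, tiles, tile_size=4):
    """tiles: (tx, ty) -> {'base': float, 'delta': float, 'indices': [int] * tile_size**2}."""
    tx, ty = px // tile_size, py // tile_size
    neighbours = [(tx, ty), (tx + 1, ty), (tx, ty + 1), (tx + 1, ty + 1)]
    neighbours = [t for t in neighbours if t in tiles]       # tiles associated with the pixel
    w = 1.0 / len(neighbours)                                # uniform weights for brevity
    base = sum(tiles[t]["base"] for t in neighbours) * w     # interpolated base
    delta = sum(tiles[t]["delta"] for t in neighbours) * w   # interpolated delta
    idx = tiles[(tx, ty)]["indices"][(py % tile_size) * tile_size + (px % tile_size)]
    return base + idx * delta                                # assumed reconstruction rule

tiles = {
    (0, 0): {"base": 10.0, "delta": 5.0, "indices": [1] * 16},
    (1, 0): {"base": 20.0, "delta": 3.0, "indices": [2] * 16},
}
print(decode_pixel(3, 1, tiles))   # a pixel near the tile boundary blends both tiles
```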

US Pat. No. 10,218,987

METHODS, SYSTEMS, AND COMPUTER READABLE MEDIA FOR PERFORMING IMAGE COMPRESSION

THE UNIVERSITY OF NORTH C...

1. A method for performing image compression, the method comprising:identifying a canonical image set from a plurality of images uploaded to or existing on a cloud computing environment and/or a storage environment;
computing an image representation for each image in the canonical image set;
receiving a first image;
identifying, using the image representations for the canonical image set, one or more reference images that are visually similar to the first image, wherein identifying the one or more reference images includes: computing a first image representation for the first image, compressing the first image representation using a binarizing process, and performing, using the first image representation, a k-nearest neighbor(s) (KNN) search over the image representations for the canonical image set, wherein each of the image representations includes a GIST descriptor represented as a binarized string; and
compressing the first image using the one or more reference images.
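
The reference lookup is essentially a nearest-neighbour search over compact binary descriptors. The sketch below covers only that step, assuming descriptors have already been computed and binarized into equal-length bit strings; Hamming distance stands in for the comparison metric.

```python
def hamming(a, b):
    """Number of differing bits between two equal-length binary strings."""
    return sum(x != y for x, y in zip(a, b))

def knn_references(query_bits, canonical, k=2):
    """canonical: dict image_id -> binarized descriptor; return the k closest ids."""
    ranked = sorted(canonical.items(), key=lambda kv: hamming(query_bits, kv[1]))
    return [image_id for image_id, _ in ranked[:k]]

canonical = {
    "beach_01": "1100101011",
    "beach_02": "1100111011",
    "city_07":  "0011010100",
}
print(knn_references("1100101010", canonical, k=2))   # -> ['beach_01', 'beach_02']
```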

US Pat. No. 10,218,986

FRAME ACCURATE SPLICING

Google LLC, Mountain Vie...

1. A computer implemented method comprising:receiving, by a computing system, first compressed video content;
receiving, by the computing system, second compressed video content;
identifying, by the computing system, a splice point for the first compressed video content;
identifying a particular frame in the first compressed video content that precedes the splice point;
determining that the particular frame depends on information included in a subsequent frame of the first compressed video content that is after the splice point;
altering, by the computing system and in response to determining that the particular frame depends on information included in the subsequent frame, time stamp information of the subsequent frame, wherein altering the time stamp information of the subsequent frame comprises:
reading a presentation time stamp value associated with the subsequent frame;
subtracting a particular value from the presentation time stamp value; and
storing the resulting value of subtracting the particular value from the presentation time stamp value as a new presentation time stamp for the subsequent frame; and
transmitting, by the computing system and to a video presentation system, the particular frame, the subsequent frame along with the altered time stamp information, and at least a portion of the second compressed video content;
wherein the particular value is between 5 ms and 150 ms.
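
The time stamp adjustment amounts to pulling the post-splice reference frame slightly earlier. The sketch below is an illustration, not the claimed system: frames are plain dictionaries, dependencies are stored as indices, and the shift is a value in the 5-150 ms range.

```python
def adjust_pts_for_splice(frames, splice_index, shift_ms=40):
    """frames: list of {'pts_ms': int, 'depends_on': index or None}; shift_ms in [5, 150]."""
    assert 5 <= shift_ms <= 150
    for i in range(splice_index):
        dep = frames[i].get("depends_on")
        if dep is not None and dep >= splice_index:   # pre-splice frame needs a post-splice frame
            frames[dep]["pts_ms"] -= shift_ms         # store the new, earlier presentation time
    return frames

frames = [
    {"pts_ms": 0,  "depends_on": None},
    {"pts_ms": 33, "depends_on": 2},    # frame before the splice point, referencing a later frame
    {"pts_ms": 66, "depends_on": None}, # frame after the splice point
]
print(adjust_pts_for_splice(frames, splice_index=2))
```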

US Pat. No. 10,218,985

INTRA-FRAME DEPTH MAP BLOCK ENCODING AND DECODING METHODS, AND APPARATUS

Huawei Technologies Co., ...

1. An intra-frame depth map block encoding method, comprising:acquiring a depth map block to be encoded;
when a depth modeling mode (DMM) is applied to a recursive quadtree (RQT) or simplified depth coding (SDC) to encode the depth map block, separately detecting the depth map block by using a DMM1 mode and a DMM4 mode in the DMM, to obtain a rate-distortion result of the depth map block in the DMM1 mode and a rate-distortion result of the depth map block in the DMM4 mode; and
determining that a DMM with a smallest rate-distortion result in the DMM1 mode and the DMM4 mode is a DMM used during encoding, applying the used mode to the RQT or the SDC to encode the depth map block, and writing the used DMM to a bitstream.

US Pat. No. 10,218,984

IMAGE CODING METHOD AND DEVICE FOR BUFFER MANAGEMENT OF DECODER, AND IMAGE DECODING METHOD AND DEVICE

SAMSUNG ELECTRONICS CO., ...

1. An apparatus for encoding an image, the apparatus comprising:an encoder configured to encode an image frame by performing motion prediction using a reference frame,
to output a first syntax indicating a maximum size of a buffer required to decode the image frame by a decoder, a second syntax indicating the number of image frames required to be reordered, and a third syntax indicating a latency information, and to generate a bitstream by adding the first syntax, the second syntax and the third syntax to a mandatory sequence parameter set,
wherein the number of frames required to be reordered is determined based on an encoding order of the image frame, an encoding order of the reference frame referred to by the image frame, a display order of the image frame, and a display order of the reference frame,
wherein the latency information indicates a largest difference between the encoding order and the display order,
wherein the maximum size of the buffer storing decoded picture is determined based on the first syntax,
wherein whether to output the decoded picture stored in the buffer is determined based on the second syntax and the third syntax by increasing a latency parameter count of the decoded picture stored in the buffer by one whenever a picture included in an image sequence is decoded,
wherein the decoded picture is outputted from the buffer when the latency parameter count of the decoded picture is equal to the latency information.
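
The latency-based output rule can be sketched in isolation. The class below ignores buffer sizing and reordering (the first and second syntax elements) and only shows the counter behaviour: every stored picture's latency count increases by one per decoded picture, and a picture is output once its count reaches the signalled latency value.

```python
class DecodedPictureBuffer:
    """Illustration of the latency-count output rule only."""
    def __init__(self, max_latency):
        self.max_latency = max_latency            # value conveyed by the third syntax
        self.stored = []                          # list of (picture_id, latency_count)

    def decode(self, picture_id):
        self.stored = [(pid, c + 1) for pid, c in self.stored]   # bump every counter
        output = [pid for pid, c in self.stored if c == self.max_latency]
        self.stored = [(pid, c) for pid, c in self.stored if c < self.max_latency]
        self.stored.append((picture_id, 0))       # newly decoded picture enters the buffer
        return output                             # pictures released for display

dpb = DecodedPictureBuffer(max_latency=2)
for pic in ["I0", "P1", "B2", "P3"]:
    print(pic, "->", dpb.decode(pic))             # I0 is released while decoding B2, and so on
```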

US Pat. No. 10,218,983

ADAPTING MODE DECISIONS IN VIDEO ENCODER

Apple Inc., Cupertino, C...

1. An encoding pipeline configured to encode image data, comprising:mode decision circuitry configured to determine a frame prediction mode, the mode decision circuitry comprising:
distortion measurement circuitry configured to select a distortion measurement calculation based at least in part on operational parameters of a display device and the image data, wherein the distortion measurement calculation comprises a higher-cost calculation when the encoding pipeline is capable of encoding the image data at or near real time, or comprises a low-cost calculation when the encoding pipeline is not capable of encoding the image data at or near real time and wherein whether the encoding pipeline is capable of encoding the image data at or near real time depends at least in part on the operational parameters; and
mode selection circuitry configured to:
determine rate distortion cost metrics associated with an inter-frame prediction mode and an intra-frame prediction mode using the distortion measurement calculation; and
select between the inter-frame prediction mode and the intra-frame prediction mode based at least in part on the rate distortion cost metrics.

US Pat. No. 10,218,954

VIDEO TO DATA

CELLULAR SOUTH, INC., Ri...

1. A method to generate video data from a video comprising:generating audio files and image files from the video;
distributing the audio files and the image files across a plurality of processors and processing the audio files and the image files in parallel;
converting audio files associated with the video to text;
identifying an object in the image files;
determining a contextual topic from the image files;
assigning a probability of accuracy to the identified object based on the contextual topic;
converting the image files associated with the video to video data, wherein the video data comprises the object, the probability, and the contextual topic;
cross-referencing the text and the video data with the video to determine contextual topics;
generating a contextual text, an image, or an animation based on the determined contextual topics; and
generating a content-rich video based on the generated text, image, or animation.

US Pat. No. 10,218,925

METHOD AND APPARATUS FOR CORRECTING LENS DISTORTION

HUAWEI TECHNOLOGIES CO., ...

1. A method, comprising:performing a first correction of radial lens distortion in image data acquired from a lens in a horizontal direction, before the image data is written into a dynamic memory;
writing the image data into the dynamic memory after performing the first correction; and
performing a second correction in the dynamic memory of the radial lens distortion in the image data written into the dynamic memory in the vertical direction using a column length selected according to a degree of radial distortion of the image data, wherein the column length is a sum of pixels that can be read consecutively in a refresh cycle of the dynamic memory.

US Pat. No. 10,218,924

LOW NOISE CMOS IMAGE SENSOR BY STACK ARCHITECTURE

OmniVision Technologies, ...

1. A pixel circuit for use in a high dynamic range (HDR) image sensor, comprising:a photodiode disposed in a first semiconductor wafer, the photodiode adapted to photogenerate charge carriers in response to incident light during a single exposure of a single image capture of the HDR image sensor;
a floating diffusion disposed in the first semiconductor wafer and coupled to receive the charge carriers photogenerated in the photodiode;
a transfer transistor disposed in the first semiconductor wafer and coupled between the photodiode and the floating diffusion, wherein the transfer transistor is adapted to be switched on to transfer the charge carriers photogenerated in the photodiode to the floating diffusion;
an in-pixel capacitor disposed in a second semiconductor wafer, wherein the first semiconductor wafer is stacked with and coupled to the second semiconductor wafer; and
a dual floating diffusion (DFD) transistor disposed in the first semiconductor wafer and coupled between the floating diffusion and the in-pixel capacitor, wherein the DFD transistor is coupled to be enabled or disabled in response to a DFD signal such that the in-pixel capacitor is selectively coupled to the floating diffusion through the DFD transistor in response to the DFD signal, wherein the floating diffusion is set to low conversion gain in response to the in-pixel capacitor being coupled to the floating diffusion, and wherein the floating diffusion is set to high conversion gain in response to the in-pixel capacitor being decoupled from the floating diffusion.

US Pat. No. 10,218,923

METHODS AND APPARATUS FOR PIXEL BINNING AND READOUT

SEMICONDUCTOR COMPONENTS ...

1. An imaging apparatus capable of identifying a predetermined feature and producing an output image, comprising:a pixel array, comprising a plurality of pixels arranged to form a plurality of rows;
an image signal processor coupled to the pixel array and configured to:
receive pixel data from the plurality of pixels; and
determine:
a region of interest according to the predetermined feature, wherein the region of interest corresponds to a first group of consecutive rows from the plurality of rows; and
a region of non-interest comprising a plurality of remaining rows from the plurality of rows;
a readout circuit coupled to the pixel array and configured to:
facilitate combining portions of the pixel data from the region of non-interest to form a plurality of second groups;
read out each row of the first group according to a first readout rate; and
read out each of the second groups according to a second readout rate;
wherein the first readout rate is substantially equal to the second readout rate.

US Pat. No. 10,218,922

SOLID-STATE IMAGING DEVICE

OLYMPUS CORPORATION, Tok...

1. A solid-state imaging device comprising:a first semiconductor substrate to which light is incident;
a second semiconductor substrate that is stacked on a surface of the first semiconductor substrate, the surface being opposite with respect to a surface on which the light is incident to the first semiconductor substrate;
n first photoelectric conversion devices that are periodically arranged in the first semiconductor substrate, the n first photoelectric conversion devices generating first electric charge signals by performing photoelectric conversion of the incident light;
n first reading circuits arranged in correspondence with each of the n first photoelectric conversion devices in the first semiconductor substrate, each of the n first reading circuits accumulating the first electric charge signal generated by a corresponding one of the n first photoelectric conversion devices, and each of the n first reading circuits outputting a signal voltage corresponding to the accumulated first electric charge signal as a first pixel signal;
a driving circuit that outputs the first pixel signal by sequentially driving each of the n first reading circuits;
m second photoelectric conversion devices that are periodically arranged in one of the first semiconductor substrate and the second semiconductor substrate, the m second photoelectric conversion devices generating second electric charge signals by performing photoelectric conversion of the incident light; and
m second reading circuits that sequentially output a second pixel signal indicating a change in the second electric charge signal, the second electric charge signal being generated by a corresponding second photoelectric conversion device among the m second photoelectric conversion devices,
wherein each of the m second reading circuits includes:
a detection circuit that detects a temporal change of the second electric charge signal generated by the corresponding one of the second photoelectric conversion devices and the detection circuit outputs an event signal indicating a direction of a change when a change exceeding a predetermined threshold is detected; and
a pixel signal generating circuit that is arranged in the second semiconductor substrate and the pixel signal generating circuit outputs the second pixel signal, the second pixel signal being generated by adding address information indicating a position at which the corresponding one of the second photoelectric conversion devices is arranged to the event signal,
wherein n is a natural number equal to 2 or more than 2, and
wherein m is a natural number equal to 2 or more than 2.

US Pat. No. 10,218,921

IMAGING SYSTEMS AND METHODS

Sensors Unlimited, Inc., ...

1. An imaging method, comprising:receiving electromagnetic radiation at a focal plane array of a handheld device;
processing the received electromagnetic radiation within the handheld device;
displaying visible images on the handheld device indicative of a scene including a designator and a designator identifier, the designator and designator identifier being representative of pulsed electromagnetic radiation received by the focal plane array; and
converting the electromagnetic radiation into image data and comprising converting the electromagnetic radiation into high frequency pulse data using a common focal plane array,
wherein the image data includes pixel photocurrents integrated over a first exposure period, wherein the high frequency pulse data includes voltages representative of presence, or lack of presence, of high frequency laser illumination within a second exposure period, and wherein the second exposure period has a shorter duration than the first exposure period.

US Pat. No. 10,218,920

IMAGE PROCESSING APPARATUS AND CONTROL METHOD FOR GENERATING AN IMAGE BY VIEWPOINT INFORMATION

Canon Kabushiki Kaisha, ...

1. An image processing apparatus comprising:a processor; and
a memory storing one or more programs configured to be executed by the processor, the one or more programs including instructions for:
acquiring virtual viewpoint information indicating a virtual viewpoint;
generating a virtual viewpoint image based on both of a plurality of captured images captured by multiple cameras from a plurality of directions and the virtual viewpoint information acquired in the acquiring, wherein, according to the virtual viewpoint information acquired in the acquiring, an inclination correction process for correcting an inclination based on the virtual viewpoint information is executed to generate the virtual viewpoint image to be output; and
outputting the generated virtual viewpoint image.

US Pat. No. 10,218,919

IMAGE PICKUP SYSTEM, IMAGE PROCESSING METHOD, AND RECORDING MEDIUM

OLYMPUS CORPORATION, Tok...

1. An image pickup system in which an interchangeable lens is detachably attached to a camera main body, the image pickup system comprising:an image pickup circuit provided in the camera main body and configured to pick up, on an image pickup plane on which a plurality of pixels are arrayed, an optical image formed by the interchangeable lens and output picked-up image data with a predetermined sampling frequency;
a resolution changing circuit configured to generate, on the basis of the picked-up image data corresponding to a partial region in a screen obtained from the image pickup circuit, image data with resolution higher than resolution of the picked-up image data; and
a control circuit configured to determine, on the basis of an MTF characteristic value corresponding to the partial region among a plurality of MTF characteristic values corresponding to a plurality of regions in the screen of the interchangeable lens and the predetermined sampling frequency, an upper limit of the resolution of the image data generated by the resolution changing circuit.

US Pat. No. 10,218,918

IMAGE PROCESSING DEVICE AND METHOD WITH IMAGE PROCESS EFFECT SELECTION BASED UPON FLOW VECTOR PATTERNS

Sony Corporation, Tokyo ...

1. An image processing device comprising:a flow vector detection unit configured to detect flow vectors of pixels in an input image; and
an effect selection unit configured to select a process of effect to the input image, on the basis of a pattern of the flow vectors,
wherein the pattern of the flow vectors indicates a presence or absence of a vanishing point.

US Pat. No. 10,218,917

METHOD AND APPARATUS TO CREATE AN EOTF FUNCTION FOR A UNIVERSAL CODE MAPPING FOR AN HDR IMAGE, METHOD AND PROCESS TO USE THESE IMAGES

KONINKLIJKE PHILIPS N.V.,...

1. A tangible processor readable storage medium that is not a transitory propagating wave or signal having processor readable program code for operating on a processor for performing a method of constructing a code allocation function for allocating pixel colors having pixel luminances to luma codes encoding such pixel luminances, the method comprising acts of:constructing a luma code mapping from at least two partial functions by determining a code allocation function applied to a linear luminance of a pixel to obtain a luma code value, the constructing comprising acts of:
mapping the luma code to provide a non-linear mapping of pixel linear luminances to luma values,
defining a non-linear invertible mapping of an entire luminance range of a linear luminance input value to an entire luma range of a first output luma value using a first partial function of the at least two partial functions, and
defining a non-linear invertible mapping of an entire luma range of an input luma value being the first output luma value to an entire luma range of a second output luma value using a second partial function to be consecutively applied to the luma value from the first partial function of the at least two partial functions.

US Pat. No. 10,218,916

CAMERA WITH LED ILLUMINATION

GOOGLE LLC, Mountain Vie...

1. A camera, comprising:a camera lens configured to capture visual data of a field of view;
a plurality of light sources configured to illuminate the field of view; and
a bypass circuit coupled to the plurality of light sources and configured to bypass a subset of the plurality of light sources, wherein the bypass circuit is configured to select one of a plurality of light source subsets, and at least two of the plurality of light source subsets include distinct light source members configured to illuminate different regions of the field of view of the camera;
wherein:
the camera includes a first mode and a second mode;
in the first mode, the plurality of light sources are electrically coupled to form a string and driven by a boosted drive voltage; and
in the second mode, the one of the plurality of light source subsets is selected and driven by a regular drive voltage that is lower than the boosted drive voltage.

US Pat. No. 10,218,915

COMPACT MULTI-ZONE INFRARED LASER ILLUMINATOR

TRILUMINA CORP., Albuque...

1. An infrared illumination system, comprising:a plurality of infrared illumination sources including a first infrared illumination source and a second infrared illumination source, the first infrared illumination source configured to provide illumination to a first zone of a plurality of zones, the second infrared illumination source configured to provide illumination to a second zone of a plurality of zones, the first zone being different than the second zone, and each of the plurality of zones corresponding to at least part of an angular portion of a field of view of an image sensor, wherein the plurality of infrared illumination sources comprise one or more arrays of a plurality of vertical-cavity surface-emitting lasers (VCSELs);
a plurality of microlenses, each microlens among the plurality of microlenses corresponding to each VCSEL among the plurality of VCSELs, wherein each microlens directs illumination from a corresponding VCSEL to at least one of the plurality of separate zones; and
an image processor in communication with the plurality of infrared illumination sources and the image sensor, and configured to define an area of interest in the field of view of the image sensor and separately control each of the plurality of infrared illumination sources to provide an adjustable illumination power to at least one of the plurality of separate zones and alter an illumination of the area of interest, in response to image data indicative of an illumination of one or more areas in the field of view of the image sensor.

US Pat. No. 10,218,913

HDR/WDR IMAGE TIME STAMPS FOR SENSOR FUSION

Qualcomm Incorporated, S...

1. An apparatus to determine timestamp information, the apparatus comprising:an image sensor configured to capture a plurality of sub-frames of a scene, wherein each sub-frame comprises an image of the scene captured using an exposure time that is different from at least one other exposure time of at least one other sub-frame of the plurality of sub-frames; and
at least one processor coupled to the image sensor and configured to:
receive, from the image sensor, for each of the plurality of sub-frames, sub-pixel image data corresponding to a first portion of an image frame;
determine composite image data corresponding to the first portion of the image frame based on values of the received sub-pixel image data for the plurality of sub-frames and by selecting, from the plurality of sub-frames, a particular sub-frame to be associated with the composite image data based on luminosity values of the sub-pixel image data of the plurality of sub-frames;
identify an indicator based on the sub-frames corresponding to the received sub-pixel image data used to determine the composite image data; and
determine timestamp information, based on the identified indicator, wherein the timestamp information corresponds to the composite image data based upon timing information for the plurality of sub-frames.

US Pat. No. 10,218,912

INFORMATION PROCESSING APPARATUS, METHOD OF CONTROLLING IMAGE DISPLAY, AND STORAGE MEDIUM

CANON KABUSHIKI KAISHA, ...

1. An information processing apparatus, comprising:a display controller configured to exert control such that an image is displayed on a display screen in one of a plurality of display modes, the plurality of display modes including a first display mode and a second display mode; and
a first comparison unit configured to compare, when the first display mode is switched to the second display mode, a display size of the image that is displayed on the display screen in the first display mode and a given display size that is associated with the second display mode,
wherein the display controller is configured to exert control such that the image is displayed on the display screen in the second display mode in either the given display size that is associated with the second display mode or the display size of the image that is displayed in the first display mode, based on a result of the comparison by the first comparison unit.

US Pat. No. 10,218,911

MOBILE DEVICE, OPERATING METHOD OF MOBILE DEVICE, AND NON-TRANSITORY COMPUTER READABLE STORAGE MEDIUM

HTC Corporation, Taoyuan...

1. A method comprising:capturing a preview image;
displaying the preview image;
detecting a photograph in the preview image;
in response to the photograph being detected in the preview image, searching a video file corresponding to the photograph in a database; and
in response to a video file corresponding to the photograph being searched, playing a video of the searched video file over at least a part of the displayed preview image,
wherein the operation of playing the searched video file over at least a part of the displayed preview image comprises:
calculating a corresponding relationship between vertexes of the photograph and vertexes of the video of the searched video file; and
playing the video with a shape and a size changed according to the corresponding relationship between the vertexes of the photograph and the vertexes of the video of the searched video file at a position of the photograph in the preview image.

US Pat. No. 10,218,909

CAMERA DEVICE, METHOD FOR CAMERA DEVICE, AND NON-TRANSITORY COMPUTER READABLE STORAGE MEDIUM

HTC Corporation, Taoyuan...

1. A method applied to a camera device comprising:acquiring an angular velocity signal;
calculating an angular displacement according to a high frequency portion and a low frequency portion of the angular velocity signal;
generating a compensation value according to the angular displacement;
adjusting the compensation value according to a frequency corresponding to the angular velocity signal to generate an adjusted compensation value, wherein the adjusted compensation value is determined by using a first adjusting filter, the first adjusting filter corresponds to a first function of the angular displacement, and an order of the first function is less than 1; and
controlling an optical image stabilization (OIS) system to align an optical axis of a camera of the camera device according to the adjusted compensation value.

US Pat. No. 10,218,908

IMAGE PROCESSING APPARATUS CAPABLE OF PERFORMING IMAGE SHAKE CORRECTION, IMAGE PICKUP APPARATUS, AND CONTROL METHOD

Canon Kabushiki Kaisha, ...

1. An image processing apparatus includes at least one processor or circuit configured to perform the operations of the following units:a temporary storage unit configured to temporarily store a video image including a (N-M)-th frame;
an output control unit configured to output a video image of frames up to N-th frame and later than the (N-M)-th frame, to a first signal path, and output a video image of the (N-M)-th frame that is temporarily stored in the temporary storage unit to a second signal path;
a first correction amount calculating unit configured to calculate a first correction amount used for correcting an image shake relating to the video image of the (N-M)-th frame based on the video image of the frames up to N-th frame and later than the (N-M)-th frame, output to the first signal path;
a first image shake correction unit configured to correct the image shake relating to the video image of the (N-M)-th frame by cutting out a predetermined region from the video image of the (N-M)-th frame based on the first correction amount; and
a recording unit configured to record the (N-M)-th frame, the image shake of which is corrected by the first image shake correction unit;
wherein the first signal path is connected to a display unit configured to display a video image for confirmation on a main body of the image processing apparatus.

US Pat. No. 10,218,907

IMAGE PROCESSING APPARATUS AND CONTROL METHOD DETECTION AND CORRECTION OF ANGULAR MOVEMENT DUE TO SHAKING

Canon Kabushiki Kaisha, ...

1. An image processing apparatus comprising:at least one processor or circuit configured to perform the operations of the following units:
an acquisition unit configured to acquire an angular velocity of panning of an imaging device detected by an angular velocity sensor and a motion vector of an object detected from a plurality of image data successively imaged by an imaging element;
a decision unit configured to decide a value of a frame rate when the image data is acquired from the imaging element used to detect the motion vector; and
a calculation unit configured to calculate an angular velocity of the object with respect to the imaging device from the angular velocity of the panning and the motion vector of the object,
wherein the decision unit decides the value of the frame rate corresponding to the angular velocity of the panning and the acquisition unit acquires the motion vector of the object detected from the plurality of the image data imaged at the frame rate decided by the decision unit.

US Pat. No. 10,218,906

CAMERA DEVICE AND METHOD FOR CAMERA DEVICE

HTC Corporation, Taoyuan...

1. A method applied to a camera device comprising:receiving one or all of an angular velocity signal and an acceleration signal;
selecting one of predetermined motion modes according to the one or all of the angular velocity signal and the acceleration signal;
configuring one or more of an exposure time of a camera, an auto white balance (AWB) configuration of the camera, and an auto exposure (AE) configuration of the camera according to the selected motion mode;
configuring an auto focus (AF) configuration of the camera, wherein if magnitudes of one or more of vectors in the angular velocity signal or the acceleration signal are lower than a predetermined threshold, an AF speed of the AF configuration is configured to be a fast value, and if the magnitudes of the one or more of the vectors in the angular velocity signal or the acceleration signal are greater than the predetermined threshold, the AF speed is configured to be a medium value; and
capturing an image or recording a video according to the one or more of the exposure time of the camera, the AF configuration of the camera, the AWB configuration of the camera, and the AE configuration of the camera;
wherein the predetermined motion modes comprise a walk mode and a rotate mode, a first AF speed is determined according to the angular velocity signal or the acceleration signal of the walk mode and a second AF speed is determined according to the angular velocity signal or the acceleration signal of the rotate mode, and the first AF speed is different from the second AF speed.

US Pat. No. 10,218,905

SLIDING ACCESSORY LENS MOUNT WITH AUTOMATIC MODE ADJUSTMENT

Nokia Technologies Oy, E...

1. A device comprising a camera, the camera comprising:an objective;
an image capture assembly;
an attachment indicator configured to detect attaching of a sliding accessory lens mount to the device;
an optical mode detector configured to receive an optical mode indication from an optical mode indicator of the sliding accessory lens mount such that one of different optical elements is linearly moved to a co-operating position with the objective;
a touch detection surface, wherein the device is configured to detect a detection object of the optical mode indicator proximate the touch detection surface, and wherein the device is configured to use the touch detection surface for detection of a position of the mode indicator; and
a processor configured to automatically determine one or more imaging parameters based on the optical mode indication and to correspondingly control the operation of one or more of the objective and the image capture assembly.

US Pat. No. 10,218,904

WIDE FIELD OF VIEW CAMERA FOR INTEGRATION WITH A MOBILE DEVICE

ESSENTIAL PRODUCTS, INC.,...

1. An imaging device, comprising:an array of lenses corresponding to photo sensors disposed around a substrate,
wherein a first subset of the array of lenses includes wide-angle lenses and a second subset of the array of lenses include standard-angle lenses; and
a connection mechanism to transfer data associated with images captured by the photo sensors to cause a processor to receive any of the captured images and create a wide view image of an environment around the imaging device,
wherein the captured images include a distorted image and a standard image, and
wherein creating the wide view image includes merging pixels of the distorted image and the standard image.

US Pat. No. 10,218,903

DIGITAL 3D/360 DEGREE CAMERA SYSTEM

1. A method for exporting digital images in a digital camera system comprising a plurality of digital cameras, the method comprising:storing digital image data and embedded metadata in an electronic file;
generating, based on a calibration process that exposes pixels of the plurality of digital cameras to distinct coordinate points, a pixel vector map, wherein the pixel vector map includes a collection of data that identifies a geometry of each of the plurality of digital cameras in the digital camera system;
storing, based on the generated pixel vector map, pixel vector map data in the electronic file, wherein the file includes pixel vector map data describing the digital camera system; and
exporting the file via a communication interface to an external device, wherein the exporting includes delivering the image data and the pixel vector map data for processing, and wherein the exporting is based on a request received by the digital camera system from an external processing system and a determination that no more images are to be captured.

US Pat. No. 10,218,902

APPARATUS AND METHOD FOR SETTING CAMERA

Samsung Electronics Co., ...

1. A method for controlling an electronic device, the method comprising:detecting environmental information associated with the electronic device using a sensor, wherein the electronic device comprises the sensor, a first image sensor, and a second image sensor;
changing first setting information of the first image sensor based on the environmental information;
detecting a user's viewpoint;
selecting one of the first image sensor or the second image sensor based on the user's viewpoint; and
changing the first setting information or a second setting information of the second image sensor based on the selected image sensor,
wherein the detecting of the user's viewpoint comprises:
receiving orientation information from a head mounted display (HMD); and
determining the user's viewpoint based on the orientation information.
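
A sketch of the viewpoint-based selection above, not Samsung's method: the HMD orientation (yaw in degrees is an assumption) picks the front or rear image sensor, and an environment-driven setting change is then applied only to the selected sensor. The lux threshold and ISO rule are illustrative.

def select_sensor(hmd_yaw_deg):
    """Facing roughly forward selects the rear (world-facing) sensor."""
    return "rear" if abs(hmd_yaw_deg) < 90 else "front"

def adjust_settings(settings, ambient_lux, hmd_yaw_deg):
    selected = select_sensor(hmd_yaw_deg)
    # Illustrative rule: raise the ISO of the selected sensor in low light.
    if ambient_lux < 50:
        settings[selected]["iso"] = min(settings[selected]["iso"] * 2, 3200)
    return settings

cfg = {"front": {"iso": 100}, "rear": {"iso": 100}}
print(adjust_settings(cfg, ambient_lux=20, hmd_yaw_deg=10))   # rear ISO doubled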

US Pat. No. 10,218,901

PICTURE COMPOSITION ADJUSTMENT

INTERNATIONAL BUSINESS MA...

1. A computer-implemented method comprising:determining, based on a predefined composition rule, whether a picture composition of a first object and a second object needs to be adjusted; wherein the determining comprises:
determining whether the first object overlaps with the second object;
in response to determining that the first object overlaps with the second object, determining whether a ratio of areas of a first region of the first object and a total region of the picture composition is less than a ratio threshold; and
in response to determining that the ratio is less than the ratio threshold, determining that the picture composition needs to be adjusted;
in response to determining that the picture composition needs to be adjusted, determining an adjusting pattern based on the predefined composition rule; and
providing the adjusting pattern to a user, to indicate to the user to adjust the picture composition based on the adjusting pattern.
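
A minimal sketch of the decision logic as claimed (overlap test plus an area-ratio threshold); the rectangles, the threshold value and the adjusting-pattern text are illustrative assumptions, not IBM's implementation.

def overlaps(a, b):
    """Axis-aligned rectangles given as (x, y, w, h)."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def needs_adjustment(first_obj, second_obj, frame, ratio_threshold=0.15):
    if not overlaps(first_obj, second_obj):
        return False
    first_area = first_obj[2] * first_obj[3]
    frame_area = frame[2] * frame[3]
    return (first_area / frame_area) < ratio_threshold

def adjusting_pattern():
    # Hypothetical guidance shown to the user when adjustment is needed.
    return "Move closer or zoom in so the first object fills more of the frame."

frame = (0, 0, 4000, 3000)
subject, background_obj = (100, 100, 300, 400), (250, 150, 900, 900)
if needs_adjustment(subject, background_obj, frame):
    print(adjusting_pattern())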

US Pat. No. 10,218,900

DOCUMENT REORIENTATION PROCESSING

NCR Corporation, Atlanta...

1. A method, comprising:capturing, by a device, document images for a document;
identifying four edges for the document from the document images;
obtaining a camera preview image when a camera of the device is in a camera preview mode;
resolving an optimal orientation of the device for capturing an optimal image of the document based on the four edges and the camera preview image; and
activating the device to capture the optimal image for the document in the optimal orientation, including displaying on a display of the device an indication of the optimal orientation of the device;
wherein displaying includes presenting a guiding rectangle corresponding to the optimal orientation of the device in a screen on the display of the device and superimposed over the document images appearing in the display;
wherein presenting further includes presenting a graphical illustration within the screen that illustrates moving the device from a current orientation to the optimal orientation and identifying when particular edges of a particular document image instance are aligned with a top-leftmost corner of the guiding rectangle and when a center of the particular document image instance corresponds to a calculated center for the four edges.
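
A sketch of the alignment checks described above, not NCR's code: the document's top-leftmost corner and centre, computed from its four detected edges, are compared against a guiding rectangle in preview coordinates. The tolerance and coordinates are illustrative assumptions.

def document_center(corners):
    """corners: four (x, y) points of the detected document edges."""
    xs, ys = zip(*corners)
    return (sum(xs) / 4.0, sum(ys) / 4.0)

def is_aligned(corners, guide_rect, tol=10.0):
    """guide_rect given as (x, y, w, h) in preview coordinates."""
    gx, gy, gw, gh = guide_rect
    top_left = min(corners, key=lambda p: p[0] + p[1])
    corner_ok = abs(top_left[0] - gx) <= tol and abs(top_left[1] - gy) <= tol
    cx, cy = document_center(corners)
    center_ok = abs(cx - (gx + gw / 2)) <= tol and abs(cy - (gy + gh / 2)) <= tol
    return corner_ok and center_ok

corners = [(52, 48), (448, 51), (450, 649), (49, 652)]
print(is_aligned(corners, guide_rect=(50, 50, 400, 600)))   # True within tolerance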

US Pat. No. 10,218,899

CONTROL METHOD IN IMAGE CAPTURE SYSTEM, CONTROL APPARATUS AND A NON-TRANSITORY COMPUTER-READABLE STORAGE MEDIUM

Canon Kabushiki Kaisha, ...

1. A control method in an image capture system having a first image capture apparatus and a second image capture apparatus, the method comprising:analyzing a captured image captured by the first image capture apparatus;
deciding whether a likelihood of a capturing target that is an analysis result of the captured image is smaller than a predetermined threshold; and
in a case where the likelihood is smaller than the predetermined threshold, controlling an imaging area of the second image capture apparatus so that the imaging area of the second image capture apparatus becomes wider than in a case where the likelihood is not smaller than the predetermined threshold until the capturing target is found in a captured image captured by the second image capture apparatus,
wherein the controlling of the imaging area until the capturing target is found comprises: (a) making the imaging area wider, (b) determining whether or not the capturing target is found in the captured image captured by the second image capture apparatus using characteristic information on the capturing target, and (c) in response to the determination that the capturing target is not found in the captured image captured by the second image capture apparatus, repeating making the imaging area wider.
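
A sketch of the widening loop in the claim, not Canon's implementation: if the first camera's detection likelihood is low, the second camera keeps zooming out until its frame contains the target or it reaches its widest view. The detector, zoom model and capture call are hypothetical stand-ins.

def find_with_second_camera(first_likelihood, threshold, second_cam, detector, target_features):
    """Widen the second camera's imaging area until the target is found (or give up)."""
    if first_likelihood >= threshold:
        return None                                   # first camera is confident enough
    while True:
        frame = second_cam["capture"]()               # hypothetical capture call
        if detector(frame, target_features):          # (b) is the target in the frame?
            return frame
        if second_cam["zoom"] <= second_cam["min_zoom"]:
            return None                               # already at the widest view
        second_cam["zoom"] = max(second_cam["zoom"] * 0.8,   # (a)/(c) widen and repeat
                                 second_cam["min_zoom"])

# Toy usage: a fake camera whose target only becomes visible at wide zoom.
cam = {"zoom": 4.0, "min_zoom": 1.0}
cam["capture"] = lambda: {"zoom": cam["zoom"]}
found = find_with_second_camera(
    first_likelihood=0.2, threshold=0.5, second_cam=cam,
    detector=lambda frame, feats: frame["zoom"] <= 1.5, target_features=None)
print(found)   # a frame captured once the view has widened to zoom <= 1.5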

US Pat. No. 10,218,898

AUTOMATED GROUP PHOTOGRAPH COMPOSITION

International Business Ma...

1. A computer-implemented method for the automatic composition of group photograph framings as a function of relationship data, comprising executing on a computer processor:identifying a person appearing to a user via a viewfinder of a camera within a photographic image framing for acquisition of image data by the camera;
determining a geographic location of an additional person who is related to the person identified within the photographic image framing, wherein the additional person does not appear to the user within the photographic image framing, and the determined geographic location of the additional person is within a specified proximity range to a geographic location of the person identified within the photographic image framing; and
in response to determining that a relationship of the additional person to the person identified within the image framing indicates that the additional person should be included within photographic images of the identified person, recommending that the additional person be added to the photographic image framing prior to acquisition of image data by the camera from the photographic image framing.
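
A minimal sketch of the recommendation step, not IBM's system: if a related person is within a proximity range of the framed person and the relationship is one that belongs in the photo, a recommendation to add them is produced. The distance approximation, relationship set and range are assumptions.

import math

CLOSE_RELATIONSHIPS = {"spouse", "child", "parent", "sibling"}

def distance_m(a, b):
    """Rough planar distance between two (lat, lon) points in metres."""
    dlat = (a[0] - b[0]) * 111_000
    dlon = (a[1] - b[1]) * 111_000 * math.cos(math.radians(a[0]))
    return math.hypot(dlat, dlon)

def recommend_additions(framed_person, candidates, proximity_m=50):
    recs = []
    for person in candidates:
        if person["relationship"] not in CLOSE_RELATIONSHIPS:
            continue
        if distance_m(framed_person["location"], person["location"]) <= proximity_m:
            recs.append(f"Consider adding {person['name']} to the photo.")
    return recs

framed = {"name": "A", "location": (37.4275, -122.1697)}
nearby = [{"name": "B", "relationship": "sibling", "location": (37.4276, -122.1698)}]
print(recommend_additions(framed, nearby))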

US Pat. No. 10,218,897

DISPLAY CONTROL DEVICE AND METHOD TO DISPLAY A PANORAMIC IMAGE

Sony Corporation, Tokyo ...

1. A display control device, comprising:circuitry configured to:
generate, based on a first user instruction, a partial target image of a display target image displayed on a display area, wherein the partial target image comprises a first width and a first length, the first width is equal to a second width of the display area, and the first length is shorter than a second length of the display area;
concurrently display a whole of the display target image and the partial target image at a position, wherein the position corresponds to an input position of the first user instruction on the display target image,
wherein the first user instruction to designate a part of the display target image as the partial target image is received in a state where the whole of the display target image is displayed on the display area, wherein the display area has an aspect ratio different from that of the display target image; and
display automatic scroll of the display target image from a scroll start position, based on a second user instruction to designate start of the automatic scroll of the display target image,
wherein the first user instruction is a touch operation, and the second user instruction is a release of the touch operation,
wherein the release of the touch operation is a trigger for the display of the automatic scroll of the display target image, and
wherein the scroll start position corresponds to a point on the display target image at which the touch operation is released.
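
A sketch of the touch/release behaviour in the claim, not Sony's code: touching designates a display-wide strip of the panorama as the partial target image, and releasing triggers an automatic scroll starting from the release point. The strip geometry and coordinate mapping are illustrative assumptions.

class PanoramaViewer:
    def __init__(self, image_width, display_width, display_height):
        self.image_width = image_width
        self.display_w, self.display_h = display_width, display_height
        self.scroll_x = None          # None until auto-scroll is triggered
        self.partial_target = None

    def on_touch(self, x, y):
        # Partial target: full display width, shorter than the display height.
        strip_h = self.display_h // 3
        self.partial_target = (0, max(0, y - strip_h // 2), self.display_w, strip_h)

    def on_release(self, x, y):
        # Release is the trigger; scrolling starts at the release position,
        # mapped from display coordinates into panorama coordinates.
        self.scroll_x = int(x / self.display_w * self.image_width)

viewer = PanoramaViewer(image_width=8000, display_width=1000, display_height=600)
viewer.on_touch(300, 200)
viewer.on_release(300, 200)
print(viewer.partial_target, viewer.scroll_x)   # strip geometry and scroll start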

US Pat. No. 10,218,896

FOCUS ADJUSTMENT DEVICE, FOCUS ADJUSTMENT METHOD, AND NON-TRANSITORY STORAGE MEDIUM STORING FOCUS ADJUSTMENT PROGRAM

Olympus Corporation, Tok...

1. A focus adjustment device which includes an imager to receive a light flux passing through an imaging lens including a focus lens and then generate an image signal and which performs focus adjustment on the basis of the image signal, the focus adjustment device comprising:a direction judgment unit which calculates an evaluation value based on an image signal of a focus detection region set in a region of the imager where the light flux is received, thereby judging a drive direction of the focus lens to be in focus based on a difference between evaluation values at different positions of the focus lens; and
a control unit which controls a focus adjustment operation on the basis of the drive direction judged by the direction judgment unit,
wherein the control unit causes the direction judgment unit to repeatedly judge the drive direction, and after the focus lens is slightly driven in a first direction judged on the basis of a first evaluation value and then the focus lens is slightly driven in a second direction different from the first direction on the basis of a subsequently calculated second evaluation value, when a drive amount of the focus lens in the second direction which is calculated based on the second evaluation value does not exceed a predetermined drive amount, the control unit forbids the slight driving of the focus lens in the first direction even though a drive direction judged on the basis of a further subsequently calculated third evaluation value is the first direction and the slight driving of the focus lens in the second direction is continuously performed,
wherein the control unit does not forbid the slight driving of the focus lens in the first direction, when a drive amount of the focus lens in the second direction which is calculated based on the second evaluation value exceeds the predetermined drive amount, and causes the direction judgment unit to repeatedly judge the drive direction by slightly driving the focus lens in the first direction, and
wherein the control unit determines that the focus lens is in focus when a sum of a number of times when the drive direction of the slight driving of the focus lens is changed from the first direction to the second direction and a number of the times when the drive direction is changed from the second direction to the first direction becomes a predetermined value.
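
A simplified sketch of the reversal-counting idea above, not Olympus's algorithm: the focus lens is nudged in small steps, the drive direction reverses only when the evaluation value drops by more than a small margin (standing in for the claim's minimum drive-amount test), and focus is declared once the direction has reversed a set number of times.

def wobble_to_focus(evaluate, start=0.0, step=0.02,
                    reversal_limit=3, margin=0.0005, max_iters=200):
    """evaluate(pos) -> contrast-style evaluation value (higher is sharper)."""
    pos, direction, reversals = start, +1, 0
    prev_val = evaluate(pos)
    for _ in range(max_iters):
        pos += direction * step                    # "slight" drive of the lens
        val = evaluate(pos)
        if prev_val - val > margin:                # value fell enough: reverse
            direction = -direction
            reversals += 1
            if reversals >= reversal_limit:
                return pos                         # oscillating about the peak
        prev_val = val
    return pos

# Toy contrast curve with its peak at lens position 0.25.
print(round(wobble_to_focus(lambda p: -(p - 0.25) ** 2), 2))   # lands near the peak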

US Pat. No. 10,218,895

ENHANCED FIELD OF VIEW TO AUGMENT THREE-DIMENSIONAL (3D) SENSORY SPACE FOR FREE-SPACE GESTURE INTERPRETATION

Leap Motion, Inc., San F...

1. A rim mounted space imaging apparatus, mounted in a rim of a display that has a vertical axis, comprising:a camera mounted in a rim of a display with an optical axis facing within 20 degrees of tangential to a vertical axis of the display;
at least one Fresnel prismatic element that redirects the optical axis of the camera, giving the camera a field of view that covers at least 45 to 80 degrees from tangential to the vertical axis of the display; and
a camera controller coupled to the camera that compensates for redirection by the Fresnel prismatic element and determines a position of at least one control object within the field of view of the camera.

US Pat. No. 10,218,894

IMAGE CAPTURING APPARATUS, IMAGE CAPTURING METHOD, AND STORAGE MEDIUM

Canon Kabushiki Kaisha, ...

1. An image capturing apparatus comprising:an image sensor including a plurality of pixels, each pixel including a plurality of photoelectric conversion units that generate focus detection signals from light flux that have passed through different regions in an exit pupil in an optical system; and
an image capturing unit configured to continuously capture a plurality of images by using the image sensor, the image capturing unit being configured to acquire a signal in a first acquisition mode or a second acquisition mode for each pixel, the first acquisition mode being a mode in which an image signal obtained by adding the focus detection signals of the plurality of photoelectric conversion units is acquired, and the second acquisition mode being a mode in which the focus detection signals are acquired in addition to the image signal,
wherein the image capturing unit is configured to alternately capture a recording image and a focus detection image having a smaller number of pixels than the recording image, apply all pixels to the first acquisition mode when capturing the recording image, and apply at least a part of the pixels to the second acquisition mode when capturing the focus detection image.
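
A minimal sketch of the alternating capture pattern in the claim, not Canon's firmware: even frames are full-resolution recording images read entirely in the summed (first) acquisition mode, odd frames are reduced-resolution focus detection images in which a subset of pixels also returns the individual photodiode signals (second mode). The frame sizes and row spacing are assumptions.

def capture_sequence(n_frames, full_size=(4000, 3000), af_size=(1000, 750)):
    frames = []
    for i in range(n_frames):
        if i % 2 == 0:
            frames.append({"type": "recording", "size": full_size,
                           "mode": "sum_only"})            # first acquisition mode
        else:
            frames.append({"type": "focus_detection", "size": af_size,
                           "mode": "sum_plus_phase",       # second mode on AF rows
                           "phase_rows": range(0, af_size[1], 8)})
    return frames

for f in capture_sequence(4):
    print(f["type"], f["size"], f["mode"])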

US Pat. No. 10,218,893

IMAGE CAPTURING SYSTEM FOR SHAPE MEASUREMENT OF STRUCTURE, METHOD OF CAPTURING IMAGE OF STRUCTURE FOR SHAPE MEASUREMENT OF STRUCTURE, ON-BOARD CONTROL DEVICE, REMOTE CONTROL DEVICE, PROGRAM, AND STORAGE MEDIUM

Mitsubishi Electric Corpo...

1. An image capturing system for shape measurement of a structure, the image capturing system comprising:an image capturing device configured to capture an image of a target structure to measure a shape of the target structure;
an air vehicle having the image capturing device mounted thereon, the air vehicle being configured to fly and be unmoved in air;
a distance measurement device mounted on the air vehicle and configured to measure a distance between the air vehicle and the target structure;
an image capturing scenario storage configured to store an image capturing scenario, the image capturing scenario including:
a plurality of image capturing points at each of which the air vehicle is unmoved in air with a distance from the target structure being maintained when capturing the image of the target structure to measure the shape of the target structure with the target structure being unmoved, and
a flight route set in accordance with a positional relation between the target structure and each of the image capturing points or coordinates of each of the image capturing points such that the air vehicle having the image capturing device mounted thereon and configured to capture the image of the target structure flies via the image capturing points sequentially;
an on-board control device mounted on the air vehicle, the on-board control device including:
an image capturing controller configured to control the image capturing device in accordance with the image capturing scenario, and
a flight controller configured to control the air vehicle in accordance with the image capturing scenario based on the distance measured by the distance measurement device; and
a remote control device including:
a scenario creator configured to create the image capturing scenario based on the image capturing points, and
a scenario transferor configured to transfer the image capturing scenario created by the scenario creator to the on-board control device to store the image capturing scenario in the image capturing scenario storage,
the scenario creator configured to:
check whether or not a path connecting a first image capturing point to a second image capturing point in a straight line meets the target structure, the first image capturing point and the second image capturing point being different image capturing points,
when the path does not meet the structure, create the flight route including the path connecting the first image capturing point to the second image capturing point in the straight line, and
when the path meets the structure, create the flight route including a path avoiding the target structure in flight between the first image capturing point and the second image capturing point.
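
A sketch of the route-creation check above, not Mitsubishi's planner: if the straight segment between two image-capturing points intersects the target structure (modelled here as a sphere for simplicity), a detour waypoint is inserted; otherwise the straight path is kept. The geometry and clearance are illustrative assumptions.

import math

def segment_hits_sphere(p1, p2, centre, radius):
    """Compare the segment p1-p2's closest approach to the sphere centre with its radius."""
    d = [b - a for a, b in zip(p1, p2)]
    f = [a - c for a, c in zip(p1, centre)]
    seg_len2 = sum(x * x for x in d) or 1e-12
    t = max(0.0, min(1.0, -sum(a * b for a, b in zip(f, d)) / seg_len2))
    closest = [a + t * b for a, b in zip(p1, d)]
    return math.dist(closest, centre) < radius

def plan_leg(p1, p2, structure_centre, structure_radius, clearance=5.0):
    if not segment_hits_sphere(p1, p2, structure_centre, structure_radius):
        return [p1, p2]                               # straight-line path is clear
    # Detour: climb above the structure midway between the two points.
    mid = [(a + b) / 2 for a, b in zip(p1, p2)]
    mid[2] = structure_centre[2] + structure_radius + clearance
    return [p1, mid, p2]

print(plan_leg((0, 0, 10), (100, 0, 10), structure_centre=(50, 0, 10), structure_radius=20))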

US Pat. No. 10,218,892

INFORMATION PROCESSING DEVICE, IMAGING DEVICE, IMAGING SYSTEM, AND INFORMATION PROCESSING METHOD

SONY CORPORATION, Tokyo ...

1. An information processing device, comprising:circuitry configured to:
receive information associated with a user's contact on a lens barrel, wherein the information comprises a contact area of the lens barrel that is in the user's contact;
determine, based on the received information, a state of an imaging device,
wherein the imaging device comprises the lens barrel, and
wherein the state corresponds to a holding state of the imaging device with respect to the user's contact;
control an imaging operation of the imaging device, based on the determined state and a relative positional relation of the information processing device with the imaging device; and
set a moving image mode to the imaging device based on attachment of the imaging device with the information processing device and the contact area that exceeds a threshold area.
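
A sketch of the holding-state logic above, not Sony's implementation: the contact area reported from the lens barrel, together with whether the imaging device is attached, decides the holding state and whether to switch into moving-image mode. The threshold value is an assumption.

def determine_state(contact_area_mm2, attached, threshold_mm2=400):
    holding = "gripped" if contact_area_mm2 > threshold_mm2 else "light_touch"
    moving_image_mode = attached and contact_area_mm2 > threshold_mm2
    return {"holding_state": holding, "moving_image_mode": moving_image_mode}

print(determine_state(contact_area_mm2=650, attached=True))
print(determine_state(contact_area_mm2=120, attached=True))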

US Pat. No. 10,218,891

COMMUNICATION APPARATUS, CONTROL METHOD FOR THE SAME, AND STORAGE MEDIUM FOR PRIORITY IMAGE TRANSFER

Canon Kabushiki Kaisha, ...

1. A communication apparatus, comprising:a communication unit capable of communicating with an external apparatus;
a display unit configured to display an image;
an operation unit configured to accept a user operation; and
a designation unit configured to, in response to a first operation of pressing both a first operation member and a second operation member being performed during display of an image, designate the image being displayed as an image to be transferred, which is to be transferred to the external apparatus,
wherein in response to a second operation that is different from the first operation and uses a plurality of operation members being performed, the designation unit designates the image being displayed as an image to be priority transferred, which is to be transferred with greater priority than the image to be transferred, and
the plurality of operation members to be used in the second operation include at least one of the first operation member and the second operation member.
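
A sketch of the two-operation designation above, not Canon's firmware: one button chord marks the displayed image for transfer, a different chord marks it for priority transfer, and priority images are sent first. The member names are illustrative.

from collections import deque

class TransferQueue:
    def __init__(self):
        self.normal, self.priority = deque(), deque()

    def on_operation(self, image_id, pressed_members):
        chord = frozenset(pressed_members)
        if chord == {"SET", "Fn1"}:                  # first operation
            self.normal.append(image_id)
        elif chord == {"SET", "Fn2"}:                # second, different operation
            self.priority.append(image_id)

    def next_to_send(self):
        queue = self.priority or self.normal
        return queue.popleft() if queue else None

q = TransferQueue()
q.on_operation("IMG_0001", {"SET", "Fn1"})
q.on_operation("IMG_0002", {"SET", "Fn2"})
print(q.next_to_send(), q.next_to_send())   # IMG_0002 first (priority), then IMG_0001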

US Pat. No. 10,218,890

DEVICE FOR INHIBITING OPERATION OF AN IMAGE RECORDING APPARATUS

KHALIFA UNIVERSITY OF SCI...

1. A device for attachment to an image recording apparatus, the device comprising:a blocker, attachable to said image recording apparatus, for inhibiting said apparatus from recording an image when attached thereto;
a transducer, connected to the blocker, for detecting a change of position of the blocker between an attached position, wherein said apparatus is inhibited, and another position; and
a controller, connected to the transducer, for storing the position of the blocker when attached to said image recording apparatus and indicating if the blocker has changed position after it has been attached;
wherein the blocker is a sticker for sticking over one or more lenses or sensors of the image recording apparatus.

US Pat. No. 10,218,889

SYSTEMS AND METHODS FOR TRANSMITTING AND RECEIVING ARRAY CAMERA IMAGE DATA

FotoNation Limited, (IE)...

1. A method of transmitting image data, comprising:capturing image data using a first set of active cameras in an array of cameras;
generating a first line of image data by multiplexing at least a portion of the image data captured by the first set of active cameras using a predetermined process, wherein the predetermined process is selected from a plurality of predetermined processes for multiplexing captured image data;
generating a first set of additional data containing information identifying the cameras in the array of cameras that form the first set of active cameras and information indicating the predetermined process used to multiplex at least the portion of the image data;
transmitting the first set of additional data and the first line of image data;
capturing image data using a second set of active cameras in the array of cameras, wherein the second set of active cameras is different from the first set of active cameras;
generating a second line of image data by multiplexing at least a portion of the image data captured by the second set of active cameras;
generating a second set of additional data containing information identifying the cameras in the array of cameras that form the second set of active cameras; and
transmitting the second set of additional data and the second line of image data.
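
A sketch of the framing described above, not FotoNation's protocol: each transmitted line carries additional data naming the active cameras, and the first line also names the interleaving scheme used to multiplex their pixels. The data structures and the round-robin scheme are assumptions.

def multiplex_line(rows_by_camera):
    """Interleave one image row from each active camera into a single line."""
    line = []
    for pixels in zip(*rows_by_camera.values()):     # round-robin across cameras
        line.extend(pixels)
    return line

def build_packet(active_cameras, rows_by_camera, include_scheme):
    additional = {"active_cameras": sorted(active_cameras)}
    if include_scheme:
        additional["multiplex_scheme"] = "round_robin"
    return {"additional_data": additional,
            "image_line": multiplex_line(rows_by_camera)}

first = build_packet({0, 1, 2}, {0: [10, 11], 1: [20, 21], 2: [30, 31]}, include_scheme=True)
second = build_packet({1, 3}, {1: [22, 23], 3: [40, 41]}, include_scheme=False)
print(first["additional_data"], first["image_line"])
print(second["additional_data"], second["image_line"])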