US Pat. No. 10,171,975

EFFICIENT DISTRIBUTION OF HEATING, VENTILATION, AND AIR CONDITIONING FUNCTIONALITY

Lennox Industries Inc., ...

1. A system for replicating settings in a heating, ventilation, and air-conditioning (HVAC) network, comprising:a first control unit comprising a first internal clock operable to maintain an internal time for the first control unit, the first control unit communicatively coupled to a first plurality of HVAC units and a first interactive display;
a second control unit comprising a second internal clock operable to maintain an internal time for the second control unit, the second control unit communicatively coupled to a second plurality of HVAC units and a second interactive display; and
a communications network, wherein the first control unit detects the second control unit over the communications network and the first control unit and the second control unit synchronize the first internal clock and the second internal clock to have the same internal time; and
the first control unit is further operable to:
receive a first settings update from the second control unit;
determine that the first settings update is associated with a changed universal setting comprising a first setting time, wherein the changed universal setting comprises at least one of a language preference, a temperature unit preference, a password for the communications network, an address of the first and second control units, or a dealer of the first plurality of HVAC units and second plurality of HVAC units;
compare the first setting time of the changed universal setting to a stored setting time of an existing universal setting, wherein the stored setting time reflects the time when the existing universal setting was changed and the first setting time reflects the time when the changed universal setting was changed;
determine that the first setting time is more recent in time than the stored setting time;
update the existing universal setting with the changed universal setting; and
implement the changed universal setting in at least one of the first plurality of HVAC units and the first interactive display.
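
A minimal Python sketch of the replication rule recited in the claim above: a control unit applies a universal setting received from a peer only when its setting time is more recent than the stored setting time, and then implements it locally. All class, field, and function names here are illustrative assumptions, not taken from the patent.

    from dataclasses import dataclass

    @dataclass
    class UniversalSetting:
        name: str            # e.g. "language", "temperature_unit", "network_password"
        value: str
        setting_time: float  # timestamp from the synchronized internal clock

    class ControlUnit:
        def __init__(self):
            self.settings = {}  # name -> UniversalSetting currently in force

        def receive_settings_update(self, update: UniversalSetting) -> bool:
            """Apply a universal setting from the peer control unit only if it is newer."""
            existing = self.settings.get(update.name)
            if existing is None or update.setting_time > existing.setting_time:
                self.settings[update.name] = update
                self.implement(update)     # push to the HVAC units / interactive display
                return True
            return False                   # stored setting is at least as recent; ignore

        def implement(self, setting: UniversalSetting) -> None:
            print(f"applying {setting.name} = {setting.value}")

    unit = ControlUnit()
    unit.receive_settings_update(UniversalSetting("language", "en-US", setting_time=1700000000.0))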

US Pat. No. 10,171,974

SYSTEM AND METHOD FOR USING AN ELECTRONIC LOCK WITH A SMARTPHONE

Schlage Lock Company LLC,...

1. A computer-implemented method for updating a reader device with access control information, comprising: transmitting a reader device identifier from a reader device to a mobile device via a wireless connection; receiving the reader device identifier at a server from the mobile device, the server and mobile device communicating with one another via an Internet; determining, via the server, an encrypted user database for the reader device to be received by the reader device based on an analysis of the reader device identifier, wherein the encrypted user database is stored at the server; transmitting a firmware update from the server to the mobile device and storing the firmware update in the mobile device; transmitting the encrypted user database from the server to the mobile device; transmitting the firmware update from the mobile device to the reader device; transmitting the encrypted user database from the mobile device to the reader device; transmitting a confirmation from the reader device to the mobile device, wherein the confirmation only includes information that the firmware update and the encrypted user database were received by the reader device; and transmitting the confirmation from the mobile device to the server via the Internet.
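
A rough Python sketch of the relay flow in the claim above, with the mobile device bridging the server and an offline reader; the in-memory stand-ins for the server, mobile app, and reader (and all identifiers) are assumptions for illustration only.

    # Illustrative in-memory stand-in for the server's store of firmware and encrypted user databases.
    SERVER_DB = {"reader-42": {"firmware": b"fw-v2.bin", "user_db": b"<encrypted-user-db>"}}

    def server_lookup(reader_id):
        entry = SERVER_DB[reader_id]                # encrypted user database stays stored at the server
        return entry["firmware"], entry["user_db"]

    def reader_receive(firmware, user_db):
        # The confirmation only reports that both items were received, nothing else.
        return {"firmware_received": True, "user_db_received": True}

    def mobile_update_flow(reader_id):
        firmware, user_db = server_lookup(reader_id)      # mobile <-> server over the Internet
        mobile_cache = {"firmware": firmware}             # firmware update stored on the mobile device
        confirmation = reader_receive(firmware, user_db)  # mobile -> reader over the wireless link
        return confirmation                               # relayed from the mobile device back to the server

    print(mobile_update_flow("reader-42"))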

US Pat. No. 10,171,971

ELECTRICAL SYSTEMS AND RELATED METHODS FOR PROVIDING SMART MOBILE ELECTRONIC DEVICE FEATURES TO A USER OF A WEARABLE DEVICE

Skullcandy, Inc., Park C...

1. An electrical system, comprising:a wearable mobile electronic device including:
cellular voice equipment configured to enable a user of the wearable mobile electronic device to participate in cellular voice calls through a cellular voice network;
cellular data equipment configured to enable the wearable mobile electronic device to communicate through a cellular data network;
one or more biometric sensors;
at least one audio speaker;
control circuitry operably coupled to the cellular voice equipment, the cellular data equipment, the one or more biometric sensors, and the at least one audio speaker, the control circuitry including a processor operably coupled to a data storage device comprising computer-readable instructions stored thereon, the processor configured to execute the computer-readable instructions, wherein the computer-readable instructions are configured to instruct the processor to:
engage in communications with a remote server using the cellular data equipment; and
interact with the remote server to provide audio signals to the at least one audio speaker, the audio signals corresponding to audio media selected based, at least in part, on biometric data provided to the control circuitry by the one or more biometric sensors,
wherein the selected audio media has a beat that is faster than a rate of running steps measured by the biometric sensors when a heart rate measurement by the biometric sensors is slower than a desired heart rate; and the selected audio media has a beat that is slower than the rate of running steps measured by the biometric sensors when the heart rate measurement by the biometric sensors is faster than a desired heart rate.
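
A small Python sketch of the tempo-selection rule in the claim above: when the measured heart rate is below the desired rate, pick audio with a beat faster than the running cadence, and when it is above, pick a slower beat. The library format, units, and the choice among qualifying tracks are illustrative assumptions.

    def pick_track(step_rate_spm, heart_rate_bpm, target_bpm, library):
        """library: list of (title, beats_per_minute) tuples."""
        if heart_rate_bpm < target_bpm:
            faster = [t for t in library if t[1] > step_rate_spm]   # nudge the pace up
            return faster[0] if faster else None
        if heart_rate_bpm > target_bpm:
            slower = [t for t in library if t[1] < step_rate_spm]   # ease the pace down
            return slower[0] if slower else None
        return None  # already at the desired heart rate

    tracks = [("warmup", 150), ("steady", 168), ("push", 182)]
    print(pick_track(step_rate_spm=170, heart_rate_bpm=140, target_bpm=155, library=tracks))
    # -> ('push', 182): beat faster than the cadence because the heart rate is below target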

US Pat. No. 10,171,970

RESOURCE NOTIFICATION METHOD AND SYSTEM, LOCAL CSE, REMOTE CSE AND STORAGE MEDIUM

1. A resource attribute notification method, comprising:after receiving a first resource updating request transmitted by an Application Entity, AE, and when determining that a valid notification resource attribute exists in the first resource updating request, a local Common Service Entity, CSE, transmitting a second resource updating request to a remote CSE; wherein the first resource updating request includes a notifiable attribute and an address of an original resource which needs to be updated,
wherein the notifiable attribute includes an attribute list which needs to be notified;
wherein the local CSE determines that an attribute which satisfies the following condition in the notifiable attribute is a valid notification resource attribute: an attribute included in the notifiable attribute exists in the original resource which requests to be updated,
or,
wherein the local CSE determines that an attribute which satisfies the following condition in the notifiable attribute is a valid notification resource attribute: an attribute included in the notifiable attribute exists in the original resource which requests to be updated, and is marked as optionally notifiable;
wherein the method further comprises:
the local CSE checking whether the original resource contains a notified attribute, wherein the notified attribute includes a notified attribute list;
if the original resource contains a notified attribute, the local CSE containing a valid notification resource attribute, which is not contained in the notified attribute in the notifiable attribute, into the second resource updating request; and
if the original resource does not contain a notified attribute, the local CSE containing all valid notification resource attributes in the notifiable attribute into the second resource updating request.
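
A compact Python sketch of the attribute-filtering logic in the claim above: only notifiable attributes that exist in the original resource are valid, and attributes already in the notified attribute list are excluded from the second resource updating request. The dictionary shapes are illustrative assumptions, not the oneM2M data model.

    def build_second_update(notifiable, original_resource):
        """notifiable: attribute names to notify; original_resource: {"attributes": {...}, "notified": [...] or absent}."""
        # A notifiable attribute is valid only if it exists in the original resource being updated.
        valid = [a for a in notifiable if a in original_resource["attributes"]]
        notified = original_resource.get("notified")
        if notified is not None:
            # Forward only valid attributes that are not already in the notified attribute list.
            return [a for a in valid if a not in notified]
        return valid  # no notified attribute list: forward all valid notification resource attributes

    resource = {"attributes": {"battery": 80, "label": "sensor-1"}, "notified": ["label"]}
    print(build_second_update(["battery", "label", "color"], resource))  # -> ['battery']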

US Pat. No. 10,171,969

SYSTEM INCLUDING ALTERNATION OF SENSOR CONNECTION AND TRANSMISSION MECHANISM FOR MOTION SENSING GARMENT

1. A wireless communication system, comprising:a plurality of sensors attached to one or more wearables, wherein each of the plurality of sensors is configured to operate either as a master device or a slave device, wherein the plurality of sensors are inter-communicatively coupled to establish a sensor communication network wherein one of the sensors operates as the master device and the other sensors operate as the slave devices, and wherein a set of sensors, of the plurality of sensors, is configured to broadcast metadata information to at least one other sensor within the sensor communication network, wherein the metadata information comprises a sensor identifier, a location of the one or more wearables, a battery level of the sensor, and a current time, and wherein the sensor communication network is updated based upon the metadata information received from each of the set of sensors to form an updated sensor communication network, wherein the sensor communication network is updated by swapping one of the slave devices with the master device, and wherein the slave device being swapped is configured to operate as a new master device for the updated sensor communication network; and
a processor configured to process the metadata information received from each sensor to determine a first signal level indicative of signal strength (dBm) between each sensor and at least one other sensor, a second signal level (dBm) indicative of signal strength between each sensor and the one or more communication devices, and a relative signal strength (dBm) of each sensor, wherein the slave device swapped with the master device to operate as the new master device for the updated sensor communication network has the maximum relative signal strength as compared to the other sensors in the sensor communication network.
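
A short Python sketch of the master-swap rule in the claim above: the slave with the maximum relative signal strength replaces the current master in the updated sensor communication network. The dictionary layout and dBm values are illustrative assumptions.

    def elect_new_master(sensors):
        """sensors: {sensor_id: {"role": "master" or "slave", "relative_dbm": float}}."""
        slaves = {sid: s for sid, s in sensors.items() if s["role"] == "slave"}
        new_master = max(slaves, key=lambda sid: slaves[sid]["relative_dbm"])
        old_master = next(sid for sid, s in sensors.items() if s["role"] == "master")
        sensors[old_master]["role"] = "slave"   # swap: old master becomes a slave device
        sensors[new_master]["role"] = "master"  # chosen slave operates as the new master
        return new_master

    net = {"s1": {"role": "master", "relative_dbm": -70},
           "s2": {"role": "slave", "relative_dbm": -55},
           "s3": {"role": "slave", "relative_dbm": -62}}
    print(elect_new_master(net))  # -> 's2'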

US Pat. No. 10,171,968

DEVICE BASED NETWORK NOTIFICATION OF DEVICE SUPPORTED MACHINE-TYPE COMMUNICATION FEATURES

1. A method, comprising:initiating, by a subscriber identity module card operatively coupled to a processor, an application of the subscriber identity module card, wherein the subscriber identity module card is further operatively coupled to a machine-type communication device;
based on the initiating, determining, by the subscriber identity module card using the application, whether a terminal support data structure of the subscriber identity module card comprises parameter information defining machine-type communication parameters of the machine-type communication device;
based on a determination that the parameter information is excluded from the terminal support data structure, updating, by the subscriber identity module card using the application, the terminal support data structure to include the parameter information; and
based on the updating, sending, by the subscriber identity module card using the application, the terminal support data structure to a network device of a wireless communication network to facilitate provisioning machine-type communication services for the machine-type communication device via the wireless communication network based on the parameter information.

US Pat. No. 10,171,964

LOCATION-ORIENTED SERVICES

INTERNATIONAL BUSINESS MA...

1. A computer-implemented method, comprising:determining a first screen size of a first mobile device associated with a first user;
determining a second screen size of a second mobile device associated with a second user;
providing a first portion of a location-oriented data service to the first mobile device; and
providing a second portion of the location-oriented data service to the second mobile device based upon an interrelationship between the first user and the second user, wherein
the first screen size is different than the second screen size,
first content within the first portion is based upon the first screen size,
second content within the second portion is based upon the second screen size,
the first content differs from the second content based upon the first screen size being different than the second screen size,
the first content is provided to the first mobile device at substantially a same time as when the second content is provided to the second mobile device, and
the first content is complementary to and based upon the second content.

US Pat. No. 10,171,962

CONTROLLING A MOBILE DEVICE

International Business Ma...

1. A method for controlling a second mobile device in response to a first mobile device having no connection to a telephony service, wherein the first mobile device is operable to transmit an identifier to the second mobile device using a network connection that is operable to connect mobile devices and wherein the first mobile device and the second mobile device are operable to join a first group, said method comprising:assigning, by one or more processors of a broker service apparatus, a first unique identifier to the first group, said broker service apparatus being a computer apparatus;
notifying, by the one or more processors, the telephony service of the first unique identifier;
using, by the one or more processors in response to receiving a first control signal comprising the first unique identifier and data from the telephony service, the first unique identifier to identify the first group, wherein the data is targeted to the first mobile device;
determining, by the one or more processors in response to the first mobile device not being connected to the broker service apparatus, whether the second mobile device is connected to the broker service apparatus; and
issuing, by the one or more processors in response to the second mobile device being connected to the broker service apparatus, a second control signal to the second mobile device in order to forward the data to the second mobile device, said second control signal comprising an identifier of the first mobile device,
wherein the second control signal causes the second mobile device to use the identifier of the first mobile device and the network connection in order to forward the data to the first mobile device.

US Pat. No. 10,171,961

TRANSACTION AUTHORIZATION SERVICE

Amazon Technologies, Inc....

1. A system for scheduling events, comprising:one or more processors; and
memory having stored thereon program instructions that when executed by the one or more processors cause at least one of the one or more processors to implement:
registering users with a service center for a transaction authorization service and a text reminder service, wherein registering the users comprises receiving contact information of the users;
receiving, at the service center from a particular registered user of the registered users for the text reminder service via a first communication channel, a scheduling text message specifying an event to be scheduled between the particular registered user and a third party, wherein the scheduling text message includes contact information for the third party;
in response to receiving the scheduling text message, storing, by the text reminder service, an indication of the event;
at a predetermined time prior to the event, sending a reminder text message regarding the event based on the stored indication of the event to the particular registered user and to the third party based on the contact information for the third party;
receiving, at the service center from the particular registered user for the transaction authorization service via a second communication channel, a transaction text message specifying an initiation of a transaction between the particular registered user and another third party, the transaction text message comprises contact information for the other third party, wherein the second communication channel is different from the first communication channel;
in response to receiving the transaction text message, sending, to the particular registered user based on the contact information for the particular registered user, an authorization request text message indicating a request for authorization of the transaction to the other third party; and
in response to receiving the authorization of the transaction, sending, to the other third party based on the contact information for the other third party, an authorization text message indicating that the transaction is authorized.

US Pat. No. 10,171,959

DISTRIBUTED ACCESS POINT FOR IP BASED COMMUNICATIONS

ARRIS Enterprises LLC, S...

1. An apparatus for Internet-Protocol based communications in a wireless network having a minimum available physical data rate, the apparatus comprising:a network interface to receive a plurality of multicast data packets;
a memory; and
a processor for executing instructions stored in the memory to:
identify a plurality of receiving nodes in the wireless network requesting data corresponding to the plurality of multicast data packets,
convert the received plurality of multicast data packets into one or more unicast data packets,
determine an effective unicast rate for said one or more unicast data packets, wherein the effective unicast rate corresponds to a combined rate for converting the plurality of multicast packets into one or more unicast packets and sending the one or more unicast packets to the receiving nodes,
compare said effective unicast rate to said minimum available physical data rate,
serially transmit said one or more unicast data packets, via the network interface, to said plurality of receiving nodes at said effective unicast rate, when said effective unicast rate is greater than said minimum available physical data rate, and
serially transmit said one or more unicast data packets, via the network interface, to said plurality of receiving nodes at said minimum available physical data rate, when said effective unicast rate is less than or equal to said minimum available physical data rate, wherein said network interface is capable of transmitting and receiving both multicast data packets and unicast data packets.
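
The rate-selection step of the claim above reduces to a single comparison, sketched in Python below; the units and example values are illustrative assumptions.

    def choose_unicast_rate(effective_unicast_rate, min_physical_rate):
        """Transmit at the effective unicast rate when it exceeds the minimum available
        physical data rate; otherwise fall back to the minimum physical data rate."""
        if effective_unicast_rate > min_physical_rate:
            return effective_unicast_rate
        return min_physical_rate

    print(choose_unicast_rate(24.0, 6.0))  # -> 24.0 (Mbps, illustrative)
    print(choose_unicast_rate(4.0, 6.0))   # -> 6.0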

US Pat. No. 10,171,958

MANAGING A MULTIMEDIA BROADCAST MULTICAST SERVICE USING AN MBMS RELAY DEVICE

QUALCOMM Incorporated, S...

1. A method for managing a multimedia broadcast multicast service (MBMS), comprising:broadcasting an out-of-coverage status indicator or MBMS query in a first peer discovery signal;
receiving a second peer discovery signal from each of a plurality of MBMS relay devices, each second peer discovery signal comprising at least a subset of service announcement information for at least one MBMS; and
selecting an MBMS relay device from the plurality of MBMS relay devices to deliver content of a particular MBMS based at least in part on a signal strength of the second peer discovery signal from each MBMS relay device.
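
A one-function Python sketch of the relay-selection step in the claim above: pick the MBMS relay device whose second peer discovery signal is strongest. The data layout is an illustrative assumption.

    def select_relay(discovery_signals):
        """discovery_signals: {relay_id: {"rssi_dbm": float, "services": [...]}}."""
        return max(discovery_signals, key=lambda rid: discovery_signals[rid]["rssi_dbm"])

    relays = {"relay-a": {"rssi_dbm": -80, "services": ["news"]},
              "relay-b": {"rssi_dbm": -67, "services": ["news", "sports"]}}
    print(select_relay(relays))  # -> 'relay-b'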

US Pat. No. 10,171,957

METHOD AND USER EQUIPMENT FOR RECEIVING BROADCAST/MULTICAST SERVICE, AND METHOD AND BASE STATION FOR TRANSMITTING BROADCAST/MULTICAST SERVICE

1. A method for receiving a broadcast/multicast service by a user equipment, the method comprising:receiving, by the user equipment, frequency resource information of a serving cell; and
receiving, by the user equipment, the broadcast/multicast service using a frequency region based on the frequency resource information,
wherein the frequency resource information includes information indicating the frequency region to which the broadcast/multicast service is allocated within a system bandwidth of the serving cell,
wherein the frequency resource information further includes information on a center frequency of another cell that provides a service identical to the broadcast/multicast service, and
wherein a subcarrier corresponding to the center frequency of the another cell is not used for the reception of the broadcast/multicast service if the subcarrier corresponding to the center frequency of the another cell is within the frequency region to which the broadcast/multicast service is allocated within the system bandwidth of the serving cell.

US Pat. No. 10,171,956

NOTIFICATION METHOD, SYSTEM, AND DEVICE FOR VEHICLE

Chiun Mai Communication S...

1. A notification method implemented by a notification device installed in a vehicle, comprising:establishing a wireless communication connection with an electronic device;
receiving a target station sent from the electronic device;
obtaining a navigation information of the vehicle;
recording a number of passengers planning to get off at the target station;
sending an arrival notification to the electronic device when the vehicle approaches at the target station;
recording a number of the passengers who actually get off at the target station;
comparing the number of the passengers that actually get off with the number of the passengers planning to get off;
sending the arrival notification again to the electronic device when the number of the passengers that actually get off is less than the number of the passengers planning to get off;
sending the arrival notification to the electronic device when the vehicle arrives at the target station and the passenger is still on board;
detecting a wireless signal strength of the electronic device when the vehicle arrives at the target station; and
comparing the wireless signal strength with a preset value to determine whether the passenger has got off.
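
A Python sketch of the arrival-handling steps in the claim above: re-send the notification while fewer passengers have gotten off than planned, then compare the device's wireless signal strength with a preset value to judge whether the passenger has left. The threshold direction and all values are illustrative assumptions.

    def handle_arrival(planned_count, actually_off, rssi_dbm, rssi_threshold_dbm, notify):
        """notify: callable that re-sends the arrival notification to the electronic device.
        Returns True when the passenger is judged to have gotten off."""
        if actually_off < planned_count:
            notify()  # someone who planned to get off appears to still be on board
        # A signal weaker than the preset value is taken to mean the device left the vehicle.
        return rssi_dbm < rssi_threshold_dbm

    left = handle_arrival(planned_count=3, actually_off=2, rssi_dbm=-85, rssi_threshold_dbm=-75,
                          notify=lambda: print("arrival notification re-sent"))
    print(left)  # -> True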

US Pat. No. 10,171,955

METHOD FOR COMMUNICATION BETWEEN VEHICLES

Volkswagen AG, (DE)

1. A method for the communication of vehicles, the method comprising:receiving a first message from a first vehicle by a second vehicle via vehicle-to-vehicle communication and/or vehicle-to-infrastructure communication;
allocating the first message to the first vehicle based on the content of the first message by the second vehicle; and
sending a second message from the second vehicle to the first vehicle via a separate communication channel by using information from the first message;
wherein the sending of the second message from the second vehicle to the first vehicle via the separate communication channel by using information from the first message comprises at least one of the following operations:
sending the second message from the second vehicle to a central processor; and
sending the second message from the central processor to the first vehicle.

US Pat. No. 10,171,954

VEHICLE OCCUPANT POSITION DETERMINATION

International Business Ma...

1. A device identification and modification method comprising:receiving, via a computer processor of a computing system by a cache of said computing system from a plurality of GPS enabled devices of a plurality of users, GPS data identifying locations of said GPS enabled devices, where said computing system comprises an integrated computer within a vehicle;
initiating, by said computer processor, direct communications between said computer processor and said GPS enabled devices;
receiving, by said computer processor, digital identification input;
inserting, by said processor into said cache, said digital identification input;
identifying, by said computer processor based on said digital identification input, each GPS enabled device of said GPS enabled devices;
refreshing, by said computer processor based on said digital identification input within said cache, said GPS data resulting in updated GPS data identifying updated locations of said GPS enabled devices;
determining, by said computer processor based on said updated GPS data within said cache and locations of said GPS enabled devices, a group of users of said plurality of users located within a specified proximity to each other user of said group of users;
determining, by said computer processor based on an altitude, velocity, and a vector of said GPS enabled devices of said group of users, that said group of users is located within said vehicle;
determining, by said computer processor based on locations of said GPS enabled devices, a position of each user of said group of users with respect to said vehicle;
modifying, by said computer processor based on driver or passenger roles associated with each said position for each user of said group of users with respect to said vehicle, selected control functions of each said GPS enabled device of said GPS enabled devices; and
disabling after said modifying, by said computer processor, said GPS enabled devices.

US Pat. No. 10,171,953

VEHICLE EVENT NOTIFICATION VIA CELL BROADCAST

1. A method comprising:receiving, by a traffic optimization management server system comprising a processor, an event message in response to an event;
determining, by the traffic optimization management server system, based, at least in part, upon the event message, an area of relevance for the event, a location of the event, and a description of the event;
determining, by the traffic optimization management server system, a broadcast duration during which an event notification message should be broadcast and a broadcast interval with which the event notification message should be broadcast;
creating, by the traffic optimization management server system, the event notification message specifying the area of relevance, the location of the event, the description of the event, the broadcast duration, and the broadcast interval, wherein the event notification message is formatted as a cell broadcast message to be broadcast, by a cell broadcast center, to a cell serving at least a portion of the area of relevance; and
providing, by the traffic optimization management server system, the event notification message to the cell broadcast center.

US Pat. No. 10,171,952

METHOD FOR MANAGING A LOCATION OF A TERMINAL IN WIRELESS COMMUNICATION SYSTEM

Samsung Electronics Co., ...

1. A method to manage paging at a terminal in a wireless communication system, the method comprising:transmitting a connection management (CM) service request message to a network comprising at least one cell;
receiving, from the network, a CM service reject message in response to the CM service request message;
starting a congestion timer;
detecting a location area change while the congestion timer is running and an update status of update statuses is set as a first value indicating that a procedure of update procedures for the paging was successful;
when the location area change is detected, changing the update status to a second value indicating that the procedure failed; and
when the congestion timer is expired, initiating the procedure,
wherein the update procedures include last attach, area updating attempt, and location updating attempt.

US Pat. No. 10,171,951

SYSTEM AND METHOD FOR POSITIONING MOBILE DEVICE BY USING BLUETOOTH SIGNAL

NEMUSTECH CO., LTD., Seo...

1. A system for positioning a mobile device by using a Bluetooth signal, in which the mobile device receives the Bluetooth signal, which is periodically transmitted by a beacon, to determine the location of the mobile device, the system comprising:a strong signal beacon configured to periodically transmit a first Bluetooth signal having a first identification number representing the strong signal beacon, with a relatively strong intensity;
a plurality of weak signal beacons arranged within a Bluetooth signal range of the strong signal beacon and each configured to transmit a second Bluetooth signal having a second identification number representing each of the plurality of weak signal beacons, with a relatively weak intensity compared to the intensity of the first Bluetooth signal; and
a mobile device configured to store signal properties of the beacons including the strong signal beacon and the plurality of weak signal beacons corresponding to the identification numbers of the strong signal and the weak signal beacons and determine whether Bluetooth signal reception sensitivity of the mobile device is normal based on the identification numbers of the strong signal and the weak signal beacons included in the Bluetooth signal received by the mobile device and the signal properties corresponding to the identification numbers of the beacons and determine a location of the mobile device relative to the beacons based on a result of the determining whether the Bluetooth signal reception sensitivity of the mobile device is normal,
wherein if only the first Bluetooth signal of the strong signal beacon is received, and the second Bluetooth signal of the plurality of weak signal beacons is not received, the mobile device determines the Bluetooth signal reception sensitivity as abnormal, and when the first Bluetooth signal of the strong signal beacon is continuously received for a predetermined period of time or longer, the mobile device determines its location to be within the Bluetooth signal range of the strong signal beacon.
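
A Python sketch of the reception-sensitivity and coarse-location test in the claim above: hearing only the strong-signal beacon (and none of the weak ones) flags abnormal sensitivity, and continuously hearing it for a dwell period places the device within the strong beacon's range. The dwell threshold and identifiers are illustrative assumptions.

    def assess_reception(received_ids, strong_id, weak_ids, strong_seen_for_s, dwell_s=10.0):
        only_strong = strong_id in received_ids and received_ids.isdisjoint(weak_ids)
        sensitivity_abnormal = only_strong               # strong beacon heard, no weak beacons heard
        in_strong_range = strong_id in received_ids and strong_seen_for_s >= dwell_s
        return sensitivity_abnormal, in_strong_range

    print(assess_reception({"B-STRONG"}, "B-STRONG", {"W1", "W2", "W3"}, strong_seen_for_s=12.0))
    # -> (True, True)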

US Pat. No. 10,171,949

ELECTRONIC APPARATUS AND OPERATING METHOD THEREOF

Samsung Electronics Co., ...

1. A method of operating an electronic device, the method comprising:confirming a first location of a user;
predicting a second location on a basis of the first location and a pre-stored at least one expected movement path confirmed by a relational model or a probability model; and
providing an information service providing signal to at least one external device present in the second location,
wherein the information service providing signal includes information related to an operational state of at least one external device present in an indoor area, and
wherein the relational model is based on a positional relation between devices in which the user is detected and the probability model is based on a probability that the user may move from the first location to the final location.

US Pat. No. 10,171,948

METHOD FOR PERFORMING POSITIONING OPERATION AND ASSOCIATED ELECTRONIC DEVICE

MEDIATEK INC., Hsin-Chu ...

1. An electronic device, comprising:an application processor, for executing applications running on a system of the electronic device; and
a sensor hub, coupled to the application processor, for obtaining and processing sensed data from a plurality of sensors within the electronic device;
wherein the application processor further downloads location data from a remote device via a network module, and at least a portion of the downloaded location data is further stored in a storage unit of the sensor hub to be reused for positioning;
wherein the location data corresponds to a plurality of cell identities; and when a positioning operation is performed, the sensor hub receives surrounding cell IDs and asks the storage unit of the sensor hub for their location data directly; and the sensor hub calculates a location of the electronic device according to at least a portion of the location data obtained from the storage unit of the sensor hub;
wherein a portion of the downloaded location data is stored in the storage unit of the sensor hub, and another portion of the downloaded location data is stored in a storage unit of the application processor; and when a positioning operation is performed, the sensor hub receives surrounding cell IDs and asks the storage unit of the sensor hub for their location data directly; and the application processor asks the storage unit of the application processor for part or all of the location data only when the part or all of the location data cannot be found in the storage unit of the sensor hub; and the sensor hub calculates a location of the electronic device according to at least a portion of the location data obtained from the storage unit of the sensor hub and the location data obtained from the application processor.
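
A Python sketch of the cache-fallback lookup in the claim above: the sensor hub's own storage is asked for the location data of the surrounding cell IDs first, and the application processor's storage is consulted only for the cell IDs the hub cache misses. Centroid averaging stands in for the actual positioning calculation; all names are illustrative assumptions.

    def locate(cell_ids, hub_cache, ap_store):
        fixes = []
        for cid in cell_ids:
            loc = hub_cache.get(cid)         # ask the sensor hub's storage unit first
            if loc is None:
                loc = ap_store.get(cid)      # fall back to the application processor's storage
            if loc is not None:
                fixes.append(loc)
        if not fixes:
            return None
        lat = sum(p[0] for p in fixes) / len(fixes)
        lon = sum(p[1] for p in fixes) / len(fixes)
        return lat, lon

    hub = {101: (37.770, -122.420)}
    ap = {102: (37.780, -122.410)}
    print(locate([101, 102, 103], hub, ap))  # cell 103 is unknown to both stores and is skipped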

US Pat. No. 10,171,947

MOBILE APPLICATION AND DEVICE FEATURE REGULATION BASED ON PROFILE DATA

1. A system, comprising:a processor; and
a memory that stores executable instructions that, when executed by the processor, facilitate performance of operations, comprising:
receiving profiling data associated with a user equipment, wherein the profiling data comprises first context data that has been determined, during a first-time period, based on first motion data sensed via a first sensor of the user equipment and supplementary data sensed via a second sensor associated with the user equipment, and wherein the supplementary data is data that supplements the first motion data to determine the first context data;
during a second-time period, utilizing the profiling data to calibrate second motion data sensed via the first sensor, wherein the second sensor is unavailable during the second time period;
based on an analysis of the profiling data and the second motion data, determining second context data associated with the user equipment; and
in response to determining that the second context data satisfies a defined context criterion, prohibiting an execution of an application of the user equipment.

US Pat. No. 10,171,946

ASSISTED GNSS VELOCITY ESTIMATION

Apple Inc., Cupertino, C...

1. A method comprising:obtaining, by a mobile device, a step-based speed measurement based on sensor data;
obtaining, by the mobile device, a step-based speed uncertainty associated with the step-based speed measurement;
evaluating, by the mobile device, a plurality of assistance conditions, each derived from different aspects of the step-based speed measurement;
determining, by the mobile device, that one or more of the evaluated assistance conditions are met;
responsive to the determining, assisting a state estimator using the step-based speed measurement and the associated step-based speed uncertainty, wherein the assisting includes using the step-based speed uncertainty in the state estimator as a source of measurement noise; and
estimating, by the mobile device, at least one of the position, velocity or speed of the mobile device using the assisted state estimator.

US Pat. No. 10,171,945

LOCATION BASED PROVISIONING AND BROADCASTING OF CONTENT UTILIZING A MULTIMEDIA BROADCAST SERVICE

1. A method, comprising:receiving, by a system comprising a processor, a client service request;
uniformly distributing, by the system, user equipment location requests among a group of gateway mobile positioning center devices to facilitate generation of location information representing locations of respective user equipments; and
in response to the uniformly distributing the user equipment location requests among the group of gateway mobile positioning center devices, broadcasting, by the system based on a location of the locations corresponding to a user equipment of the respective user equipments, content corresponding to the client service request to the user equipment via a broadcast enabled access point device that is configured to send broadcast data to multiple devices via a point-to-multipoint communication protocol.

US Pat. No. 10,171,944

MONITORING A STATUS OF A DISCONNECTED DEVICE BY A MOBILE DEVICE AND AN AUDIO ANALYSIS SYSTEM IN AN INFRASTRUCTURE

International Business Ma...

1. A computer program product for monitoring an operation status of a disconnected device by a mobile device and an audio analysis system in an infrastructure, wherein the mobile device has connectivity to the infrastructure and the disconnected device has no connectivity to the infrastructure, the computer program product comprising a computer readable storage medium having program code embodied therewith, the program code executable to:invoke, by the mobile device, passive listening of the mobile device to a sound generated by the disconnected device, in response to determining that the mobile device is in proximity to a predefined location of the disconnected device;
determine, by the mobile device, whether the sound can be detected by the mobile device;
stream, by the mobile device, audio with information of a location of the mobile device to the audio analysis system, in response to determining that the sound can be detected by the mobile device, wherein the audio is recorded during the passive listening;
determine, by the audio analysis system, whether the audio is recorded from the disconnected device, based on the information of the location of the mobile device;
compare, by the audio analysis system, the audio with pre-recorded sounds of the disconnected device, in response to determining that the audio is recorded from the disconnected device;
determine, by the audio analysis system, the operation status of the disconnected device, based on a comparison of the audio and the pre-recorded sounds; and
send to the mobile device, by the audio analysis system, a notification of the operation status of the disconnected device, in response to determining that the operation status is an event predetermined by a user of the mobile device.

US Pat. No. 10,171,943

SYSTEM AND METHOD FOR UTILIZING AN ARRAY OF MOBILE DEVICES FOR IMAGING AND USER EQUIPMENT POSITIONING

QUALCOMM Incorporated, S...

5. A system for utilizing an array of one or more mobile devices to improve one or more positioning metrics of a user equipment, the system comprising:a memory to store received positioning metrics associated with signals collected by the one or more mobile devices in the array, wherein the signals are wireless communication network signals generated by the user equipment; and
one or more processors coupled with the memory configured to:
determine a first position of the user equipment based on the positioning metrics; and
determine a new position for at least one mobile device in the array of the one or more mobile devices based on the first position of the user equipment;
transmit the new position to the at least one mobile device;
receive new positioning metrics associated with signals collected by the at least one mobile device at the new position; and
determine a second position of the user equipment based on the positioning metrics and the new positioning metrics.

US Pat. No. 10,171,941

INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, PROGRAM AND INFORMATION PROCESSING SYSTEM

SONY CORPORATION, Tokyo ...

1. An information processing apparatus comprising:hardware circuitry configured to:
acquire existing place information which specifies a location of an existing place,
acquire spot information related to a plurality of spots, which spot information is provided for each spot from a plurality of information provider devices connected to a user through a social networking service, based on the location of the existing place information,
display an integrated list of the spot information acquired for the plurality of spots from the plurality of information provider devices, and
register my-spot information specified by the user, out of the displayed integrated list of the plurality of spots of the acquired spot information, wherein the my-spot information indicates a status of the registered spot information selected from a group consisting of each spot of the plurality of spots to which the user has been and each spot of the plurality of spots to which the user wants to go,
wherein, when the status of the registered spot information indicates particular spots of the plurality of spots to which the user wants to go, the circuitry is further configured to display at least one of a number of and a list of the particular spots in response to current location information of the user indicating that the particular spots exist within a predetermined range with respect to a current location of the user.

US Pat. No. 10,171,940

TRANSPORTATION ACTIVATED GEOFENCE

International Business Ma...

1. A method for targeting geofence messages based on a transportation mode of a user comprising the steps of:creating, by a processor of a computer system, a geofence, wherein the geofence comprises a location, a size, a message, and a mode of transportation parameter;
receiving, by the processor, location data of a client device operated by the user;
further receiving, by the processor, measurements from a measuring device or a sensor quantifying movement of the client device, wherein the measuring device or sensor comprises an accelerometer, a gyroscope sensor, and a geomagnetic field sensor;
detecting, by said processor based on said measurements, a velocity and acceleration rate of said user;
detecting, by said processor via said gyroscope sensor, an angular velocity, a rotational motion, and a change of orientation of said user, wherein said detecting said angular velocity comprises measuring an amount of angular velocity being produced by a motion of said user and sensing an angular velocity produced by movement of the gyroscope sensor, and wherein associated angles are detected by said processor as said angles move and by sensing vibrations produced by external environments surrounding said gyroscope sensor to correct an orientation of an object embedded with said gyroscope sensor;
analyzing, by the processor, said angular velocity, said rotational motion, and said change of orientation of said user, said velocity and acceleration rate, a rate of change in the location data and the measurements from the measuring device or sensor thereby identifying the transportation mode as a function of the location data and measurements, wherein said identifying said transportation mode comprises comparing said angular velocity produced by movement of the gyroscope sensor to known angular velocities associated with walking, biking, driving in an automobile, riding a train, and riding on an airplane;
concluding, by the processor, that a location of the client device is within the location of the geofence and that the transportation mode of the user operating the client device is the same as the mode of transportation parameter defined by the geofence as a function of the measurements and location data; and
displaying, by the processor, the message on the client device.

US Pat. No. 10,171,938

USE OF GEOFENCES FOR LOCATION-BASED ACTIVATION AND CONTROL OF SERVICES

1. A method, comprising:receiving, in association with a temperature control device located at a given base location, the temperature control device having a temperature setting that is variably set to different values based on corresponding distances of a user from the base location, an indication of a user moving from a first geofence to a second geofence of a plurality of geofences, each geofence associated with identifying the user at a location outside of the base location and situated with respect to the base location; wherein each of the plurality of geofences has an associated and distinct temperature value or temperature range value, the temperature value or the temperature range value corresponding to a temperature setting in the temperature control device, and wherein geographic location identifying capabilities in the user's device provide information related to the user's current geographic location, the current geographic location used at least in part to determine the user's moving from the first geofence to the second geofence;
upon receiving the indication of the user moving from the first geofence to the second geofence, determining that the user is moving in a direction closer to the base location based on the direction of movement and relative distance of the user from the base location; further determining a first temperature value or first temperature range value associated with the second geofence; and
in response to the determination, causing the temperature setting in the temperature control device at the base location to be adjusted from its current temperature value to substantially the first temperature value or within the first temperature range value, the first temperature value or first temperature range value closer in value to an optimally desired temperature setting value of the temperature control device.
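
A Python sketch of the geofence-to-thermostat rule in the claim above: when the crossing indicates movement toward the base location, the thermostat is set to the temperature value associated with the newly entered geofence. The fence names, distances, and setpoints are illustrative assumptions.

    def on_geofence_crossing(prev_fence, new_fence, distances, fence_setpoints, thermostat):
        """distances: fence -> distance from the base location; fence_setpoints: fence -> deg F."""
        moving_closer = distances[new_fence] < distances[prev_fence]
        if moving_closer:
            thermostat["setpoint"] = fence_setpoints[new_fence]
        return thermostat

    fences = {"far": 20.0, "mid": 8.0, "near": 2.0}   # miles from the base location
    setpoints = {"far": 62, "mid": 68, "near": 72}    # degrees F associated with each geofence
    print(on_geofence_crossing("far", "mid", fences, setpoints, {"setpoint": 62}))
    # -> {'setpoint': 68}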

US Pat. No. 10,171,937

MATCHING USERS IN A LOCATION-BASED SERVICE

Alibaba Group Holding Lim...

1. A method, comprising:receiving a plurality of service messages from a plurality of terminals, wherein the plurality of service messages associated with one or more transactions, and wherein the receiving of the service messages comprises:
receiving a first service message from a first terminal, wherein the first service message comprises an indication of a first transaction to be transacted with the first terminal, and an indication of a service that the first terminal is requesting to receive; and
receiving a second service message from a second terminal, wherein the second service message comprises an indication of a second transaction to be transacted with the second terminal, and an indication of a service that the second terminal is requesting to receive; and
in response to determining that the first terminal and the second terminal are within a threshold distance of each other, matching the first terminal and the second terminal as two parties in connection with a same transaction based at least in part on the first service message and the second service message, the matching the first terminal and the second terminal as two parties in connection with the same transaction comprises determining that the indication of the service that the first terminal is requesting to receive matches the indication of the service that the second terminal is requesting to receive.
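
A Python sketch of the matching step in the claim above: two terminals are matched as parties to the same transaction when they are within a threshold distance of each other and the service indications in their messages match. The message shape and planar distance check are illustrative assumptions.

    from math import hypot

    def match_terminals(msg_a, msg_b, threshold_m=50.0):
        """Each message: {"terminal": id, "location": (x_m, y_m), "service": str}."""
        ax, ay = msg_a["location"]
        bx, by = msg_b["location"]
        close_enough = hypot(ax - bx, ay - by) <= threshold_m
        services_match = msg_a["service"] == msg_b["service"]
        return close_enough and services_match

    a = {"terminal": "t1", "location": (0.0, 0.0), "service": "ride-share"}
    b = {"terminal": "t2", "location": (12.0, 9.0), "service": "ride-share"}
    print(match_terminals(a, b))  # -> True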

US Pat. No. 10,171,936

MATCHING ACTIONABLE EVENTS WITH GOODS AND SERVICES PROVIDERS

International Business Ma...

1. A first device, comprising:a touchscreen display configured to receive an event identification request from a first user via a graphical user interface (GUI), wherein the event identification request comprises one or more requirements usable to identify one or more actionable events stored in a repository device, wherein the one or more requirements specify a geographical boundary and one or more goods or services, and wherein the one or more actionable events correspond to one or more dysfunctional devices reported by one or more second users of one or more second devices; and
a processor communicably coupled to the touchscreen display and configured to:
cause the touchscreen display to display the GUI to receive the event identification request from the first user by causing the touchscreen display to display a menu button that, when a tap by the first user is detected, causes the touchscreen display to display a drop-down menu that includes a first option that, when a selection by the first user is detected, causes the touchscreen display to display a second drop-down menu that presents different selectable request aspects to enable entry of the one or more requirements of the event identification request;
transmit the event identification request to the repository device to determine whether the repository device stores at least one actionable event of the one or more actionable events that matches the one or more requirements of the event identification request;
receive a notification indicating that the at least one actionable event matches the one or more requirements of the event identification request; and
cause the touchscreen display to provide the first user with an option to accept the at least one actionable event.

US Pat. No. 10,171,933

ISSUING NOTIFICATIONS ABOUT LOST DEVICES

International Business Ma...

1. A method for notifying a user of a location of a device, the method comprising:receiving a first location of a first device;
determining, by one or more processors, that the first device has been separated from a user of the first device;
determining a second device located at, within a first predetermined threshold, the first location;
sending a first notification to the second device, wherein the notification includes, at least, that the first device is located nearby;
retrieving registration information of the user, wherein the registration information includes, at least, historical usage information of a plurality of devices associated with the registration information;
determining a third device located with the user, based on the historical usage information; and
sending a second notification to the third device, wherein the second notification information includes, at least, information indicating the location of the first device.

US Pat. No. 10,171,932

COMMUNICATION METHOD, INFORMATION PROCESSING APPARATUS, AND RECORDING MEDIUM RECORDING COMPUTER READABLE PROGRAM

Sony Corporation, Tokyo ...

1. An information processing apparatus comprising:a first communication unit configured to communicate with a first external apparatus via a first communication path;
a second communication unit configured to communicate with the first external apparatus via a second communication path, wherein the second communication path uses a different frequency range than the first communication path; and
a controller configured to determine a role of the information processing apparatus with the first external apparatus belonging to a network in the second communication path based on designation role information which is sent from the first external apparatus via the first communication path and received by the first communication unit, and the network is organized by a plurality of external apparatuses including the first external apparatus, wherein the determined role of the information processing apparatus is one of a first role and a second role,
wherein the controller makes the second communication unit communicate with each of the plurality of external apparatuses in the network via the second communication path when the determined role of the information processing apparatus is the first role, and
wherein the controller makes the second communication unit communicate directly only with the first external apparatus via the second communication path when the determined role of the information processing apparatus is the second role.

US Pat. No. 10,171,931

METHOD FOR RECEIVING DOWNLINK CONTROL CHANNEL BY MTC DEVICE, AND TERMINAL

LG Electronics Inc., Seo...

1. A method for receiving physical downlink channels, the method performed by a device and comprising:receiving, from a cell, downlink control information (DCI) via a first downlink control channel on a first subframe, wherein the DCI includes scheduling information for a physical downlink shared channel (PDSCH),
wherein bandwidth for the first downlink control channel and bandwidth for the PDSCH include a maximum of 6 physical resource blocks (PRBs); and
receiving, from the cell, downlink data via the PDSCH based on the scheduling information,
wherein when the downlink data is received on a second subframe, the device assumes that a second downlink control channel is not transmitted on the second subframe from the cell.

US Pat. No. 10,171,930

LOCALIZED AUDIBILITY SOUND SYSTEM

1. Means for creating a localized low-frequency sound field, comprising:a housing containing an audio driver including a vibratile diaphragm with first and second sides, and a center, said housing having first, second, third, and fourth exit ports;
first acoustic waveguide means for guiding acoustic energy from said first side of said diaphragm to said first exit port at a first location;
second acoustic waveguide means for guiding acoustic energy from said first side of said diaphragm to said second exit port at a second location, said first and second locations approximately equidistant from said center;
third acoustic waveguide means for guiding acoustic energy from said second side of said diaphragm to said third exit port at a third location;
fourth acoustic waveguide means for guiding acoustic energy from said second side of said diaphragm to said fourth exit port at a fourth location, said third and fourth locations approximately equidistant from said center;
wherein said first, second, third, and fourth locations are all approximately co-linear.

US Pat. No. 10,171,929

POSITIONAL AUDIO ASSIGNMENT SYSTEM

Lightbox Video Inc., Tor...

1. A method performed by one or more electronic devices, the method comprising:obtaining data representing a video viewable to a user through a head-mounted device in an immersive virtual reality environment and that identifies spatial positions assigned to one or more objects within the video, and
obtaining audio data associated with the video that (i) encodes one or more audio streams corresponding to each of the one or more objects and (ii) identifies, for each of the one or more audio streams, a frame of the video representing a start point of an audio stream;
receiving, from a computing device of a user, an indication of playback of a particular frame representing a start point of a particular audio stream from among the one or more audio streams;
providing, for display in a field-of-view of the video that is viewable to the user on the computing device, a visual notification representing metadata associated with a particular object corresponding to the particular audio stream, the visual notification being displayed in a particular spatial position within the field-of-view;
receiving, from the computing device of the user, user input data associated with playback of the video;
determining a gaze point of the user based on the received user input data;
evaluating the gaze point of the user with respect to the particular spatial position within the field-of-view; and
based on evaluating the gaze point with respect to the particular spatial position within the field-of-view, selectively adjusting audio data provided to the computing device of the user.
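
A Python sketch of the gaze-driven adjustment in the claim above: when the user's gaze point falls near the spatial position of the object tied to an audio stream, that stream's gain is raised. The normalized coordinates, focus radius, and boost factor are illustrative assumptions.

    def adjust_stream_gain(gaze_xy, object_xy, base_gain=1.0, focus_radius=0.1, boost=1.5):
        dx = gaze_xy[0] - object_xy[0]
        dy = gaze_xy[1] - object_xy[1]
        looking_at_object = (dx * dx + dy * dy) ** 0.5 <= focus_radius
        return base_gain * boost if looking_at_object else base_gain

    print(adjust_stream_gain(gaze_xy=(0.52, 0.48), object_xy=(0.50, 0.50)))  # -> 1.5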

US Pat. No. 10,171,927

METHOD FOR PROCESSING AN AUDIO SIGNAL FOR IMPROVED RESTITUTION

AXD Technologies, LLC, L...

1. A method for processing an audio signal of N.x channels, N being greater than 1 and x being greater than or equal to 0, comprising:processing the audio signal by a multichannel convolution with a predefined imprint, the predefined imprint being formulated at least by the capture of a reference sound by a set of speakers disposed in a reference space,
wherein the method further comprises:
selecting two or more imprints from a plurality of imprints previously formulated in a plurality of different sound contexts; and
combining the selected imprints formulated in different sound contexts to create a new imprint representing a virtual environment.

US Pat. No. 10,171,926

SOUND PROCESSING APPARATUS AND SOUND PROCESSING SYSTEM

Sony Corporation, Tokyo ...

1. A sound processing apparatus, comprising:first gain calculating circuitry configured to calculate output gains of a virtual sound outputting unit and two sound outputting units of at least four sound outputting units located close to a sound image localization position as a target position, wherein the first gain calculating circuitry is configured to calculate the output gains of the virtual sound outputting unit and the two sound outputting units based on a positional relationship among the virtual sound outputting unit, the two sound outputting units, and the sound image localization position;
second gain calculating circuitry configured to calculate output gains of other two of the sound outputting units than the two sound outputting units, wherein the second gain calculating circuitry is configured to calculate the output gains of the other two of the sound outputting units based on a positional relationship among the other two of the sound outputting units and the virtual sound outputting unit; and
gain adjusting circuitry configured to:
perform gain adjustment on sounds to be output from the at least four sound outputting units based on the output gains of the at least four sound outputting units; and
output gain adjusted sound signals to the at least four sound outputting units so as to cause the at least four sound outputting units to output sound to a listener.

US Pat. No. 10,171,925

MEMS DEVICE

INFINEON TECHNOLOGIES AG,...

1. A method, comprising:patterning a first conductive material to form a first electrode on a first bonding layer;
depositing a first dielectric layer over the first electrode;
patterning a second conductive material over the first dielectric layer to form a membrane spaced apart from the first electrode by the first dielectric layer;
depositing a second dielectric layer over the second conductive material;
patterning a third conductive material over the second dielectric layer to form a second electrode; and
removing portions of the first dielectric layer and the second dielectric layer disposed over a central portion of the membrane, wherein an overlapping area of a fixed portion of the membrane with the second electrode is less than a maximum overlapping.

US Pat. No. 10,171,923

BINAURAL HEARING SYSTEM AND METHOD

CIRRUS LOGIC, INC., Aust...

1. A system for binaural signal processing, the system comprising:a first speaker and a second speaker respectively configured to be mounted proximal to, and to deliver respective first and second acoustic signals to, the left and right ears of a user;
a first microphone and a second microphone respectively configured to be mounted proximal to the left and right ears of a user; and
a binaural processing device for receiving respective first and second acoustic signals from each of the first and second microphones and for modifying each of the first and second acoustic signals to produce the modified first and second acoustic signals, wherein sound captured at both ears is used to modify the first acoustic signal to produce the modified first signal and sound captured at both ears is used to modify the second acoustic signal to produce the modified second signal, and wherein the binaural processing device is operable when distal from the left and right ears of the user;
wherein the first and second speakers, the first and second microphones and the binaural processing device are connected by a signal network configured to pass signals from the first and second microphones to the binaural processing device and from the binaural processing device to the speakers,
wherein the signal network comprises a single wire chained bus loop having a chained configuration in which data from upstream on the single wire chained bus loop is recovered by each of the first and second speakers and the first and second microphones and re-modulated downstream onto the single wire chained bus loop, and
wherein the first and second speakers are positioned downstream of the binaural processing device on the single wire chained bus loop, and the first and second microphones are positioned downstream of the first and second speakers on the single wire chained bus loop.

US Pat. No. 10,171,922

HEARING ASSISTANCE SYSTEM WITH OWN VOICE DETECTION

Starkey Laboratories, Inc...

2. An apparatus configured to be worn by a wearer, comprising:a first microphone configured to produce a first microphone signal;
a second microphone configured to produce a second microphone signal;
a voice detector including an adaptive filter configured to produce a filter output signal using the second microphone signal and an error signal produced by subtracting the filter output signal from the first microphone signal, the voice detector configured to:
detect a voice of the wearer by comparing a power of the error signal to a power of the first microphone signal; and
produce an indication of detection in response to the voice of the wearer being detected;
a sound processor configured to produce an audio output signal using the second microphone signal and the indication of detection; and
a speaker configured to produce an audible signal using the audio output signal.
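
Claim 2 of US Pat. No. 10,171,922 describes an adaptive filter whose error-signal power is compared against the first microphone's power. A minimal sketch, assuming an NLMS update and a hypothetical detection threshold:

    import numpy as np

    def detect_own_voice(mic1, mic2, taps=32, mu=0.5, threshold=0.25, eps=1e-8):
        # NLMS adaptive filter: mic2 predicts mic1; err holds the error signal.
        mic1 = np.asarray(mic1, dtype=float)
        mic2 = np.asarray(mic2, dtype=float)
        w = np.zeros(taps)
        err = np.zeros(len(mic1))
        for n in range(taps, len(mic1)):
            x = mic2[n - taps:n][::-1]              # most recent samples first
            y = float(w @ x)                        # filter output signal
            err[n] = mic1[n] - y                    # error = mic1 minus filter output
            w += mu * err[n] * x / (float(x @ x) + eps)
        # Compare error power to first-microphone power; a small ratio means the two
        # microphones are highly coherent, as when the wearer is speaking.
        ratio = float(np.sum(err ** 2) / (np.sum(mic1 ** 2) + eps))
        return ratio < threshold, ratio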

US Pat. No. 10,171,921

MICROPHONE MATCHING UNIT AND HEARING AID

1. A method for performing a microphone matching of a hearing aid comprising a first microphone, a second microphone and a receiver in a predetermined spatial arrangement relative to each other, the method comprising the steps of:generating an output sound signal by means of the receiver;
picking up a first input sound signal by the first microphone and a second input sound signal by the second microphone while the output sound signal is generated;
converting the first input sound signal into a first electrical microphone output signal by means of the first microphone and the second input sound signal into a second electrical microphone output signal by means of the second microphone;
determining a first microphone response of the first microphone, and a second microphone response of the second microphone at a given point in time;
determining a microphone response difference between the first microphone response and the second microphone response;
determining a matching difference between the microphone response difference and a predetermined reference microphone response difference; and
adapting at least a first microphone gain of the first microphone according to the matching difference to reduce the matching difference between the microphone response difference and the predetermined reference microphone response difference,
wherein the first microphone response is determined from a first estimate of a first feedback path from the receiver to the first microphone and wherein the second microphone response is determined from a second estimate of a second feedback path from the receiver to the second microphone.
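
A minimal sketch of how the two feedback-path estimates in US Pat. No. 10,171,921 could yield a correction gain for the first microphone; the dB convention, array shapes, and sign of the returned gain are assumptions, not the claimed procedure:

    import numpy as np

    def matching_gain_db(h1, h2, reference_diff_db):
        # h1, h2: complex frequency responses of the estimated feedback paths from
        # the receiver to the first and second microphones (same length).
        resp1_db = 20 * np.log10(np.abs(h1) + 1e-12)
        resp2_db = 20 * np.log10(np.abs(h2) + 1e-12)
        diff_db = resp1_db - resp2_db              # microphone response difference
        mismatch_db = diff_db - reference_diff_db  # matching difference vs. stored reference
        return -mismatch_db                        # gain applied to the first microphone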

US Pat. No. 10,171,920

TEST APPARATUS FOR BINAURALLY-COUPLED ACOUSTIC DEVICES

ETYMONIC DESIGN INCORPORA...

1. An acoustic coupler assembly for carrying an acoustic device, the acoustic coupler assembly comprising:a coupler body extending in length from a lateral outer body end to a lateral inner body end, the body having a sound test cavity extending laterally between the lateral inner and outer body ends and the sound test cavity having lateral inner and outer test cavity openings and a laterally extending sound test cavity centerline;
an acoustic device speaker mount covering the lateral outer body end and having a speaker mount opening sized to grasp a speaker of an acoustic device received in the speaker mount opening, the speaker mount opening abutting the lateral outer test cavity opening; and
an acoustic device microphone mount connected to the coupler body, the acoustic device microphone mount including a microphone mount clip sized to grasp a microphone assembly of an acoustic device when the microphone assembly is received in the microphone mount clip.

US Pat. No. 10,171,919

THERMAL AND THERMOACOUSTIC NANODEVICES AND METHODS OF MAKING AND USING SAME

The Regents of the Univer...

1. A nanodevice comprising:a solid substrate;
a first solid supporting material block and a second solid supporting material block, wherein the first and second supporting material blocks are in physical contact with the same surface of the solid substrate,
wherein the section of the solid substrate defined inbetween the first and second supporting material blocks does not comprise an additional supporting material block; and
at least one ultrathin film block comprising a first face and an opposite second face, wherein:
the first face comprises a solid material nucleation layer,
the opposite second face comprises an electrically conducting layer,
a section of the first face of each ultrathin film block is in physical contact with the first supporting material block,
a distinct section of the first face of each ultrathin film block is in physical contact with the second supporting material block, such that each ultrathin film block spans the width of the section of the solid substrate defined inbetween the first and second supporting material blocks, and
the at least one ultrathin film block does not have physical contact with the solid substrate, such that the at least one ultrathin film block is suspended over the solid substrate;wherein the at least one ultrathin film block has an average thickness that is equal to or lower than about 50 nm.

US Pat. No. 10,171,916

SYSTEM AND METHOD FOR A HIGH-OHMIC RESISTOR

INFINEON TECHNOLOGIES AG,...

1. A circuit comprising:a high-resistance resistor comprising:
a plurality of semiconductor junction devices coupled in series, each semiconductor junction device of the plurality of semiconductor junction devices comprising a parasitic doped well capacitance configured to insert a parasitic zero in a noise transfer function of the high-resistance resistor, wherein each semiconductor junction device of the plurality of semiconductor junction devices comprises a diode connected transistor,
a plurality of additional capacitors, wherein ones of the plurality of additional capacitors are formed in parallel with corresponding ones of the plurality of semiconductor junction devices, and each additional capacitor of the plurality of additional capacitors is configured to adjust a parasitic pole in the noise transfer function of the high-resistance resistor in order to compensate for the parasitic zero,
a capacitive sensor configured to generate a signal output voltage, and
an amplifier coupled to the capacitive sensor and configured to receive the signal output voltage at a high impedance input of the amplifier, wherein the high-resistance resistor has a first terminal coupled to the capacitive sensor and the high impedance input of the amplifier.

US Pat. No. 10,171,915

DISPLAY DEVICE FOR GENERATING SOUND BY VIBRATING PANEL

LG Display Co., Ltd., Se...

1. A display device, comprising: a display panel configured to emit light; a support structure at a rear of the display panel; a sound generation actuator supported by the support structure and configured to vibrate the display panel to generate sound; and a cap member surrounding the sound generation actuator and secured to the support structure at an area of the support structure, the area being near the sound generation actuator and wherein the sound generation actuator includes a lower plate, a magnet disposed on the lower plate, a center pole disposed on the central region of the lower plate, a bobbin disposed to surround the center pole, and a coil wound around the bobbin.

US Pat. No. 10,171,913

SUSPENSION DEVICE FOR A LOUDSPEAKER, MANUFACTURING METHOD AND ASSOCIATED LOUDSPEAKERS

FOCAL JMLAB, La Talaudie...

1. Process for manufacturing a suspension device for a loudspeaker comprising:providing an annular outer edge able to fasten the suspension device to a frame, an annular inner edge able to fasten the suspension device to a membrane, a suspension hoop extending annularly between the outer and inner edges, said suspension hoop being able to absorb movement stresses produced at the inner edge by means of deforming the suspension hoop thus forming at least one resonance mode, the suspension hoop comprises at least one annular protuberance positioned in such a way as to minimize at least one suspension hoop resonance mode, the mass of at least one of these annular protuberances being between 150% and 400% of the mass of a part of the suspension hoop whereupon the annular protuberance is positioned;
exciting the inner edge of the suspension device,
measuring the movements of the suspension hoop in relation to a stable state of the suspension hoop during a characterization period,
detecting the position of the first local maximum of the movements of the suspension hoop in relation to a stable state of the suspension hoop, and
defining a position of a protuberance corresponding to a projection of the first local maximum on the suspension hoop in the stable state.

US Pat. No. 10,171,912

ANALOG DEVICE CONNECTION

Hewlett-Packard Developme...

1. A method, comprising:detecting, in a control device, an analog connection to an audio output device;
transmitting a first signal from the control device to the audio output device using the analog connection, wherein the first signal comprises a first resistance value applied by the control device across the analog connection within a predetermined time period after the analog connection is detected; and
selectively enabling a feature of the control device when a second signal is received by the control device from the audio output device using the analog connection, wherein the second signal comprises a second resistance value that is different from the first resistance value that is applied by the audio output device across the analog connection in response to the first signal, where the second signal indicates the audio output device is an approved audio output device for the feature.
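
The resistance-based handshake of US Pat. No. 10,171,912 can be sketched as a timed exchange; the resistance values, tolerance, time window, and the apply_resistance/read_resistance callbacks below are all hypothetical:

    import time

    FIRST_RESISTANCE_OHMS = 1_000    # hypothetical value applied by the control device
    SECOND_RESISTANCE_OHMS = 4_700   # hypothetical value expected from an approved device
    WINDOW_SECONDS = 0.5
    TOLERANCE_OHMS = 100

    def enable_feature_if_approved(apply_resistance, read_resistance):
        # apply_resistance/read_resistance stand in for hardware access routines.
        apply_resistance(FIRST_RESISTANCE_OHMS)          # first signal across the connection
        deadline = time.monotonic() + WINDOW_SECONDS
        while time.monotonic() < deadline:
            measured = read_resistance()                 # second signal, if any
            if abs(measured - SECOND_RESISTANCE_OHMS) <= TOLERANCE_OHMS:
                return True                              # approved device: enable the feature
        return False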

US Pat. No. 10,171,911

METHOD AND DEVICE FOR OUTPUTTING AUDIO SIGNAL ON BASIS OF LOCATION INFORMATION OF SPEAKER

SAMSUNG ELECTRONICS CO., ...

1. A method of processing an audio signal, the method performed by a device and comprising:dividing the audio signal into a first signal and a second signal;
obtaining relative position information between a first speaker and a second speaker;
determining a first gain for the first signal and a second gain for the second signal, based on the relative position information;
obtaining a third signal by mixing the second signal, to which the second gain is applied, and the first signal;
obtaining a fourth signal by mixing the first signal, to which the first gain is applied, and the second signal;
outputting the third signal to the first speaker; and
outputting the fourth signal to the second speaker,
wherein the determining of the first gain for the first signal and the second gain for the second signal comprises:
setting a central axis based on positions of the first speaker and a user;
determining the first gain as a first value inversely proportional to a distance between the second speaker and the central axis; and
determining the second gain as a second value proportional to the distance between the second speaker and the central axis.
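
A minimal sketch of the gain determination and cross-mixing in claim 1 of US Pat. No. 10,171,911, assuming a simple proportionality constant k and per-sample mixing:

    def crossfeed_gains(distance_to_axis, k=1.0):
        # First gain inversely proportional, second gain proportional, to the
        # distance between the second speaker and the central axis (k is assumed).
        g1 = k / max(distance_to_axis, 1e-6)
        g2 = k * distance_to_axis
        return g1, g2

    def mix(first, second, g1, g2):
        third = [s2 * g2 + s1 for s1, s2 in zip(first, second)]   # output to first speaker
        fourth = [s1 * g1 + s2 for s1, s2 in zip(first, second)]  # output to second speaker
        return third, fourth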

US Pat. No. 10,171,910

METHODS AND DEVICES FOR REPRODUCING STEREO AUDIO

D2A Audio LLC, Morgan Hi...

1. An audio system comprising:an input configured to receive left and right stereo input signals;
a left filter configured to receive the left stereo input signal and isolate left low frequency signal and left high frequency signal;
a right filter configured to receive the right stereo input signal and isolate right low frequency signal and right high frequency signal;
left and right high frequency speakers;
top and bottom low frequency speakers, positioned to output sound in opposite directions, wherein the bottom low frequency speaker is positioned to output sound toward an external supporting surface;
left high frequency amplifier configured to receive and amplify the left high frequency signal and drive the left high frequency speaker with the amplified left high frequency signal;
right high frequency amplifier configured to receive and amplify the right high frequency signal and drive the right high frequency speaker with the amplified right high frequency signal;
a summing amplifier configured to receive the left and right low frequency signals and generate a combined low frequency signal; and
a low frequency woofer amplifier coupled to the top and bottom low frequency speakers and configured to receive the combined low frequency signal, output an amplified combined low frequency signal and drive the top and bottom low frequency speakers with the amplified combined low frequency signal.
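
To illustrate the band splitting and low-frequency summing in US Pat. No. 10,171,910, the sketch below uses a one-pole low-pass as a stand-in for the left and right crossover filters; the real filters and amplifier stages are not specified here:

    def lowpass(x, alpha=0.05):
        # Simple one-pole low-pass as a stand-in for the crossover filter.
        y, state = [], 0.0
        for sample in x:
            state += alpha * (sample - state)
            y.append(state)
        return y

    def split_and_sum(left, right):
        left_low, right_low = lowpass(left), lowpass(right)
        left_high = [s - l for s, l in zip(left, left_low)]
        right_high = [s - l for s, l in zip(right, right_low)]
        combined_low = [a + b for a, b in zip(left_low, right_low)]  # summing stage
        return left_high, right_high, combined_low  # combined_low feeds both woofers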

US Pat. No. 10,171,909

PROCESSING OF SIGNALS FROM LUMINAIRE MOUNTED MICROPHONES FOR ENHANCING SENSOR CAPABILITIES

General Electric Company,...

1. An outdoor luminaire comprising:a luminaire unit comprising LED modules;
a sensor module attached to the luminaire unit, wherein the sensor module comprises:
a housing and a plurality of microphones seated within the housing; and
a computing module operably connected to the plurality of microphones, the computing module comprising a processor and a memory, the memory storing program logic configured to cause the processor to:
receive information comprising a plurality of acoustic output signals from the corresponding plurality of microphones, and any of detection directionality and location for each of the plurality of microphones; and
process, using the received information, the plurality of acoustic output signals to:
select acoustic output signals which are above a predefined noise floor level associated with each of the plurality of microphones and stored in the memory of the computing module,
identify a desirable acoustic signal at least in one of the selected acoustic output signals using analysis of the received plurality of acoustic output signals, and
correlate the acoustic output signals with any of the detection directionalities and locations of the plurality of microphones.
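
A rough sketch of the noise-floor selection and location correlation described in US Pat. No. 10,171,909; the data layout (per-microphone sample lists, stored noise floors and locations) is assumed for illustration:

    import math

    def select_and_localize(signals, noise_floors, locations):
        # signals: {mic_id: list of samples}; noise_floors/locations keyed by mic_id.
        selected = {}
        for mic_id, samples in signals.items():
            rms = math.sqrt(sum(s * s for s in samples) / len(samples))
            if rms > noise_floors[mic_id]:          # keep signals above the noise floor
                selected[mic_id] = rms
        if not selected:
            return None
        loudest = max(selected, key=selected.get)   # crude pick of the desirable signal
        return loudest, locations[loudest], selected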

US Pat. No. 10,171,906

CONFIGURABLE MICROPHONE ARRAY AND METHOD FOR CONFIGURING A MICROPHONE ARRAY

1. A method for automatically configuring a microphone array, the microphone array comprising a plurality of microphone capsules, the method being performed by the microphone array and comprising:scanning sound signals from a plurality of directions by combining output signals of said plurality of microphone capsules;
detecting a sound signal from a first direction and detecting the first direction;
determining that the detected sound signal corresponds to a first predefined control sound signal, the first predefined control sound signal being one of a group of at least two predefined control sound signals and comprising a first tone sequence that is automatically generated;
decoding the first tone sequence by a configuration controller, wherein a first electronic control signal according to the first tone sequence is obtained; and
providing the first electronic control signal to a directivity controller of the microphone array, the directivity controller being adapted for configuring the microphone array according to the first electronic control signal;
wherein the configuring comprises:
eliminating the first direction from scanning sound signals when the first tone sequence is a first predefined tone sequence, and
cancelling an elimination of a second direction from scanning sound signals when the first tone sequence is a second predefined tone sequence different from the first predefined tone sequence, the second direction being different from the first direction.
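
The direction elimination and restoration logic of US Pat. No. 10,171,906 can be sketched as a small controller keyed on decoded tone sequences; the tone values and the way a restored direction is identified are hypothetical:

    MUTE_SEQUENCE = (880, 660, 880)     # hypothetical "eliminate this direction" tones (Hz)
    UNMUTE_SEQUENCE = (660, 880, 660)   # hypothetical "restore a direction" tones (Hz)

    class DirectivityController:
        def __init__(self, directions):
            self.active = set(directions)   # directions currently scanned
            self.blocked = set()

        def handle(self, tone_sequence, detected_direction, restore_direction=None):
            if tone_sequence == MUTE_SEQUENCE:
                self.active.discard(detected_direction)
                self.blocked.add(detected_direction)
            elif tone_sequence == UNMUTE_SEQUENCE and restore_direction in self.blocked:
                self.blocked.discard(restore_direction)
                self.active.add(restore_direction)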

US Pat. No. 10,171,904

WIRELESS NOISE-CANCELLING EARPLUG

QON OY, Kempele (FI)

1. A wireless noise-cancelling earplug comprising:a housing comprising a first cylindrical part and a second cylindrical part, within which an active noise cancellation (ANC) circuit is configured to produce anti-noise, a speaker is configured to emit the anti-noise as a sound wave, and a battery is configured to power the ANC circuit;
a sealing bud disposed about a portion of the second cylindrical part of the housing, the sealing bud and the housing forming a passive noise reduction unit configured to fully occlude an ear canal;
an audio cavity configured to guide the sound wave from the speaker out of the earplug;
at least one microphone configured to measure ambient noise and to feed the measured ambient noise to the ANC circuit,
wherein the earplug and the housing as viewed from one side are L-shaped, comprising a stem portion that extends between outer extremities of the housing along a first axis and a bar portion that extends, along a second axis, between an outer extremity of the housing and an outermost point of the passive noise reduction unit, wherein:
the stem portion has a length of 25 mm or less;
the bar portion has a length of 23 mm or less; and
an inner angle between the first axis and the second axis is 85 to 120 degrees,
wherein at least the ANC circuit, the speaker, and a first part of the audio cavity are arranged within the second cylindrical part.

US Pat. No. 10,171,903

PORTABLE BINAURAL RECORDING, PROCESSING AND PLAYBACK DEVICE

1. An accessory for binaural recording and playback for a multimedia device comprising:a headphone, said headphone having a left ear piece that houses an inwardly facing left speaker and an outwardly facing, left, non-directional recording microphone therein and a right ear piece that houses an inwardly facing right speaker and an outwardly facing, right, non-directional recording microphone therein;
a dongle, said dongle having a microprocessor and a memory;
an audio codec housed in said dongle and in communication with said microprocessor, said audio codec having audio signal processing functionality accomplished through components selected from the group consisting of microphone preamplifiers, microphone amplifiers, analog audio signal to digital audio signal convertors, digital audio signal processors, and digital audio signal to analog audio signal convertors;
an application program in said memory, executed by said microprocessor, communicating with an operating system of said multimedia device to allow a video interface of said multimedia device to operate said audio signal processing functionality of said audio codec in said dongle;
a right three-conductor-wire analog transmission cable connected between said dongle and said right ear piece;
a left three-conductor-wire analog transmission cable connected between said dongle and said left ear piece;
a digital signal transmission cable operatively connected at a first end to said dongle, and configured at a second end for connection to a multimedia device;
wherein said audio codec is in communication with said headphone, and operatively powered by said multimedia device when connected;
wherein said dongle is a parasitically powered dongle without its own power source, receiving said parasitic power from said multimedia device when connected; and
wherein said right, non-directional recording microphone and said left, non-directional recording microphone receive sound and transmit an audio signal to said multimedia device through said audio codec.

US Pat. No. 10,171,902

IN-EAR MONITOR

Campfire Audio LLC, Port...

1. A tunable in-ear monitor that produces sound when operationally connected to an external audio source comprising:an in-ear monitor housing;
at least one low frequency driver having a first outlet sound port;
at least one high frequency driver having a second outlet sound port;
at least one crossover component;
a spout extending outward from a face of said in-ear housing, said spout having an inner face and an outer face separated by a thickness, with at least one sound exit port formed through said thickness;
at least one sound tube having an input end and an output end, said input end affixed to at least one of said drivers and said sound tube output end affixed to said spout;
at least one sonic dampener affixed in said sound tube at an adjustable length for frequency response tuning, and wherein said sound tube's input end is affixed to said low frequency driver about said first outlet sound port and said output end affixed to said spout;
at least one tunable resonator box with a first end directly affixed to said high frequency driver's second outlet sound port, wherein said resonator box has an opposing side wall structure having an open proximal end and a distal end wall with an orifice therethrough, said orifice concentric with said high frequency driver's second outlet sound port; and
an electrical circuit operationally connected to provide input audio signals from said external audio source, directly, or indirectly through a crossover component, to all drivers in said housing, so as to enable the generation of an output sound from said drivers;
wherein said drivers are mechanically connected to said spout so as to transfer the driver's generated sound into said sound exit port; and
wherein said crossover component is a stacked metalized plastic film chip capacitor;
wherein said spout has at least one resonator box recess formed on said inner face connected to said sound exit port, and a second, output end of said resonator box is inserted and matingly engaged into said resonator box recess.

US Pat. No. 10,171,901

SOUND PICKUP DEVICE AND SOUND PROCESSING DEVICE

YAMAHA CORPORATION, Hama...

15. A sound processing device comprising:a housing;
a mounting mechanism configured to mount the housing to an object;
a sound pickup portion comprising at least one microphone;
a first output terminal that outputs a sound signal corresponding to a sound picked up by the at least one microphone;
a connector configured to mount the sound pickup portion to the housing;
a sensor that detects a vibration of the housing;
a second output terminal that outputs a vibration signal corresponding to the vibration detected by the sensor;
a sound signal processor configured to:
add a first sound effect to the sound signal output from the first output terminal;
produce a vibration sound signal based on the vibration signal output from the second output terminal; and
synthesize the sound signal with the added sound effect, with one of the vibration sound signal or a sound signal produced by adding a second sound effect to the vibration sound signal, to generate and output a synthesized sound signal.

US Pat. No. 10,171,900

SPEAKER AND SHOWER

Kohler Co., Kohler, WI (...

1. An assembly comprising:a speaker supportable for movement relative to a reference external to the speaker, the speaker including
a speaker housing, and
speaker components supported in the speaker housing and operable to produce an audio output;
a sensor operable to sense a direction of movement of the speaker during movement of the speaker relative to the external reference; and
control components operable to
determine the direction of movement of the speaker relative to the external reference, and
control the speaker components based on the direction of movement of the speaker relative to the external reference;
wherein, when the speaker is sensed to be moving in a first direction relative to the reference, an operational characteristic of the speaker components is controlled to increase or advance during the movement in the first direction, and wherein, when the speaker is sensed to be moving in a second direction relative to the reference different from the first direction, the operational characteristic of the speaker components is controlled to decrease or retreat during the movement in the second direction.
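
A minimal sketch of the direction-dependent control in claim 1 of US Pat. No. 10,171,900, using volume as the operational characteristic and assumed direction labels:

    def update_volume(volume, direction, step=2, advance="clockwise", retreat="counterclockwise"):
        # While the speaker moves in the first direction the characteristic (here,
        # volume) increases; in the second direction it decreases. Labels are assumed.
        if direction == advance:
            return min(volume + step, 100)
        if direction == retreat:
            return max(volume - step, 0)
        return volume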

US Pat. No. 10,171,897

SPEAKER MOUNT AND ASSEMBLY AND METHOD OF DISENGAGEMENT THEREOF

Swarm Holdings LLC, Salt...

1. A speaker mount, comprising:a. a speaker baffle;
b. a support member extending from the speaker baffle and having an elevated region spaced therefrom and a closer region closer to the speaker baffle than the elevated region;
c. a tab movably coupled to the support member such that it can travel between the elevated region and the closer region and including leeway in the coupling between the tab and the support member such that the tab can tip relative to the support member, the tab including a finger extending away from the support member and shaped to engage with a surface when the speaker mount is installed, thereby causing the tab to tip relative to the support member; and
d. wherein each of the tab and the support member include mating teeth facing each other that are positioned to press against each other and thereby engage when the tab tips relative to the support member when the finger engages with a surface and to be spaced apart and thereby not engage when the tab is not tipped, and when so engaged to each other when the tab tips due to the finger engaging with a surface, prevent travel of the tab from the closer region to the elevated region.

US Pat. No. 10,171,896

ELECTRONIC DEVICE WITH SIDE SPEAKER HOLE

Samsung Electronics Co., ...

1. An electronic device comprising:a housing including a first face facing a first direction, a second face facing a second direction opposite the first direction, a side face facing a third direction perpendicular to each of the first and second directions and surrounding at least a portion of a space between the first and second faces;
a first plate disposed on the first face and exposed in the first direction; and
a second plate disposed on the second face and exposed in the second direction,
wherein the first plate includes a plurality of first edge regions, wherein at least one of the first edge regions includes, in at least a portion thereof, at least one first curved region that curves toward the second plate and/or toward a side face, and
at least one speaker hole disposed on the side face between the first curved region and the second plate, wherein the side face is not part of the first plate and is not part of the second plate.

US Pat. No. 10,171,895

HYDROPHOBIC MESH COVER

Apple Inc., Cupertino, C...

1. An acoustic module, comprising:an acoustic chamber having a tapered geometry such that a first end of the acoustic chamber is larger than a second end of the acoustic chamber;
a port comprising a plurality of openings, the port being adjacent an external environment and the first end of the acoustic chamber;
a semi-permeable barrier material disposed within the acoustic chamber; and
an audio component at the second end of the acoustic chamber, the audio component being configured to emit acoustic waves that move moisture within the acoustic chamber toward and through the semi-permeable barrier material and the port.

US Pat. No. 10,171,894

METHOD FOR ADJUSTING RECEPTION PARAMETER OF OPTICAL LINE TERMINAL AND OPTICAL LINE TERMINAL

Huawei Technologies Co., ...

1. A method for adjusting a reception parameter of an optical line terminal (OLT), comprising:determining a transmission rate of a to-perform-sending optical network unit (ONU);
generating a reset signal before the to-perform-sending ONU sends an optical signal, wherein the reset signal is used to trigger the OLT to perform a reset operation;
adjusting a signal characteristic of the reset signal according to the transmission rate, to generate an adjusted signal;
extracting a signal characteristic of the adjusted signal, and generating a first signal and a second signal according to the signal characteristic of the adjusted signal, wherein the first signal indicates the reset signal, and the second signal indicates the transmission rate of the to-perform-sending ONU;
performing the reset operation according to the first signal; and
after the reset operation is completed, adjusting the reception parameter of the OLT according to the second signal,
wherein the extracting the signal characteristic of the adjusted signal, and generating a first signal and a second signal comprises:
receiving, by a physical layer chip, the adjusted signal sent by a Media Access Control (MAC) layer chip;
extracting, by the physical layer chip, the signal characteristic of the adjusted signal, and generating the first signal and the second signal according to the signal characteristic of the adjusted signal;
sending, by the physical layer chip, the first signal to an optical receiving component, wherein the first signal is used to trigger the optical receiving component to perform the reset operation; and
after the optical receiving component completes the reset operation, sending, by the physical layer chip, the second signal to the optical receiving component.

US Pat. No. 10,171,892

SYSTEM AND METHOD FOR MONITORING WATER LEVEL ON A ROOF

1. A drain monitor for monitoring water level on a roof, the drain monitor comprising:a base for attaching to the roof;
a riser attached to the base and projecting from the roof;
a water level sensor for measuring water level on the roof, the water level sensor comprising an attachment member and a vertical member, wherein
the attachment member comprises a first attachment end and a second attachment end,
the first attachment end is adjustably attached to the riser such that the first attachment end is inserted into an attachment slot on the riser,
the attachment member is directed away from the riser,
the vertical member is attached to the second attachment end and extends downward towards the roof, and
the vertical member comprises a float sensor that floats up and down on the vertical member to measure water level; and
a communication system positioned on the riser for transmitting measurement data received from the water level sensor.

US Pat. No. 10,171,890

SYSTEM AND METHOD FOR BATTERY MANAGEMENT AND ANTENNA ELEVATION IN A PIT MOUNTED AUTOMATIC METER READING UNIT

Cooper Technologies Compa...

1. An automatic meter reading (AMR) system adapted to be mounted in a utility meter pit, the AMR system comprising:an AMR device including:
a meter connection configured to provide consumption data from a utility meter in the utility meter pit;
processing electronics configured to receive the consumption data via the meter connection and convert the consumption data into a transmittable signal;
an antenna configured to wirelessly transmit the transmittable signal to a remote device; and
an enclosure that houses the processing electronics and the antenna therein to provide protection thereto from ambient conditions in the utility meter pit;
wherein the enclosure defines a dome-shaped antenna compartment therein configured to house the antenna, the antenna compartment protruding out from a remainder of the enclosure so as to provide for positioning of the antenna at a location extended out therefrom; and
a cover adaptor mateable with the AMR device and with a cover of the utility meter pit, the cover adaptor comprising:
a flanged end portion; and
a projection portion protruding outwardly from the flanged end portion, the projection portion comprising a hollow interior formed therein that is open on an end of the projection portion that is distal from the flanged end portion;
wherein the projection portion is sized and constructed so as to be positionable through a hole formed in the cover of the utility meter pit and so as to receive and secure the dome-shaped antenna compartment of the AMR device enclosure in the hollow interior thereof;
wherein the positioning of the projection portion of the cover adaptor through the hole in the cover of the utility meter pit and the securing of the dome-shaped antenna compartment of the AMR device enclosure in the hollow interior of the projection portion mounts the antenna of the AMR device at a height approximately flush with a top surface of the cover of the utility meter pit; and
wherein the dome-shaped antenna compartment comprises protrusions formed on an outer surface thereof and the projection portion comprises grooves formed thereon, with the protrusions mating with the grooves via a twist-lock type mating, so as to provide for selective mating and separation of the AMR device from the cover adaptor.

US Pat. No. 10,171,888

VIDEO PROCESSING METHOD, TERMINAL AND SERVER

HUAWEI TECHNOLOGIES CO., ...

1. A video processing method, comprising:sending, to a server, a request for acquiring a media presentation description (MPD) file of a video;
receiving the MPD file from the server, the MPD file comprising region information of a region that can be independently decoded in the video;
determining, according to the region information, a region used for playback on a terminal from the region that can be independently decoded;
determining a to-be-acquired media segment according to the MPD file;
acquiring a location in which data content corresponding to the region for the playback on the terminal is stored in the media segment;
acquiring, according to the location in which the data content corresponding to the region for the playback on the terminal is stored in the media segment, the data content corresponding to the region for the playback on the terminal from the media segment stored in the server; and
playing, according to the data content corresponding to the region for the playback on the terminal, a picture of the region for the playback on the terminal, the media segment comprising at least two subsegments, and acquiring the location in which the data content corresponding to the region for the playback on the terminal is stored in the media segment comprises:
acquiring, from the server, a segment index and a subsample index of the media segment, the segment index indicating a location in which each subsegment comprised in the media segment is stored in the media segment, and the subsample index indicating a location in which each subsample corresponding to the region that can be independently decoded is stored in each subsegment;
determining a to-be-acquired subsegment according to the segment index; and
determining, according to a location in which the subsegment is stored in the media segment and a location in which a subsample corresponding to the region for the playback on the terminal in the region that can be independently decoded is stored in the subsegment, the location in which the data content corresponding to the region for the playback on the terminal is stored in the media segment.

US Pat. No. 10,171,887

METHODS AND SYSTEMS FOR INTELLIGENT PLAYBACK

Comcast Cable Communicati...

1. A method, comprising:receiving, by a computing device, media content for playback;
determining, by the computing device based on an arrival rate of the received media content, a parameter relating to the received media content;
determining, by the computing device based upon the parameter, a safe point, wherein the safe point comprises a point in time when a remainder of the received media content can be presented at a constant pre-defined playback speed;
causing, by the computing device, output of the received media content at a first playback speed until the safe point is reached; and
when the safe point is reached, causing, by the computing device, output of the received media content at a second playback speed.
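
The safe point of US Pat. No. 10,171,887 can be illustrated with a simple buffering calculation, assuming a constant arrival rate expressed in seconds of media received per second of wall-clock time:

    def safe_point_seconds(media_duration, arrival_rate, playback_rate=1.0):
        # Earliest wall-clock time at which playback can switch to the constant
        # pre-defined rate without ever waiting for data: the remainder must finish
        # arriving (media_duration / arrival_rate) before playback started at that
        # time finishes (t + media_duration / playback_rate).
        if arrival_rate >= playback_rate:
            return 0.0
        t = media_duration / arrival_rate - media_duration / playback_rate
        return max(t, 0.0)

    # e.g. a 3600 s asset arriving at 0.8x real time: safe after 900 s of wall time
    print(safe_point_seconds(3600, 0.8))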

US Pat. No. 10,171,885

APPARATUS AND METHODS FOR MANAGING DELIVERY OF CONTENT IN A NETWORK WITH LIMITED BANDWIDTH USING PRE-CACHING

Time Warner Cable Enterpr...

1. An apparatus for management and distribution of content in a content delivery network, said apparatus comprising:at least one interface configured to communicate with a plurality of computerized client devices operatively coupled to said content delivery network;
one or more storage apparatus configured to:
store a plurality of digitally rendered content for distribution to subsets of said plurality of computerized client devices;
store data representative of one or more rules to guide said distribution of individual ones of said plurality of digitally rendered content; and
store classification data related to said individual ones of said plurality of said digitally rendered content; and
a processing unit in data communication with said at least one interface and said one or more storage apparatus, said processing unit comprising computerized logic configured to:
based at least in part on said classification data, identify individual ones of said plurality of digitally rendered content that are high probability of viewership (HpoV) content for one of said subsets of said plurality of computerized client devices;
identify data representative of one or more rules to guide said distribution of said HpoV content from among said data representative of one or more rules to guide said distribution of said individual ones of said plurality of said digitally rendered content;
cause transmission of both (i) said HpoV content and (ii) said data representative of one or more rules to guide distribution of said HpoV content to said one of said subsets of said plurality of computerized client devices, where said transmission to said one of said subsets of said plurality of computerized client devices is configured to occur when network resource demand is below a predetermined threshold; and
schedule, using at least the computerized logic, said transmission of said HpoV content and said data representative of one or more rules to a different future time when said network resource demand is above said predetermined threshold.

US Pat. No. 10,171,882

BROADCAST SIGNAL FRAME GENERATION DEVICE AND BROADCAST SIGNAL FRAME GENERATION METHOD USING BOOTSTRAP INCLUDING SYMBOL FOR SIGNALING BICM MODE OF PREAMBLE AND OFDM PARAMETER TOGETHER

Electronics and Telecommu...

1. An apparatus for generating broadcast signal frame, comprising:a time interleaver configured to generate a time-interleaved signal by performing interleaving on a BICM output signal; and
a frame builder configured to generate a broadcast signal frame including a bootstrap and a preamble using the time-interleaved signal,
wherein the bootstrap includes a symbol for signaling a BICM mode and OFDM parameters of L1-Basic of the preamble, together.

US Pat. No. 10,171,881

BACKUP MODULE AND METHOD

MT Digital Media Limited,...

1. A method for operating a data processing apparatus to backup display of a sequence of interrupted content items through a module of the data processing apparatus using a CPU and a software program, comprising:identifying a series of user invoked interruptions, each interruption comprising a transition between the display of a first content item and the display of a second content item, wherein the first content item and the second content item are in a sequence of at least three interrupted content items and the sequence of interrupted content items include content items from at least two different content domains;
storing, at the data processing apparatus, interruption records each including a locator to a said first content item subject to a corresponding user invoked interruption, and further including a record of the order in which said interruptions occurred; and
initiating display in a last in first out order of the sequence of interrupted content items responsive to a sequence of backup signals, such that each of the sequence of backup signals causes display of a previous interrupted content item of the sequence of at least three interrupted content items.

US Pat. No. 10,171,880

SYSTEMS AND METHODS FOR MODELING AUDIENCE STABILITY OF A MEDIA ASSET SERIES

Rovi Guides, Inc., San J...

1. A method for modeling consistency of audiences viewing groups of media assets, the method comprising:receiving a data packet from a user equipment of a plurality of user equipment;
extracting, from the data packet, an indication of a first media asset;
identifying a first subset of the plurality of user equipment, the first subset comprising each user equipment on which the first media asset was generated for display;
identifying a second subset of the first subset, the second subset comprising each user equipment on which a second media asset was generated for display, wherein the first media asset and the second media asset are part of a group of media assets;
calculating a score for audience consistency for the group of media assets based on the number of user equipment in the second subset comprising each user equipment on which the first media asset and the second media asset were generated for display relative to the number of user equipment in the first subset comprising each user equipment on which the first media asset was generated for display;
ranking the group of media assets among a plurality of groups of media assets based on the calculated score for audience consistency for the group of media assets; and
selecting a group of the plurality of groups of media assets with the highest rank to target with an advertisement.
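
A minimal sketch of the audience-consistency score and group ranking in claim 1 of US Pat. No. 10,171,880, assuming each group is represented by the sets of devices that displayed its first and second assets:

    def consistency_score(devices_first, devices_second):
        # Fraction of devices that displayed the first asset which also displayed
        # the second asset of the same group.
        first = set(devices_first)
        both = first & set(devices_second)
        return len(both) / len(first) if first else 0.0

    def pick_group_for_ad(groups):
        # groups: {group_name: (devices_showing_first, devices_showing_second)}.
        ranked = sorted(groups, key=lambda g: consistency_score(*groups[g]), reverse=True)
        return ranked[0] if ranked else None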

US Pat. No. 10,171,879

CONTEXTUAL ALERTING FOR BROADCAST CONTENT

INTERNATIONAL BUSINESS MA...

10. A computer usable program product comprising one or more computer-readable storage devices, and program instructions stored on at least one of the one or more storage devices, the stored program instructions comprising:program instructions to analyze, after receiving a content at a device usable to present the content to a user, a portion of the received content to identify a context present in the portion, the context comprising a type of a subject-matter of the portion;
program instructions to select, corresponding to the context of the portion, a contextual rating rule from a set of contextual rating rules;
program instructions to compute a rating value of the portion using a first rating value in the contextual rating rule, the rating value of the portion being distinct from a rating associated with the content by a distributor of the content;
program instructions to present, on a presentation device, the portion with the rating value of the portion;
program instructions to collect information related to the context of the portion;
program instructions to construct an overlay with the information, wherein the information is configured in the overlay to attract an attention of the user to the portion, and wherein the information for the overlay is selected based on content usage habits of the user;
program instructions to overlay the portion with the overlay during a presentation of the portion;
program instructions to determine that the portion has not yet been presented during a presentation of the content on the presentation device;
program instructions to construct a notification, the notification comprising the rating value of the portion;
program instructions to receive an image of the user during the presenting;
program instructions to analyze the image to determine that the user is not attentive during the presenting; and
program instructions to send a notification to the user prior to presenting the portion on the presentation device responsive to determining that the user is not attentive during the presenting.

US Pat. No. 10,171,875

METHOD FOR PROVIDING PREVIOUS WATCH LIST OF CONTENTS PROVIDED BY DIFFERENT SOURCES, AND DISPLAY DEVICE WHICH PERFORMS SAME

LG ELECTRONICS INC., Seo...

1. A display device, comprising:a display;
an interface unit configured to receive a request from a remote controller; and
a controller configured to:
in response to a first request received from the remote controller, display a list of previously-viewed content including at least a first item corresponding to a first previously-viewed content, a second item corresponding to a second previously-viewed content and a third item corresponding a third previously-viewed content, wherein the first item, the second item, and the third item are displayed in an order that the previously-viewed content has been viewed,
in response to a second request received from the remote controller, display a fourth content on the display,
in response to a third request from the remote controller to display the list of previously-viewed content, delete the first item in the list of previously-viewed content and display the list of previously-viewed content including the second item, the third item and a fourth item corresponding to the fourth displayed content,
in response to a fourth request received from the remote controller, display a fifth content on the display, and
in response to a fifth request from the remote controller to display the list of previously-viewed content, delete the second item in the list of previously-viewed content and display the list of previously-viewed content including the third item, the fourth item and a fifth item corresponding to the fifth displayed content,
wherein the controller is further configured to display the list of previously-viewed content on an initial screen when the display device is turned on.

US Pat. No. 10,171,873

MULTIMEDIA SYSTEM FOR MOBILE CLIENT PLATFORMS

1. A computer program video player product stored on a non-transitory computer readable medium and loadable into the internal memory of a client computing device, comprising software code portions for performing, when the video player product is run on a computer, a method comprising:sequentially reading a plurality of distinctive Internet addresses associated with a plurality of discrete continuous media objects, wherein said discrete continuous media objects are formed from synchronized video and audio segments of a continuous synchronized audio and video;
determining a playback rate by executing software code portions stored exclusively on the memory of the client computing device and executed by a processor on the client computing device, based on varying wireless bandwidth, said playback rate being adjusted for each discrete continuous media object by the computer program video player product, acting autonomously, by selecting which discrete continuous media object is played being made by the computer program video player product, adjusting digital video decoding steps for playback performance resulting from varying bandwidth connection speeds;
further adjusting playback performance by using intrinsic player decoding algorithms that optimize digital video decoding in order to maintain visual continuity and playback;
playing back a video at the determined playback rate consisting of at least a subset of said plurality of discrete continuous media objects; wherein said discrete continuous media objects are individually decoded by said video player product during playback through digital video decoding optimizing algorithms for receiving, parsing and selecting the playback of a sequence of discrete continuous media objects;
wherein said discrete continuous media objects are created by transcoding an input continuous media object including a video segment forming part of a discrete audiovisual file into an optimal audiovideo format at an optimal encoding rate reflecting available cellular network bandwidth; dynamically decoding by the client computing device the transcoded continuous media objects into discrete files by splitting the transcoded stream into specified intervals and scanning after the specified intervals for a next I-frame, wherein each discrete interval is split at that next I-frame to create another discrete continuous media object; and assigning each of the discrete continuous media objects a distinctive Internet address;
and wherein said discrete continuous media objects are obtained by the player as discrete audiovisual files using protocols which access content through file and directory structures to the exclusion of synchronous or asynchronous bitstreaming;
wherein the continuous media objects are maintained by content servers serving the discrete continuous media objects to wireless clients during transmission to wireless devices; and
wherein said continuous media objects are audiovideo files including metadata.
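
The interval splitting at the next I-frame described in claim 1 of US Pat. No. 10,171,873 can be sketched as follows; the frame representation and the placeholder URLs are assumptions, not the claimed transcoder:

    def split_at_iframes(frames, interval_seconds):
        # frames: list of (timestamp_seconds, is_iframe) in presentation order.
        # Cut roughly every interval_seconds, extending each cut to the next I-frame.
        objects, start, next_cut = [], 0, interval_seconds
        for i, (ts, is_iframe) in enumerate(frames):
            if ts >= next_cut and is_iframe:
                objects.append(frames[start:i])
                start, next_cut = i, ts + interval_seconds
        if start < len(frames):
            objects.append(frames[start:])
        # Give every resulting object its own (placeholder) Internet address.
        return {f"https://media.example.com/obj{n}": seg for n, seg in enumerate(objects)}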

US Pat. No. 10,171,872

METHODS AND SYSTEMS FOR IMPLEMENTING A LOCKED MODE FOR VIEWING MEDIA ASSETS

Rovi Guides, Inc., San J...

1. A method for implementing a locked mode in an interactive media guidance application, comprising:receiving, using control circuitry, a request to initiate a locked mode for a specified time period on a user equipment device, wherein a specified user of the user equipment device is only allowed access to media assets selected for the locked mode during the specified time period;
receiving, using the control circuitry, first information relating to a first plurality of media assets selected for viewing by a plurality of users having similar characteristics;
receiving, using the control circuitry, second information relating to a second plurality of media assets, the second plurality of media assets being presented to the specified user during a period of time when locked mode is not initiated;
determining, using the control circuitry, a media asset, of the plurality of media assets, that is of interest to the plurality of users based on the received first and second information;
determining, using the control circuitry, whether the specified user has characteristics similar to the plurality of users;
and
during the locked mode and in response to determining that the specified user has characteristics similar to the plurality of users, transmitting, using the control circuitry, an instruction to the interactive media guidance application to present the media asset to the specified user without receiving input from the specified user.

US Pat. No. 10,171,869

METHODS AND APPARATUS TO DETERMINE ENGAGEMENT LEVELS OF AUDIENCE MEMBERS

The Nielsen Company (US),...

13. An apparatus, comprising:a detector to analyze image data depicting an environment in which media is to be presented by a first media device to determine whether the environment includes a second media device emanating a glow, the image data captured with a sensor; and
a calculator to determine an engagement for a person in the environment with respect to the first media device, the calculator to determine the engagement based on a distance between the person and the second media device emanating the glow.

US Pat. No. 10,171,867

SERVICE GUIDE ENCAPSULATION

SHARP KABUSHIKI KAISHA, ...

1. A method for decoding a service guide associated with a video bitstream comprising:(a) receiving a service guide fragment within said service guide;
(b) receiving a service guide delivery unit structure that is a transport container for said service guide fragment and that is used for encapsulating service guide fragments within said video bitstream;
(c) receiving a unit header structure within said service guide delivery unit structure;
(d) receiving an extension offset field within said unit header structure, wherein said extension offset field is zero in said service guide delivery unit structure corresponding to a particular service guide delivery unit structure specification;
(e) receiving said extension offset field within said unit header structure, wherein said extension offset field is ignored for values other than zero in said service guide delivery unit structure corresponding to said particular service guide delivery unit structure specification; and
(f) decoding said service guide.

US Pat. No. 10,171,866

DISPLAY SYSTEM, DISPLAY DEVICE, HEAD-MOUNTED DISPLAY DEVICE, DISPLAY CONTROL METHOD, CONTROL METHOD FOR DISPLAY DEVICE, AND COMPUTER PROGRAM

SEIKO EPSON CORPORATION, ...

1. A display system comprising:a transmitting device configured to transmit video data; and
a first display device and a second display device configured to display videos on the basis of the video data transmitted by the transmitting device, wherein
the transmitting device includes a data transmitting section configured to wirelessly transmit the video data formed by continuous frames to the first display device and the second display device,
the first display device includes:
a first video receiving section configured to receive the video data transmitted by the transmitting device; and
a first display section configured to display a video on the basis of the video data received by the first video receiving section, to only a first eye of the user to visually recognize the video,
the second display device, that is separate from the first display device, includes:
a second video receiving section configured to receive the video data transmitted by the transmitting device; and
a second display section configured to display a video on the basis of the video data received by the second video receiving section, to only a second eye of the user to visually recognize a video, and
the display system detects deviation between (1) timing of displaying frames of the video displayed by the first display section and visually recognized by the first eye and (2) timing of displaying frames of the video displayed by the second display section and visually recognized by the second eye.

US Pat. No. 10,171,865

ELECTRONIC DEVICE AND COMMUNICATION CONTROL METHOD

KABUSHIKI KAISHA TOSHIBA,...

1. An electronic apparatus comprising:a memory;
one or more hardware processors configured to:
acquire content data comprising first encoded data of a video image and second encoded data of a user interface;
decode the second encoded data to generate second decoded data of the user interface without decoding the first encoded data; and
store the second decoded data of the user interface in the memory;
a transmitter configured to transmit the content data comprising the first encoded data and the second encoded data, to a first electronic apparatus, wherein the first encoded data and the second encoded data are decoded to generate third decoded data of the video image and fourth decoded data of the user interface at the first electronic apparatus, respectively; and
a receiver configured to receive, while the video image based on the third decoded data and the user interface based on the fourth decoded data are displayed on a first screen of the first electronic apparatus, first operation data regarding a first user operation that is performed on the user interface displayed on the first screen of the first electronic apparatus,
wherein the one or more hardware processors are further configured to:
specify a first process, inputted by the first user operation, to control playback of the video image displayed on the first screen of the first electronic apparatus based on both the second decoded data of the user interface stored in the memory and the first operation data; and
execute the first process.

US Pat. No. 10,171,863

INTERACTIVE ADVERTISEMENT

1. A receiver comprising:at least one input component to receive audiovisual content;
at least one output component communicatively coupled with at least one display device;
a plurality of tuners;
one or more processors communicatively coupled with the at least one input component, at least one output component, and the plurality of tuners, the one or more processors configured to cause the receiver to perform:
outputting an advertising filter menu for display to the at least one display device, the advertising filter menu comprising menu items allowing for user specification of one or more product characteristics;
processing indicia of one or more selections made with one or more of the menu items of the advertising filter menu, the one or more selections indicating one or more specified product characteristics;
identifying a location corresponding to a user;
receiving a program service transmission, the program service transmission comprising content for at least one channel;
receiving a plurality of product advertisements at the receiver and identifying respective location specifications associated with the plurality of product advertisements, the plurality of product advertisements for products shown on the at least one channel of the program service transmission;
processing the plurality of product advertisements and storing the plurality of product advertisements in memory;
selecting a subset of the plurality of product advertisements based at least in part on the one or more specified product characteristics and comparing the respective location specifications associated with the plurality of product advertisements with a threshold distance with respect to the location corresponding to the user, and eliminating from inclusion in the subset at least one product offering advertisement which does not satisfy the threshold distance;
outputting the at least one channel for display;
selecting at least a first product advertisement of the subset of the plurality of product advertisements and outputting the first product advertisement for display;
receiving a user input following the output of the first product advertisement of the subset of the plurality of product advertisements;
modifying subsequent advertisement selection so that a selection of at least a second product advertisement is based at least in part on the user input responsive to the output of the first product advertisement; and
outputting the second product advertisement for display.
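
A minimal Python sketch of the selection step recited above: stored advertisements are filtered by the user's menu selections and a distance threshold around the user's location. The Advertisement fields, the haversine distance, and the mile-based threshold are illustrative assumptions, not part of the claim.

from dataclasses import dataclass
from math import radians, sin, cos, asin, sqrt

@dataclass
class Advertisement:
    product: str
    characteristics: set          # e.g. {"electric", "suv"} -- hypothetical tags
    location: tuple               # (latitude, longitude) of the offer

def _haversine_miles(a, b):
    # Great-circle distance between two (lat, lon) points, in miles.
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 3958.8 * 2 * asin(sqrt(h))

def select_ad_subset(ads, wanted_characteristics, user_location, threshold_miles):
    # Keep ads matching the user's filter-menu selections and within the distance threshold.
    subset = []
    for ad in ads:
        if not wanted_characteristics.issubset(ad.characteristics):
            continue                          # fails the product-characteristic filter
        if _haversine_miles(ad.location, user_location) > threshold_miles:
            continue                          # eliminated: offer does not satisfy the threshold distance
        subset.append(ad)
    return subset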

US Pat. No. 10,171,862

INTERACTIVE VIDEO SEARCH AND PRESENTATION

1. An interactive video presentation search improvement method comprising:receiving, by a processor of a remote control device configured to control functions for a video presentation device, inquiry data comprising a plurality of video object based questions, wherein said remote control device comprises a memory device, a display device, and a light fidelity (Li-Fi) hardware device comprising circuitry, a transceiver, and a light source device;
storing, by said processor, said inquiry data within said memory device;
receiving, by said processor from a user based on video data being presented via said video presentation device, a command associated with said inquiry data;
presenting, by said processor via said display device in response to said command, said plurality of video object based questions;
receiving, by said processor in response to said presenting, a selection for a first question of said plurality of video object based questions, said first question associated with a video object of said video data being presented via said video presentation device;
enabling, by said processor executing said circuitry, said light source device such that a light is visible on said video object being presented via said video presentation device;
identifying, by said processor based on results of said enabling, said video object with respect to said first question by:
retrieving via a video retrieval device of said remote control device, a visual image of said video object; and
transmitting via said Li-Fi hardware device to said video presentation device, said visual image, wherein said video presentation device analyzes said visual image and presents said information adjacent to said video object;
executing, by said processor based on results of said identifying and via said transceiver, an Internet based search associated with locating answers to said first question; and
presenting, by said processor to said user based on results of said executing, information associated with said first question with respect to said video object.

US Pat. No. 10,171,859

SYSTEMS, MEDIA, AND METHODS FOR PROVIDING AN ALGORITHMICALLY SORTED WATCHLIST OR WISHLIST

BLAB VENTURES LLC, Austi...

1. A computer-implemented system for maintaining an algorithmically sorted watchlist comprising:a) a digital processing device comprising an operating system configured to perform executable instructions and a memory;
b) a computer program including instructions executable by the digital processing device to create an application, the application configured for:
i) presenting an interface allowing a first user to create a watchlist comprising a plurality of digital media items, the watchlist having an order indicating a priority for the first user to consume each item;
ii) presenting an interface allowing the first user to rate media items they have consumed;
iii) presenting an interface allowing the first user to recommend one or more consumed media items to a second user, the second user having a social connection to the first user within a social network;
iv) presenting an interface allowing the first user to ask the second user a question pertaining to a media item;
v) presenting an interface allowing the first user to discuss a media item with the second user; and
vi) algorithmically updating the watchlist, the update based on social graph distance between the first user and the second user and user activity including: the second user adding a media item to a watchlist, the second user consuming a media item, the second user rating a media item, the second user recommending a media item, the second user discussing a media item, and aggregated activity of a community of users within the social network, wherein a scope of the community of users is customizable by the first user indicating a number of users, a distance between users on the social graph, one or more demographic characteristics, or one or more groups within the social network.
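
A hedged sketch of how the algorithmic update in step vi) could weigh a second user's activity and social graph distance; the weights, the distance decay, and the community term are illustrative choices, since the claim only requires that these signals influence the ordering.

# Hypothetical activity weights; only the set of activities comes from the claim.
ACTIVITY_WEIGHTS = {
    "added_to_watchlist": 1.0,
    "consumed": 1.5,
    "rated": 2.0,
    "recommended": 3.0,
    "discussed": 1.0,
}

def watchlist_score(item, friend_events, community_count):
    # friend_events: list of (activity, social_graph_distance) tuples for this item.
    score = 0.0
    for activity, distance in friend_events:
        # Closer connections count more; weight decays with social graph distance.
        score += ACTIVITY_WEIGHTS.get(activity, 0.0) / (1 + distance)
    # Aggregated activity of the user-scoped community contributes as well.
    score += 0.1 * community_count
    return score

def resort_watchlist(watchlist, events_by_item, community_counts):
    return sorted(
        watchlist,
        key=lambda item: watchlist_score(
            item, events_by_item.get(item, []), community_counts.get(item, 0)),
        reverse=True)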

US Pat. No. 10,171,858

UTILIZING BIOMETRIC DATA TO ENHANCE VIRTUAL REALITY CONTENT AND USER RESPONSE

ADOBE SYSTEMS INCORPORATE...

1. In a digital medium environment for providing an immersive virtual reality experience, a computer-implemented method of customizing digital content based on user biometrics, comprising:identifying biometric data corresponding to a user of a virtual reality device;
determining baseline biometric characteristics of the user of the virtual reality device based on the biometric data;
determining a stimulus category for the user of the virtual reality device from a plurality of stimulus categories based on the baseline biometric characteristics by: clustering a plurality of users based on a plurality of baseline biometric characteristics for the plurality of users, generating biometric data metrics for each of the plurality of stimulus categories based on the clustered plurality of users, and comparing the baseline biometric characteristics of the user to the biometric data metrics corresponding to the plurality of stimulus categories;
in response to identifying additional biometric data corresponding to the user of the virtual reality device, selecting virtual reality content to provide to the user of the virtual reality device based on the stimulus category and the additional biometric data; and
providing the selected virtual reality content via the virtual reality device.
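
A minimal sketch of the category-assignment portion of the claim, assuming the clustering step has already produced per-category groups of baseline vectors. Nearest-centroid matching and the heart-rate heuristic in select_vr_content are illustrative assumptions, not the patented method.

from math import dist  # Euclidean distance, Python 3.8+

def category_centroids(clustered_users):
    # clustered_users: {category: [baseline vectors of users in that cluster]}.
    # Each cluster's centroid serves as that category's biometric data metric.
    centroids = {}
    for category, vectors in clustered_users.items():
        n = len(vectors)
        centroids[category] = [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]
    return centroids

def stimulus_category(user_baseline, centroids):
    # Assign the user to the category whose metric is closest to the baseline.
    return min(centroids, key=lambda c: dist(user_baseline, centroids[c]))

def select_vr_content(category, additional_biometrics, content_by_category):
    # Hypothetical rule: pick calmer content when heart rate is well above baseline.
    mood = "calming" if additional_biometrics.get("heart_rate_delta", 0) > 15 else "default"
    return content_by_category.get((category, mood))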

US Pat. No. 10,171,856

VIEWER-AUTHORED CONTENT ACQUISITION AND MANAGEMENT SYSTEM FOR IN-THE-MOMENT BROADCAST IN CONJUNCTION WITH MEDIA PROGRAMS

FX NETWORKS, LLC, Los An...

1. A method for providing viewer-derived content for broadcast presentation in conjunction with a broadcast of a media program by a provider of the media program, comprising:(a) receiving viewer registration information uniquely associated with a viewer via an application executing on a viewer device, the application for collecting the viewer registration information, viewer-authored content and viewer-authored content metadata associated with the viewer-authored content;
(b) receiving the viewer-authored content and the viewer-authored content metadata in a content management system (CMS);
(c) processing the viewer-authored content according to the viewer-authored content metadata to generate the viewer-derived content;
(d) queuing the viewer-derived content with other viewer-derived content generated from viewer-authored content from other viewers for consideration for the broadcast presentation in conjunction with the broadcast of the media program;
(e) determining if the viewer-derived content complies with broadcast regulations or quality standards;
(f) selecting the viewer-derived content for broadcast presentation in conjunction with a live broadcast of the media program if the viewer-derived content complies with the broadcast regulations or the quality standards; and
(g) providing the viewer-derived content for broadcast in conjunction with the live broadcast of the media program;
wherein:
the viewer-authored content comprises a plurality of independent media files, each media file comprising an intra-compressed image;
the step of processing the viewer-authored content according to the viewer-authored content metadata to generate the viewer-derived content comprises the steps of:
generating an animated image file from all of the plurality of independent media files;
generating a compressed video file from the animated image file, the compressed video file having a size smaller than the animated image file and mimicking and serving as a proxy for the animated image file; and
transmitting the compressed video file to the viewer device for presentation by the application executing on the viewer device.

US Pat. No. 10,171,855

METHOD AND APPARATUS FOR SYNCHRONIZING VIDEO LIVE BROADCAST

Huawei Technologies Co., ...

1. A method, comprising:sending, by a user equipment, a video stream synchronization request to a first network side device, wherein the video stream synchronization request requests to acquire a live video of the first network side device, the acquired live video to be played synchronously by the user equipment with the live video of the first network side device, wherein the first network side device receives the video stream synchronization request after it is forwarded to the first network side device from a base station that connects the user equipment to a network;
receiving, by the user equipment, a video stream playback position synchronization parameter sent by the first network side device, wherein the video stream playback position synchronization parameter comprises a playback position parameter at a video stream sending moment and a system frame number (SFN) at the video stream sending moment, and wherein the SFN at the video stream sending moment is added to the video stream playback position synchronization parameter by the base station that connects the user equipment to the network, or the SFN is added to the video stream playback position synchronization parameter by a second network side device that receives the playback position parameter from the first network side device;
acquiring, by the user equipment, an SFN at a video stream receiving moment; and
adjusting, by the user equipment according to the SFN at the video stream sending moment and the SFN at the video stream receiving moment, the playback position parameter at the video stream sending moment.
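
A small sketch of the adjustment step, assuming LTE numbering in which the system frame number wraps at 1024 and one radio frame lasts 10 ms; the claim itself does not fix these constants.

SFN_MODULO = 1024      # assumed: LTE system frame number is 10 bits wide
FRAME_MS = 10          # assumed: one radio frame is 10 ms

def adjust_playback_position(playback_position_ms, sfn_at_send, sfn_at_receive):
    # Advance the sender's playback position by the delay implied by the SFN
    # difference, so the UE plays the live video in sync with the sender.
    elapsed_frames = (sfn_at_receive - sfn_at_send) % SFN_MODULO   # handles wrap-around
    return playback_position_ms + elapsed_frames * FRAME_MS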

US Pat. No. 10,171,853

SYSTEMS AND METHODS FOR MANAGING AVAILABLE BANDWIDTH IN A HOUSEHOLD

Rovi Guides, Inc., San J...

1. A method for managing available bandwidth in a household, the method comprising:receiving, from a user device, a request to stream a first media asset;
retrieving, from stored metadata associated with the first media asset, a minimum bandwidth value for streaming the first media asset;
comparing the minimum bandwidth value to a household bandwidth value in a household bandwidth state database, wherein the household bandwidth value indicates a bandwidth currently available in the household;
in response to determining that the minimum bandwidth value is greater than the household bandwidth value, identifying a stream of a second media asset that is consuming bandwidth in the household;
determining a time remaining for completing the stream of the second media asset;
comparing a duration value of a third media asset in a media asset database with the time remaining, wherein the third media asset has an associated bandwidth value less than the household bandwidth value; and
in response to determining that the duration value of the third media asset is greater than the time remaining, generating for display on the user device a message that indicates the bandwidth currently available in the household is insufficient to stream the first media asset and that has an option to stream the third media asset instead of the first media asset.
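
A hedged sketch of the decision flow, with dictionaries standing in for the metadata store, the household bandwidth state database, and the media asset database named in the claim; the field names are assumptions.

def handle_stream_request(first_asset, household_bw, active_streams, alternatives):
    # first_asset / alternatives entries: {'title', 'min_bandwidth', 'duration'};
    # active_streams entries: {'title', 'time_remaining'};
    # household_bw is the bandwidth currently available in the household.
    if first_asset["min_bandwidth"] <= household_bw:
        return {"action": "stream", "asset": first_asset}
    # Insufficient bandwidth: find an active stream and when it will free capacity.
    second = min(active_streams, key=lambda s: s["time_remaining"])
    time_remaining = second["time_remaining"]
    # Third asset: something the household can stream with the bandwidth available now.
    third = next((a for a in alternatives if a["min_bandwidth"] < household_bw), None)
    if third is not None and third["duration"] > time_remaining:
        return {"action": "message",
                "text": "Not enough bandwidth to stream the requested title right now.",
                "stream_instead": third}
    return {"action": "wait", "seconds": time_remaining}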

US Pat. No. 10,171,852

BROADCAST SIGNAL TRANSMISSION DEVICE, BROADCAST SIGNAL RECEPTION DEVICE, BROADCAST SIGNAL TRANSMISSION METHOD, AND BROADCAST SIGNAL RECEPTION METHOD

LG ELECTRONICS INC., Seo...

1. A method of providing a broadcast service, the method comprising:receiving a media content through an external input source, the media content including a video component having video watermarks and an audio component having audio watermarks;
extracting the audio watermarks and the video watermarks from the media content, wherein an audio watermark of the audio watermarks includes a watermark payload including server information and interval information;
generating a Uniform Resource Locator (URL) for a recovery data using the server information and the interval information;
requesting the recovery data to a recovery server using the generated URL, the recovery data including information on the media content; and
receiving the recovery data from the recovery server,
wherein the server information is used to identify the recovery server and the interval information identifies an interval of the media content in which the audio watermark is embedded,
wherein the recovery data includes an identifier of a broadcast stream for the media content and the interval information which was used to request the recovery data,
wherein the recovery data further includes a service element describing information about a broadcast service related to the media content,
wherein the service element includes a service identifier for identifying the broadcast service, version information indicating a version of service information for the broadcast service, Service Layer Signaling (SLS) protocol information and SLS protocol version information, and
wherein the SLS protocol information indicates whether a transport protocol used to transmit SLS of the broadcast service is a Real-Time Object Delivery over Unidirectional Transport (ROUTE) protocol or a MPEG Media Transport (MMT) protocol, and the SLS protocol version information indicates a version of the transport protocol.
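
The claim builds a recovery URL from the server and interval fields carried in the audio watermark. The sketch below shows one illustrative construction only; the real URL template and host naming are defined by the applicable broadcast watermarking specification, and the domain used here is hypothetical.

def recovery_url(server_code, interval_code, scheme="https"):
    # Illustrative only: combine the watermark payload's server field (identifying
    # the recovery server) and interval field (identifying the watermarked interval
    # of the media content) into a request URL. The domain is a placeholder.
    return f"{scheme}://{server_code}.example-recovery.tv/recovery?interval={interval_code}"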

US Pat. No. 10,171,851

VIDEO CONTENT DISTRIBUTION SYSTEM AND CONTENT MANAGEMENT SERVER

COLOPL, INC., Tokyo (JP)...

1. A video content distribution system comprising:a user terminal on which contents are viewable; and
a content management server connected to the user terminal via a communication network,
wherein the user terminal comprises:
a first reception unit configured to receive field-of-view video data from the content management server;
a display control unit configured to generate instructions for displaying on a display unit a field-of-view video based on the received field-of-view video data;
a viewpoint switch request signal generating unit configured to generate, in response to input operation on the user terminal, a viewpoint switch request signal for requesting a switch from a first viewpoint to a second viewpoint in the field-of-view video displayed on the display unit; and
a first transmission unit configured to transmit the generated viewpoint switch request signal to the content management server,
wherein the content management server comprises:
a second reception unit configured to receive the viewpoint switch request signal from the user terminal;
a viewing start time determining unit configured to determine a first viewing start time at which the display unit starts displaying the field-of-view video from the first viewpoint, and a second viewing start time at which the display unit starts displaying the field-of-view video from the second viewpoint;
a viewing stop time determining unit configured to determine a first viewing stop time at which the display unit stops displaying the field-of-view video from the first viewpoint, and a second viewing stop time at which the display unit stops displaying the field-of-view video from the second viewpoint;
a viewing period determining unit configured to determine a first viewing period in which the field-of-view video is displayed from the first viewpoint based on the first viewing start time and the first viewing stop time, and to determine a second viewing period in which the field-of-view video is displayed from the second viewpoint based on the second viewing start time and the second viewing stop time;
a total user charge amount calculating unit configured to determine a total amount to be charged to the user based on a combination of charges for a first viewing duration at the first viewpoint and a second viewing duration at the second viewpoint, wherein a charge per unit time for the first viewpoint is different from a charge per unit time for the second viewpoint; and
a second transmission unit configured to transmit to the user terminal field-of-view video data that is associated with one of the first viewpoint or the second viewpoint,
wherein the content management server is configured to continue transmitting the field-of-view video data that is associated with the first viewpoint at least for a time period from a time when the first transmission unit transmits the viewpoint switch request signal to the second transmission unit to a time when the first reception unit receives the field-of-view video data that is associated with the second viewpoint from the content management server, or at least for a time period from a time when the second reception unit receives the viewpoint switch request signal to a time when the second transmission unit transmits the field-of-view video data that is associated with the second viewpoint, and
wherein the viewing stop time determining unit and the viewing start time determining unit are configured to determine the first viewing stop time and the second viewing start time, respectively, when the second reception unit receives the viewpoint switch request signal.
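
A minimal sketch of the total-charge calculation: viewing periods are accumulated per viewpoint and billed at viewpoint-specific rates, as the claim requires; the concrete rates and the per-second unit are assumptions.

def total_charge(viewing_periods, rate_per_second):
    # viewing_periods: {viewpoint_id: [(start, stop), ...]} with times in seconds.
    # rate_per_second: {viewpoint_id: charge per unit time}, differing per viewpoint.
    total = 0.0
    for viewpoint, periods in viewing_periods.items():
        duration = sum(stop - start for start, stop in periods)
        total += duration * rate_per_second[viewpoint]
    return total

# e.g. 120 s at a default viewpoint plus 30 s at a premium close-up viewpoint
charge = total_charge({"wide": [(0, 120)], "closeup": [(120, 150)]},
                      {"wide": 0.01, "closeup": 0.05})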

US Pat. No. 10,171,850

TRUNK MANAGEMENT METHOD AND APPARATUS FOR VIDEO SURVEILLANCE SYSTEMS

Hangzhou Hikvision System...

1. A trunk management method for a video surveillance system, the video surveillance system including a first video server, a plurality of clients each having a predetermined priority, and a plurality of surveillance equipment items, the first video server accessing the plurality of surveillance equipment items over a bandwidth-limited backbone network, the method comprising:sending a video service request to a first surveillance equipment item of the plurality of surveillance equipment items by a first client of the plurality of clients;
establishing, by the first client, a new video session between the first video server and the first client;
determining whether there is an existing video session between the first video server and the first surveillance equipment item;
(i) if there is not an existing video session between the first video server and the first surveillance equipment item, determining whether a network bandwidth between the first video server and the plurality of surveillance equipment items reaches full load,
(a) if the network bandwidth does not reach full load, establishing, by the first video server, a new video session between the first surveillance equipment item and the first video server and updating a connection priority of the first surveillance equipment item as the priority of the first client, wherein the connection priority of each of the plurality of surveillance equipment items is a priority of the connection between the first video server and this surveillance equipment item, and
(b) if the network bandwidth has reached full load, querying a lowest connection priority among connection priorities of all surveillance equipment items connected to the first video server,
if the priority of the first client is higher than the lowest connection priority among the connection priorities of all surveillance equipment items connected to the first video server, disconnecting a connection between a surveillance equipment item having the lowest connection priority and the first video server, kicking away all clients connected to the surveillance equipment item having the lowest connection priority, and establishing, by the first video server, a new video session between the first surveillance equipment item and the first video server;
(ii) if there is an existing video session between the first video server and the first surveillance equipment item, querying priorities of all clients connected to the first surveillance equipment item;
(iii) determining whether the priority of the first client is higher than a highest priority among the priorities of all the clients connected to the first surveillance equipment item;
(a) if the priority of the first client is higher than a highest priority among the priorities of all the clients connected to the first surveillance equipment item, updating the connection priority of the first surveillance equipment item as the priority of the first client, and
(b) if the priority of the first client is not higher than the highest priority among the priorities of all the clients connected to the first surveillance equipment item, the existing connection priority of the first surveillance equipment item is not updated;
wherein the connection priority of each surveillance equipment item of the plurality of surveillance equipment items is set to the highest priority among the priorities of all clients that are connected to that surveillance equipment item.
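
A compact sketch of the admission logic over the bandwidth-limited backbone, with a simple session-count capacity standing in for real bandwidth accounting; the class and method names are illustrative.

class TrunkManager:
    # Minimal sketch; 'capacity' is the number of camera sessions the backbone can
    # carry at once, a stand-in for the claim's bandwidth full-load check.
    def __init__(self, capacity):
        self.capacity = capacity
        self.sessions = {}        # camera_id -> connection priority
        self.clients = {}         # camera_id -> set of (client_id, priority)

    def request(self, client_id, client_priority, camera_id):
        if camera_id not in self.sessions:
            if len(self.sessions) >= self.capacity:
                victim = min(self.sessions, key=self.sessions.get)
                if client_priority <= self.sessions[victim]:
                    return False                  # refuse: full load and not higher priority
                self.sessions.pop(victim)         # disconnect the lowest-priority camera
                self.clients.pop(victim, None)    # and kick away its connected clients
            self.sessions[camera_id] = client_priority
        else:
            # Connection priority is the highest priority of all connected clients.
            self.sessions[camera_id] = max(self.sessions[camera_id], client_priority)
        self.clients.setdefault(camera_id, set()).add((client_id, client_priority))
        return True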

US Pat. No. 10,171,849

BROADCAST SIGNAL TRANSMISSION DEVICE, BROADCAST SIGNAL RECEPTION DEVICE, BROADCAST SIGNAL TRANSMISSION METHOD, AND BROADCAST SIGNAL RECEPTION METHOD

LG ELECTRONICS INC., Seo...

1. A method of transmitting a broadcast signal by a broadcast signal transmitter, the method comprising:generating service signaling information for signaling a broadcast service and service data of the broadcast service, wherein the service data comprises service components included in the broadcast service and wherein one of the service components is a stereoscopic video which is encoded by Scalable High Efficiency Video Coding (SHVC);
generating a service list table, the service list table comprising bootstrap information for the service signaling information;
processing the service components, the service signaling information, and the service list table as Internet protocol (IP) packets;
processing the IP packets to generate a broadcast signal and transmitting the broadcast signal through a broadcast network,
wherein the stereoscopic video includes a multi-view view position Supplemental Enhancement Information (SEI) message indicating left and right view;
wherein view position information in the multi-view view position SEI message indicates orders of views from left to right; and
wherein the view position information is set to 0 for a left-most view and increases by 1 for each subsequent view from left to right.

US Pat. No. 10,171,848

DIGITAL BROADCASTING RECEIVER AND METHOD FOR CONTROLLING THE SAME

LG Electronics Inc., Seo...

1. A method of processing data in a broadcast transmitting system, the method comprising:encoding data in at least one first data packet by an encoder,
wherein the at least one first data packet includes a first header and a first payload,
wherein the first header includes stuffing indication information related to stuffing data in the at least one first data packet,
wherein the first payload includes at least one second data packet,
wherein the at least one second data packet has a second header and a second payload,
wherein the second payload includes two or more IP (Internet protocol) packets which carry service components of a service, and
wherein the second header includes information for indicating a number of the two or more IP packets included in the second payload; and
transmitting a transmission frame including the encoded data by a transmitter,
wherein the transmission frame further includes fast service acquisition information providing information necessary to locate service signaling information, service type information of the service, and channel information of the service, and
wherein the service signaling information includes access information of the service components.

US Pat. No. 10,171,847

INFORMATION DEVICE AND DISTRIBUTION DEVICE

FUNAI ELECTRIC CO., LTD.,...

1. An information device comprising:a communication component that communicates with an external device; and
a controller that downloads video data from the external device through the communication component, performs processing to convert a format of the video data into a playable format and executes an application for playing the video data,
the controller sending a download request for each divided video data to sequentially download the divided video data, with the divided video data being obtained by dividing up the video data,
the processing by the controller to convert the format of the video data into the playable format including dividing the divided video data into a plurality of divided files and producing a playlist file for a playback instruction of the divided files every time the divided video data is downloaded.

US Pat. No. 10,171,846

SYSTEM AND METHOD FOR ROUTING MEDIA

1. A method for managing streaming of video content to a client device, the method comprising:providing the video content to a content distribution network for storage in a plurality of geographically separated resources of the content distribution network;
dynamically selecting one or more advertisement media clips based on statistical information associated with a user of the client device;
receiving, from the client device via a packet-based telecommunication network, signaling to have the stored video content streamed to the client device;
and
in response to the received signaling, transmitting to the client device, via the packet-based telecommunication network and in one or more files having a format compatible with a media player on the client device, (i) an identification of one or more of the resources of the content distribution network available to facilitate streaming of one or more segments of the stored video content to the client device, the identification being dependent at least in part on a relationship between a geographic location of the client device and geographic locations of the resources of the content distribution network, and (ii) an identification of an advertising server, the identification of the advertising server being dependent at least in part on a relationship between the geographic location of the client device and a geographic location of the advertising server,
wherein the one or more files, when processed by the client device, cause the client device to communicate with the identified one or more resources of the content distribution network and the advertising server to cause the one or more segments of the stored video to be streamed to the client device by the identified one or more resources of the content distribution network and cause the one or more selected advertisement media clips to be streamed from the advertising server to the client device.

US Pat. No. 10,171,845

VIDEO SEGMENT MANAGER AND VIDEO SHARING ACCOUNTS FOR ACTIVITIES PERFORMED BY A USER ON SOCIAL MEDIA

International Business Ma...

1. A computer program product comprising:one or more computer readable storage media and program instructions stored on at least one of the one or more computer readable storage media, the program instructions comprising:
program instructions to identify a plurality of multimedia files that are of interest to a user based on historical activity of the user viewing multimedia files, wherein multimedia files of interest are determined based on metadata stored on one or more databases;
program instructions to determine a ranking of individual multimedia files within the plurality of multimedia files that are of interest to the user based upon an algorithm for generating a novel multimedia file, wherein determining further comprises using a criterion for each of the plurality of the user interested multimedia files;
program instructions to create a catalog of the identified plurality of multimedia files that are of interest to the user, wherein the catalog includes the identified plurality of multimedia files organized into one or more groups of multimedia files based on user preferences and characteristics of the multimedia files;
program instructions to analyze a plurality of catalogs that include multimedia files that are of interest to the user based upon an algorithm, wherein the plurality of catalogs includes the created catalog of the identified plurality of multimedia files that are of interest to the user;
program instructions to select one or more multimedia file segments from the catalog of the identified plurality of multimedia files that are of interest to the user;
program instructions to, responsive to receiving, from the user, a selection of one or more multimedia file frames from the plurality of catalogs that include multimedia files that are of interest to the user, determine a similarity value for the selected one or more multimedia file frames according to the algorithm;
program instructions to generate the novel multimedia file, wherein the novel multimedia file is generated by combining the selected one or more multimedia file segments;
program instructions to send, by one or more processors, the one or more novel multimedia file frames to another user;
program instructions to determine an order for the plurality of multimedia files that are of interest to the user according to user preferences associated with the user, wherein the user preferences dictate a truncation of user interested multimedia file frames comprising the plurality of multimedia files that are of interest to the user;
program instructions to truncate a user interested multimedia file according to user preferences, program instructions to display the truncated user interested multimedia file frames, wherein displaying further comprises presenting a searchable index of a plurality of novel multimedia files;
program instructions to, responsive to displaying the novel multimedia file, identify one or more novel multimedia file frames included in the novel multimedia file of interest to the user;
program instructions to send the one or more novel multimedia file frames to another user; and
program instructions to update user preferences information of an inputted keyword by the user, a description in one of the plurality of user interested multimedia files, and a user profile associated with the user in another application.

US Pat. No. 10,171,843

VIDEO SEGMENT MANAGER

International Business Ma...

1. A method comprising:identifying, by one or more processors, a plurality of multimedia files that are of interest to a user based on historical activity of the user viewing multimedia files;
determining, by one or more processors, an order for the plurality of multimedia files that are of interest to the user according to user preferences associated with the user, wherein the user preferences dictate a truncation of user interested multimedia file frames comprising the plurality of multimedia files that are of interest to the user;
creating, by one or more processors, a catalog of the identified plurality of multimedia files that are of interest to the user, wherein the catalog includes the identified plurality of multimedia files organized into one or more groups of multimedia files based on user preferences and characteristics of the multimedia files;
selecting, by one or more processors, one or more multimedia file segments from the catalog of the identified plurality of multimedia files that are of interest to the user;
generating, by one or more processors, a novel multimedia file, wherein the novel multimedia file is generated by combining the selected one or more multimedia file segments;
truncating, by one or more processors, a user interested multimedia file according to user preferences; and
displaying, by one or more processors, the truncated user interested multimedia file frames.

US Pat. No. 10,171,842

HRD DESCRIPTOR AND BUFFER MODEL OF DATA STREAMS FOR CARRIAGE OF HEVC EXTENSIONS

QUALCOMM Incorporated, S...

1. A method of processing video data, the method comprising:obtaining a data stream comprising a plurality of elementary streams and a High Efficiency Video Coding (HEVC) timing and Hypothetical Reference Decoder (HRD) descriptor, wherein the HEVC timing and HRD descriptor comprises a target schedule index syntax element indicating an index of a delivery schedule;
identifying, based on a set of parameters, a syntax element in an array of syntax elements in a video parameter set (VPS), wherein:
the VPS comprises a plurality of HRD parameters syntax structures, wherein each respective HRD parameters syntax structure of the plurality of HRD parameters syntax structures comprises a respective set of HRD parameters,
each respective syntax element of the array of syntax elements specifies an index of an HRD parameters syntax structure in the plurality of HRD parameters syntax structures, and
the set of parameters comprises a parameter having a value equal to a value of the target schedule index syntax element; and
identifying, based on an index specified by the identified syntax element, a particular HRD parameters syntax structure in the plurality of HRD parameters syntax structures as being applicable to a particular elementary stream that is part of the operation point, the plurality of elementary streams including the particular elementary stream.

US Pat. No. 10,171,840

METHOD FOR PRODUCING VIDEO CODING AND PROGRAMME-PRODUCT

SIEMENS AKTIENGESELLSCHAF...

1. Method for video coding with the procedural steps:provision of a prediction error matrix;
conversion of the prediction error matrix by coefficient sampling into a series of symbols; and
performing context-adaptive arithmetic encoding of the symbols on the basis of symbol frequencies, for which the distribution is selected depending on an already encoded symbol;
wherein:
the context-adaptive arithmetic encoding of the symbols includes, for a symbol being encoded, selecting from different predetermined distributions of symbol frequencies a particular predetermined distribution of symbol frequencies based on the symbol encoded immediately beforehand; and
the predetermined distribution of symbol frequencies indicates the likelihood of different types of symbols occurring immediately following the type of the symbol encoded immediately beforehand based on known statistical interdependencies between different types of symbols occurring in succession.
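
A small sketch of the context-selection idea: for each symbol type, a predetermined distribution of the symbols that follow it is kept, and the coder picks the distribution keyed by the symbol encoded immediately beforehand. The arithmetic coder itself is omitted, and estimating the tables from training data is an assumption made for illustration.

from collections import Counter, defaultdict

def conditional_frequencies(training_symbols):
    # For each symbol type, tally how often each other type immediately follows it;
    # these tallies play the role of the predetermined distributions of symbol frequencies.
    tables = defaultdict(Counter)
    for prev, curr in zip(training_symbols, training_symbols[1:]):
        tables[prev][curr] += 1
    return tables

def pick_distribution(tables, previously_encoded_symbol):
    # Context selection: the model is switched based only on the symbol encoded
    # immediately beforehand; feeding it to an arithmetic coder is not shown here.
    return tables[previously_encoded_symbol]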

US Pat. No. 10,171,839

GENERATING TRANSFORMS FOR COMPRESSING AND DECOMPRESSING VISUAL DATA

Massachusetts Institute o...

1. A method for encoding data, the method comprising:encoding a residual of a first portion of an array of data to generate a first set of coefficients;
decoding the first set of coefficients to generate a decoded representation of the first portion;
computing an estimated covariance function for a residual of a second portion of the array of data based on a model that includes a gradient of a plurality of boundary data values located on a boundary of the decoded representation of the first portion;
computing a set of transform basis functions from the estimated covariance function; and
encoding the residual of the second portion using a first transform that uses the computed set of transform basis functions, including generating a predicted representation of the second portion, and applying the first transform to a difference between the second portion and the predicted representation of the second portion.
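
One standard way to turn an estimated covariance into transform basis functions is an eigendecomposition (a Karhunen-Loeve-style transform). The sketch below shows that step only; it does not reproduce the claim's gradient-based covariance model for the boundary data.

import numpy as np

def klt_basis_from_covariance(cov):
    # Eigenvectors of the (estimated) covariance, ordered by decreasing eigenvalue,
    # form one possible set of transform basis functions.
    eigvals, eigvecs = np.linalg.eigh(cov)          # cov assumed symmetric
    order = np.argsort(eigvals)[::-1]
    return eigvecs[:, order]                        # columns are basis functions

def transform_residual(residual, basis):
    # Project the residual of the second portion onto the derived basis.
    return basis.T @ residual.ravel()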

US Pat. No. 10,171,838

METHOD AND APPARATUS FOR PACKING TILE IN FRAME THROUGH LOADING ENCODING-RELATED INFORMATION OF ANOTHER TILE ABOVE THE TILE FROM STORAGE DEVICE

MEDIATEK INC., Hsin-Chu ...

1. A method for video encoding a frame divided into a plurality of tiles, each having a plurality of blocks, the method comprising:storing encoding-related information derived from a plurality of blocks in a last block row of a first tile of the frame into a storage device, wherein the encoding-related information comprises a plurality of encoding-related data derived from the blocks in the last block row of the first tile, respectively;
reading the encoding-related information from the storage device; and
performing entropy encoding upon blocks in a first block row of a second tile of the frame based at least partly on the encoding-related information read from the storage device;
wherein the first block row of the second tile is vertically adjacent to the last block row of the first tile, and the entropy encoding of the first block row of the second tile is started before entropy encoding of the last block row of the first tile is accomplished;
wherein the encoding-related information is stored in the storage device before the entropy encoding is performed upon any block in the frame;
wherein the frame is encoded using a first-stage encoding flow and a second-stage encoding flow following the first-stage encoding flow; each of the first-stage encoding flow and the second-stage encoding flow is applied to all blocks within the frame; entropy encoding is performed in the second-stage encoding flow only; the step of storing the encoding-related information into the storage device is performed in the first-stage encoding flow; and the step of reading the encoding-related information from the storage device is performed in the second-stage encoding flow;
wherein the first-stage encoding flow comprises generating a probability table for the frame; each of the blocks in the last block row of the first tile and the blocks in the first block row of the second tile is split into one or more partitions for coding; and the step of performing the entropy encoding upon blocks in the first block row of the second tile comprises:
when encoding a syntax element of a current partition in the first block row of the second tile, determining a table index based at least partly on encoding-related information of at least one specific partition in the last block row of the first tile, wherein the at least one specific partition is located above the current partition; and
selecting a probability set from the probability table for encoding the syntax element of the current partition according to the table index.

US Pat. No. 10,171,837

PREDICTIVE VALUE DATA SET COMPRESSION

HERE Global B.V., Eindho...

1. An apparatus comprising:at least one processor; and
at least one memory including computer program code and operable to store a data set comprising values for a plurality of pixels in an image, the values relating to relative distances of objects represented in the image;
the at least one memory and the computer program code configured to, with the at least one processor, cause the apparatus at least to:
identify an image of a geographic area;
identify a depth data set collected by a distance ranging device, the depth data set comprising depth values correlated to pixels in the image of the geographic area, wherein the depth data set corresponds to one or more objects in the image;
calculate a predicted value for an exponent for a particular point of the image based on values of neighboring points of the particular point and a predicted value for a mantissa of the particular point of the image based on values of neighboring points of the particular point, where the predicted value for the mantissa is calculated based on an identified subset of neighboring points of the image having an exponent value within a predetermined range of the exponent value of the particular point;
calculate a comparator between the predicted value for the exponent for the particular point and an actual value for the exponent for the particular point and a comparative value between an actual mantissa and the predicted mantissa of the particular point; and
at least one of store or communicate the comparator for the particular point to or from the memory.
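
A hedged sketch of the prediction step, splitting each depth value into mantissa and exponent with Python's math.frexp; the neighbour-averaging rule and the exponent window are illustrative stand-ins for whatever predictor the apparatus actually uses.

import math

def predict_exponent(neighbor_values):
    exponents = [math.frexp(v)[1] for v in neighbor_values]
    return round(sum(exponents) / len(exponents))

def predict_mantissa(neighbor_values, point_exponent, exp_window=1):
    # Per the claim, only neighbours whose exponent lies within a predetermined
    # range of the particular point's exponent contribute to the mantissa prediction.
    mantissas = [m for m, e in map(math.frexp, neighbor_values)
                 if abs(e - point_exponent) <= exp_window]
    return sum(mantissas) / len(mantissas) if mantissas else 0.5

def residuals(actual, neighbor_values):
    m, e = math.frexp(actual)
    comparator_exp = e - predict_exponent(neighbor_values)       # exponent comparator
    comparative_man = m - predict_mantissa(neighbor_values, e)   # mantissa comparative value
    return comparator_exp, comparative_man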

US Pat. No. 10,171,836

METHOD AND DEVICE FOR PROCESSING VIDEO SIGNAL

LG ELECTRONICS INC., Seo...

1. A method for processing a video signal by a decoding apparatus, the method comprising:obtaining parallel processing information from the video signal, the parallel processing information indicating a size of a current parallel processing unit;
obtaining an inter-view motion vector of a current coding unit included in the current parallel processing unit using an inter-view motion vector of a neighboring block of the current parallel processing unit and the size of the current parallel processing unit, wherein the neighboring block is adjacent to the current parallel processing unit and not included in the current parallel processing unit, and wherein the current coding unit includes one or more current prediction units; and
obtaining motion vectors of the one or more current prediction units in parallel using the inter-view motion vector of the current coding unit, wherein the inter-view motion vector indicates a corresponding block of a current prediction unit, the corresponding block being positioned in a different view from a current view of the current prediction unit,
wherein obtaining the motion vectors of the one or more current prediction units in parallel includes:
generating a motion vector list for the current prediction unit, wherein a motion vector of the corresponding block is added in the motion vector list when a picture order count (POC) of a reference picture for the corresponding block in the different view is identical to a POC of a reference picture for the current prediction unit in the current view, and
wherein the motion vector of the corresponding block is not added in the motion vector list when the POC of the reference picture for the corresponding block in the different view is different from the POC of the reference picture for the current prediction unit in the current view, and
obtaining a motion vector of the current prediction unit from the motion vector list.

US Pat. No. 10,171,835

METHOD AND APPARATUS FOR ENCODING AND DECODING IMAGE

Samsung Display Co., Ltd....

1. A method of encoding video data comprising a plurality of pictures, the method comprising:storing data of at least one first picture in the video data that is already encoded; and
referring to the stored data and using intra-prediction to encode blocks in a current picture following the first picture,
wherein the storing the data comprises:
calculating k similarity values by comparing each pixel data of a first horizontal line of each of k previously encoded pictures with pixel data of a first horizontal line of the current picture, wherein k is a natural number greater than one;
selecting one of the k previously encoded pictures as corresponding to a biggest similarity value of the k calculated similarity values; and
storing pixel data of a first horizontal line of the selected previously encoded picture as third reference data, and
wherein the referring to the stored data and using intra-prediction to encode the blocks in the current picture comprises:
loading the third reference data; and
intra-predicting a block comprising pixels of the first horizontal line of the current picture based on the third reference data.

US Pat. No. 10,171,834

METHODS AND APPARATUS FOR INTRA PICTURE BLOCK COPY IN VIDEO COMPRESSION

MEDIATEK INC., Hsinchu (...

1. A method of intra picture block copy in video compression, comprising:identifying a first block of pixels of a picture as a reference block for reconstructing a second block of pixels of the picture;
determining an overlapped region of the second block that overlaps with the first block, the first block having a first corner, and the second block having a second corner corresponding to the first corner and overlapping the first block;
splitting the overlapped region into a first portion and a second portion along a division line that is parallel to a block vector or a diagonal line of the overlapped region, the block vector indicating a spatial relationship between the first corner of the first block and the second corner of the second block, and the diagonal line of the overlapped region being defined based on a third corner of the overlapped region that is at a same position as the second corner of the second block;
reconstructing pixels in the first portion of the overlapped region based on a first set of pixels of the first block in a manner that values of the reconstructed pixels in the first portion change in a direction from a border of the overlapped region adjacent to the first set of pixels of the first block to the division line; and
reconstructing pixels in the second portion of the overlapped region based on a second set of pixels of the first block in a manner that values of the reconstructed pixels in the second portion change in a direction from a border of the overlapped region adjacent to the second set of pixels of the first block to the division line,
wherein the first set of pixels of the first block is adjacent to the first portion of the overlapped region, and the second set of pixels of the first block is adjacent to the second portion of the overlapped region.

US Pat. No. 10,171,833

ADAPTIVE SWITCHING OF COLOR SPACES, COLOR SAMPLING RATES AND/OR BIT DEPTHS

Microsoft Technology Lice...

1. A computing device comprising:one or more buffers configured to store video in a sequence; and
a video encoder or image encoder configured to perform operations comprising:
encoding the video in the sequence, including:
switching color spaces, color sampling rates and/or bit depths spatially and/or temporally between at least some units of the video within the sequence during the encoding, the color spaces including an RGB-type color space and a YCoCg color space, wherein the encoder is configured to select between:
for lossy coding, using color space conversion operations to switch between the RGB-type color space and the YCoCg color space; and
for lossless coding, using invertible color space conversion operations to switch between the RGB-type color space and the YCoCg color space; and
selectively performing deblock filtering of previously reconstructed content according to one or more rules, including adjusting strength of the deblock filtering depending on whether primary components of two adjacent blocks have non-zero residual values; and
outputting encoded data in a bitstream, the encoded data including one or more signals indicating how the color spaces, the color sampling rates and/or the bit depths switch between the at least some units of the video within the sequence.
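
For the lossless path the claim requires invertible conversions between an RGB-type space and YCoCg. The lifting-based YCoCg-R form sketched below is one well-known invertible variant, shown only as an example of such an exact round trip; it is not asserted to be this encoder's specific transform.

def rgb_to_ycocg_r(r, g, b):
    # Lossless (lifting-based) YCoCg-R forward transform on integer samples.
    co = r - b
    t = b + (co >> 1)
    cg = g - t
    y = t + (cg >> 1)
    return y, co, cg

def ycocg_r_to_rgb(y, co, cg):
    # Exact inverse: recovers the original integers bit for bit.
    t = y - (cg >> 1)
    g = cg + t
    b = t - (co >> 1)
    r = b + co
    return r, g, b

# Round-trip check over a coarse grid of 8-bit RGB values.
assert all(ycocg_r_to_rgb(*rgb_to_ycocg_r(r, g, b)) == (r, g, b)
           for r in range(0, 256, 17) for g in range(0, 256, 17) for b in range(0, 256, 17))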

US Pat. No. 10,171,832

MOVING PICTURE CODING DEVICE, MOVING PICTURE CODING METHOD, AND MOVING PICTURE CODING PROGRAM, AND MOVING PICTURE DECODING DEVICE, MOVING PICTURE DECODING METHOD, AND MOVING PICTURE DECODING PROGRAM

JVC KENWOOD Corporation, ...

1. A moving picture decoding device that decodes a bitstream in which a moving picture is coded using motion compensation in units of blocks acquired by dividing each picture of the moving picture, the moving picture decoding device comprising:a decoding unit configured to decode information representing a motion vector predictor to be selected from a motion vector predictor candidate list having a predefined number of motion vector predictor candidates, together with a motion vector difference;
a motion vector predictor candidate generating unit configured to derive a plurality of motion vector predictor candidates by making a prediction based on a motion vector of one of decoded blocks that are neighboring to a decoding target block in space or time and construct a motion vector predictor candidate list;
a motion vector predictor redundant candidate removing unit configured to compare whether values of vectors are the same among motion vector predictor candidates predicted from a decoded block neighboring in space and remove the motion vector predictor candidates having the same values of vectors from the motion vector predictor candidate list with at least one being left without comparing whether or not a value of vector of a motion vector predictor predicted from a decoded block that is neighboring in space and a value of vector of a motion vector predictor predicted from a decoded block neighboring in time are the same;
a motion vector predictor candidate adding unit configured to repeatedly add the motion vector predictor candidates to the motion vector predictor candidate list until the number of motion vector predictor candidates reaches the predefined number if the number of the motion vector predictor candidates in the motion vector predictor candidate list is smaller than the predefined number, whereby the number of the motion vector predictor candidates in the motion vector predictor candidate list reaches the predefined number;
a motion vector predictor candidate number limiting unit configured to remove the motion vector predictor candidates exceeding the predefined number from the motion vector predictor candidate list if the number of the motion vector predictor candidates in the motion vector predictor candidate list is greater than the predefined number, whereby the number of the motion vector predictor candidates in the motion vector predictor candidate list is limited to the predefined number;
a motion vector predictor selecting unit configured to select a motion vector predictor from the motion vector predictor candidate list based on information representing the decoded motion vector predictor to be selected; and
a motion vector calculating unit configured to calculate a motion vector used for motion compensation by adding the selected motion vector predictor and the motion vector difference together,
wherein the motion vector predictor candidate adding unit repeatedly adds more than one (0,0) motion vectors allowing duplication as the motion vector predictor candidates, and
wherein the motion vector predictor redundant candidate removing unit compares whether values of vectors are the same between a first motion vector predictor candidate predicted from a first decoded block neighboring in space and a second motion vector predictor candidate predicted from a second decoded block neighboring in space and removes, when the values of vectors are the same, the second motion vector predictor candidate from the motion vector predictor candidate list.
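
A simplified sketch of the decoder-side candidate list handling described above: duplicates are removed among spatial candidates only (here by removing all spatial duplicates, which is stricter than the pairwise comparison the claim recites), the temporal candidate is appended without comparison, (0, 0) padding may duplicate existing entries, and the list is clipped to the predefined number.

def build_mvp_candidate_list(spatial_candidates, temporal_candidate, predefined_number=2):
    candidates = []
    for mv in spatial_candidates:
        if mv not in candidates:                # redundant-candidate removal among spatial MVs only
            candidates.append(mv)
    if temporal_candidate is not None:
        candidates.append(temporal_candidate)   # never compared against spatial candidates
    while len(candidates) < predefined_number:
        candidates.append((0, 0))               # repeated padding, duplication allowed
    return candidates[:predefined_number]       # limit the list to the predefined number

def reconstruct_mv(candidates, predictor_index, mvd):
    # Motion vector = selected predictor + decoded motion vector difference.
    px, py = candidates[predictor_index]
    return px + mvd[0], py + mvd[1]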

US Pat. No. 10,171,831

MOVING PICTURE CODING DEVICE, MOVING PICTURE CODING METHOD, AND MOVING PICTURE CODING PROGRAM, AND MOVING PICTURE DECODING DEVICE, MOVING PICTURE DECODING METHOD, AND MOVING PICTURE DECODING PROGRAM

JVC KENWOOD Corporation, ...

1. A moving picture decoding device that decodes a bitstream in which a moving picture is coded using motion compensation in units of blocks acquired by dividing each picture of the moving picture, the moving picture decoding device comprising:a decoding unit configured to decode information representing a motion vector predictor to be selected from a motion vector predictor candidate list having a predefined number of motion vector predictor candidates, together with a motion vector difference;
a motion vector predictor candidate generating unit configured to derive a plurality of motion vector predictor candidates by making a prediction based on a motion vector of one of decoded blocks that are neighboring to a decoding target block in space or time and construct a motion vector predictor candidate list;
a motion vector predictor redundant candidate removing unit configured to compare whether values of vectors are the same among motion vector predictor candidates predicted from a decoded block neighboring in space and remove the motion vector predictor candidates having the same values of vectors from the motion vector predictor candidate list with at least one being left without comparing whether or not a value of vector of a motion vector predictor predicted from a decoded block that is neighboring in space and a value of vector of a motion vector predictor predicted from a decoded block neighboring in time are the same;
a motion vector predictor candidate adding unit configured to repeatedly add the motion vector predictor candidates to the motion vector predictor candidate list until the number of motion vector predictor candidates reaches the predefined number if the number of the motion vector predictor candidates in the motion vector predictor candidate list is smaller than the predefined number, whereby the number of the motion vector predictor candidates in the motion vector predictor candidate list reaches the predefined number;
a motion vector predictor selecting unit configured to select a motion vector predictor from the motion vector predictor candidate list based on information representing the decoded motion vector predictor to be selected; and
a motion vector calculating unit configured to calculate a motion vector used for motion compensation by adding the selected motion vector predictor and the motion vector difference together,
wherein the motion vector predictor candidate adding unit repeatedly adds more than one (0,0) motion vectors allowing duplication as the motion vector predictor candidates.

US Pat. No. 10,171,830

MOVING PICTURE CODING DEVICE, MOVING PICTURE CODING METHOD, AND MOVING PICTURE CODING PROGRAM, AND MOVING PICTURE DECODING DEVICE, MOVING PICTURE DECODING METHOD, AND MOVING PICTURE DECODING PROGRAM

JVC KENWOOD Corporation, ...

1. A moving picture coding device that codes moving picture data in units of blocks acquired by partitioning each picture of the moving picture data, the moving picture coding device comprising:a candidate list constructing unit configured to derive motion information of a coded block included in a picture that is different in time from a picture including a coding target block that is a target for the coding, derive a temporal motion information candidate of the coding target block based on the derived motion information of the coded block, derive a plurality of candidates based on motion information of a plurality of coded neighboring blocks located at predetermined positions neighboring to the coding target block in space, derive spatial motion information candidates based on the plurality of derived candidates, and construct a list of motion information candidates including the derived temporal motion information candidate and the derived spatial motion information candidates; and
a coding unit configured to code information representing whether or not the coding is performed in a merging prediction mode, code an index designating a predetermined motion information candidate included in the list in a case where the coding is determined to be performed in the merging prediction mode, derive the motion information of the coding target block based on the motion information candidate designated by the coded index, and code the coding target block,
wherein the candidate list constructing unit does not compare all possible combinations of the spatial motion information candidates with each other but compares predefined partial combinations of the spatial motion information candidates with each other and, in a case where there are candidates having the same moving information out of the candidates, derives one spatial motion information candidate from the candidates of which the motion information is the same.

US Pat. No. 10,171,829

PICTURE ENCODING DEVICE AND PICTURE ENCODING METHOD

JVC KENWOOD Corporation, ...

1. A picture encoding device that encodes a picture and encodes a difference quantization parameter in a unit of a quantization coding block which is divided from the picture and is a management unit of a quantization parameter, comprising:a quantization parameter derivation unit that derives a quantization parameter of a first quantization coding block;
a prediction quantization parameter derivation unit that derives a prediction quantization parameter using the quantization parameters of two quantization coding blocks which precede the first quantization coding block in order of encoding;
a difference quantization parameter derivation unit that derives a difference quantization parameter of the first quantization coding block, using a difference between the quantization parameter of the first quantization coding block and the prediction quantization parameter; and
an encoder that encodes the difference quantization parameter,
wherein the prediction quantization parameter derivation unit derives the prediction quantization parameter using two quantization parameters: one quantization parameter of a previous quantization coding block which immediately precedes the first quantization coding block to be encoded in order of encoding and another quantization parameter of a quantization coding block which precedes the previous quantization coding block in order of encoding and is not spatially neighboring the first quantization coding block to be encoded.
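
A minimal sketch of deriving the difference quantization parameter from the two quantization coding blocks that precede the current one in encoding order; the rounded average is one plausible predictor, not necessarily the one the device uses.

def predicted_qp(prev_qp, prev_prev_qp):
    # prev_qp: QP of the immediately preceding block in encoding order;
    # prev_prev_qp: QP of the block before that, which need not be a spatial neighbour.
    return (prev_qp + prev_prev_qp + 1) // 2     # rounded average (illustrative choice)

def encode_delta_qp(current_qp, prev_qp, prev_prev_qp):
    return current_qp - predicted_qp(prev_qp, prev_prev_qp)

def decode_qp(delta_qp, prev_qp, prev_prev_qp):
    return delta_qp + predicted_qp(prev_qp, prev_prev_qp)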

US Pat. No. 10,171,828

MODIFICATION OF UNIFICATION OF INTRA BLOCK COPY AND INTER SIGNALING RELATED SYNTAX AND SEMANTICS

ARRIS Enterprises LLC, S...

1. In a processing device for processing a video sequence having a plurality of pictures, each picture having a plurality of slices, a method of processing a slice of a current picture, comprising:determining when a slice of the current picture excludes any predictive coding derived from another picture;
when the slice of the current picture is designated to exclude any predictive coding derived from another picture, setting a flag to a first logic state;
when the slice of the current picture is not designated to exclude any predictive coding derived from another picture, setting the flag to a second logic state; and
bypassing at least a portion of predicted weight processing of inter picture processing of the slice of the current picture according to the logic state of the flag when coding if the flag is in the first logic state,
wherein the slice of the current picture is of one of an intra coding type (I-slice), a predictive coding type (P-slice) and bi-predictive coding type (B-slice),
wherein the processing of the slice is performed according to a slice header having inter picture processing,
wherein bypassing at least a portion of the predicted weight processing of the inter picture processing of the current picture according to the logic state of the flag comprises:
skipping at least a portion of the inter picture processing of the slice of the current picture including the at least a portion of the predicted weight processing according to the flag and a determination that the slice is a P-type slice or a B-type slice, and
wherein the skipped at least a portion of the inter picture processing comprises:
B-slice motion vector difference signaling;
entropy coding method signaling processing;
collocated reference picture signaling;
weighted prediction signaling processing; and
integer motion vector signaling processing.

US Pat. No. 10,171,827

IMAGE CODING METHOD AND IMAGE DECODING METHOD

SUN PATENT TRUST, New Yo...

1. An image decoding device that decodes an image having a plurality of blocks, said image decoding device comprising:a processor; and
a memory having a program stored thereon, the program causing the processor to execute operations including
decoding the blocks sequentially based on probability information indicating a data occurrence probability,
wherein, in the decoding, the probability information is updated depending on data of a first target block to be decoded among the blocks, after decoding the first target block and before decoding a second target block to be decoded next among the blocks, and
wherein, in the decoding, a third target block in the blocks is decoded based on the probability information (i) which is updated depending on the data of the first target block, the first target block being a neighboring block above the third target block and (ii) which is not updated depending on the data of the second target block, and
wherein the third target block (i) is located on a left end of the image, (ii) is different from the second target block, and (iii) is decoded after decoding the first target block.

US Pat. No. 10,171,826

METHOD AND APPARATUS FOR ENCODING RESIDUAL BLOCK, AND METHOD AND APPARATUS FOR DECODING RESIDUAL BLOCK

SAMSUNG ELECTRONICS CO., ...

1. An apparatus for decoding an image, the apparatus comprising:a splitter which splits the image into a plurality of maximum coding units, hierarchically splits a maximum coding unit among the plurality of maximum coding units into a plurality of coding units based on split information of a coding unit, and determines a transformation residual block from a coding unit among the plurality of coding units based on split information of the transformation residual block, wherein the transformation residual block includes a plurality of sub residual blocks;
a parser which obtains, from a bitstream, a coded block flag indicating whether the transformation residual block includes at least one non-zero effective transformation coefficient,
when the coded block flag indicates that the transformation residual block includes at least one non-zero effective transformation coefficient, determines whether a current sub residual block is a left-upper residual block among a plurality of sub residual blocks in the transformation residual block,
when the current sub residual block is a left-upper sub residual block, obtains transformation coefficients of the left-upper sub residual block based on a significance map indicating a location of a non-zero transformation coefficient in the first sub residual block and level information of the non-zero transformation coefficient in the first sub residual block obtained from the bitstream,
when the current sub residual block is not a left-upper sub residual block, obtains, from the bitstream, an effective coefficient flag of the current sub residual block without considering an effective coefficient flag of another sub residual block, the effective coefficient flag of the current sub residual block indicating whether at least one non-zero effective transformation coefficient exists in the current sub residual block,
when the effective coefficient flag indicates that at least one non-zero transformation coefficient exists in the current sub residual block, obtains transformation coefficients of the current sub residual block based on a significance map indicating a location of the non-zero transformation coefficient in the current sub residual block and level information of the non-zero transformation coefficient in the current sub residual block obtained from the bitstream;
when the effective coefficient flag indicates that the at least one non-zero effective transformation coefficient does not exist in the current sub residual block, determines the transformation coefficients of the current sub residual block as zero; and
an inverse-transformer which performs inverse-transformation on the transformation residual block including the current sub residual block,
wherein the transformation coefficients of the current sub residual block are a subset of transformation coefficients of the transformation residual block,
the transformation coefficients of the current sub residual block are obtained after or before transformation coefficients of another sub residual block among the plurality of sub residual blocks in the transformation residual block,
wherein a level information of a non-zero transformation coefficient includes information regarding a sign and an absolute value of the non-zero transformation coefficient,
when the split information of the coding unit of a current depth indicates a split, the coding unit of the current depth is split into the plurality of coding units of the lower depth, independently from neighboring coding units,
when the split information of the coding unit of the current depth indicates a non-split, one or more transformation residual blocks including the transformation residual block are obtained from the coding unit of the current depth, and
wherein the current sub residual block is one of a plurality of sub residual blocks which have same size with each other and square-shape, and are included in the transformation residual block.
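An illustrative Python sketch of the sub-block parsing flow above: a coded block flag for the whole transformation residual block, a per-sub-block effective coefficient flag for every sub-block except the left-upper one, and coefficient parsing only where a flag indicates non-zero data. The bit-reader and coefficient-parsing callbacks are placeholders, not the patent's syntax.

```python
# Hedged sketch of the sub residual block parsing order.

def parse_transform_block(read_flag, parse_coefficients, num_sub_blocks):
    """read_flag() -> 0/1; parse_coefficients(i) -> coefficient list for
    sub-block i. Both are assumed callbacks over the bitstream."""
    if not read_flag():                       # coded block flag
        return [[0] for _ in range(num_sub_blocks)]
    sub_blocks = []
    for i in range(num_sub_blocks):
        if i == 0:                            # left-upper sub-block: no flag
            sub_blocks.append(parse_coefficients(i))
        elif read_flag():                     # effective coefficient flag
            sub_blocks.append(parse_coefficients(i))
        else:
            sub_blocks.append([0])            # all coefficients are zero
    return sub_blocks

# Example with a canned bit sequence: cbf=1, flags for sub-blocks 1..3 = 1,0,1.
bits = iter([1, 1, 0, 1])
print(parse_transform_block(lambda: next(bits),
                            lambda i: [i + 1, 0, 0, 0], 4))
```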

US Pat. No. 10,171,825

PARALLEL COMPRESSION OF IMAGE DATA IN A COMPRESSION DEVICE

MATROX GRAPHICS INC., Do...

1. A method of compressing a stream of pictures in parallel in a compression device, wherein the compression device includes a plurality of components to be coupled in series to perform an image data compression process for compressing image data into compressed image data, and wherein each one of the plurality of components is to perform a partial compression operation that is part of the image data compression process, the method comprising:processing a first portion of a first picture of a stream of pictures in a first component from the plurality of components of the compression device, while simultaneously processing a second portion of a second picture of the stream of pictures in a second component from the plurality of components of the compression device wherein the processing of the first portion of the first picture is performed according to partial compression statistics associated with the second picture, and wherein the partial compression statistics result from the processing of one or more portions of the second picture in one or more of the plurality of components of the compression device when compression of the second portion of the second picture in the compression device is not yet completed.

US Pat. No. 10,171,824

SYSTEM AND METHOD FOR ADAPTIVE FRAME RE-COMPRESSION IN VIDEO PROCESSING SYSTEM

MEDIATEK INC., Hsinchu (...

1. A method of video decoding, comprising:receiving a video bitstream;
decoding the video bitstream to generate a reconstructed frame;
determining whether to re-compress the reconstructed frame for buffering based on a characteristic of the reconstructed frame that is provided in the video bitstream when a size of the reconstructed frame is greater than a first threshold; and
re-compressing the reconstructed frame and storing the re-compressed reconstructed frame into a buffer of a decoder system when the reconstructed frame is determined to be re-compressed for buffering, wherein
the characteristic of the reconstructed frame includes whether the reconstructed frame is a reference frame and an initial picture quantization parameter associated with the reconstructed frame, and
the determining whether to re-compress the reconstructed frame for buffering when the size of the reconstructed frame is greater than the first threshold comprises:
determining that the reconstructed frame is not to be re-compressed for buffering when the reconstructed frame is the reference frame and the initial picture quantization parameter is not greater than a second threshold.
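An illustrative Python sketch of the re-compression decision above; what happens when the frame size does not exceed the first threshold is not specified in the claim, so the sketch simply skips re-compression in that case, and the threshold values are placeholders.

```python
def should_recompress(frame_size, is_reference, init_pic_qp,
                      size_threshold, qp_threshold):
    """Return True when the reconstructed frame should be re-compressed
    before buffering (hedged reading of the decision in the claim)."""
    if frame_size <= size_threshold:
        return False                  # decision only applies above the threshold
    if is_reference and init_pic_qp <= qp_threshold:
        return False                  # keep low-QP reference frames uncompressed
    return True

# Example: a large reference frame coded with a low QP is not re-compressed.
print(should_recompress(frame_size=8_000_000, is_reference=True,
                        init_pic_qp=22, size_threshold=4_000_000,
                        qp_threshold=26))   # False
```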

US Pat. No. 10,171,823

IMAGE DECODING DEVICE AND IMAGE CODING DEVICE

SHARP KABUSHIKI KAISHA, ...

1. An image decoding device that decodes coded data that is hierarchically coded to reconstruct a decoded picture of a higher layer which is a target layer, the image decoding device comprising:a parameter set decoding circuit that decodes a parameter set; and
a predicted image generation circuit that generates a predicted image by inter-layer prediction with reference to decoded pixels of a reference layer picture,
wherein the parameter set decoding circuit decodes a color format identifier and derives a luma chroma width ratio depending upon a chroma format, which is specified by the color format identifier,
wherein the parameter set decoding circuit decodes: (i) a scaled reference layer offset syntax which is decoded in a chroma pixel unit of the target layer picture, and (ii) a reference layer offset syntax which is decoded in a chroma pixel unit of the reference layer picture,
wherein the scaled reference layer offset syntax specifies an offset between a top-left sample of a reference region in the target layer picture and a top-left sample of the target layer picture, and the reference layer offset syntax specifies an offset between a top-left sample of the reference region in the reference layer picture and a top-left sample of the reference layer picture,
wherein the predicted image generation circuit derives a reference position by using a scaled reference layer offset, a reference layer offset, and a scale,
wherein the scaled reference layer offset is derived by multiplying a value of the scaled reference layer offset syntax by a first luma chroma width ratio that is set to the luma chroma width ratio of the target layer picture,
wherein the reference layer offset is derived by multiplying a value of the reference layer offset syntax by a second luma chroma width ratio that is set to the luma chroma width ratio of the reference layer picture, and
wherein the scale is derived by using the scaled reference layer offset and the reference layer offset.
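An illustrative Python sketch of the offset scaling and reference-position derivation above; the chroma-format-to-ratio mapping and the scale formula below are simplifying assumptions for a 1-D case, not the patent's definitions.

```python
def luma_chroma_width_ratio(chroma_format):
    """4:2:0 and 4:2:2 halve the chroma width, 4:4:4 does not."""
    return 1 if chroma_format == "4:4:4" else 2

def derive_reference_position(x_target, scaled_ref_offset_syntax,
                              ref_offset_syntax, target_width, ref_width,
                              target_format="4:2:0", ref_format="4:2:0"):
    # Offsets are signalled in chroma pixel units and converted to luma units
    # by multiplying with the respective luma/chroma width ratio.
    scaled_ref_offset = scaled_ref_offset_syntax * luma_chroma_width_ratio(target_format)
    ref_offset = ref_offset_syntax * luma_chroma_width_ratio(ref_format)
    # Hypothetical scale: ratio of the reference-region widths.
    scale = (ref_width - ref_offset) / max(1, target_width - scaled_ref_offset)
    return (x_target - scaled_ref_offset) * scale + ref_offset

print(derive_reference_position(100, 4, 2, 1920, 960))
```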

US Pat. No. 10,171,822

IMAGE TRANSMISSION DEVICE, IMAGE TRANSMISSION METHOD, AND IMAGE TRANSMISSION PROGRAM

CIAO, INC., (JP)

1. An apparatus for transmitting images, including:a base server being situated at a point where an image is to be taken, and being connected to an imaging device; and
an aggregation server being connected to said base server through an electrical communication channel,
said base server including:
reference image transmitter for transmitting image data (hereinafter, referred to as “reference image data”) of a frame acting as a reference (hereinafter, referred to as “reference frame”) to said aggregation server at a predetermined timing among images of a plurality of consecutive frames sequentially obtained through said imaging device;
extracted area computer for selecting an image (hereinafter, referred to as “background image”) acting as a background among images of a plurality of consecutive frames sequentially obtained through said imaging device, and sequentially computing a third area surrounding both a first area and a second area for each of frames individually following said reference frame selected among a plurality of consecutive frames sequentially obtained through said imaging device, said first area surrounding an area in which a difference is generated between an image of said each of frames and said background image, said second area surrounding an area in which a difference is generated between an image of a frame immediately prior to said each of frames and said background image; and
extracted image transmitter for sequentially extracting image data of said third area out of said each of frames, and transmitting the thus extracted image data to said aggregation server,
said aggregation server including image synthesizer for synthesizing a moving image based on said reference image data transmitted from said base server, and said image data of said third area extracted out of said each of frames.
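An illustrative Python sketch of the extracted-area computation above: the third area is the rectangle enclosing both the region where the current frame differs from the background image and the region where the immediately preceding frame differs from it. The pixel-difference threshold and rectangle representation are assumptions for the example.

```python
import numpy as np

def diff_bbox(frame, background, threshold=16):
    """Bounding rectangle (x0, y0, x1, y1) of pixels that differ from the
    background by more than the threshold, or None if nothing differs."""
    mask = np.abs(frame.astype(int) - background.astype(int)) > threshold
    if not mask.any():
        return None
    ys, xs = np.nonzero(mask)
    return xs.min(), ys.min(), xs.max() + 1, ys.max() + 1

def union_bbox(a, b):
    """Third area enclosing both the current-frame and previous-frame areas."""
    if a is None:
        return b
    if b is None:
        return a
    return (min(a[0], b[0]), min(a[1], b[1]), max(a[2], b[2]), max(a[3], b[3]))

def extracted_area(current, previous, background):
    return union_bbox(diff_bbox(current, background),
                      diff_bbox(previous, background))
```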

US Pat. No. 10,171,821

SCALABLE VIDEO ENCODING METHOD AND APPARATUS AND SCALABLE VIDEO DECODING METHOD AND APPARATUS USING UP-SAMPLING FILTER ACCOMPANIED BY CONVERSION OF BIT DEPTH AND COLOR FORMAT

SAMSUNG ELECTRONICS CO., ...

1. A scalable video encoding method comprising:determining a reference layer image from among base layer images so as to inter layer predict an enhancement layer image, wherein the reference layer image corresponds to the enhancement layer image;
determining a phase between pixels of the enhancement layer image and the reference layer image, according to a scaling factor between the enhancement layer image and the reference layer image and a color format difference of the enhancement layer image and the reference layer image;
selecting at least one filter coefficient set corresponding to the determined phase, from filter coefficient data comprising filter coefficient sets that respectively correspond to phases;
generating an up-sampled reference layer image by extending a resolution of the reference layer image according to the scaling factor by performing interpolation filtering on the reference layer image by using the selected filter coefficient set;
obtaining a prediction error between the up-sampled reference layer image and the enhancement layer image;
generating an enhancement layer bitstream comprising the prediction error; and
generating a base layer bitstream by encoding the base layer images.

US Pat. No. 10,171,818

SCANNING ORDERS FOR NON-TRANSFORM CODING

Microsoft Technology Lice...

1. A method comprising:identifying, by a computing device, a scanning order for scanning a first block, the first block being associated with a transform coding mode and having an associated size and an associated prediction mode;
identifying, by the computing device, a second block that is associated with a non-transform coding mode, the second block being part of a same image as the first block and having the same associated size and the same associated prediction mode as the first block;
determining whether to scan the second block according to a scanning order inverse to the scanning order for scanning the first block, the determining being based on the prediction mode associated with the second block and the size associated with the second block, wherein, if the prediction mode is an intra-prediction mode and the size is smaller than a predetermined size, the second block is scanned according to the inverse scanning order; and
scanning, by the computing device, the second block according to the inverse scanning order, in response to determining that the prediction mode is an intra-prediction mode and the size of the second block is smaller than the predetermined size.
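An illustrative Python sketch of the scan-order decision above: a non-transform block that is intra predicted and smaller than a predetermined size is scanned with the inverse of the transform block's scan order. The zig-zag pattern and the size limit of 8 are placeholders.

```python
def zigzag_order(n):
    """Simple zig-zag scan of an n x n block as (row, col) positions."""
    return sorted(((r, c) for r in range(n) for c in range(n)),
                  key=lambda rc: (rc[0] + rc[1],
                                  rc[1] if (rc[0] + rc[1]) % 2 else rc[0]))

def scan_order_for_block(n, is_transform_coded, is_intra, size_limit=8):
    order = zigzag_order(n)
    if not is_transform_coded and is_intra and n < size_limit:
        return list(reversed(order))     # inverse of the transform-block scan
    return order

print(scan_order_for_block(4, is_transform_coded=False, is_intra=True)[:4])
```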

US Pat. No. 10,171,817

IMAGE PROCESSING DEVICE AND METHOD

SONY CORPORATION, Tokyo ...

1. An image processing device, comprising:at least one processor configured to:
set a binary parameter that corresponds to a binary data processing rate of a decoder,
wherein the decoder with at least one first setting is indicated by an encoded stream, and
wherein the encoded stream corresponds to binary data;
calculate a first maximum processing amount of the encoded stream based on the set binary parameter;
calculate a second maximum processing amount of the binary data based on the set binary parameter;
calculate a target bit that indicates a target rate of the encoded stream, based on the calculated first maximum processing amount of the encoded stream and the calculated second maximum processing amount of the binary data;
control a quantization rate based on the calculated target bit;
quantize input data based on the controlled quantization rate;
binarize the quantized input data to obtain the binary data;
arithmetically code the binary data to generate the encoded stream such that the encoded stream is decoded in the decoder without one of an overflow condition or an underflow condition in the decoder; and
transmit the set binary parameter and the encoded stream to the decoder.

US Pat. No. 10,171,816

METHOD AND APPARATUS FOR MOTION COMPENSATION PREDICTION

NTT DOCOMO, INC., Tokyo ...

1. A video decoding method for motion compensation performed under an inter-frame prediction to decode a target picture, the method comprising computer executable steps executed by a processor of a video decoder to implement:(a) decoding a residual and a motion vector received from an encoder;
(b) referencing to the motion vector to retrieve a reference sample from a reference picture stored in a reference picture memory, wherein the reference picture stored in the reference picture memory and the reference sample retrieved from the reference picture are both represented with a first bit depth;
(c) performing a scaling-up operation and a first fractional sample interpolation in a first direction on the retrieved reference sample to generate a first set of fractional samples represented with a second bit depth to which the first bit depth is scaled up by a scaling-up factor, wherein the second bit depth is constant and set equal to a number of bits available to represent the fractional sample, and the scaling-up factor is set equal to the second bit depth minus the first bit depth and is variable to keep the second bit depth constant and independent from a change of the first bit depth;
(d) performing a second fractional sample interpolation on the first set of fractional samples in a second direction to generate a second set of fractional samples represented with the second bit depth;
(e) referencing fractional parts of the motion vector to derive a bidirectional prediction sample from the first and second sets of fractional samples, the bidirectional prediction sample being represented with the second bit depth, wherein the bidirectional prediction sample is generated by combining two second sets of fractional samples, the two second sets being different from each other;
(f) scaling down and clipping the bidirectional prediction sample from the second bit depth to the first bit depth to generate a prediction picture represented with the first bit depth; and
(g) adding the prediction picture and the residual to reconstruct the target picture represented with the first bit depth,
wherein the fractional sample interpolation applies an 8-tap FIR (Finite Impulse Response) filter having a set of coefficients equal to [−1, 4, −11, 40, 40, −11, 4, −1] to generate a quarter-pel sample.
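An illustrative Python sketch of a 1-D fractional-sample interpolation at an elevated intermediate bit depth followed by the final scale-down and clip, as in steps (c) through (f) above. Only the 8-tap coefficients come from the claim; the edge padding, rounding, and 14-bit intermediate depth are assumptions made for the example.

```python
FIR_8TAP = [-1, 4, -11, 40, 40, -11, 4, -1]   # coefficients stated in the claim

def interpolate_row(samples, bit_depth_in, bit_depth_interm=14):
    """1-D fractional interpolation kept at an elevated intermediate bit depth."""
    shift_up = bit_depth_interm - bit_depth_in      # variable scale-up factor
    out = []
    for i in range(len(samples)):
        acc = 0
        for k, c in enumerate(FIR_8TAP):
            j = min(max(i + k - 3, 0), len(samples) - 1)   # edge padding
            acc += c * samples[j]
        # The filter gain is 64 (6 bits); keep the result at the intermediate depth.
        out.append((acc << shift_up) >> 6)
    return out

def scale_down_and_clip(samples, bit_depth_in, bit_depth_interm=14):
    """Final scale-down from the intermediate depth and clip to the input range."""
    shift_down = bit_depth_interm - bit_depth_in
    offset = 1 << (shift_down - 1)
    max_val = (1 << bit_depth_in) - 1
    return [min(max((s + offset) >> shift_down, 0), max_val) for s in samples]

row = [100, 102, 104, 108, 116, 120, 118, 117]
print(scale_down_and_clip(interpolate_row(row, 8), 8))
```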

US Pat. No. 10,171,814

MOVING PICTURE DECODING DEVICE, MOVING PICTURE DECODING METHOD AND MOVING PICTURE DECODING PROGRAM

JVC KENWOOD Corporation, ...

1. A moving picture decoding device adapted to decode a bitstream in which moving pictures are coded in units of blocks obtained by partitioning each picture of the moving pictures, comprising:a first bitstream decoding unit configured to set a predefined number of merge candidates;
a second bitstream decoding unit configured to decode information indicating indices of the candidates;
a spatial merge candidate generation unit configured to derive spatial merge candidates from a first predefined number of blocks neighboring a prediction block subject to decoding;
a temporal merge candidate generation unit configured to derive a temporal merge candidate from a block that exists at the same position as or near a prediction block subject to decoding in a decoded picture that is different from the prediction block subject to decoding;
a merge candidate addition unit configured to add the spatial merge candidates and the temporal merge candidates to a merge candidate list;
a merge candidate supplying unit configured to add one or more merge candidates to the merge candidate list up to the predefined number of merge candidates as an upper limit when the number of merge candidates included in the merge candidate list is smaller than the predefined number of merge candidates;
a coding information selection unit configured to select a merge candidate from the merge candidates added to the merge list; and
a motion compensation prediction unit configured to perform inter prediction of the prediction block subject to decoding by the merge candidate thus selected,
wherein the second bitstream decoding unit derives the indices of the merge candidates based on the number of the merge candidates;
the spatial merge candidate generation unit stops deriving the spatial merge candidates when the number of the derived spatial merge candidates reaches a second predefined number smaller than the first predefined number; and
the merge candidate supplying unit adds a merge candidate having a motion vector of (0,0).
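An illustrative Python sketch of the merge-candidate supplying step above: the list is padded with zero-motion-vector candidates until it reaches the predefined number of merge candidates. Varying the reference index of each padded candidate is an assumption, not stated in the claim.

```python
def supply_merge_candidates(merge_list, max_num_merge_cand):
    """Pad the merge candidate list with (0, 0) motion-vector candidates
    until it reaches the predefined number of merge candidates."""
    padded = list(merge_list)
    ref_idx = 0
    while len(padded) < max_num_merge_cand:
        padded.append({"mv": (0, 0), "ref_idx": ref_idx})
        ref_idx += 1      # assumption: vary the reference index per added candidate
    return padded

print(supply_merge_candidates([{"mv": (3, -1), "ref_idx": 0}], 5))
```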

US Pat. No. 10,171,813

HIERARCHY OF MOTION PREDICTION VIDEO BLOCKS

QUALCOMM Incorporated, S...

1. A method of decoding video data according to a merge mode, the method comprising:obtaining an index value for a current video block coded in the merge mode;
generating a set of candidate predictive blocks for the merge mode based on spatial and temporal neighbors to the current video block;
limiting the set of generated candidate predictive blocks for the merge mode to a subset of generated candidate predictive blocks for the merge mode, wherein the subset of generated candidate predictive blocks for the merge mode is limited to be smaller than the set of generated candidate predictive blocks for the merge mode;
selecting a predictive video block from the subset of generated candidate predictive blocks for the merge mode based on the index value; and
generating motion information for the current video block according to the merge mode based on motion information of the predictive video block, wherein generating the motion information for the current video block includes inheriting motion information from the predictive video block.

US Pat. No. 10,171,812

DATA OUTPUT APPARATUS, DATA OUTPUT METHOD, AND DATA GENERATION METHOD

PANASONIC INTELLECTUAL PR...

1. A data output apparatus comprising:a decoder that decodes a video stream to generate a first video signal;
an acquirer that acquires one or more pieces of metadata corresponding to one or more first conversion modes in which a luminance range of a video signal is converted;
an interpreter that interprets one of the one or more pieces of metadata to acquire characteristic data indicating a luminance range of the first video signal, and conversion auxiliary data for converting the luminance range of the first video signal;
a control information generator that converts the characteristic data into control information according to a predetermined transmission protocol;
a converter that supports one or more second conversion modes in which a luminance range of a video signal is converted, the converter for performing conversion processing of the luminance range of the first video signal in one of the one or more second conversion modes based on the conversion auxiliary data to generate a second video signal with a luminance range narrower than the luminance range of the first video signal; and
an outputter that outputs the second video signal and the control information to a display apparatus in accordance with the transmission protocol,
wherein the interpreter further determines which of the data output apparatus and the display apparatus is to perform the conversion processing, based on the one or more first conversion modes, the one or more second conversion modes, and one or more third conversion modes in which a luminance range of a video signal is converted, the one or more third conversion modes being supported by the display apparatus,
the interpreter further determines a conversion mode which is included in the one or more first conversion modes and is included in at least one of the one or more second conversion modes and the third conversion modes, as a conversion mode of the conversion processing to be performed by the data output apparatus or the display apparatus,
the acquirer acquires a plurality of pieces of metadata corresponding to a plurality of first conversion modes including the one or more first conversion modes,
the converter supports a plurality of second conversion modes including the one or more second conversion modes, and
the interpreter determines, as a conversion mode of the conversion processing to be performed by the data output apparatus or the display apparatus, a conversion mode with highest reproducibility for a master image which is an image that is output without conversion of the luminance range, from among a plurality of conversion modes which are included in the plurality of first conversion modes, and are included in at least one of the plurality of second conversion modes and the third conversion modes.

US Pat. No. 10,171,811

METHOD AND APPARATUS FOR DETERMINING REFERENCE PICTURE SET OF IMAGE

SAMSUNG ELECTRONICS CO., ...

1. A method of decoding a video, the method comprising:obtaining, by at least one processor, information of the number of reference picture sets from a bitstream, wherein the reference picture sets are included in a sequence parameter set and a reference picture set includes a plurality of reference pictures;
determining, by the at least one processor, whether an index of a current reference picture set of a current picture is equal to the number of reference picture sets, wherein the number of reference picture sets is based on the information of the number of the reference picture sets and the index of the current reference picture set indicates the current reference picture set among reference picture sets;
when the index of the current reference picture set of the current picture is equal to the number of reference picture sets, obtaining, by the at least one processor, delta index information about a difference between the index of the current reference picture set of the current picture and an index of a reference picture set (reference RPS) of the current picture from the bitstream;
determining, by the at least one processor, the index of the reference RPS based on the delta index information;
determining, by the at least one processor, the current reference picture set of the current picture based on the index of the reference RPS of the current picture and a delta RPS which is a difference value between a picture order count (POC) value of a reference picture in the current reference picture set of the current picture and a picture order count (POC) value of a reference picture in the reference RPS of the current picture; and
predictive decoding, by the at least one processor, the current picture by using a reference picture included in one of reference picture sets including the current reference picture set of the current picture.
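An illustrative Python sketch of the reference-RPS index derivation above; the bitstream-reading callback and the implicit delta used when no delta index is signalled are assumptions for the example, and the POC-delta reconstruction itself is omitted.

```python
def derive_reference_rps_index(curr_rps_idx, num_rps, read_delta_idx_minus1):
    """Return the index of the reference RPS used to predict the current RPS.
    read_delta_idx_minus1() is a hypothetical bitstream-reading callback."""
    if curr_rps_idx == num_rps:
        # The delta index is only signalled when the current RPS index equals
        # the number of reference picture sets in the sequence parameter set.
        delta_idx = read_delta_idx_minus1() + 1
    else:
        delta_idx = 1                 # assumption: implicit delta of 1 otherwise
    return curr_rps_idx - delta_idx

# Example: 8 RPSs in the SPS, slice-level RPS predicted from RPS 6.
print(derive_reference_rps_index(8, 8, lambda: 1))   # -> 6
```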

US Pat. No. 10,171,810

TRANSFORM COEFFICIENT CODING USING LEVEL-MODE AND RUN-MODE

Cisco Technology, Inc., ...

1. A method comprising:obtaining a two-dimensional array of integer samples representing a block of quantized transform coefficients for a video frame;
converting the two-dimensional array of integer samples to a one-dimensional array of integer samples using a scan pattern, wherein each integer sample is represented with a level that is an absolute value of the sample and a sign bit if the level is greater than zero;
converting the one-dimensional array of integer samples to a bit-stream by processing the one-dimensional array of samples in sequential order, wherein converting the one-dimensional array of samples to a bit-stream comprises:
encoding the samples in a level-mode and a run-mode to form an encoded bit-stream;
wherein in the level-mode:
encoding each sample individually and encoding a level;
encoding a sign bit if the level is greater than zero; and
switching to the run-mode for a next sample when the level is less than a first threshold;
wherein in the run-mode:
for each non-zero level, encoding a combined event of:
length of a zero-run corresponding to a number of zeros since a last non-zero level;
whether the level is greater than one; and
when the level is one, encoding the sign bit;
when the level is greater than one:
encoding the level and sign bit jointly, and
encoding the combined event with a code equal to 2*(level−2)+sign;
switching to the level-mode for the next sample when the level is greater than a second threshold.
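An illustrative Python sketch of the level-mode/run-mode switching described above, emitting symbolic events instead of entropy-coded bits; the threshold values are placeholders, while the combined code 2*(level−2)+sign for run-mode levels greater than one follows the claim.

```python
def encode_coefficients(samples, t_low=2, t_high=3):
    """Emit symbolic coding events for a 1-D coefficient array, switching
    between level-mode and run-mode (the actual entropy coder is omitted)."""
    events, mode, run = [], "level", 0
    for s in samples:
        level, sign = abs(s), int(s < 0)
        if mode == "level":
            events.append(("level", level))
            if level > 0:
                events.append(("sign", sign))
            if level < t_low:
                mode, run = "run", 0               # switch to run-mode
        else:                                      # run-mode
            if level == 0:
                run += 1
                continue
            events.append(("zero_run", run, "level_gt_1", int(level > 1)))
            if level == 1:
                events.append(("sign", sign))
            else:
                events.append(("level_and_sign", 2 * (level - 2) + sign))
            run = 0
            if level > t_high:
                mode = "level"                     # switch back to level-mode
    return events

print(encode_coefficients([5, 1, 0, 0, -3, 0, 2, 4]))
```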

US Pat. No. 10,171,809

VIDEO ENCODING APPARATUS AND VIDEO ENCODING METHOD

FUJITSU LIMITED, Kawasak...

1. A video encoding apparatus comprising:a processor configured to:
divide a plurality of orthogonal transform coefficients included in each of a plurality of blocks into a plurality of coefficient groups each of which includes a predetermined number of the orthogonal transform coefficients, the plurality of blocks being obtained by dividing a picture included in a video, the plurality of orthogonal transform coefficients being obtained by orthogonally transforming, for each block, a prediction error signal obtained on the basis of difference between a value of each pixel of the picture and a prediction signal of the pixel;
determine, for each of the predetermined number of orthogonal transform coefficients included in a target coefficient group from among the plurality of coefficient groups, a candidate possible to minimize a cost obtained on the basis of a coding error and an amount of coding among a plurality of quantized-coefficient candidates to be used for quantizing the orthogonal transform coefficient, to be a quantized coefficient of the orthogonal transform coefficient, the target coefficient group being selected from among the plurality of coefficient groups sequentially from the coefficient group including the orthogonal transform coefficients corresponding to lowest frequencies;
determine, for a target coefficient group, whether to substitute all the predetermined number of quantized coefficients included in the target coefficient group by zero, on the assumption that a quantized coefficient that is not zero is included in the coefficient group corresponding to higher frequencies than those of the target coefficient group, the target coefficient group being selected from among the plurality of coefficient groups sequentially from the coefficient group including the orthogonal transform coefficients corresponding to the lowest frequencies;
determine, for a target coefficient group, a first candidate for the quantized coefficient corresponding to a highest frequency among the quantized coefficients that are included in the target coefficient group and are not zero, on the assumption that all the quantized coefficients included in the coefficient groups corresponding to higher frequencies than those of the target coefficient group are zero, the target coefficient group being selected from among the plurality of coefficient groups sequentially from the coefficient group including the orthogonal transform coefficients corresponding to the lowest frequencies;
calculate the coding error of the coefficient groups from the coefficient group adjacent to the coefficient group including a second candidate for the quantized coefficient on a higher frequency side to the coefficient group including the first candidate, the second candidate being the quantized coefficient with the highest frequency among the quantized coefficients not being zero, obtained from the coefficient group corresponding to lower frequencies than those of the coefficient group including the first candidate;
update the second candidate to the first candidate when a comparison cost obtained by subtracting the coding error of the coefficient group corresponding to higher frequencies than those of the coefficient group including the first candidate from the cost obtained for the first candidate is lower than a value obtained by adding the coding error to the comparison cost calculated for the second candidate, and determine the second candidate at the time when the second candidate for the coefficient group corresponding to highest frequencies among the plurality of coefficient groups is updated, to be the quantized coefficient that is not zero and corresponds to a highest frequency; and
calculate the coding error of the coefficient groups from the coefficient group adjacent to the coefficient group including the second candidate on a higher frequency side to the coefficient group including the first candidate,
wherein
the second candidate includes a third candidate and a fourth candidate, the third candidate being updated to the first candidate also when all the predetermined number of quantized coefficients included in the coefficient group including the first candidate are substituted by zero, the fourth candidate being not updated to the first candidate when all the predetermined number of quantized coefficients included in the coefficient group including the first candidate are substituted by zero, and
the determining a quantized coefficient that is not zero and corresponds to a highest frequency determines the third candidate as the quantized coefficient that is not zero and corresponds to the highest frequency when all the quantized coefficients included in the coefficient groups corresponding to higher frequencies than those of the coefficient group including the third candidate are zero, and
determines the fourth candidate as the quantized coefficient that is not zero and corresponds to the highest frequency when the quantized coefficient that is not zero is included in any one of the coefficient groups corresponding to higher frequencies than those of the coefficient group including the third candidate.

US Pat. No. 10,171,808

IN-LOOP ADAPTIVE WIENER FILTER FOR VIDEO CODING AND DECODING

Intel Corporation, Santa...

1. A video encoder having an input to receive video and a channel output comprising:a transform/quantizer having an input and at least one output;
an adder having three inputs and an output coupled to said transform/quantizer input, one of said adder inputs coupled to receive said video;
an inverse quantizer having an input coupled to said transform/quantizer output;
an adaptive Wiener filter having a first input coupled to said inverse quantizer output and one of said adder inputs, said filter having a second input coupled to receive reconstructed image data, said filter to set filter taps based on the reconstructed image data, said filter having an output coupled to one of said adder inputs; and
an entropy coding having an input coupled to said transform/quantizer output, said entropy coding coupled to said channel output.

US Pat. No. 10,171,807

PICTURE-LEVEL QP RATE CONTROL FOR HEVC ENCODING

ARRIS Enterprises LLC, S...

1. A method of controlling a bit rate of an encoded video comprising a plurality of pictures, each of the plurality of pictures being of one of a plurality of picture types, comprising:(a) defining a window of M pictures comprising a plurality of window pictures;
(b) defining a parameter set for each picture type T, each parameter set comprising:
a quantization parameter (QT);
a first parameter (αT);
a second parameter (βT);
(c) estimating a number of bits Rcur needed to encode a current picture of picture type T according to:

wherein:
QcurT is a value of QT of the current picture of type T;
αcurT is a value of αT of the current picture of type T;
βcurT is a value of βT of the current picture of type T;
(d) estimating a number of bits Ri needed to encode each remaining picture i of the window of M pictures of picture type T according to:

wherein:
QiT is a value of QT of each remaining picture i of type T;
αiT is a value of αT of each remaining picture i of type T;
βiT is a value of βT of each remaining picture i of type T;
(e) determining, for the current picture and each remaining picture i of the window of M pictures and from the estimated number of bits needed to encode the current picture Rcur and the estimated number of bits needed to encode each remaining picture i of the window of M pictures, if a maximum video buffer boundary Bupper or a minimum video buffer boundary Blow are exceeded;
(f) if the maximum video buffer boundary Bupper or the minimum video buffer boundary Blow are exceeded, adjusting QcurT for the current picture of picture type T and QiT of each remaining picture i of picture type T, and repeating (d)-(f); and
(g) if the maximum video buffer boundary Bupper and the minimum video buffer boundary Blow are not exceeded, designating QcurT as a value for coding the current picture:
coding the current picture according to QcurT,
after coding the current picture according to QcurT:
updating αT and βT for the picture type T of the current picture;
setting a next remaining picture as the current picture and performing steps (c)-(g);
determining the actual number of bits Rr used to code the current picture;
determining a difference between the estimated number of bits Rcur to encode the current picture and the actual number of bits used to code the current picture; and
updating αT and βT for the picture type T of the current picture only if the difference between the estimated number of bits Rcur to encode the current picture and the actual number of bits used to code the current picture Rr exceeds a value ?;
wherein updating αT and βT for the picture type T of the current picture only if the difference between the estimated number of bits Rcur to encode the current picture and the actual number of bits used to code the current picture Rr exceeds a value ? comprises:
computing updated values for αT and βT for the picture type of the current picture that minimize the difference between the estimated number of bits Rcur to encode the current picture and the actual number of bits used to code the current picture Rr.
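The bit-estimation formulas referenced in steps (c) and (d) are not reproduced in this listing, so the illustrative Python sketch below substitutes a generic power-law rate model R = α·Q^β as a stand-in; the buffer simulation and the uniform QP adjustment are likewise assumptions for the example rather than the claimed procedure.

```python
def estimate_bits(q, alpha, beta):
    """Stand-in rate model; the claim's own formula is not reproduced above."""
    return alpha * q ** beta

def choose_window_qps(qps, params, buf_fill, buf_upper, buf_lower,
                      rate_per_pic, step=1, max_iters=50):
    """qps: per-picture QPs for the window; params: per-picture (alpha, beta).
    Raise or lower the QPs until the simulated buffer stays inside its bounds."""
    qps = list(qps)
    for _ in range(max_iters):
        fill, too_high, too_low = buf_fill, False, False
        for q, (a, b) in zip(qps, params):
            fill += estimate_bits(q, a, b) - rate_per_pic
            too_high |= fill > buf_upper
            too_low |= fill < buf_lower
        if too_high:
            qps = [q + step for q in qps]            # spend fewer bits per picture
        elif too_low:
            qps = [max(1, q - step) for q in qps]    # spend more bits per picture
        else:
            break
    return qps

params = [(2.0e6, -1.2)] * 4
print(choose_window_qps([30, 30, 32, 32], params, buf_fill=0,
                        buf_upper=5e5, buf_lower=-5e5, rate_per_pic=4e4))
```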

US Pat. No. 10,171,803

IMAGE CAPTURING APPARATUS, CALIBRATION METHOD, AND NON-TRANSITORY COMPUTER-READABLE MEDIUM FOR CALCULATING PARAMETER FOR A POINT IMAGE RESTORATION PROCESS

FUJIFILM Corporation, To...

1. An image capturing apparatus comprising:an image capturing unit;
a display unit, including a processor, that displays an imaged picture imaged by the image capturing unit and a guide linearly shaped along a sagittal direction or a tangential direction in at least one of four corners of the imaged picture, the guide assisting imaging of a calibration image used for calibration in a point image restoration process;
a guide indication control unit included in the display unit that performs display control of the guide assisting the imaging of the calibration image used for the calibration in the point image restoration process; and
a parameter calculation unit that calculates a parameter for the point image restoration process on the basis of the calibration image imaged by the image capturing unit with assistance from the guide.

US Pat. No. 10,171,802

CALIBRATION METHOD AND CALIBRATION DEVICE

DENSO CORPORATION, Kariy...

1. A calibration method for calibrating an attitude of a camera mounted on a vehicle using a plurality of markers each arranged vertically and each positioned at pre-designated height from a road surface, the calibration method comprising:a first process including shooting an image of the plurality of markers with the camera, thereby generating a two-dimensional image;
a second process including converting the two-dimensional image, which is generated in the first process and represents the plurality of markers, into a bird's eye view image on a specific plane, the bird's eye view image reflecting the height of each of the plurality of markers, wherein the specific plane is a road surface or on a plane parallel to the road surface; and
a third process including calculating a parameter of the camera based on a position difference between the plurality of markers in the specific plane obtained in the second process,
wherein
a plurality of columnar marker poles are vertically extended from a road surface, and
markers are positioned on the columnar marker poles so as to face in directions toward the vehicle, each marker at a pre-designated height from the road surface.

US Pat. No. 10,171,801

DISPLAY DEVICE AND DISPLAY METHOD

Japan Display Inc., Toky...

1. A display device comprising:a detector configured to detect position information on a position of a viewer;
a parallax barrier configured to form a first area and a second area, a transmittance of the first area being higher than a transmittance of the second area;
a plurality of light adjustment sets each including a plurality of light sources and a light adjustment layer; and
a display unit configured to display an image including a plurality of parallax images,
wherein
the light sources are disposed on a light source substrate and include a first light source and a second light source,
an optical axis of illumination light from each of the light sources is in a vertical direction vertical to the light source substrate, the optical axis having highest brightness,
the light adjustment layer is configured to change a direction of an optical axis of illumination light irradiated from the first light source to a first bending direction having a first angle with the vertical direction and
a direction of an optical axis of illumination light irradiated from the second light source to a second bending direction having a second angle with the vertical direction, the first angle being different from the second angle, and
the parallax barrier is configured to change a position of the first area to:
a first position such that the optical axis of illumination light irradiated from the first light source passes through the first area; and
a second position such that the optical axis of illumination light irradiated from the second light source passes through the first area.

US Pat. No. 10,171,799

PARALLAX IMAGE DISPLAY DEVICE, PARALLAX IMAGE GENERATION METHOD, PARALLAX IMAGE PRINT

FUJIFILM Corporation, To...

1. A parallax image display device comprising a computer, wherein the computer comprises:an image acquiring processor adapted to acquire a right-eye image and a left-eye image used for generating a parallax image enabling a stereoscopic view;
an information volume distribution calculating processor adapted to calculate an information volume distribution of the right-eye image and an information volume distribution of the left-eye image;
a parallax image generating processor adapted to generate the parallax image from the right-eye image and the left-eye image on the basis of the information volume distribution of the right-eye image and the information volume distribution of the left-eye image; and
a first parallax image display adapted to have a parallax image display area in which square reference regions are arranged in a grid pattern,
wherein the parallax image generating processor compares an information volume of the right-eye image and an information volume of the left-eye image in each of the reference regions on the basis of the information volume distribution of the right-eye image and the information volume distribution of the left-eye image, allocates one selected from a group consisting of only a right-eye region for displaying the right-eye image, only a left-eye region for displaying the left-eye image, and both of the right-eye region and the left-eye region in each of the reference regions on the basis of a size of information volumes of the right-eye image and the left-eye image, and displays the right-eye image and the left-eye image in the right-eye region and the left-eye region, respectively, to generate the parallax image,
wherein the information volume of each of the reference regions is at least one of an amount of harmonic signal components of the right-eye image or the left-eye image corresponding to each of the reference regions, a value of a maximum frequency of the right-eye image or the left-eye image corresponding to each of the reference regions, a variance value of a brightness distribution of the right-eye image or the left-eye image corresponding to each of the reference regions, and a difference in pixel value between the right-eye image and the left-eye image corresponding to each of the reference regions,
wherein the parallax image generating processor comprises an information volume comparing processor, a reference region allocating processor, an image reflecting processor, and a brightness regulating processor,
wherein the information volume comparing processor is adapted to compare an information volume of the right-eye image and an information volume of the left-eye image, both corresponding to a same one of the reference regions, in each of the reference regions on the basis of the information volume distribution of the right-eye image and the information volume distribution of the left-eye image, and output a comparison result,
wherein the reference region allocating processor is adapted to change region areas of the right-eye region and the left-eye region depending on size of the information volume and allocate the right-eye region and the left-eye region to different regions in each of the reference regions on the basis of the comparison result,
wherein the image reflecting processor is adapted to generate the parallax image by reflecting the right-eye image and the left-eye image in the right-eye region and the left-eye region, respectively, and
wherein the brightness regulating processor is adapted to increase brightness of the area of the right-eye region or the left-eye region allocated in each of the reference regions as the allocated area is smaller, and decrease brightness of the allocated area as the allocated area is larger.

US Pat. No. 10,171,797

SYSTEMS AND METHODS TO CONFIRM THAT AN AUTOSTEREOSCOPIC DISPLAY IS ACCURATELY AIMED

Elwha LLC, Bellevue, WA ...

1. An autostereoscopic display system comprising:a processing circuit configured to:
control an adjustable autostereoscopic display to selectively project images representing a left-eye view and a right-eye view of an image;
control an emitter to emit a tracer beam when at least one of the left-eye view and the right-eye view of the image are selectively not projected;
receive feedback data from a sensor configured to detect reflections of the tracer beam;
determine an impact site of the tracer beam on a viewer based on the feedback data; and
adjust a direction of the tracer beam based on the impact site to intercept a desired impact site of the viewer.

US Pat. No. 10,171,796

MOVING BODY SYSTEM

RICOH COMPANY, LTD., Tok...

1. A moving body system comprising:an imaging device attachable to a moving body including at least one wheel and a steering device which is attached to the moving body and is operated to control a movement direction of the moving body in accordance with steering operation of the steering device;
a visual line direction changing mechanism configured to change a visual line direction of the imaging device; and
a control unit configured to determine at least one steering operation information selected from a group consisting of a steering operation angle of the steering device attached to the moving body, a steering operation speed of the steering device attached to the moving body, and an angle of inclination of the wheel of the moving body, and detect, based on said at least one steering operation information determined by the control unit, changing of the movement direction of the moving body, and change the visual line direction of the imaging device in accordance with the changing of the movement direction of the moving body, detected based on said at least one steering operation information.

US Pat. No. 10,171,794

METHOD FOR SELECTING CAMERAS AND IMAGE DISTRIBUTION SYSTEM CAPABLE OF APPROPRIATELY SELECTING CAMERAS

PANASONIC INTELLECTUAL PR...

1. A method, comprising:obtaining, using sensors included in a first number of cameras capturing images of a same scene, positions and image capture angles of the cameras;
selecting, for display, a second number of the cameras capturing the images by using a processor based on the positions and the image capture angles of the cameras;
determining whether to switch at least one of the second number of the cameras to another camera in a frame after the selecting; and
selecting, when the determining determines that the at least one of the second number of the cameras is to be switched, a new camera for the at least one of the second number of the cameras based on the positions and the image capture angles of the cameras,
wherein the first number is a natural number at least equal to 2,
the second number is a natural number less than the first number, and
in the determining:
when a time elapsed since a previous switching operation is shorter than a first time, the at least one of the second number of the cameras is determined not to be switched to another camera;
when the time elapsed since the previous switching operation is equal to or longer than the first time but shorter than a second time longer than the first time, whether to switch the at least one of the second number of the cameras to another camera is determined in accordance with a first criterion; and
when the time elapsed since the previous switching operation is equal to or longer than the second time, whether to switch the at least one of the second number of the cameras to another camera is determined in accordance with a second criterion, the at least one of the second number of the cameras being more likely to be switched to the another camera according to the second criterion than the first criterion.
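An illustrative Python sketch of the time-gated switching decision above: no switch before the first time has elapsed, a first criterion between the first and second times, and a more permissive second criterion afterwards. The threshold values in the example are arbitrary.

```python
def may_switch_camera(elapsed_since_last_switch, first_time, second_time,
                      meets_first_criterion, meets_second_criterion):
    """Decide whether a displayed camera may be switched, based on how much
    time has passed since the previous switching operation."""
    if elapsed_since_last_switch < first_time:
        return False                         # too soon: never switch
    if elapsed_since_last_switch < second_time:
        return meets_first_criterion         # stricter first criterion
    return meets_second_criterion            # more permissive second criterion

# Example: 7 s after the last switch, with time thresholds of 5 s and 10 s.
print(may_switch_camera(7.0, 5.0, 10.0,
                        meets_first_criterion=False,
                        meets_second_criterion=True))   # False
```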

US Pat. No. 10,171,791

METHODS AND APPARATUS FOR CONDITIONAL DISPLAY OF A STEREOSCOPIC IMAGE PAIR

QUALCOMM Incorporated, S...

1. A method of displaying data on an electronic display, comprising:determining, via an electronic hardware processor, a vertical disparity between a first digital image and a second digital image representing left and right perspectives of a scene respectively, wherein a horizontal disparity represents a horizontal offset between the left and right perspectives; and
correcting, via the electronic hardware processor, the vertical disparity between the first image and the second image by generating a corrected image;
displaying, on an electronic display, by the electronic hardware processor, the stereoscopic image pair in response to the corrected vertical disparity being below a first threshold; and
displaying, on the electronic display, by the electronic hardware processor, a two dimensional image in response to the corrected vertical disparity exceeding a second threshold.
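An illustrative Python sketch of the display decision after vertical-disparity correction; behaviour between the two thresholds is not specified in the claim, so the sketch leaves the display mode unchanged in that band.

```python
def choose_display_mode(corrected_vertical_disparity, t_stereo, t_mono):
    """Return what to display after the vertical disparity has been corrected."""
    if corrected_vertical_disparity < t_stereo:
        return "stereoscopic_pair"
    if corrected_vertical_disparity > t_mono:
        return "two_dimensional"
    return "unchanged"          # behaviour between the thresholds is an assumption

print(choose_display_mode(0.4, t_stereo=0.5, t_mono=1.5))   # stereoscopic_pair
```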

US Pat. No. 10,171,790

DEPTH SENSOR, IMAGE CAPTURE METHOD, AND IMAGE PROCESSING SYSTEM USING DEPTH SENSOR

Samsung Electronics Co., ...

1. An image capture method performed by a depth sensor, the method comprising:emitting a first source signal having a first amplitude towards a scene, and thereafter, emitting a second source signal having a second amplitude different from the first amplitude towards the scene;
receiving, as a first reflected signal, a reflected portion of the first source signal;
receiving, as a second reflected signal, a reflected portion of the second source signal;
demodulating the first reflected signal with an N-times sampling operation to generate a first image;
demodulating the second reflected signal with another N-times sampling operation to generate a second image, wherein N is an integer greater than one; and
interpolating the first and second images to generate a final image, wherein:
the second amplitude is greater than the first amplitude,
the first source signal is used to capture a first point of the scene that is relatively close to the depth sensor, and
the second source signal is used to capture a second point of the scene that is relatively far from the depth sensor.

US Pat. No. 10,171,789

MULTI-SENSOR VIDEO FRAME SYNCHRONIZATION APPARATUS AND METHODS

Texas Instruments Incorpo...

1. A video controller, comprising:a start-of-frame monitor to monitor a time of receipt of a start-of-frame indication associated with a first image sensor and a start-of-frame indication associated with a second image sensor;
a frame delta calculator operationally coupled to the start-of-frame monitor to calculate a time difference between the time of receipt associated with the first image sensor and the time of receipt associated with the second image sensor; and
a frame period adjuster coupled to the frame delta calculator to alter a frame period determining parameter associated with at least one of the first image sensor or the second image sensor from an original value to an adjusted value in order to decrease the time difference if the time difference is greater than or equal to a frame synchronization threshold value and to reset the frame period determining parameter to equal values at the first and second image sensors if the time difference is less than the frame synchronization threshold value, the frame period adjuster being configured to cause a horizontal blanking period of the first or second image sensor to be increased or decreased in response to the altered frame period determining parameter to decrease the time difference.

US Pat. No. 10,171,787

REPRODUCTION DEVICE, REPRODUCTION METHOD, AND RECORDING MEDIUM FOR DISPLAYING GRAPHICS HAVING APPROPRIATE BRIGHTNESS

SONY CORPORATION, Tokyo ...

1. A reproduction device comprising:a readout unit configured to read out coded data of an HEVC stream including an extended video that is a video having a first brightness range that is wider than a second brightness range, brightness characteristic information that represents a brightness characteristic of the extended video, and graphics data that is superimposed on the extended video and that has the second brightness range, from a recording medium that has recorded the coded data, the brightness characteristic information, and the graphics data;
a first decoding unit configured to decode the coded data;
a second decoding unit configured to decode the graphics data;
a first conversion unit configured to convert a first pixel value of the graphics, obtained by decoding, to a second pixel value in the brightness characteristic of the extended video represented by the brightness characteristic information, the second pixel value representing brightness that is equivalent to brightness represented by the first pixel value in a brightness characteristic of the graphics; and
a synthesis unit configured to synthesize the extended video, the synthesized extended video being obtained by decoding the coded data, together with the graphics having the second pixel value,
wherein the readout unit is further configured to read out brightness conversion definition information that is recorded in the recording medium and that is used when performing brightness conversion,
wherein the brightness characteristic information and the brightness conversion definition information are inserted as SEI of the HEVC stream including the coded data,
wherein the brightness conversion definition information comprises an indication of a tone map model set from among a plurality of tone map models in order to perform the brightness conversion, and
wherein the readout unit, the first decoding unit, the second decoding unit, the first conversion unit, and the synthesis unit are each implemented via at least one processor.
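
As a rough illustration of the pixel-value conversion the claim describes, the sketch below maps an SDR graphics code value to its brightness and then re-encodes that brightness under the extended video's characteristic. The gamma curve, peak levels, and function names are assumptions; in the device the brightness characteristic and tone map model come from the SEI of the HEVC stream.

    # Hypothetical sketch: re-encode an SDR graphics code value so it represents
    # the same brightness under the extended (HDR) video characteristic.
    def sdr_code_to_luminance(code, peak_nits=100.0, gamma=2.4):
        """Assumed SDR characteristic: simple gamma curve over 0..1 code values."""
        return peak_nits * (code ** gamma)

    def convert_graphics_pixel(code, hdr_luminance_to_code):
        """hdr_luminance_to_code models the extended video's brightness
        characteristic (in practice derived from the stream's SEI)."""
        luminance = sdr_code_to_luminance(code)
        return hdr_luminance_to_code(luminance)

    # Example with a toy linear HDR characteristic peaking at 1000 nits:
    second_pixel_value = convert_graphics_pixel(0.5, lambda nits: nits / 1000.0)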

US Pat. No. 10,171,786

LENS SHADING MODULATION

Apple Inc., Cupertino, C...

8. A system, comprising:an image capture device comprising a lens;
a memory operatively coupled to the image capture device and having, stored therein, computer program code; and
a programmable control device operatively coupled to the memory and comprising instructions stored thereon to cause the programmable control device to execute the computer program code to:
obtain a first image of a scene captured using the lens, wherein the first image comprises a first plurality of pixels;
determine a lens shading correction level based, at least in part, on a focal distance of the lens used to capture the first image; and
apply the determined lens shading correction level to the first image,
wherein the determined lens shading correction level encompasses both color shading gain and vignetting gain, and wherein the lens shading correction level is a function of normalized diagonal field position from a center of the lens used to capture the first image.
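
A minimal sketch of a focal-distance-dependent shading gain follows, assuming two radial gain extremes blended by the lens's focus position; the falloff model, blend rule, and names are illustrative assumptions rather than the patented correction.

    # Hypothetical sketch: shading gain as a function of normalized diagonal
    # field position, scaled by the focal distance of the lens.
    def lens_shading_gain(norm_field_pos, focal_distance_m,
                          macro_level=0.8, infinity_level=1.0):
        """norm_field_pos: 0.0 at the lens center, 1.0 at the diagonal corner."""
        falloff = 0.5 * norm_field_pos ** 2                 # assumed radial falloff
        blend = min(max(focal_distance_m, 0.0), 1.0)        # 0 = macro, 1 = ~1 m or farther
        level = macro_level + (infinity_level - macro_level) * blend
        return 1.0 + level * falloff                        # gain applied to the pixel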

US Pat. No. 10,171,785

COLOR BALANCING BASED ON REFERENCE POINTS

Disney Enterprises, Inc.,...

1. A computer-implemented method of adjusting coloration, the computer-implemented method comprising:receiving a selection of one or more source reference points within a source image depicting a source lighting condition;
receiving a selection of one or more target reference points within a target image depicting a target lighting condition distinct from the source lighting condition;
determining a coloration difference between a coloration of the one or more source reference points within the source image and a coloration of the one or more target reference points within the target image; and
normalizing the distinct, depicted source and target lighting conditions by adjusting, by operation of one or more computer processors, the coloration of at least a portion of the source image based on the determined coloration difference and to correspond more closely to the coloration of the target image, whereafter the source image is output.
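
As an illustration of the reference-point adjustment, the sketch below measures the mean color at the source and target reference points, takes their difference, and shifts the source pixels by that difference; all names are hypothetical and the simple additive model is an assumption.

    # Hypothetical sketch: shift the source image's coloration toward the target
    # lighting condition by the difference measured at the reference points.
    def color_balance(source_pixels, source_refs, target_refs):
        """source_pixels and *_refs are lists of (r, g, b) tuples."""
        def mean_color(points):
            n = float(len(points))
            return tuple(sum(p[i] for p in points) / n for i in range(3))
        src_mean = mean_color(source_refs)
        tgt_mean = mean_color(target_refs)
        diff = tuple(t - s for s, t in zip(src_mean, tgt_mean))
        return [tuple(p[i] + diff[i] for i in range(3)) for p in source_pixels]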

US Pat. No. 10,171,784

SOLID STATE IMAGING DEVICE AND IMAGING APPARATUS HAVING A PLURALITY OF ADDITION READ MODES

NIKON CORPORATION, Tokyo...

1. An imaging device, comprising:a pixel section including (i) a plurality of first pixels that are each configured to output a first signal generated by light from a first filter having a first spectral characteristic, and (ii) a plurality of second pixels that are each configured to output a second signal generated by light from a second filter having a second spectral characteristic different from the first spectral characteristic, the plurality of first pixels and the plurality of second pixels being alternately arranged in a first direction;
a scanning circuit configured to read the first and second signals, respectively, from the respective plurality of first and second pixels that are arranged in the pixel section;
an outputting circuit including (i) a first outputting circuit that is configured to output a first addition signal generated by adding a plurality of the first signals read from the plurality of first pixels, and (ii) a second outputting circuit configured to output a second addition signal generated by adding a plurality of the second signals read from the plurality of second pixels, the pixel section arranged between the first outputting circuit and the second outputting circuit in a second direction crossing the first direction; and
a controlling circuit configured to control the outputting circuit to shift, in the first direction, a pixel position corresponding to:
(1) a sub-set of the plurality of the first signals to be added by the first outputting circuit among the plurality of the first signals read by the scanning circuit from the plurality of the first pixels; and
(2) a sub-set of the plurality of the second signals to be added by the second outputting circuit among the plurality of the second signals read by the scanning circuit from the plurality of the second pixels.

US Pat. No. 10,171,783

RGB SIGNAL TO RGBY SIGNAL IMAGE CONVERTING SYSTEM AND METHOD

Shenzhen China Star Optoe...

2. An RGB signal to RGBY signal image converting method for an RGBY image display apparatus having multiple pixel units, each of which consists of a red sub-pixel unit, a green sub-pixel unit and a blue sub-pixel unit, comprising steps of:receiving RGB input signals Ri, Gi and Bi from a signal transmitting connector connected to the RGBY image display apparatus;
determining whether a color of the RGB input signals is yellow, wherein if the color of the RGB input signals is yellow, further comprises determining a numerical magnitude relationship between the Ri input signal and Gi input signal; and calculating the RGBY output signals Ro, Go, Bo and Yo according to a determining result; and
calculating and outputting RGBY output signals Ro, Go, Bo and Yo used to control gray scale values of the red, green and blue sub-pixel units for the corresponding pixel unit, when the color of the RGB input signals is not yellow, wherein Yo=0, Ro=Ri, Go=Gi and Bo=Bi.
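
The branch structure of the claim can be sketched as below. The non-yellow branch (Yo=0, Ro=Ri, Go=Gi, Bo=Bi) is stated in the claim; the yellow-detection test and the yellow-branch arithmetic are not, so the versions here are placeholder assumptions.

    # Hypothetical sketch of the claimed RGB-to-RGBY branch structure.
    def rgb_to_rgby(ri, gi, bi, yellow_margin=0.2):
        is_yellow = ri > bi and gi > bi and abs(ri - gi) < yellow_margin  # assumed test
        if not is_yellow:
            return ri, gi, bi, 0                # Ro, Go, Bo, Yo with Yo = 0
        # Yellow: the claim compares Ri with Gi and derives the outputs from that
        # relationship; here the shared component is simply moved into Yo.
        yo = min(ri, gi)
        return ri - yo, gi - yo, bi, yo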

US Pat. No. 10,171,782

IMAGE SENSOR AND METHOD OF GENERATING RESTORATION IMAGE

SAMSUNG ELECTRONICS CO., ...

1. An image sensor comprising:a plurality of non-color pixel sensors each configured to sense a non-color signal; and
a color pixel sensing region including at least one color pixel sensor configured to sense a color signal,
wherein the color pixel sensing region has an area physically greater than an area of each of the non-color pixel sensors,
wherein the color pixel sensing region is encompassed by the non-color pixel sensors, and
wherein at least two non-color pixel sensors are placed between color pixel sensing regions.

US Pat. No. 10,171,781

PROJECTION APPARATUS, METHOD FOR CONTROLLING THE SAME, AND PROJECTION SYSTEM

CANON KABUSHIKI KAISHA, ...

1. A projection apparatus comprising:a processor; and
a memory having stored thereon instructions that when executed by the processor, cause the processor to:
output an image signal in which a predetermined calibration pattern is synthesized in an input image;
control transmittance or reflectance of a display unit and form an image based on the output image signal;
project an image formed on the display unit by irradiating the display unit with light;
capture the projected image by an image sensor which employs a rolling shutter system in which charge accumulation is performed for each row;
extract the calibration pattern from the captured image and generate a correction parameter for correcting an image to be formed on the display unit depending on a condition of the extracted calibration pattern; and
control image capturing timing so that images before and after update of the display unit are not mixed in the captured image,
wherein controlling the image capturing timing is executed by
recognizing update timing of a predetermined line of the display unit on a basis of a synchronizing signal, and by
starting charge accumulation for a predetermined row of the image sensor during an update period of the display unit.

US Pat. No. 10,171,779

OPTIMIZING DRIVE SCHEMES FOR MULTIPLE PROJECTOR SYSTEMS

MTT Innovation Incorporat...

1. An image projection system, comprising:an image storage mechanism that stores one or more images to be presented in sequence on a projection screen;
a high dynamic range projector including a light source that only directly projects light to a first imaging element that is configured to modulate the phase of light from the light source for producing images on a projection screen with an average brightness over an entire area of the projection screen and images with a higher than average peak brightness over less than the entire area of the projection screen;
a low dynamic range projector including a light source that only projects light to a second imaging element that is different than the first imaging element that is configured to modulate the intensity of light from the first imaging element and the intensity of light from the light source in the low dynamic range projector, wherein the low dynamic range projector has a smaller dynamic range than the high dynamic range projector; and
control hardware that is configured to analyze data for each image to be projected onto the projection screen on a frame by frame basis to selectively control the low dynamic range projector to supply an approximately uniform amount of light onto the second imaging element such that some images are projected onto the projection screen using only the high dynamic range projector and some images requiring a higher than average brightness level over the entire area of the projection screen than is available from the high dynamic range projector but with a reduced dynamic range are projected on the screen using both the high dynamic range projector and the low dynamic range projector.

US Pat. No. 10,171,777

STREAMING AND STORING VIDEO CONTENT CAPTURED BY AN AUDIO/VIDEO RECORDING AND COMMUNICATION DEVICE

Amazon Technologies, Inc....

1. A method for transmitting and storing video images captured by an audio/video (A/V) recording and communication device, the A/V recording and communication device including a camera and a local storage device, the A/V recording and communication device being connected to a network, the method comprising:the A/V recording and communication device detecting a person at the A/V recording and communication device;
the camera of the A/V recording and communication device capturing video images from within a field of view of the camera at the A/V recording and communication device;
initiating a call to a client device via the network;
transmitting the video images in a plurality of data packets to the client device via the network;
receiving at least one negative-acknowledgement (NACK) indicating that at least one of the data packets was lost in transmission;
retransmitting the lost data packets to the network;
receiving a message with a list of data packets that were lost in retransmission;
storing copies of the data packets on the list at the local storage device of the A/V recording and communication device;
receiving a notification that the call with the client device has terminated; and
after receiving the notification that the call with the client device has terminated, retrieving the data packets stored at the local storage device of the A/V recording and communication device and retransmitting the retrieved data packets to the network.
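
The packet-recovery flow of the claim can be sketched as follows; the network and storage objects and their methods are hypothetical stand-ins for the device's transport and local storage, not an actual API.

    # Hypothetical sketch of the claimed loss handling on the A/V device.
    def stream_call(packets, network, local_storage):
        for seq, packet in enumerate(packets):
            network.send(seq, packet)                 # initial transmission
        for seq in network.receive_nacks():           # packets lost in transmission
            network.send(seq, packets[seq])           # retransmit
        for seq in network.receive_lost_list():       # lost again in retransmission
            local_storage[seq] = packets[seq]         # keep a local copy
        network.wait_for_call_end()                   # notification of termination
        for seq, packet in sorted(local_storage.items()):
            network.send(seq, packet)                 # retransmit after the call ends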

US Pat. No. 10,171,773

DYNAMIC VIDEO IMAGE MANAGEMENT

International Business Ma...

1. A computer system for dynamic video image management, the computer system comprising a computer readable memory, a processing unit communicatively coupled to the computer readable memory, computer readable storage medium, and program instructions stored on the computer readable storage medium for execution by the processing unit via the computer readable memory, the program instructions comprising:program instructions to collect, with respect to a dynamic video image, a set of dynamic image quality factors;
program instructions to determine, based on the set of dynamic image quality factors, a set of display parameter values of a set of display parameters for a set of computing assets to benefit the set of dynamic image quality factors with respect to the dynamic video image;
program instructions to configure, using the set of display parameter values, the set of computing assets to benefit the set of dynamic image quality factors with respect to the dynamic video image;
program instructions to maintain, to configure the set of computing assets without changing a video camera configuration, the video camera configuration;
program instructions to structure the set of computing assets to include a set of secondary computing assets;
program instructions to maintain, to configure the set of computing assets without changing a set of active display parameter values of a set of active display parameters for a set of active computing assets, the set of active display parameter values of the set of active display parameters for the set of active computing assets;
program instructions to disable, for a threshold temporal period, a modification to the set of active display parameter values of the set of active display parameters for the set of active computing assets;
program instructions to structure the set of secondary computing assets to include a plurality of computing devices which run a plurality of separate operating systems which have a plurality of different applications which include a plurality of separate application windows for presentation on a plurality of different physical display screens, wherein the set of display parameter values is for the plurality of separate application windows; and
program instructions to configure the set of secondary computing assets in a gradual fashion to manage the dynamic video image based on a set of incremental changes to the set of display parameter values.

US Pat. No. 10,171,771

CAMERA SYSTEM FOR VIDEO CONFERENCE ENDPOINTS

Cisco Technology, Inc., ...

1. An apparatus comprising:a wide lens camera fixedly positioned within a camera housing to provide an overall view of a space;
a first long focus lens camera fixedly positioned within the camera housing at a first angle with respect to the wide lens camera so that the first long focus lens camera provides a view of a first portion of the space;
a second long focus lens camera that is fixedly positioned within the camera housing at a second angle with respect to the wide lens camera and rotated, about a first vertical axis extending through the second long focus lens camera, towards the first long focus lens camera so that the second long focus lens camera provides a view of a second portion of the space; and
a third long focus lens camera that is fixedly positioned within the camera housing at a third angle with respect to the wide lens camera and rotated, about a second vertical axis extending through the third long focus lens camera, towards the first long focus lens camera so that the third long focus lens camera provides a view of a third portion of the space.

US Pat. No. 10,171,769

SOUND SOURCE SELECTION FOR AURAL INTEREST

International Business Ma...

1. A method comprising:modifying a video recording by adding to the video recording a viewer-selectable region of a video display plane corresponding to a sub-set of pixels within a set of pixels displayed during playback of the video recording, the viewer-selectable region corresponding to a first sound source recorded by at least one microphone of a plurality of microphones from a three-dimensional scene; and
adjusting an audio signal played by the modified video recording based, at least in part, upon selection of the viewer-selectable region during playback of the modified video recording;
wherein:
the at least one microphone records audio from the first sound source on an audio channel that is distinct from the audio channels of other microphones of the plurality of microphones; and
selection of the viewer-selectable region plays an audio recording made by the at least one microphone corresponding to the first sound source.

US Pat. No. 10,171,768

CURVE PROFILE CONTROL FOR A FLEXIBLE DISPLAY

INTERNATIONAL BUSINESS MA...

1. A method comprising:tracking curve profiles applied to one or more flexible displays by one or more users in association with presentation of different digital media on the one or more flexible displays;
building predefined rules based on the tracking, the predefined rules defining preferred curve profiles based on curves applied to the one or more flexible displays in presenting the different digital media and comprising mappings between individual characteristics of different digital media and the preferred curve profiles;
storing the predefined rules as candidates for selection to apply in association with presentation of other digital media;
obtaining a first digital media to be presented on a flexible display;
automatically determining a curve profile to apply to the flexible display in association with presentation of the first digital media on the flexible display, the automatically determining being based at least in part on an analysis of the first digital media to be presented, wherein the automatically determining the curve profile comprises:
comparing identified characteristics of the first digital media to at least one mapping provided by the stored predefined rules;
identifying a predefined rule, of the stored predefined rules and based on the comparing, having one or more mappings of digital media characteristics that correspond to the identified characteristics of the first digital media, the digital media characteristics being those shared with second digital media, different from the first digital media, the second digital media being at least a subset of the different digital media presented on the one or more flexible displays; and
selecting the preferred curve profile of the identified predefined rule, wherein the automatically determined curve profile to apply is the selected preferred curve profile or is determined based on the selected preferred curve profile; and
applying the automatically determined curve profile to the flexible display in association with the presentation of the first digital media on the flexible display.
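
A simple way to picture the rule selection is the sketch below, which scores stored rules by how many media characteristics they share with the first digital media and returns that rule's preferred curve profile; the tag-overlap scoring and all names are assumptions.

    # Hypothetical sketch: choose a preferred curve profile from stored rules.
    def select_curve_profile(media_characteristics, rules, default_profile):
        """rules: list of (characteristics_set, preferred_profile) pairs learned
        from the tracked curve usage; media_characteristics: a set of tags."""
        best_profile, best_overlap = default_profile, 0
        for rule_characteristics, profile in rules:
            overlap = len(media_characteristics & rule_characteristics)
            if overlap > best_overlap:
                best_profile, best_overlap = profile, overlap
        return best_profile

    # Example: a panorama prefers a previously learned concave profile.
    profile = select_curve_profile({"panorama", "landscape"},
                                   [({"panorama"}, "concave_30deg")],
                                   "flat")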

US Pat. No. 10,171,767

IMAGE READER COMPRISING CMOS BASED IMAGE SENSOR ARRAY

HAND HELD PRODUCTS, INC.,...

1. A method for capturing and decoding at least a two dimensional bar code in image data captured by an image reader, the image reader comprising an image sensor array comprising a plurality of pixels in a two-dimensional array, and the image reader further comprising at least one illumination light source, the method comprising:exposing all or substantially all of the pixels in the image sensor array in a global shutter mode, wherein exposing the all or substantially all of the pixels in the global shutter mode comprises exposing the all or substantially all of the pixels in response to an exposure control timing pulse; and
illuminating at least a portion of the bar code in response to an illumination control timing pulse;
wherein the exposure control timing pulse and the illumination control timing pulse are interdependent.

US Pat. No. 10,171,766

IMAGING DEVICE WITH REDUCED DELAY IN DISPLAY

Seiko Epson Corporation, ...

1. An imaging device comprising:a controller including a circuit;
an image sensor that performs imaging operations at intervals of a predetermined sensor cycle;
an image data generator that generates image data based on output data from the image sensor; and
a display that displays an image represented by the image data within a second display scanning period whose length is shorter than a first display scanning period corresponding to a display cycle that is N times the sensor cycle (N being an integer larger than or equal to “2”) by a margin period which is variable.

US Pat. No. 10,171,765

BIT LINE BOOST FOR FAST SETTLING WITH CURRENT SOURCE OF ADJUSTABLE SIZE

OmniVision Technologies, ...

16. A method of fast settling an output line circuit, comprising:maintaining a high potential to a row select (RS) enable to switch on a row select (RS) transistor;
maintaining a cascode control voltage (VCN) to bias a first cascode transistor, wherein the cascode control voltage (VCN) is a positive potential to ensure normal operation of the first cascode transistor;
maintaining a bias control voltage (VBN) to bias a first bias transistor and a second bias transistor, wherein the bias control voltage (VBN) is a positive potential to ensure normal operation of the first bias transistor and the second bias transistor;
maintaining a low potential to a first boost enable signal to open a first boost enable switch;
resetting a floating diffusion (FD) to a reset FD voltage (VRFD) by setting a reset (RST) gate to high to switch on a reset (RST) transistor;
disconnecting the FD from the reset FD voltage (VRFD) by setting the RST gate to low to switch off the RST transistor;
boosting one of a first RST surge current and a second RST surge current to sink a bitline;
reading background charges on the FD, wherein the SF converts a background voltage from its gate terminal and provides an amplified background signal to the bitline on the SF source terminal when enabled by the closed RS transistor;
transferring charges from a TX receiving terminal to a floating diffusion (FD) by setting a transfer (TX) gate to high to switch on a transfer (TX) transistor;
discontinuing the charge transferring to the FD by setting the TX gate to low to switch off the TX transistor;
boosting one of a first TX surge current and a second TX surge current to sink a bitline; and
reading the image charges on the FD, wherein the SF converts an image signal from its gate terminal and provides an amplified image signal to the bitline on the SF source terminal when enabled by the closed RS transistor.

US Pat. No. 10,171,762

IMAGE SENSING DEVICE

Renesas Electronics Corpo...

1. An image sensing device comprising:a photoelectric conversion element;
a transfer transistor to read out an electric-charge from the photoelectric conversion element;
a floating diffusion to hold the electric-charge read out via the transfer transistor;
a reset circuit to switch a voltage to be supplied to the floating diffusion when the floating diffusion is reset;
an output wire to output an output signal generated based on the electric-charge held in the floating diffusion; and
a reset control circuit to instruct switching of the voltage supplied by the reset circuit to the floating diffusion, and output a reset control signal,
wherein the reset circuit supplies,
a first reset voltage based on a power-source voltage to the floating diffusion in a first reset operation that resets the floating diffusion and the photoelectric conversion element prior to a light-exposure period for exposing the photoelectric conversion element with light, and
supplies a second reset voltage based on a reset correction voltage lower than the power-source voltage to the floating diffusion and thereafter supplies the first reset voltage, in a second reset operation that resets the floating diffusion during the light-exposure period for exposing the photoelectric conversion element with the light.

US Pat. No. 10,171,761

SOLID-STATE IMAGING DEVICE AND ELECTRONIC DEVICE

Sony Corporation, Tokyo ...

1. A solid-state imaging device, comprising:a plurality of pixels;
a vertical signal line configured to output a pixel signal of a pixel of the plurality of pixels;
a clipping circuit configured to limit a first voltage of the vertical signal line to a second voltage,
wherein the clipping circuit includes:
a transistor configured to generate the second voltage based on a third voltage of a gate of the transistor,
a sample holding circuit configured to:
hold a reset level of the pixel that is output to the vertical signal line, and
input the reset level to the gate of the transistor, and
a plurality of capacitors; and
a voltage generation circuit configured to:
apply a fourth voltage to a first capacitor of the plurality of capacitors to read the reset level of the pixel, and
apply a fifth voltage to a second capacitor of the plurality of capacitors to read a signal level of the pixel.

US Pat. No. 10,171,760

SOLID-STATE IMAGING DEVICE, METHOD FOR DRIVING SOLID-STATE IMAGING DEVICE, AND ELECTRONIC APPARATUS USING AN AMPLIFIER AND SIGNAL LINES FOR LOW AND HIGH GAIN

BRILLNICS INC., Grand Ca...

1. A solid-state imaging device comprisinga pixel portion in which pixels are arranged,
a readout circuit including an amplifier capable of amplifying a pixel readout voltage read out from the pixels,
a first signal line to which a readout voltage of a low gain is output, and
a second signal line to which the output side of the amplifier is connected and a readout voltage of a high gain is output, wherein
a pixel includes
a photoelectric conversion element which accumulates a charge generated by photoelectric conversion in an accumulation period,
a transfer element capable of transferring a charge accumulated in the photoelectric conversion element in a transfer period,
a floating diffusion to which a charge accumulated in the photoelectric conversion element is transferred through the transfer element,
a source-follower element which converts the charge of the floating diffusion to a voltage signal in accordance with the charge quantity,
a reset element which resets the floating diffusion to a potential of the second signal line or a predetermined potential in a reset period, and
a feedback capacitor having one electrode connected to the floating diffusion and having another electrode connected to the second signal line, wherein
the first signal line is connected to an output line of the voltage signal by the source-follower element and is connected to the input side of the amplifier.

US Pat. No. 10,171,759

IMAGING DEVICE, METHOD FOR CONTROLLING IMAGING DEVICE, IMAGING SYSTEM, AND METHOD FOR CONTROLLING IMAGING SYSTEM

JVC KENWOOD CORPORATION, ...

1. An imaging device comprising:a first projection controller configured to control a first infrared projector, capable of projecting infrared light with multiple wavelengths, to project selectively the infrared light with the multiple wavelengths;
an imaging unit configured to image an object in a state where the first infrared projector projects the infrared light; and
a synchronous signal transmitter configured to transmit outward a synchronous signal for synchronizing a timing of projecting infrared light from a second infrared projector controlled by a second projection controller included in another imaging device other than the imaging device, with a timing of projecting the infrared light from the first infrared projector controlled by the first projection controller.

US Pat. No. 10,171,757

IMAGE CAPTURING DEVICE, IMAGE CAPTURING METHOD, CODED INFRARED CUT FILTER, AND CODED PARTICULAR COLOR CUT FILTER

NEC CORPORATION, Tokyo (...

1. An image capturing device comprising:a color filter which separates an incident light into a plurality of colors;
a photo sensor which converts the plurality of colors which the color filter has separated into data representing image signals;
a coded infrared cut filter which is provided in front of the color filter in the light traveling direction or between the color filter and the photo sensor, and which cuts a near infrared light at a cutting portion and passes the near infrared light at a transmitting portion; and
a hardware image processor which acquires plural-color information and near infrared information of a pixel based on a plurality of image signals related to lights which pass a cutting portion of the coded infrared cut filter and an image signal related to a light which passes a transmitting portion of the filter.

US Pat. No. 10,171,754

OVERLAY NON-VIDEO CONTENT ON A MOBILE DEVICE

Sony Interactive Entertai...

1. A mobile device comprising:a video camera;
a display;
a processor; and
a memory communicatively coupled with the processor and storing computer-readable instructions that, upon execution by the processor, cause the mobile device to:
capture, by at least using the video camera, video content displayed on a video display, the video content having a marker that includes time code, the time code identifying a temporal position corresponding to a portion of time in the video content;
track the video content based on the marker;
receive a user selection of a subportion of an image in the video content;
access non-video content associated with the subportion of the image and synchronized with the temporal position of the video content using the time code; and
present, on the display, the non-video content associated with the subportion of the image at substantially the same time as the video content is captured using the video camera.
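
The time-code lookup that ties the overlay to the captured video can be sketched as below; the schedule structure and names are assumptions, standing in for whatever association the non-video content actually uses.

    # Hypothetical sketch: resolve the non-video content for the time code read
    # from the marker and the image subportion the user selected.
    def overlay_for_frame(marker_time_code_s, selected_region, overlay_schedule):
        """overlay_schedule: list of (start_s, end_s, region, content) entries."""
        for start_s, end_s, region, content in overlay_schedule:
            if start_s <= marker_time_code_s < end_s and region == selected_region:
                return content          # presented alongside the live capture
        return None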

US Pat. No. 10,171,752

IMAGING APPARATUS, DISPLAY METHOD, AND PROGRAM

Olympus Corporation, Tok...

1. An imaging apparatus comprising:an imaging unit configured to continuously image a subject and generate moving image data of the subject;
a display unit configured to display a moving image corresponding to the moving image data;
a shooting controller configured to control the imaging unit to continuously image the subject in a moving image mode capable of connecting different pieces of the moving image data having different shooting time-points;
a thumbnail generation unit configured to generate resized image data by performing resize processing of reducing a size of image data of at least one frame constituting the moving image data based on the moving image data generated by the imaging unit, and generate a thumbnail representing the moving image data by combining a resized image corresponding to the resized image data with a template having a display area displaying information indicating that a different piece of the moving image data may be connected;
a display controller configured to display the thumbnail generated by the thumbnail generation unit on the display unit; and
an operating unit configured to receive an input of a start signal instructing a start of continuously imaging the subject to the imaging unit and a finish signal instructing a finish of the continuously imaging to the imaging unit,
wherein the moving image mode includes:
a first moving image mode that generates the moving image data by causing the imaging unit to continuously image the subject from a point of input of the start signal until a point of input of the finish signal; and
a second moving image mode that generates the moving image data by causing the imaging unit to continuously image the subject for a prescribed time-span from the point of input of the start signal from the operating unit, the second moving image mode being capable of connecting different pieces of the moving image data generated by the imaging unit at different time-points,
the shooting controller controls the imaging unit to start the continuously imaging in the first moving image mode or the second moving image mode when the operating unit has received the input of the start signal,
the thumbnail generation unit generates the thumbnail when the imaging unit has generated the moving image data in the second moving image mode, and
when the imaging unit has generated the moving image data in the first moving image mode, the thumbnail generation unit generates the thumbnail and thereafter generates trimming image data by performing trimming processing onto an area including the resized image on the thumbnail, and generates a first moving image thumbnail representing the moving image data captured in the first moving image mode by performing, onto the trimming image data, resize processing of enlargement up to an area that covers the display area.

US Pat. No. 10,171,751

SUPERIMPOSING AN IMAGE ON AN IMAGE OF AN OBJECT BEING PHOTOGRAPHED

Chad-Affonso Wathington, ...

1. A system comprising:an image sensor;
a beam combiner;
a lens array located in the system so as to project light from an object onto the image sensor via the beam combiner; and
an electro-optic display, which is located in the system, so that, when activated, a picture of choice is projected onto the image sensor superimposed with the light of the object via the beam combiner;
wherein no lenses are located between the beam combiner and the image sensor.

US Pat. No. 10,171,750

SOLID-STATE IMAGE SENSOR, IMAGING CONTROL METHOD, SIGNAL PROCESSING METHOD, AND ELECTRONIC APPARATUS

SONY CORPORATION, Tokyo ...

1. A solid-state image sensor, comprising:a pixel array unit that includes:
a plurality of pixels,
wherein the plurality of pixels comprise a first pixel and a second pixel,
wherein a first sensitivity of the first pixel is highest among the plurality of pixels, and
wherein the first pixel and the second pixel are of different type; and
a control unit configured to:
control at least one of an analog gain of each of the first pixel and the second pixel or an exposure time for each of the first pixel and the second pixel, based on a ratio of the first sensitivity of the first pixel and a second sensitivity of the second pixel; and
correct a first difference between the first sensitivity of the first pixel and the second sensitivity of the second pixel, based on the controlled at least one of the analog gain of each of the first pixel and the second pixel or the exposure time for each of the first pixel and the second pixel.

US Pat. No. 10,171,748

IMAGE PICKUP APPARATUS, NON-TRANSITORY COMPUTER-READABLE MEDIUM STORING COMPUTER PROGRAM, AND IMAGE PICKUP METHOD

Olympus Corporation, Tok...

9. An image pickup method comprising:an exposure control step of determining a proper exposure time, and setting, when the proper exposure time is longer than a frame period, a long exposure time equal to or shorter than the frame period and a short exposure time shorter than the long exposure time such that a total time of the short exposure time and one or more long exposure times is equal to the proper exposure time;
an image pickup step of outputting, for every frame period, a long exposure image exposed for the long exposure time and a short exposure image exposed for the short exposure time within an exposure period of the long exposure image, when the proper exposure time is longer than the frame period; and
a synthesizing step of adding the short exposure image of one frame and the long exposure image or long exposure images of one or more frames, to generate a synthetic image corresponding to the proper exposure time, when the proper exposure time is longer than the frame period.
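
The exposure-splitting step can be illustrated with the sketch below: when the proper exposure exceeds one frame period, it is decomposed into one or more long exposures no longer than the frame period plus a shorter remainder exposure, so the total equals the proper exposure time. The handling of exact multiples is an assumption added only to keep the short exposure strictly shorter than the long one.

    # Hypothetical sketch of the exposure split described in the claim.
    def split_exposure(proper_s, frame_period_s):
        """Return (long_exposure_s, number_of_long_exposures, short_exposure_s)."""
        if proper_s <= frame_period_s:
            return proper_s, 0, 0.0                 # no splitting needed
        n_long = int(proper_s // frame_period_s)
        long_s = frame_period_s
        short_s = proper_s - n_long * long_s        # remainder, shorter than long_s
        if short_s == 0.0:                          # exact multiple: shrink the longs
            long_s = proper_s / (n_long + 0.5)
            short_s = 0.5 * long_s
        return long_s, n_long, short_s

    # e.g. split_exposure(0.05, 0.02) -> roughly (0.02, 2, 0.01); adding the short
    # image to the two long images synthesizes the 0.05 s proper exposure.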

US Pat. No. 10,171,747

IMAGE CAPTURING APPARATUS, EXTERNAL APPARATUS, IMAGE CAPTURING SYSTEM, METHOD FOR CONTROLLING IMAGE CAPTURING APPARATUS, COMPUTER PROGRAM, AND COMPUTER-READABLE STORAGE MEDIUM

Canon Kabushiki Kaisha, ...

1. An image capturing apparatus to communicate with an external apparatus via a network, the image capturing apparatus comprising:an image capturing unit;
a hardware processor; and
a memory storing instructions to be executed by the hardware processor, wherein, when the instructions stored in the memory are executed by the hardware processor, the image capturing apparatus functions as:
a receiving unit configured to receive, from the external apparatus via the network, a synthesizing command for controlling an operation of synthesizing a plurality of images that have been captured by the image capturing unit under different exposure conditions, and an exposure setting command for controlling an operation of obtaining an image that has been generated under a set exposure condition,
a control unit configured to selectively execute, in a case where the receiving unit receives the synthesizing command and the exposure setting command, one of a synthesizing operation and an exposure setting operation,
a determining unit configured to determine the operation executed by the control unit, and
a transmitting unit configured to transmit, to the external apparatus via the network, operation information indicating operations which are specifiable by the synthesizing command and the exposure setting command received by the receiving unit.

US Pat. No. 10,171,745

EXPOSURE COMPUTATION VIA DEPTH-BASED COMPUTATIONAL PHOTOGRAPHY

Dell Products, LP, Round...

1. A method in an electronic information handling system comprising:recording a first image of a scene at a first exposure level using a three-dimensional (3D) camera;
correlating distances from the 3D camera and exposure levels over a plurality of image elements of the first image;
selecting a first exposure parameter value for at least one of the plurality of image elements having a z-distance value falling within a range of z-distance values;
recording a second image of the scene according to the first exposure parameter value selected for the at least one of the plurality of image elements having a second exposure level; and
constructing a composite image based on at least a portion of the second image for the at least one of the plurality of image elements.
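
A compact way to view the depth-gated compositing is the sketch below: image elements whose z-distance falls inside the selected range take their pixels from the re-exposed second image, and everything else keeps the first image. The per-element dictionaries and names are assumptions used only for illustration.

    # Hypothetical sketch of depth-based exposure compositing.
    def composite_by_depth(first_image, second_image, z_map, z_range):
        """Each argument maps element_id -> pixel value or z-distance;
        z_range is a (near, far) pair in the same units as z_map."""
        near, far = z_range
        composite = {}
        for element_id, pixel in first_image.items():
            if near <= z_map[element_id] <= far:
                composite[element_id] = second_image[element_id]   # re-exposed
            else:
                composite[element_id] = pixel                      # original exposure
        return composite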

US Pat. No. 10,171,744

IMAGE PROCESSING APPARATUS, IMAGE CAPTURE APPARATUS, AND CONTROL METHOD FOR ADDING AN EFFECT OF A VIRTUAL LIGHT SOURCE TO A SUBJECT

CANON KABUSHIKI KAISHA, ...

1. An image processing apparatus, comprising:a processor; and
a memory including instructions that, when executed by the processor, cause the processor to function as:
an obtainment unit configured to obtain an image derived from image capture;
a computation unit configured to compute an effect of a virtual light source on a subject included in the image obtained by the obtainment unit, the virtual light source being non-existent at the time of the image capture; and
an output unit configured to output an image derived from addition of the effect of the virtual light source to the subject based on a result of the computation by the computation unit, wherein
the computation unit includes:
an estimation unit configured to, based on the obtained image, estimate an illuminating condition by an ambient light source in an environment where the image was captured;
a determination unit configured to, based on a result of the estimation by the estimation unit, determine an illumination direction of the virtual light source and reflective characteristics of the subject illuminated by the virtual light source; and
a processing unit configured to compute the effect of the virtual light source based on the illumination direction of the virtual light source and the reflective characteristics of the subject determined by the determination unit.

US Pat. No. 10,171,743

IMAGE PICKUP APPARATUS, IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND NON-TRANSITORY COMPUTER-READABLE STORAGE MEDIUM FOR IMPROVING QUALITY OF CAPTURED IMAGE

CANON KABUSHIKI KAISHA, ...

12. An image processing method comprising the steps of:detecting a saturated pixel of an image sensor based on a single image corresponding to image data output from the image sensor;
estimating a luminance value, which is outside a luminance range of the image sensor, of a pixel that was detected to be the saturated pixel of the image sensor based on the single image;
setting an exposure parameter based on the estimated luminance value; and
combining a plurality of images obtained from the image sensor to output a composite image of the plurality of images, the plurality of images obtained from the image sensor including at least one image obtained using the set exposure parameter.
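
The estimation and exposure-setting steps can be pictured with the sketch below, which guesses a clipped pixel's true luminance from its unsaturated neighborhood and shortens the exposure in proportion; the neighborhood model and scaling factor are assumptions, not the patented estimator.

    # Hypothetical sketch: estimate out-of-range luminance and derive an exposure.
    def estimate_saturated_luminance(neighbour_values, saturation_level=255):
        """neighbour_values: unsaturated samples around the clipped pixel."""
        neighbour_mean = sum(neighbour_values) / len(neighbour_values)
        return max(saturation_level, 1.5 * neighbour_mean)   # assumed model

    def exposure_for(estimated_luminance, base_exposure_s, saturation_level=255):
        # Shorten the exposure in proportion to the estimated overshoot so the
        # bright region fits in range; the shot is later combined with others.
        return base_exposure_s * saturation_level / estimated_luminance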

US Pat. No. 10,171,742

IMAGE CAPTURING APPARATUS, METHOD, AND PROGRAM WITH OPERATION STATE DETERMINATION BASED UPON ANGULAR VELOCITY DETECTION

Sony Corporation, Tokyo ...

1. An image capturing apparatus comprising:an angular velocity detection unit configured to respectively detect angular velocities of movement of the image capturing apparatus at a plurality of times;
an operation determination unit configured to determine a panning operation state of the image capturing apparatus based on the detected angular velocities at the plurality of times, the determined panning operation state being one of a plurality of predetermined classifications of panning operation states; and
a zoom control unit configured to perform zoom control based on the determined panning operation state.

US Pat. No. 10,171,738

STABILIZING VIDEO TO REDUCE CAMERA AND FACE MOVEMENT

Google LLC, Mountain Vie...

11. A computerized system, comprising:a camera;
a motion or orientation sensor physically coupled to the camera;
one or more processors;
one or more non-transitory computer-readable devices including instructions that, when executed by the one or more processors, cause performance of operations that include:
receiving, by a computing system, a video stream that includes multiple frames and that was captured by a physical camera;
determining, by the computing system and in a frame of the video stream that was captured by the physical camera, a location of a facial feature of a face that is depicted in the frame;
determining, by the computing system, a stabilized location of the facial feature, taking into account a previous location of the facial feature in a previous frame of the video stream that was captured by the physical camera;
determining, by the computing system and using information received from a movement or orientation sensor coupled to the physical camera, a pose of the physical camera in a virtual space;
mapping, by the computing system, the frame of the video stream that was captured by the physical camera into the virtual space;
determining, by the computing system, an optimized pose of a virtual camera viewpoint in the virtual space from which to generate a stabilized view of the frame, using an optimization process that:
(i) determines a difference between the stabilized location of the facial feature and a location of the facial feature in a stabilized view of the frame viewed from a potential pose of the virtual camera viewpoint;
(ii) determines a difference between the potential pose of the virtual camera viewpoint in the virtual space and a previous pose of the virtual camera viewpoint in the virtual space; and
(iii) determines a difference between the potential pose of the virtual camera viewpoint in the virtual space and the pose of the physical camera in the virtual space; and
generating, by the computing system, the stabilized view of the frame using the optimized pose of the virtual camera viewpoint in the virtual space.
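
The three differences enumerated in the claim behave like terms of a cost that the optimization minimizes over candidate virtual-camera poses. The sketch below evaluates such a cost for one candidate, with poses and locations reduced to 2D points and all weights and names assumed for illustration.

    # Hypothetical sketch of the three-term stabilization cost.
    def stabilization_cost(candidate_pose, previous_virtual_pose, physical_pose,
                           stabilized_face, face_in_candidate_view,
                           w_face=1.0, w_smooth=1.0, w_follow=1.0):
        def sq_dist(a, b):
            return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
        return (w_face * sq_dist(stabilized_face, face_in_candidate_view)    # (i)
                + w_smooth * sq_dist(candidate_pose, previous_virtual_pose)  # (ii)
                + w_follow * sq_dist(candidate_pose, physical_pose))         # (iii)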

US Pat. No. 10,171,737

IMAGING DEVICE

Panasonic Intellectual Pr...

1. An imaging device comprising:a shake detection holder that holds a gyro sensor;
a first sheet metal and a second sheet metal that sandwich the shake detection holder in two facing directions;
a first cushion member that abuts on the first sheet metal and faces the shake detection holder via the first sheet metal; and
a second cushion member that abuts on the second sheet metal and faces the shake detection holder via the second sheet metal, wherein
the first sheet metal does not abut on the second cushion member, and
the second sheet metal does not abut on the first cushion member.

US Pat. No. 10,171,736

CONTROL AN IMAGING MODE OF AN IMAGING DEVICE BASED ON POSTURE INFORMATION OF THE IMAGING DEVICE

SONY CORPORATION, Tokyo ...

1. An imaging system, comprising:an imaging apparatus that comprises first circuitry; and
an information processing apparatus that comprises second circuitry configured to:
receive posture determination information that indicates a posture of the imaging apparatus in a separation state, wherein the imaging apparatus is separate from the information processing apparatus in the separation state;
transmit a setting instruction that sets an imaging mode from a plurality of imaging modes of the imaging apparatus, wherein the transmission of the setting instruction is based on the received posture determination information,
wherein the imaging mode corresponds to the posture of the imaging apparatus, and wherein each of the plurality of imaging modes corresponds to a different posture of the imaging apparatus, and
wherein the first circuitry is configured to:
transmit the posture determination information to the information processing apparatus;
receive the setting instruction;
set the imaging mode based on the received setting instruction; and
capture an image based on the set imaging mode.
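
The posture-to-mode selection can be sketched as a simple lookup on the information processing apparatus; the posture labels, mode names, and instruction format are assumptions for illustration only.

    # Hypothetical sketch: build the setting instruction from posture information.
    POSTURE_TO_MODE = {            # assumed postures and imaging mode names
        "upright": "portrait",
        "flat": "top_down",
        "tilted": "landscape",
    }

    def setting_instruction(posture_determination):
        mode = POSTURE_TO_MODE.get(posture_determination, "landscape")
        return {"command": "set_imaging_mode", "mode": mode}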

US Pat. No. 10,171,735

PANORAMIC VISION SYSTEM

INDUSTRIAL TECHNOLOGY RES...

1. A panoramic vision system, comprising:a processor configured to convert received images into images in a spherical coordinate;
a memory coupled to the processor and configured to store the images in the spherical coordinate; and
a spherical display coupled to the processor, wherein the spherical display has a sphere center, the spherical display comprises a plurality of light-emitting-diode pixels being arranged according to the spherical coordinate, there is a same radial distance between each light-emitting-diode pixel of the plurality of light-emitting-diode pixels and the sphere center, in the plurality of light-emitting-diode pixels, there is a same azimuth spacing between adjacent two of the plurality of light-emitting-diode pixels at a zenith angle, and there is a same zenith spacing between adjacent two of the plurality of light-emitting-diode pixels at an azimuth angle.
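
The pixel arrangement recited in the claim corresponds to a uniform sampling in spherical coordinates at a fixed radius; the sketch below generates such positions. The step counts and the decision to skip the poles are assumptions.

    # Hypothetical sketch: LED pixel positions with equal zenith and azimuth
    # spacing and a constant radial distance from the sphere center.
    import math

    def led_pixel_positions(radius, zenith_steps, azimuth_steps):
        positions = []
        for i in range(1, zenith_steps):                   # skip the two poles
            theta = math.pi * i / zenith_steps             # zenith angle
            for j in range(azimuth_steps):
                phi = 2.0 * math.pi * j / azimuth_steps    # azimuth angle
                positions.append((radius * math.sin(theta) * math.cos(phi),
                                  radius * math.sin(theta) * math.sin(phi),
                                  radius * math.cos(theta)))
        return positions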

US Pat. No. 10,171,734

ROTATABLE IMAGING SYSTEM

OVIO TECHNOLOGIES, INC., ...

1. An imaging system comprising:a rotating unit that includes an imaging camera, wherein the rotating unit is rotatable between a home position and a finish position about a rotation axis such that the imaging camera can capture a first scan,
an alignment camera configured to capture a first alignment image of a subject positioned generally co-axially with the rotation axis, and
at least a first monitor on which the first alignment image is displayed, wherein the first monitor includes at least one alignment marking thereon, wherein the at least one alignment marking includes a stationary alignment marking and a movable alignment marking, and wherein the rotating unit will not rotate from the home position to the finish position unless the stationary alignment marking is within a predetermined tolerance zone.

US Pat. No. 10,171,733

IMAGE PROCESSING APPARATUS AND METHOD, AND PROGRAM AND RECORDING MEDIUM

MITSUBISHI ELECTRIC CORPO...

1. An image processing apparatus comprising:a digital amplifier for multiplying, by a digital gain, a captured image signal output from an image capture unit which captures images in units of frame periods, to generate a luminance-adjusted captured image signal;
a luminance detector for detecting luminance of each of a plurality of regions which respectively form parts of a captured image represented by the captured image signal generated by said digital amplifier;
an extractor for selecting one of the plurality of regions of the captured image, in accordance with designation information designating a region to be extracted, extracting an image of the selected region, and performing distortion correction; and
a controller for, on a basis of the luminance detected by said luminance detector, setting a condition of exposure in said image capture unit, and setting the digital gain used in said digital amplifier; wherein
when the designation information is changed from information designating a first region to information designating a second region, in a first frame period,
said controller
changes the luminance used for setting said condition of the exposure and said digital gain in a frame period immediately following said first frame period, from the luminance of said first region to the luminance of said second region, and
instructs said extractor to change the region extracted from the captured image signal, from said first region to said second region, upon expiration of three frame periods after said first frame period.
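
The relative timing in the last two limitations can be pictured as below: after the designation changes in frame k, exposure and digital gain are driven by the new region's luminance from frame k+1, while the extracted region itself switches only once three frame periods have elapsed. Names are illustrative.

    # Hypothetical sketch of the claimed switch-over timing.
    def control_state(frame, change_frame, first_region, second_region):
        """change_frame is the frame period in which the designation changed."""
        luminance_region = second_region if frame >= change_frame + 1 else first_region
        extract_region = second_region if frame >= change_frame + 3 else first_region
        return luminance_region, extract_region

    # e.g. a change in frame 10: frames 11-12 meter and set gain on the second
    # region while still extracting the first; frame 13 extracts the second.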

US Pat. No. 10,171,732

IMAGE PROCESSING APPARATUS, IMAGE PICKUP APPARATUS, IMAGE PROCESSING METHOD, AND NON-TRANSITORY COMPUTER-READABLE STORAGE MEDIUM FOR GENERATING AN IMAGE BASED ON PLURALITY OF PARALLAX IMAGES

Canon Kabushiki Kaisha, ...

1. An image processing apparatus comprising:at least one processor operatively coupled to a memory, and serving as:
(a) a determiner configured to determine a weight coefficient that varies depending on a position in each of a plurality of parallax images; and
(b) an image generator configured to synthesize the plurality of parallax images based on the weight coefficient to generate an image,
wherein a sum of the weight coefficients of the plurality of parallax images is constant with respect to all positions in the plurality of parallax images.
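
The constant-sum weighting can be illustrated with the sketch below for two parallax images along one row: the weights vary with horizontal position but always add up to 1.0. The linear ramp is an assumption chosen only to show the constraint.

    # Hypothetical sketch: position-dependent weights with a constant sum.
    def synthesize_row(parallax_a, parallax_b):
        """parallax_a and parallax_b: pixel values for one row (length >= 2)."""
        width = len(parallax_a)
        row = []
        for x in range(width):
            w_a = x / (width - 1)        # varies with position in the image
            w_b = 1.0 - w_a              # the sum of the weights stays constant
            row.append(w_a * parallax_a[x] + w_b * parallax_b[x])
        return row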

US Pat. No. 10,171,731

METHOD AND APPARATUS FOR IMAGE PROCESSING

Samsung Electronics Co., ...

1. An electronic device comprising:a camera;
a display;
a memory; and
at least one processor operatively coupled to the memory and the display, the at least one processor being configured to:
control the display to display a first image being acquired via the camera as a preview image,
control the display to display, according to a user request for zoom-in of the first image, a portion of the first image as zoomed in,
capture, based on a user input, the portion of the first image,
in response to the capturing the portion of the first image, acquire the portion of the first image and remaining portion of the first image to compare with a second image received from another electronic device,
identify, based on analysis of the second image and the first image including the portion and the remaining portion, that both of the first image and the second image include at least one same object,
in response to the identifying, generate information regarding environments from which the first image and the second image are acquired,
generate, based on the information, data associating the portion of the first image with the second image, and
store, in the memory, the data associating the portion of the first image with the second image.

US Pat. No. 10,171,730

INFORMATION PROCESSING APPARATUS, METHOD OF CONTROLLING INFORMATION PROCESSING APPARATUS, AND STORAGE MEDIUM

Canon Kabushiki Kaisha, ...

1. An information processing apparatus, comprising:one or more processors; and
a memory having stored thereon instructions which, when executed by the one or more processors, cause the information processing apparatus to:
obtain captured images that are captured by an image capturing unit at a plurality of positions and orientations;
generate a guidance instruction for causing a captured image corresponding to a position and orientation substantially the same as a captured image of a stipulated state to be obtained;
output the guidance instruction;
determine whether or not one captured image obtained after output of the guidance instruction was captured at a position and orientation substantially the same as the captured image of the stipulated state;
if the one captured image is determined to be captured at a position and orientation substantially the same as the captured image of the stipulated state, generate or update a three-dimensional map from three-dimensional coordinates of a feature included in a captured image based on the plurality of captured images including the one captured image that are captured at the plurality of positions and orientations; and
control to display the guidance instruction.

US Pat. No. 10,171,729

DIRECTIONAL ADJUSTMENT FOR A CAMERA BASED ON EXPOSURE QUALITY INFORMATION

HUAWEI TECHNOLOGIES CO., ...

1. An automatic camera adjustment method for a camera, applied to an electronic device comprising:obtaining, by the electronic device, exposure quality information of a preview image in a shot of the camera by:
obtaining brightness distribution information of the preview image; and
obtaining the exposure quality information of the preview image according to the brightness distribution information of the preview image;
determining, by the electronic device, a parameter for rotating the camera including a direction based on the exposure quality information, by:
determining, based on the exposure quality information, whether the camera needs to be rotated;
dividing the preview image into a plurality of sub-images in a direction parallel to a rotational axis of the camera when the camera needs to be rotated;
obtaining brightness distribution information of each of the plurality of sub-images; and
determining the direction by comparing the brightness distribution information for the each of the plurality of sub-images with the brightness distribution information of the preview image; and
adjusting, by the electronic device, the camera based on the direction for rotating the camera by:
rotating the camera based on the direction; and
stopping the rotation of the camera when exposure quality of the preview image meets a preset condition.
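
The direction decision can be sketched as below: the preview is split into vertical strips parallel to the rotation axis, each strip's brightness is compared against a target, and the camera rotates toward the side of the best-exposed strip. The strip statistics, target value, and names are assumptions for illustration.

    # Hypothetical sketch: pick a rotation direction from per-strip brightness.
    def rotation_direction(strip_means, target_mean=0.5):
        """strip_means: mean brightness of each vertical strip, left to right,
        normalized to 0..1."""
        errors = [abs(m - target_mean) for m in strip_means]
        best = errors.index(min(errors))           # strip closest to good exposure
        mid = (len(strip_means) - 1) / 2.0
        if best == mid:
            return "none"                          # center strip already best exposed
        return "left" if best < mid else "right"

    # e.g. rotation_direction([0.9, 0.7, 0.4]) -> "right" (the better-exposed side)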