US Pat. No. 10,117,200

METHOD AND APPARATUS FOR ACQUIRING SYNCHRONIZATION FOR DEVICE TO DEVICE TERMINAL IN WIRELESS COMMUNICATION SYSTEM

LG ELECTRONICS INC., Seo...

1. A method of obtaining synchronization, which is obtained by a D2D (device to device) UE in a wireless communication system, comprising the steps of:measuring at least one D2D synchronization signal reception power for received D2D synchronization signals;
calculating D2D synchronization signal reception quality using the measured D2D synchronization signal reception power;
selecting a D2D synchronization signal among the received D2D synchronization signals based on the D2D synchronization signal reception quality; and
obtaining a synchronization from the selected D2D synchronization signal,
wherein the D2D synchronization signal reception quality corresponds to a value resulting from dividing the D2D synchronization signal reception power by a difference between total reception power and the D2D synchronization signal reception power, and
wherein the total reception power is related to a symbol period corresponding to a predetermined symbol length based on a timing at which a peak occurs while the D2D synchronization signal and predetermined sequences are correlated with each other.
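
The claimed reception-quality metric reduces to a signal-to-(total-minus-signal) power ratio. The following is a minimal sketch, not taken from the patent, of that computation and of selecting the best synchronization source; the input format (source id, sync power, total power) is an assumption made purely for illustration.

```python
# Sketch of the claimed quality metric: quality = S / (P_total - S), where S is
# the D2D synchronization signal reception power and P_total is the total
# reception power over the symbol period around the correlation peak.

def d2d_sync_quality(sync_power: float, total_power: float) -> float:
    """Return S / (P_total - S); larger values mean the sync signal dominates."""
    interference_plus_noise = total_power - sync_power
    if interference_plus_noise <= 0.0:
        return float("inf")  # sync signal accounts for all measured power
    return sync_power / interference_plus_noise

def select_sync_source(measurements):
    """Pick the sync source with the best quality.

    `measurements` is an iterable of (source_id, sync_power, total_power)
    tuples, a hypothetical input format chosen for illustration.
    """
    return max(measurements, key=lambda m: d2d_sync_quality(m[1], m[2]))[0]

# Source "B" wins: 0.6 / (1.0 - 0.6) = 1.5 versus 0.2 / 0.8 = 0.25 for "A".
print(select_sync_source([("A", 0.2, 1.0), ("B", 0.6, 1.0)]))
```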

US Pat. No. 10,116,896

DISPLAY APPARATUS AND CONTROL METHOD THEREOF

SAMSUNG ELECTRONICS CO., ...

1. A display apparatus comprising:a first signal transmission device comprising:
a first video cable configured to selectively transmit a first image signal transmitted by a first method and a second image signal transmitted by a second method;
an audio cable configured to transmit a sound signal, and
a first output connector connected to the first video cable and the audio cable;
a second signal transmission device comprising:
a second video cable configured to transmit a third image signal transmitted by the second method, and
a second output connector connected to the second video cable;
a receiver comprising a first input terminal to which the first output connector is configured to be connected, and a second input terminal to which the second output connector is configured to be connected; and
a controller configured to:
determine an image output mode based on whether the first output connector is connected to the first input terminal and whether the second output connector is connected to the second input terminal,
in response to determining that the first input terminal is connected to the first output connector and the second input terminal is not connected to the second output connector, determine the image output mode as a first mode to output a composite image by processing a composite signal, and
in response to determining that the first input terminal is connected to the first output connector and the second input terminal is connected to the second output connector, determine the image output mode as a second mode to output a component image by processing a plurality of component signals.
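
The controller's mode decision depends only on which output connectors are detected at the two input terminals. A minimal sketch of that decision logic, assuming simple boolean connection-detection inputs (all names are illustrative, not from the patent):

```python
# Toy sketch of the claimed image-output-mode decision.

def determine_image_output_mode(first_connected: bool, second_connected: bool) -> str:
    if first_connected and not second_connected:
        return "composite"   # first mode: process a composite signal
    if first_connected and second_connected:
        return "component"   # second mode: process a plurality of component signals
    return "undetermined"    # cases not covered by the claim

print(determine_image_output_mode(True, False))  # -> composite
print(determine_image_output_mode(True, True))   # -> component
```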

US Pat. No. 10,114,946

METHOD AND DEVICE FOR DETECTING MALICIOUS CODE IN AN INTELLIGENT TERMINAL

BEIJING QIHOO TECHNOLOGY ...

1. A method for detecting malicious code in an intelligent terminal, comprising:acquiring a virtual machine executable file of an application from an application layer of an intelligent terminal operating system;
decompiling the virtual machine executable file to obtain a decompiled function information structure;
parsing the decompiled function information structure to obtain a virtual machine instruction sequence and a virtual machine mnemonic sequence corresponding to the virtual machine instruction sequence;
analyzing and determining function functionality of the virtual machine mnemonic sequence, and determining a target feature according to the virtual machine instruction sequence corresponding to the virtual machine mnemonic sequence having the function functionality; and
matching the target feature using preset malicious codes feature library, and if matching succeeds, determining that the virtual machine executable file of the application contains malicious codes.
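
The final step of the claim is a lookup of extracted target features against a preset malicious-code feature library. A toy sketch of just that matching step follows; the decompilation and mnemonic analysis are far more involved, and every name and data shape below is an assumption.

```python
# Illustrative matching of target features against an assumed feature library.

MALICIOUS_FEATURE_LIBRARY = {
    ("invoke-virtual", "sendTextMessage"),   # e.g. silent SMS sending
    ("invoke-virtual", "getDeviceId"),       # e.g. device-identifier harvesting
}

def extract_target_features(instruction_sequence):
    """Pair each mnemonic with the operand it references (toy heuristic)."""
    return {(mnemonic, operand) for mnemonic, operand in instruction_sequence}

def contains_malicious_code(instruction_sequence) -> bool:
    """Matching succeeds when any target feature appears in the library."""
    return bool(extract_target_features(instruction_sequence) & MALICIOUS_FEATURE_LIBRARY)

sample = [("const-string", "hello"), ("invoke-virtual", "sendTextMessage")]
print(contains_malicious_code(sample))  # -> True
```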

US Pat. No. 10,114,502

TOUCH PANEL COMPRISING TOUCH ELECTRODES IN TWO AREAS IN WHICH A DISTANCE BETWEEN TWO ADJACENT TOUCH ELECTRODES IN THE FIRST AREA DIFFERS FROM THAT OF THE SECOND AREA

INTERFACE OPTOELECTRONIC ...

1. A touch panel comprising:a first touch area comprising a plurality of first sensing electrodes electrically insulated from each other, every two adjacent first sensing electrodes being spaced from each other by a first distance; and
a second touch area comprising a plurality of second sensing electrodes electrically insulated from each other, every two adjacent second sensing electrodes being spaced from each other by a second distance;
wherein the second distance is less than the first distance;
wherein the first sensing electrodes and at least one of the second sensing electrodes detect for touch operation at different time periods;
wherein the first distance is twice the second distance;
wherein the touch panel comprises a first working mode and a second working mode; wherein in the first working mode, all of the first sensing electrodes and at least one of the second sensing electrodes is in work; wherein in the second working mode, all of the second sensing electrodes are in work, and all of the first sensing electrodes are in an off state;
wherein a number of the second sensing electrodes in the first working mode is less than a number of the second sensing electrodes in the second working mode;
wherein the second sensing electrodes are defined as a first group and a second group, and the second sensing electrodes of the first group and the second sensing electrodes of the second group are alternately arranged;
wherein only one second sensing electrode of the second group is arranged between every two adjacent second sensing electrodes of the first group;
and wherein all of the second sensing electrodes are arranged in a column, the first group comprises all odd-numbered second sensing electrodes and the second group comprises all even-numbered second sensing electrodes; wherein in the first working mode, all of the second sensing electrodes of the first group or all of the second sensing electrodes of the second group are in work.

US Pat. No. 10,114,141

SUBSURFACE RESISTIVITY MODELING WITH ELECTROMAGNETIC FIELDS USING A CONDUCTIVE CASING

GroundMetrics, Inc., San...

1. A method for conducting an electromagnetic survey of subsurface targets of interest by calculating AC electromagnetic fields below an earth surface comprising:modeling the earth below the earth surface using a known distribution of electrical resistivity, or a representative approximation of the electrical resistivity distribution derived from current or historical data, to form a subsurface model;
transmitting current from a source, at least in part, along a conducting casing of a borehole to produce a primary electromagnetic field, said field being a DC electric field;
calculating the primary electromagnetic field produced by the casing to determine a casing model and then create an equivalent electromagnetic source based on the casing model;
representing the casing within the subsurface model with the equivalent electromagnetic source embedded in the subsurface model, wherein the casing is removed from the subsurface model and the equivalent electromagnetic source is located where the casing was located;
calculating the AC electromagnetic fields produced by the equivalent electromagnetic source; and
determining a depth, thickness or lateral extent of the target of interest based on the AC electromagnetic fields produced by the equivalent electromagnetic source.

US Pat. No. 10,113,730

SUSPENDED BULB LAMP

1. A suspended bulb lamp comprising:a lamp holder with a top and a lower portion;
a bulb shell connected to the lamp holder;
an LED light disposed inside the bulb shell and connected to the lamp holder;
a pull cord for suspending the lamp holder having two ends;
a lamp holder shell covering an upper portion of the lamp holder;
wherein,
the lamp holder shell is detachably connected to the lamp holder;
a first seal ring is disposed between the lamp holder shell and the lamp holder;
a control switch for controlling the LED light is disposed on the top of the lamp holder;
an actuating member for triggering the control switch is disposed inside the lamp holder shell;
a spring acts on the actuating member to enable the actuating member to move up and down within the lamp holder shell; and
one end of the pull cord is connected to the actuating member, while the other end of the pull cord is a suspended free end.

US Pat. No. 10,112,896

METHOD FOR SYNTHESIZING DISSYMMETRIC SULFOETHER

SOOCHOW UNIVERSITY, Suzh...

1. A method for synthesizing dissymmetric sulfoether, comprising:a) under the condition of tetrabutylammonium halide catalysis, compounds having a structure of formula (I), compounds having a structure of formula (II) and salts having sulfur and oxygen are reacted in a solvent to give dissymmetric sulfoether having a structure of formula (III);

wherein, R1 is selected from phenyl, substituted phenyl, naphthyl, substituted naphthyl, thienyl or substituted thienyl; R2 is selected from hydrogen, phenyl, substituted phenyl, naphthyl, substituted naphthyl, thienyl or substituted thienyl; or R1, R2 form a fluorene ring or a thioxanthone ring with the C to which they are attached;
R3 is selected from hydrogen or alkyl;
R4 is selected from hydrogen, phenyl, substituted phenyl, naphthyl, substituted naphthyl, thienyl or substituted thienyl; R5 is selected from hydrogen; or R4, R5 form a fluorene ring or a thioxanthone ring with the C to which they are attached;
R6 is selected from C1˜C30 alkyl, cyano-substituted C1˜C20 alkyl, cyano-substituted C1˜C20 benzyl, C1˜C5 alkyl-substituted benzyl, halogen-substituted benzyl, fluorenyl and any of the structural substituents represented in formulas (a-1)˜(a-9):
in formulas (a-4)˜(a-9), m1, m2, m3, n, q, p1, p2, r1, r2 and e are integers from 0 to 5, respectively;
X is selected from Cl, Br or I;
said salts having sulfur and oxygen include sodium thiosulfate and/or sodium sulfite.

US Pat. No. 10,112,011

TIME AVERAGED BASAL RATE OPTIMIZER

DexCom, Inc., San Diego,...

1. A method for optimizing a basal rate profile for use with continuous insulin therapy, comprising:providing a programmed basal rate profile for insulin therapy, wherein the basal rate profile comprises an insulin delivery schedule that includes one or more blocks of time, and wherein each block of time defines an insulin delivery rate;
periodically or intermittently updating the programmed basal rate profile based on a retrospective analysis of continuous glucose sensor data over a predetermined time window; and
adjusting the basal rate profile of the updated programmed basal rate profile in response to real time continuous glucose sensor data indicative of actual or impending hyperglycemia or hypoglycemia, wherein adjusting comprises dynamically increasing or decreasing the basal rate of the updated programmed basal rate profile in real time in response to real time continuous glucose sensor data indicating actual hyperglycemia, impending hyperglycemia, actual hypoglycemia, or impending hypoglycemia.
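
The claim layers a real-time correction on top of the retrospectively updated profile. A minimal sketch of one possible form of that real-time adjustment; the thresholds and scaling factors are purely illustrative assumptions, not DexCom's values or method.

```python
# Toy real-time adjustment of an already-updated programmed basal rate.

HYPER_MGDL = 180.0   # assumed hyperglycemia threshold
HYPO_MGDL = 70.0     # assumed hypoglycemia threshold

def adjust_basal_rate(programmed_rate_u_per_hr: float, glucose_mgdl: float) -> float:
    """Dynamically scale the programmed rate when sensor data indicates
    actual or impending hyper-/hypoglycemia; otherwise keep the profile rate."""
    if glucose_mgdl >= HYPER_MGDL:
        return programmed_rate_u_per_hr * 1.2   # increase delivery
    if glucose_mgdl <= HYPO_MGDL:
        return programmed_rate_u_per_hr * 0.5   # decrease delivery
    return programmed_rate_u_per_hr

print(adjust_basal_rate(1.0, 200.0))  # -> 1.2
print(adjust_basal_rate(1.0, 60.0))   # -> 0.5
```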

US Pat. No. 10,112,009

INTRAVENOUS PUMPING AIR MANAGEMENT SYSTEMS AND METHODS

Baxter International Inc....

1. An intravenous (“IV”) liquid delivery system comprising:an IV pump tubing set;
a pump actuator operable with the IV pump tubing set;
an air removal device located downstream of the pump actuator, the air removal device including an air passing but liquid retaining filter and a liquid passing but air retaining filter;
an indicating device that indicates that the air removal device is present; and
a control unit operable with the indicating device, the control unit configured so that when the air removal device is indicated as being present, the pump actuator is allowed to operate the IV pump tubing set even if air is detected upstream of the air removal device.

US Pat. No. 10,113,266

TREATMENT OF FILAMENTS OR YARN

E I DU PONT DE NEMOURS AN...

1. A method for treating a filament or yarn comprising:(a) forming a first mixture of reagents comprising first multi-functional isocyanate oligomers and first multi-functional epoxy oligomers, wherein the ratio of total isocyanate groups to total epoxy groups in the first mixture is in the range of from 0.8 to 1.2,
(b) dipping a filament or yarn into the first mixture of reagents,
(c) removing solvent from the dipped filament or yarn,
(d) dipping the filament or yarn from step (c) into a catalyst whereby the first isocyanate oligomers and the first epoxy oligomers form a network on the surface of the filament or yarn,
(e) forming a second mixture of reagents comprising second multi-functional isocyanate oligomers and second multi-functional epoxy oligomers, wherein the ratio of total isocyanate groups to total epoxy groups in the second mixture is in the range of from 0.8 to 1.2,
(f) dipping the filament or yarn of step (d) into the second mixture of reagents, and
(g) dipping the filament or yarn from step (f) into a catalyst whereby the second isocyanate oligomers and the second epoxy oligomers react to crosslink and form a network on the outer surface of the network formed by the first epoxy and first isocyanate oligomers of step (d), with the proviso that the double bonds present in the second epoxy and second isocyanate oligomers do not participate in the crosslinking reaction.

US Pat. No. 10,117,173

OUT-OF-BAND POWER DOWN NOTIFICATION

Parallel Wireless, Inc., ...

1. A mobile base station for reducing coverage interruptions for users connected thereto, comprising:a vehicle bus notification module coupled to a vehicle electrical power system and configured to determine a vehicle battery power level, the vehicle electrical power system powering the mobile base station;
a first radio access network interface for communicating with mobile devices using a first radio access technology;
a backhaul interface for communicating with an operator core network;
a processor, in communication with the vehicle bus notification module, the first radio access network interface, and the backhaul interface; and
a memory, further comprising instructions that when executed by the processor, perform steps comprising:
receiving a vehicle bus low power alert at the vehicle bus notification module;
requesting, in response to receiving the vehicle bus low power alert, from a network server, a mobile device detach procedure for the mobile devices;
sending, in response to receiving the vehicle bus low power alert, to the network server, a message to cause the network server to perform power control of a neighboring base station to increase or decrease transmission power; and
sending a message via the backhaul interface to the operator core network to request a notification to be sent to the mobile devices, the notification configured to include human-readable information regarding the vehicle battery power level of the mobile base station,
thereby enabling the mobile devices to be notified via the operator core network when the vehicle battery power level is low.
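
A minimal sketch of the claimed reaction to a vehicle-bus low-power alert; the network-server and core-network interfaces and the message contents below are assumptions made for illustration, not the patented implementation.

```python
# Toy handler for a vehicle-bus low-power alert at a mobile base station.

def handle_low_power_alert(battery_level_pct: float, network_server, core_network) -> list:
    actions = []
    network_server.request_detach_procedure()                 # detach attached mobiles
    actions.append("detach_requested")
    network_server.request_neighbor_power_control(increase=True)  # cover the gap
    actions.append("neighbor_power_control_requested")
    core_network.request_user_notification(                   # human-readable warning
        f"Mobile base station battery at {battery_level_pct:.0f}%; coverage may end soon.")
    actions.append("user_notification_requested")
    return actions

class _Stub:
    """Stand-in for the network server / core network interfaces."""
    def __getattr__(self, name):
        return lambda *args, **kwargs: print(f"{name} called")

print(handle_low_power_alert(12.0, _Stub(), _Stub()))
```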

US Pat. No. 10,117,163

NETWORK ACCESS METHOD AND MOBILE COMMUNICATION TERMINAL

GUANGDONG OPPO MOBILE TEL...

1. A network access method, comprising:transmitting, before a mobile communication terminal reaches a visited place, a request for acquiring shared network information to a shared server if the mobile communication terminal detects that a network identification of a public land mobile network (PLMN) of the visited place does not exist in a local memory of the mobile communication terminal, the local memory being different from a subscriber identity module (SIM) card of the mobile communication terminal, wherein the network identification of the PLMN of the visited place is a network identification of an operator of the visited place, which signs a roaming agreement with an operator of a home place to which a subscriber identity module attached to the mobile communication terminal belongs, and the shared network information comprises the network identification of the PLMN of the visited place which a second terminal pushes to the shared server;
receiving, before the mobile communication terminal reaches the visited place, the shared network information which the shared server transmits in response to the request, and acquiring the network identification of the PLMN of the visited place from the shared network information;
adding, before the mobile communication terminal reaches the visited place, the network identification of the PLMN of the visited place to an equivalent PLMN (EPLMN) list by the mobile communication terminal, wherein the EPLMN list comprises a network identification of a PLMN of the home place and the network identification of the PLMN of the visited place; and
accessing a network according to the EPLMN list after the mobile communication terminal is moved from the home place to the visited place and the mobile communication terminal is turned on or closes a current airplane mode;
wherein before transmitting the request for acquiring the shared network information to the shared server if the mobile communication terminal detects that the network identification of the PLMN of the visited place does not exist in the local memory of the mobile communication terminal, the method comprises:
determining, before the mobile communication terminal reaches the visited place, whether the network identification of the PLMN of the visited place exists in the memory of the mobile communication terminal;
triggering, before the mobile communication terminal reaches the visited place, to perform a step of transmitting the request for acquiring the shared network information to the shared server, if a determined result is no; and
triggering, before the mobile communication terminal reaches the visited place, to perform a step of adding the network identification of the PLMN of the visited place to the EPLMN list, if the determined result is yes.
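
The claimed flow checks local memory for the visited-place PLMN identity, fetches it from the shared server when it is absent, and adds it to the EPLMN list before roaming. A sketch under those assumptions; the shared-server interface and data format are hypothetical.

```python
# Toy preparation of the EPLMN list before reaching the visited place.

def prepare_eplmn_list(local_plmn_ids, eplmn_list, visited_plmn_id, fetch_shared_info):
    """`fetch_shared_info` stands in for the request to the shared server and is
    assumed to return a dict with a "plmn_id" key for the visited place."""
    if visited_plmn_id is None or visited_plmn_id not in local_plmn_ids:
        visited_plmn_id = fetch_shared_info()["plmn_id"]   # shared network information
    if visited_plmn_id not in eplmn_list:
        eplmn_list.append(visited_plmn_id)                 # now equivalent to the home PLMN
    return eplmn_list

# Example with a stubbed shared server; "310-260" is the home PLMN id here.
eplmn = prepare_eplmn_list(set(), ["310-260"], None, lambda: {"plmn_id": "208-01"})
print(eplmn)  # -> ['310-260', '208-01']
```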

US Pat. No. 10,117,122

TELECOMMUNICATIONS APPARATUS AND METHODS

SONY CORPORATION, Tokyo ...

1. A terminal device configured to operate in a wireless network configured to support communications with terminal devices using a primary component carrier operating on radio resources within a first frequency band and a secondary component carrier operating on radio resources within a second frequency band, wherein the wireless network supports a connected mode of operation in which terminal devices receive user-plane data from the wireless network using the primary and/or secondary component carrier and an idle mode of operation in which terminal devices do not receive user-plane data from the wireless network, the terminal device comprising:circuitry configured to
receive first Radio Resource Control (RRC) signaling transmitted from the wireless network to the terminal device over the radio resources within the first frequency band, the first RRC signaling indicating a measurement configuration for making measurements of radio channel conditions for radio resources within the second frequency band, wherein the first frequency band is a licensed frequency band in the wireless network and the second frequency band is an unlicensed frequency band in the wireless network;
identify the measurement configuration for making the measurements of radio channel conditions for radio resources within the second frequency band based on the first RRC signaling received from the wireless network, wherein the measurement configuration relates to at least measuring a signal strength received by the terminal device on the radio resources within the second frequency band;
measure radio channel conditions for radio resources within the second frequency band in accordance with the measurement configuration;
determine if the measurement of the signal strength comprising measurement of radio channel conditions meets a trigger criterion, and each time the trigger criterion is met, transmit a measurement report to the network infrastructure equipment indicating that the trigger criterion has been met.

US Pat. No. 10,117,121

TERMINAL DEVICE AND BASE STATION DEVICE

SHARP KABUSHIKI KAISHA, ...

1. A terminal device, comprising:a measurement unit which performs
first measurement for performing measurement by using a first reference signal and
second measurement for performing measurement by using a second reference signal;
a reception unit which receives criteria information of criteria for triggering of a measurement reporting event; and
a transmission unit which transfers a measurement report message including a measurement result of the first measurement or a measurement result of the second measurement, wherein
the measurement reporting event includes
a first measurement reporting event based on the first measurement and
a second measurement reporting event based on the second measurement; and
the criteria information includes,
criteria information on triggering criteria of the first measurement reporting event, and
criteria information on triggering criteria of the second measurement reporting event;
the measurement report message includes,
the measurement result of the first measurement in a case where the first measurement reporting event is triggered, and
the measurement result of the second measurement in a case where the second measurement reporting event is triggered,
the criteria information of the second measurement reporting event includes second information for specifying a triggering quantity used for evaluating criteria for triggering of an event of a measurement reporting related to the second reference signal,
the second information is information indicating CSI-RSRP (Channel State Information Reference Signal Received Power), and
the second reference signal is a CSI-RS (Channel State Information Reference Signal).

US Pat. No. 10,117,110

STRUCTURING AND METHOD FOR WIRELESS RADIO ACCESS NETWORK DEPLOYMENT

1. A radio access network system comprising one or more radio access network elements communicatively linked to form at least a portion of a radio access network, the one or more radio access network elements each comprising one or more processors, the system further comprising:one or more reconfigurable connection points comprising one or more processors executable to provide all or a subset of radio transmission functions for one or more radio access technologies, the radio transmission functions linking user equipment operating on the radio access technology to the radio access network; and
one or more reconfigurable radio access network functional elements each executable on at least one of the radio access network elements, each of the radio access network functional elements linked directly or indirectly to one or more of the reconfigurable connection points, each radio access network functional element configurable to provide one or more non-radio transmission functions for enabling communication between the user equipment and a core network linked to the radio access network, each of the radio access network functional elements reconfigurable to permit introduction and removal of non-radio transmission functions.

US Pat. No. 10,117,106

BACKOFF MECHANISM TECHNIQUES FOR SPATIAL REUSE

QUALCOMM Incorporated, S...

1. A method for wireless communication, comprising:receiving, at a first device, a packet;
decoding at least a portion of a preamble of the packet to determine whether the packet is sent by a member of an overlapping basic service set (OBSS), wherein the decoding comprises:
identifying a sequence in the preamble of the packet;
determining whether the sequence matches a sequence for packets intended for members of a BSS associated with the first device; and
determining that the packet is sent by the member of the OBSS in response to a determination that the match fails;
deferring a backoff operation in response to a start of the decoding; and
resuming the backoff operation in response to the determination that the packet is sent by the member of the OBSS, wherein the backoff operation is resumed before an end of the packet.
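
The key behavior is that the backoff countdown pauses while the preamble is being decoded and resumes early, before the packet ends, once the sequence check identifies the packet as OBSS traffic. A minimal illustrative sketch (not QUALCOMM's implementation; names and data types are assumptions):

```python
# Toy defer/resume logic around preamble decoding.

class BackoffState:
    def __init__(self, slots_remaining: int):
        self.slots_remaining = slots_remaining
        self.deferred = False

def on_preamble_decode_start(state: BackoffState) -> None:
    state.deferred = True                       # defer backoff while decoding

def on_sequence_identified(state: BackoffState, rx_sequence: bytes,
                           own_bss_sequence: bytes) -> bool:
    is_obss = rx_sequence != own_bss_sequence   # match failure => OBSS packet
    if is_obss:
        state.deferred = False                  # resume backoff before packet end
    return is_obss

state = BackoffState(slots_remaining=7)
on_preamble_decode_start(state)
print(on_sequence_identified(state, b"\x01", b"\x02"), state.deferred)  # -> True False
```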

US Pat. No. 10,117,104

RESOURCE ALLOCATION METHOD AND APPARATUS FOR COOPERATIVE TRANSMISSION OF BASE STATIONS IN WIRELESS COMMUNICATION SYSTEM

Samsung Electronics Co., ...

1. A method by a first base station for performing cooperative resource allocation with at least one second base station in a wireless communication system, the method comprising:transmitting a first reference signal generated based on a first digital bit sequence to a mobile station;
receiving first channel quality information determined based on a signal to interference and noise ratio (SINR) of the first reference signal from the mobile station for a resource allocation;
transmitting, to the second base station, a resource allocation request message, wherein the resource allocation request message includes information on a specific data to be transmitted to the mobile station when the second base station retains the specific data;
receiving, from the second base station, a resource allocation response message including one of accept and reject indications to the resource allocation request message;
transmitting a resource allocation information generated based on the first channel quality information to the mobile station; and
transmitting a first packet, derived from data to be transmitted to the mobile station, to the mobile station in cooperation with the second base station based on the resource allocation information,
wherein second channel quality information, determined based on a SINR of a second reference signal generated based on a second digital bit sequence and transmitted from the second base station to the mobile station, is transmitted from the mobile station to the second base station for the resource allocation,
wherein the first digital bit sequence is specific to the first base station and the second digital bit sequence is specific to the second base station,
wherein the first packet is transmitted from the first base station to the mobile station based on the first channel quality information,
wherein a second packet, different from the first packet, derived from the data, is transmitted from the second base station to the mobile station based on the second channel quality information and the information on the specific data, and
wherein the first packet and the second packet are combined into data by the mobile station.

US Pat. No. 10,117,092

MOBILE DEVICE TRANSFER STATION

Future Dial, Inc., Sunny...

1. A system comprising:a body having a plurality of compartments, wherein each respective compartment is adapted to receive and retain a respective mobile computing device, and wherein each respective compartment includes a respective connector for electrically coupling to the respective mobile computing device; and
a control system comprising:
a processor;
memory coupled to the processor and storing instructions that, when executed by the processor, cause the control system to automatically:
detect electrical coupling, to the control system, of a first mobile computing device retained in a first compartment of the body via a first connector in the first compartment;
retrieve data from the first mobile computing device via the first connector;
analyze the retrieved data; and
configure the first mobile computing device based on the analysis of the retrieved data, wherein configuring the first mobile computing device includes:
storing content from the first mobile computing device in a data repository in communication with the control system; and
erasing the content from the first mobile computing device; and
a digital camera in communication with the control system, wherein the memory further stores instructions to cause the control system to interface with the digital camera, comprising:
receiving an image of the first mobile computing device from the digital camera;
performing an image recognition analysis on the received image to identify a feature of the first mobile computing device;
analyzing the feature of the first mobile computing device; and
assigning a condition rating to the identified feature based on the analysis of the feature.

US Pat. No. 10,117,084

CONTEXT-DEPENDENT ALLOCATION OF SHARED RESOURCES IN A WIRELESS COMMUNICATION INTERFACE

Apple Inc., Cupertino, C...

1. A method implemented in a user device having a communication interface configured to concurrently receive and process global navigation satellite system (GNSS) signals and cellular voice signals, the method comprising:determining whether a cellular voice call is in progress;
in response to determining that a cellular voice call is in progress:
determining whether the cellular voice call is an emergency call; and
in response to determining that the cellular voice call is an emergency call:
identifying a GNSS scenario based on a current location estimation;
determining a current quality of service metric based on received GNSS signals;
determining a quality of service threshold based at least in part on the GNSS scenario;
determining whether the current quality of service metric exceeds the quality of service threshold;
if the quality of service metric exceeds the quality of service threshold, selecting a first operating state for the communication interface, wherein the first operating state prioritizes receiving and processing of cellular voice signals; and
if the quality of service metric does not exceed the quality of service threshold, selecting a second operating state for the communication interface, wherein the second operating state prioritizes receiving and processing of GNSS signals.
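
During an emergency call the claim compares a current GNSS quality-of-service metric against a scenario-dependent threshold to decide which signal type the shared interface prioritizes. A sketch of that decision, with an assumed threshold table and metric scale (neither is taken from the patent):

```python
# Toy operating-state selection for a shared GNSS/cellular interface.

QOS_THRESHOLDS = {"open_sky": 0.9, "urban_canyon": 0.6, "indoor": 0.3}  # assumed values

def select_operating_state(is_voice_call: bool, is_emergency: bool,
                           gnss_scenario: str, current_qos: float) -> str:
    if not (is_voice_call and is_emergency):
        return "default"
    threshold = QOS_THRESHOLDS.get(gnss_scenario, 0.5)
    if current_qos > threshold:
        return "prioritize_voice"   # first operating state: cellular voice favored
    return "prioritize_gnss"        # second operating state: GNSS favored

print(select_operating_state(True, True, "urban_canyon", 0.7))  # -> prioritize_voice
print(select_operating_state(True, True, "urban_canyon", 0.4))  # -> prioritize_gnss
```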

US Pat. No. 10,117,082

WIRELESS TECHNOLOGY BRIDGING SYSTEM

PAYPAL, INC., San Jose, ...

1. A wireless communication system, comprising:a first wireless subsystem that is configured to receive wireless communications of a first wireless technology type;
a second wireless subsystem that is configured to send wireless communications of a second wireless technology type that are different than the wireless communications of the first wireless technology type; and
a bridging engine that is coupled to each of the first wireless subsystem and the second wireless subsystem, wherein the bridging engine is configured to:
convert a first wireless communication of the first wireless technology type, which was received through the first wireless subsystem, to a first wireless communication of the second wireless technology type; and
provide the first wireless communication of the second wireless technology type to the second wireless subsystem such that the second wireless subsystem sends the first wireless communication of the second wireless technology type to at least a first device.

US Pat. No. 10,117,080

APPARATUS AND METHOD OF DETERMINING AN OPEN STATUS OF A CONTAINER USING RFID TAG DEVICES

Walmart Apollo, LLC, Ben...

1. A radio frequency identification (RFID) device comprising:a first portion of a container, the container being in a closed orientation and configured to be moved into an open orientation by a user;
a second portion of the container removably coupled to the first portion;
a first RFID tag fixed to the first portion and configured to communicate only in a near field of RFID communication;
a conductive element implemented at the second portion and located in proximity to the first RFID tag when the container is in the closed orientation and configured to function as a far field antenna for the first RFID tag such that the first RFID tag is readable by an RFID reader in a far field of RFID communication when the container is in the closed orientation; and
a second RFID tag fixed to the first portion such that the second RFID tag is shielded by the second portion and is not readable by the RFID reader when the container is in the closed orientation;
wherein the first portion and the second portion are configured such that upon a user action to open the container at least a first amount, the first portion and the second portion move relative to each other decoupling the conductive element from the first RFID tag such that the first RFID tag is no longer readable in the far field indicating a first open status of the container; and
wherein the first portion and the second portion are further configured such that upon a user action to open the container at least a second amount, the first portion and the second portion move relative to each other such that the second RFID tag is no longer shielded and is readable by the RFID reader indicating a second open status of the container.

US Pat. No. 10,117,072

SYSTEM AND METHOD FOR EFFICIENT SHORT MESSAGE SERVICE DELIVERY USING PRIVATE SUBSCRIBER IDENTITY INFORMATION

Verizon Patent and Licens...

1. A device, comprising:one or more memories; and
one or more processors, communicatively coupled to the one or more memories, to:
receive, from a subscriber identity module (SIM) over-the-air (OTA) (SIM OTA) device, a short message peer-to-peer (SMPP) message,
the SMPP message including an international mobile subscriber identity (IMSI) associated with a user device, and
the SMPP message being associated with modifying a universal integrated circuit card (UICC) of the user device;
determine, using the IMSI associated with the user device and without using a mobile station international subscriber directory number (MSISDN) of the user device, a network device to which the user device is connected; and
provide, to the network device, a short message service (SMS) message associated with modifying the UICC of the user device to permit the UICC of the user device to be modified.

US Pat. No. 10,117,070

APPARATUS AND METHOD OF GROUP COMMUNICATIONS

QUALCOMM, Incorporated, ...

1. A method for group communication, comprising:receiving, at a machine-type communication (MTC) interworking function (MTC IWF), a group message request including a group message and a group identifier;
mapping, by the MTC IWF in response to receiving the group message request, the group identifier to a temporary mobile group identifier (TMGI) or other message identifier associated with the group message;
obtaining, from a home subscriber service (HSS) or a home location service (HLR), subscriber information for each device of a plurality of devices in a group identified by the group identifier or TMGI or other message identifier, wherein the subscriber information comprises, for each device in the group, a device identifier and one or more delivery mechanisms supported by the device, with each device providing the one or more delivery mechanisms supported by the device to the HSS or HLR;
determining, by a decision component of the MTC IWF and based, at least in part, on the one or more delivery mechanisms supported by each device, a message delivery mechanism for delivering the group message to each device or subgroup of devices in the group, wherein a first delivery mechanism is determined for a first device in the group and a second delivery mechanism is determined for a subgroup of devices including a plurality of second devices in the group, with the first delivery mechanism being different than the second delivery mechanism;
transmitting the group message to the first device in the group in response to determining the first delivery mechanism for the first device, the transmission to the first device including a subscription identifier associated with the first device, via a point-to-point protocol as the first delivery mechanism; and
transmitting the group message to the plurality of second devices in response to determining the second delivery mechanism for the plurality of second devices, the transmission to the plurality of second devices including the group identifier, via a one-to-many communication protocol as the second different delivery mechanism.

US Pat. No. 10,117,065

SYSTEMS AND METHODS FOR LEARNING WIRELESS TRANSCEIVER LOCATIONS AND UPDATING A SPATIALLY-DEPENDENT PATH-LOSS MODEL

Athentek Innovations, Inc...

1. A system to generate a spatially-dependent path-loss model associated with an indoor environment, the system comprising:wireless access points (APs) positioned throughout the indoor environment;
a calibration device moveable within the indoor environment such that the calibration device is configured to transmit wireless signals to the wireless APs and receive wireless signals transmitted by the wireless APs at a number of calibration reference points (RPs) located throughout the indoor environment; and
a server comprising a processing unit, a memory unit, and a server communication unit, wherein the server communication unit is in communication with the wireless APs and the calibration device, wherein the processing unit is programmed to:
divide a coordinate-plane representing the indoor environment into non-overlapping tiles,
obtain RP calibration transmit strengths of the wireless signals transmitted by the calibration device at each of the calibration RPs to the wireless APs at a first frequency, wherein the RP calibration transmit strengths are obtained from the calibration device or a wireless signal-strength database,
obtain AP transmit strengths of the wireless signals transmitted by each of the wireless APs to the calibration device at each of the calibration RPs at the first frequency, wherein the AP transmit strengths are obtained from each of the wireless APs or the wireless signal-strength database,
obtain RP-to-AP calibration RSSIs measured at each of the wireless APs from the wireless signals transmitted by the calibration device at each of the calibration RPs at the first frequency, wherein the RP-to-AP calibration RSSIs are obtained from each of the wireless APs or the wireless signal-strength database,
obtain AP-to-RP calibration RSSIs measured by the calibration device at each of the calibration RPs from the wireless signals transmitted by each of the wireless APs at the first frequency, wherein the AP-to-RP calibration RSSIs are obtained from the calibration device or the wireless signal-strength database,
calculate RP-to-AP distance vectors between each of the calibration RPs and the wireless APs,
construct a set of path-loss equations using the RP calibration transmit strengths at the first frequency, the RP-to-AP calibration RSSIs obtained at the first frequency, the RP-to-AP distance vectors, and tile-specific path-loss coefficient for each of the tiles within the indoor environment,
construct the set of path-loss equations using the RP calibration transmit strengths at the first frequency, the AP transmit strengths at the first frequency, the RP-to-AP calibration RSSIs obtained at the first frequency, the AP-to-RP calibration RSSIs obtained at the first frequency, the RP-to-AP distance vectors, and the tile-specific path-loss coefficient for each of the tiles within the indoor environment,
solve the set of path-loss equations to yield a value for the tile-specific path-loss coefficient for each of the tiles within the indoor environment, and
add AP-to-RP RSSIs to the wireless signal-strength database to update the wireless signal-strength database.
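
The claim does not spell out the form of the path-loss equations; one common choice consistent with the listed inputs is a log-distance model with a separate path-loss exponent per tile, which yields a linear system solvable by least squares. A sketch under that assumption (equation form, units, and toy data are all illustrative):

```python
# Assumed model: tx_dbm - rssi_dbm = 10 * log10(d) * sum_t frac[t] * n_t,
# where n_t is the tile-specific path-loss exponent and frac[t] is the
# fraction of the link path crossing tile t.

import numpy as np

def solve_tile_exponents(tx_dbm, rssi_dbm, distances_m, tile_fractions):
    """tile_fractions[i][t] = fraction of link i's path that crosses tile t."""
    tx = np.asarray(tx_dbm, dtype=float)
    rssi = np.asarray(rssi_dbm, dtype=float)
    d = np.asarray(distances_m, dtype=float)
    frac = np.asarray(tile_fractions, dtype=float)
    A = 10.0 * np.log10(d)[:, None] * frac   # one row of the system per link
    b = tx - rssi                            # measured path loss per link
    exponents, *_ = np.linalg.lstsq(A, b, rcond=None)
    return exponents

# Two calibration links crossing two tiles (toy numbers) -> [3.0, 2.6].
print(solve_tile_exponents([20.0, 20.0], [-10.0, -36.0], [10.0, 100.0],
                           [[1.0, 0.0], [0.5, 0.5]]))
```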

US Pat. No. 10,117,062

METHOD AND APPARATUS FOR VEHICULAR DEVICE-FUNCTION CONTROL

Ford Global Technologies,...

1. A system comprising:a processor configured to: transmit a series of impulses into a vehicle interior; receive data from a wearable-device receiver receiving the impulses, the data indicating arrival times and magnitudes of the impulses;
analyze the arrival times and magnitudes to determine a likely wearable device location; and
control a functionality aspect of a mobile device associated with the wearable device, based on the likely wearable device location;
wherein the processor is configured to determine one or more voxels in which the wearable device is likely located, the voxels having predefined associations to seating locations within the vehicle, and the determination based on a look-up of predefined expected signal values in a table predetermined for a vehicle model.

US Pat. No. 10,117,046

DISCRETE LOCATION CLASSIFICATION

Apple Inc., Cupertino, C...

1. A method for identifying a location of a mobile device, the method comprising:during each of a plurality of instances of time:
measuring, by the mobile device, one or more signal properties of one or more other devices across a time interval;
obtaining, by the mobile device, an identifier from each of the one or more other devices;
creating, by the mobile device, a data point to include the one or more signal properties, wherein each dimension of the data point corresponds to a respective one of the one or more other devices and a value for the dimension corresponds to a signal property for that dimension; and
storing, by the mobile device, the data point in a database of the mobile device, the database storing a plurality of data points corresponding to the plurality of instances of time;
analyzing, by the mobile device, the plurality of data points in the database to determine clusters of data points, wherein different clusters of data points correspond to different locations in physical space;
after determining the clusters of data points, detecting, by the mobile device, an event at an input device of the mobile device;
in response to detecting the event, measuring, by the mobile device, one or more new signal properties of one or more of the plurality of other devices at one or more new times;
creating, by the mobile device, a new data point from the one or more new signal properties; and
identifying, by the mobile device, a first cluster of the clusters of data points corresponding to the new data point by comparing the new data point with one or more data points in the first cluster and determining that the new data point is within a threshold distance of the one or more data points in the first cluster, thereby determining the location of the mobile device.
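
A minimal sketch of the matching step: treat each stored data point as a vector of per-device signal properties and assign the new point to a cluster whenever it lies within a threshold distance of one of that cluster's points. The Euclidean metric, the default value for unseen devices, and all names below are illustrative assumptions.

```python
# Toy cluster matching over per-device signal-property vectors (e.g. RSSI in dBm).

import math

def distance(p: dict, q: dict) -> float:
    keys = set(p) | set(q)
    return math.sqrt(sum((p.get(k, -100.0) - q.get(k, -100.0)) ** 2 for k in keys))

def identify_cluster(clusters: dict, new_point: dict, threshold: float = 10.0):
    """clusters maps a label to a list of stored data points (device id -> dBm)."""
    for label, points in clusters.items():
        if any(distance(new_point, p) <= threshold for p in points):
            return label
    return None  # no cluster within the threshold distance

clusters = {"desk": [{"ap1": -40, "ap2": -70}], "kitchen": [{"ap1": -75, "ap2": -45}]}
print(identify_cluster(clusters, {"ap1": -42, "ap2": -68}))  # -> desk
```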

US Pat. No. 10,117,034

LEAVING GROUP BASED ON MESSAGE FROM AUDIO SOURCE

Sonos, Inc., Santa Barba...

1. A first playback device comprising:a speaker driver;
one or more processors; and
tangible, non-transitory computer-readable media comprising a set of instructions that, when executed by the one or more processors, cause the first playback device to implement a method, the method including:
receiving, from a first audio information source, first audio information; sending the first audio information to a second playback device;
playing back, via the speaker driver, the first audio information in synchrony with the second playback device;
receiving, from a second audio information source, (i) a first message, and (ii) second audio information; and
in response to receiving the first message from the second audio information source:(i) determining that the first playback device is configured to playback the second audio information; (ii) stopping play back of the first audio information in synchrony with the second playback device; and (iii) playing back, via the speaker driver, the second audio information without sending the second audio information to the second playback device.

US Pat. No. 10,117,011

ADJUSTABLE HOLDER FOR A MICROPHONE ACCESSORY AND METHOD OF USE

The Music People, Inc., ...

1. A holder for securing a microphone, comprising:a bracket having a body extending along a bracket axis between a first end and a second end;
a mounting bar having a body extending along a mounting bar axis between a first end and a second end, the mounting bar having a platform extending from the second end of the body;
a first fastener securing the mounting bar to the bracket such that the mounting bar axis is generally parallel to the bracket axis, the first fastener being releasable such that the mounting bar is adjustable in a direction parallel to the bracket axis when the first fastener is released and the mounting bar is fixed relative to the bracket when the first fastener is secured;
an arm having a body extending along an arm axis between a first end and a second end;
a second fastener securing the first end of the arm to the platform, the second fastener being releasable such that the arm is rotatable about the mounting bar axis when the second fastener is released and the arm is fixed relative to the platform when the second fastener is secured;
a threaded post disposed on the second end of the arm, the threaded post being configured to mate with a microphone accessory;
wherein the arm is a first arm and the holder further comprises:
a second arm having a body extending along a second arm axis between a first end and a second end;
a third fastener securing the first end of the second arm to the platform, the third fastener being releasable such that the second arm is rotatable about the mounting bar axis when the third fastener is released and the second arm is fixed about the mounting bar axis when the third fastener is secured.

US Pat. No. 10,117,000

METHOD AND SYSTEM FOR HARDWARE AGNOSTIC DETECTION OF TELEVISION ADVERTISEMENTS

Silveredge Technologies P...

1. A computer-implemented method for a hardware agnostic detection of one or more advertisements broadcasted across one or more channels, the computer-implemented method comprising:normalizing, at an advertisement detection system with a processor, each frame of a pre-determined number of frames of a video corresponding to a broadcasted media content on a channel, wherein each frame being normalized based on a histogram normalization and histogram equalization and wherein each frame being normalized by adjusting luminous intensity value of each pixel to a desired luminous intensity value;
scaling, at the advertisement detection system with the processor, each frame of the corresponding pre-determined number of frames of the video to a pre-defined scale corresponding to the broadcasted media content on the channel, wherein the scaling of each frame being done by keeping a constant aspect ratio;
trimming, at the advertisement detection system with the processor, a first pre-defined region and a second pre-defined region of each frame by a pre-defined percentage of a frame width, a frame height and a pre-defined number of pixels in each frame, wherein each frame being trimmed based on calculation of a pre-determined height and a pre-determined width corresponding to the first pre-defined region having a channel logo and the second pre-defined region having a ticker;
extracting, at the advertisement detection system with the processor, a first set of audio fingerprints and a first set of video fingerprints corresponding to each trimmed frame of the pre-determined number of frames corresponding to the media content broadcasted on the channel, wherein the first set of audio fingerprints and the first set of video fingerprints being extracted sequentially in real time; and
generating, at the advertisement detection system with the processor, a set of digital signature values corresponding to the first set of video fingerprints, wherein the generation of each digital signature value of the set of digital signature values being done by:
dividing each prominent frame of one or more prominent frames into a pre-defined number of blocks, wherein each block of the pre-defined number of blocks having a pre-defined number of pixels;
grayscaling each block of each prominent frame of the one or more prominent frames;
calculating a first bit value and a second bit value for each block of the prominent frame, wherein the first bit value and the second bit value being calculated from comparing a mean and a variance for the pre-defined number of pixels in each block of the prominent frame with a corresponding mean and variance for a master frame in a master database; and
obtaining a 32 bit digital signature value corresponding to each prominent frame, wherein the 32 bit digital signature value being obtained by sequentially arranging the first bit value and the second bit value for each block of the pre-defined number of blocks of the prominent frame.
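
With a 4×4 block layout, two bits per block yield the 32-bit value the claim describes. A sketch under that assumed layout, deriving one bit from the mean comparison and one from the variance comparison against a master frame; the grid size, ordering, and comparison direction are assumptions.

```python
# Toy 32-bit signature: 16 blocks x 2 bits (mean bit, variance bit) per block.

import numpy as np

def frame_signature(gray_frame: np.ndarray, master_frame: np.ndarray, grid: int = 4) -> int:
    h, w = gray_frame.shape
    bh, bw = h // grid, w // grid
    signature = 0
    for by in range(grid):
        for bx in range(grid):
            block = gray_frame[by * bh:(by + 1) * bh, bx * bw:(bx + 1) * bw]
            master = master_frame[by * bh:(by + 1) * bh, bx * bw:(bx + 1) * bw]
            mean_bit = int(block.mean() >= master.mean())   # first bit value
            var_bit = int(block.var() >= master.var())      # second bit value
            signature = (signature << 2) | (mean_bit << 1) | var_bit
    return signature  # 16 blocks x 2 bits = 32-bit signature value

frame = np.random.randint(0, 256, (64, 64)).astype(float)
print(hex(frame_signature(frame, np.full((64, 64), 128.0))))
```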

US Pat. No. 10,116,997

METHOD AND APPARATUS FOR TRANSMITTING/RECEIVING CONTENT IN A BROADCAST SYSTEM

Samsung Electronics Co., ...

1. A method of transmitting media data, the method comprising:identifying a package including a plurality of assets, transport characteristics of each asset of the plurality of assets, and composition information on the plurality of assets; and
transmitting the package,
wherein the composition information includes information on spatial and temporal relationships among the plurality of assets,
wherein the information on spatial and temporal relationships comprises synchronization information for synchronizing the plurality of assets in the package, and target characteristic information required by at least one device for presenting each of the plurality of assets,
wherein the target characteristic information includes information for indicating whether the plurality of assets in the package are required to be presented across two or more screens, the plurality of assets being different from each other, and information indicating whether the plurality of assets in the package require a user interactive input, and
wherein the transport characteristics include quality of service (QoS) information and delivery direction information of the plurality of assets.

US Pat. No. 10,116,990

INFORMATION DISTRIBUTION SYSTEM AND METHOD FOR DISTRIBUTING CONTENT INFORMATION

SONY CORPORATION, Tokyo ...

1. An information distribution method for a server apparatus to distribute image data and selected content information appended thereto, comprising the steps of:receiving, by receiver circuitry at the server from a first information processing terminal of a user, image data input by the user, text data input by the user appended to the image data and selectively edited text data comprised of said text data selectively edited by the user's first information processing terminal prior to being received by the server;
storing at the server the received image data at an addressable storage location;
storing in a user information database user information representing predetermined characteristics and activities of the user;
selecting, at the server apparatus, content information to be appended to said image data based on (a) the received text data, (b) the received edited text data and (c) the user information;
appending the selected content information to at least the image data;
providing a web page address of a web page related to and containing more information than the selected content information; and
transmitting, by transmitting circuitry, said image data with (a) the storage location address of said image data appended thereto, (b) the selected content information appended thereto and (c) the web page address appended thereto, to a second information processing terminal of a recipient;
wherein said image data, the edited text data and the content information are displayed on said second information processing terminal based on the storage location address of said image data; and
wherein specific information is shared by the user of said first information processing terminal with the recipient at said second information processing terminal via said server.

US Pat. No. 10,116,985

METHOD AND APPARATUS FOR DISPLAYING A BULLET CURTAIN IN A VR VIDEO

Beijing Xiaomi Mobile Sof...

1. A method for displaying a bullet curtain in a Virtual Reality (VR) video, comprising:detecting, by a VR device, a visual field of a user via a motion sensor;
determining, by the VR device, a target bullet curtain to be displayed in the visual field of the user based on location information of a plurality of bullet curtains stored in a bullet curtain library; and
displaying, by the VR device, the target bullet curtain in a display area corresponding to the visual field of the user,
wherein the location information of each of the plurality of bullet curtains indicates a sender's visual field when the bullet curtain is sent by the sender watching the VR video; and
wherein determining, by the VR device, the target bullet curtain to be displayed in the visual field of the user based on the location information of the plurality of bullet curtains stored in the bullet curtain library comprises:
determining, by the VR device, a bullet curtain for which an overlap area of the sender's visual field and the user's visual field is larger than or equal to a first threshold from the plurality of bullet curtains as the target bullet curtain, according to the sender's visual field and the user's visual field at the moment each bullet curtain is sent.
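
A sketch of the selection step, assuming a visual field is represented as a horizontal yaw interval in degrees (the claim does not fix a representation) and the first threshold is expressed as degrees of overlap:

```python
# Toy selection of target bullet curtains by sender/viewer field-of-view overlap.

def overlap_deg(a, b):
    """Overlap of two [start, end] yaw intervals in degrees (no wrap-around)."""
    return max(0.0, min(a[1], b[1]) - max(a[0], b[0]))

def select_target_bullet_curtains(curtains, user_view, threshold=30.0):
    """curtains: iterable of (text, sender_view) pairs; sender_view is [start, end]."""
    return [text for text, sender_view in curtains
            if overlap_deg(sender_view, user_view) >= threshold]

curtains = [("nice shot!", (0.0, 90.0)), ("look left", (180.0, 270.0))]
print(select_target_bullet_curtains(curtains, (30.0, 120.0)))  # -> ['nice shot!']
```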

US Pat. No. 10,116,984

PORTABLE TERMINAL, INFORMATION PROCESSING APPARATUS, CONTENT DISPLAY SYSTEM AND CONTENT DISPLAY METHOD

MAXELL, LTD., Kyoto (JP)...

1. A content receiving apparatus, the content receiving apparatus comprising:a digital television broadcast receiver;
a signal separator which conducts de-multiplexing video data and audio data from a signal;
a processor which executes video processing to video data;
a network communication module which connects to the internet; and
a controller configured to:
control a first video content to be received via a digital television broadcast;
control the first video content received via a digital television broadcast to be outputted to a display;
control an identifier for identifying a second video content to be received from an external mobile terminal;
control display state information to be received from the external mobile terminal;
control the second video content to be acquired using the identifier; and
control the second video content to be outputted to the display,
wherein the controller controls output of the first video content on the display to be terminated before output of the second video content on the display being started.

US Pat. No. 10,116,981

VIDEO MANAGEMENT SYSTEM FOR GENERATING VIDEO SEGMENT PLAYLIST USING ENHANCED SEGMENTED VIDEOS

MICROSOFT TECHNOLOGY LICE...

8. A computer-implemented method for video management, the method comprising:receiving a search query for video content;
identifying a plurality of relevant enhanced segmented videos, wherein a relevant enhanced video segment is an enhanced segmented video in a cognitive index that satisfies the search query based at least in part on the corresponding plurality of segmentation dimensions, wherein the enhanced segmented video is generated based on segmentation rules and segment reconstruction rules;
receiving a selection of at least a subset of the plurality of relevant enhanced segmented videos to generate a video segment playlist;
generating the video segment playlist wherein the video segment playlist comprises references to the subset of the plurality of relevant enhanced segmented videos; and
causing playback of the subset of the plurality of relevant enhanced segmented videos based on the references in the video segment playlist.

US Pat. No. 10,116,978

MECHANISM FOR DISTRIBUTING CONTENT DATA

RESOURCE CONSORTIUM LIMIT...

1. A method for distributing one or more digital content items to at least one receiver of one or more receiving devices, the method comprising:receiving, by a content distribution computer, user preference data comprising one or more user preferred content types, user billing information and data identifying a location of at least one receiving device where the one or more digital content items are to be received, and storing the user preference data in a user account;
generating, by the content distribution computer, a customized content listing comprising a plurality of digital content items from a plurality of digital content providers selected based on the user preferred content types and the data identifying the location of at least one receiving device;
receiving, by the content distribution computer, a selection signal for selecting one of the plurality of digital content items included in the customized content listing;
attempting to bill, by the content distribution computer, the user account for the selected digital content item using a cost associated with the selected digital content item and the user billing information; and
transmitting, by the content distribution computer, the selected digital content item to a receiver corresponding to the data identifying the location of the at least one receiving device if the user account was successfully billed for the selected digital content item.

US Pat. No. 10,116,970

VIDEO DISTRIBUTION, STORAGE, AND STREAMING OVER TIME-VARYING CHANNELS

Empire Technology Develop...

1. A method to provide video distribution, storage, and streaming over time-varying channels, the method comprising:grouping a stream of video frames to form one or more groups-of-pictures (GOPs), wherein each GOP of the one or more GOPs includes a plurality of sub-groups-of-pictures (sub-GOPs);
encoding the plurality of sub-GOPs of video frames into a plurality of blocks, wherein each block of the plurality of blocks includes at least a portion of an individual encoded video frame;
determining two or more priority levels to be assigned to the plurality of blocks;
assigning a priority level to each block, in the plurality of blocks, based on one or more of an importance and a context of each block within an associated sub-GOP;
one or more of storing and distributing the plurality of blocks based on the priority level assigned to each block;
selecting one or more of a storage type and a content distribution network path such that an aggregate loss probability is less for blocks with a higher priority level compared to blocks with a lower priority level; and
transmitting, to a requesting client device, blocks with the higher priority level over a higher quality delivery channel, compared to blocks with the lower priority level which are transmitted over a lower quality delivery channel,
wherein the transmission of the blocks with the higher priority level over the higher quality delivery channel, compared to the blocks with the lower priority level which are transmitted over the lower quality delivery channel, facilitates a particular quality of service (QoS) being provided for the requesting client device.
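
As an illustration of the priority-driven storage and routing recited in the claim above, the following minimal Python sketch assigns a priority per block and maps higher-priority blocks to the delivery channel with the lower loss probability. The two-level priority scheme, the frame-type rule, and the channel loss figures are assumptions for illustration only, not part of the claim.

    HIGH, LOW = 1, 0

    def assign_priority(block):
        # Illustrative rule: blocks carrying reference-frame data within a sub-GOP
        # are treated as more important than trailing non-reference data.
        return HIGH if block["frame_type"] in ("I", "P") else LOW

    def route_blocks(blocks, channels):
        # channels: list of (name, loss_probability) pairs; a lower loss probability
        # stands in for a higher-quality delivery channel.
        ranked = sorted(channels, key=lambda ch: ch[1])
        best, worst = ranked[0][0], ranked[-1][0]
        return [(blk["id"], best if assign_priority(blk) == HIGH else worst)
                for blk in blocks]

    # Example:
    # route_blocks([{"id": 0, "frame_type": "I"}, {"id": 1, "frame_type": "B"}],
    #              [("cdn_path_a", 0.001), ("cdn_path_b", 0.01)])
    # -> [(0, 'cdn_path_a'), (1, 'cdn_path_b')]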

US Pat. No. 10,116,960

LOW-COMPLEXITY INTRA PREDICTION FOR VIDEO CODING

NTT DOCOMO, INC., Tokyo ...

1. A video encoding method comprising computer executable steps executed by a processor of a video encoder to implement an intra-prediction operation that derives a prediction block of a target block with boundary pixels of the target block interpolated along an intra prediction angle, wherein the boundary pixels comprise a horizontal array of horizontal boundary pixels and a vertical array of vertical boundary pixels, the intra-prediction operation comprising:obtaining a value of an inverse angle parameter, corresponding to the intra prediction angle, from a look-up table which lists values of inverse angle parameters in relation, respectively, to a plurality of different intra prediction angles;
identifying at least some of the vertical boundary pixels located in the vertical array at positions which are a function of multiplication between the obtained value of the inverse angle parameter and a value of a horizontal location identifier which is a variable representing positions in an extension of an extended horizontal array;
adding the identified at least some of the vertical boundary pixels as horizontal boundary pixels to the extension of the extended horizontal array; and
using only the horizontal boundary pixels in the extended horizontal array, without using the vertical boundary pixels, to derive the prediction block of the target block, and
wherein the horizontal location identifier takes values of −1 . . . (size×the intra prediction angle)/rangelimit, where size represents a size of a target block to be predicted and rangelimit represents a range limit of the plurality of intra prediction angles, which is fixed to a constant of 32.
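
The claim above describes extending the horizontal boundary array with vertical boundary pixels located via an inverse angle parameter. The Python sketch below is a minimal illustration of that projection step only; the look-up values follow the widely used invAngle ≈ 8192/angle convention, and the rounding formula is an assumption borrowed from HEVC-style reference extension, not text taken from the patent.

    RANGELIMIT = 32  # fixed constant named in the claim

    # Illustrative inverse-angle look-up table for negative prediction angles (assumption).
    INV_ANGLE = {-32: -256, -26: -315, -21: -390, -17: -482,
                 -13: -630, -9: -910, -5: -1638, -2: -4096}

    def extend_horizontal_array(top, left, angle, size):
        # top: dict mapping x = -1 .. 2*size-1 to horizontal boundary pixels
        # left: dict mapping y = -1 .. 2*size-1 to vertical boundary pixels
        ext = dict(top)
        if angle >= 0:
            return ext                              # no extension needed
        inv_angle = INV_ANGLE[angle]                # value obtained from the look-up table
        last = (size * angle) // RANGELIMIT         # most negative location identifier
        for x in range(-1, last - 1, -1):           # x = -1, -2, ..., (size*angle)/rangelimit
            # vertical-array position as a function of inv_angle * x
            # (HEVC-style rounding used here as an assumption)
            y = -1 + ((x * inv_angle + 128) >> 8)
            ext[x] = left[y]
        return ext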

US Pat. No. 10,116,957

DUAL FILTER TYPE FOR MOTION COMPENSATED PREDICTION IN VIDEO CODING

GOOGLE INC., Mountain Vi...

1. An apparatus for encoding or decoding a video frame, comprising:a processor configured to execute instructions stored in a non-transitory storage medium to:
determine whether a first component of a motion vector represents sub-pixel motion;
determine whether a second component of the motion vector represents sub-pixel motion;
responsive to a determination that the first component of the motion vector represents sub-pixel motion and a determination that the second component of the motion vector represents sub-pixel motion:
determine a first interpolation filter for motion prediction using the motion vector along a first axis;
determine a second interpolation filter for motion prediction using the motion vector along a second axis different from the first axis, the second interpolation filter being different from the first interpolation filter;
apply the first interpolation filter to pixels of a reference frame identified using the motion vector to generate a temporal pixel block; and
apply the second interpolation filter to the temporal pixel block to generate a first prediction block for a first block of the video frame; and
at least one of:
encode the first block of the video frame using the first prediction block by producing a residual block as a difference between the first block and the first prediction block, and encoding the residual block into an encoded bitstream for decoding by a decoder; or
decode the first block of the video frame using the first prediction block by decoding an encoded residual block from the encoded bitstream to generate a residual block, and reconstructing the first block for display by combining the residual block with the first prediction block.
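
As a rough illustration of the dual-filter idea in the claim above, the sketch below applies one interpolation filter along the horizontal axis to form a temporal pixel block and a different filter along the vertical axis to form the prediction block. Mapping "first axis"/"second axis" to horizontal/vertical, and the bilinear and 4-tap kernels, are assumptions for illustration; they are not the filters mandated by the patent or any specific codec.

    def filter_1d(row, taps):
        # Convolve one row (or column) with 'taps', clamping at the edges.
        n, half = len(row), len(taps) // 2
        out = []
        for i in range(n):
            acc = 0.0
            for k, t in enumerate(taps):
                j = min(max(i + k - half, 0), n - 1)
                acc += t * row[j]
            out.append(acc)
        return out

    def predict_block(ref_block, mv_x_subpel, mv_y_subpel):
        # First filter along the first (horizontal) axis, second along the second (vertical) axis.
        h_taps = [0.5, 0.5] if mv_x_subpel else [1.0]
        v_taps = [-0.125, 0.625, 0.625, -0.125] if mv_y_subpel else [1.0]
        temp = [filter_1d(row, h_taps) for row in ref_block]        # temporal pixel block
        cols = [list(c) for c in zip(*temp)]
        pred_cols = [filter_1d(c, v_taps) for c in cols]
        return [list(r) for r in zip(*pred_cols)]                   # first prediction block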

US Pat. No. 10,116,947

METHOD AND APPARATUS FOR CODING MULTILAYER VIDEO TO INCLUDE SCALABLE EXTENSION TYPE INFORMATION IN A NETWORK ABSTRACTION LAYER UNIT, AND METHOD AND APPARATUS FOR DECODING MULTILAYER VIDEO

SAMSUNG ELECTRONICS CO., ...

1. A multilayer video encoding method comprising:encoding a multilayer video;
generating network abstraction layer (NAL) units for data units included in the encoded multilayer video; and
adding scalable extension type information, for a scalable extension of the multilayer video, to a video parameter set (VPS) NAL unit among the NAL units, the VPS NAL unit comprising VPS information that is information applied to the multilayer video; and
outputting an encoded scalable video bitstream including the encoded multilayer video and the generated NAL units with added scalable extension type information;
wherein the adding of the scalable extension type information comprises adding, to a header of the VPS NAL unit: 1) a scalable extension type table index indicating a scalable extension type table among scalable extension type tables including combinations of scalable extension types that are applicable to the multilayer video; and 2) a plurality of sub-layer indexes indicating specific scalable extension types included in a combination among the combinations of the scalable extension types included in the scalable extension type table indicated by the scalable extension type table index.

US Pat. No. 10,116,946

IMAGE ENCODING/DECODING METHOD AND DEVICE

ELECTRONICS AND TELECOMMU...

1. A method for picture decoding supporting layers, the method comprising:receiving a bitstream comprising the layers;
acquiring information on a maximum number of sub-layers for each of the layers by decoding the bitstream; and
acquiring a residual block of a current block by decoding the bitstream,
wherein the information on the maximum number of sub-layers is included in video parameter set extension information and signaled, and
wherein a video parameter set comprises information on a maximum number of sub-layers, and
in response to the video parameter set extension information not comprising the information on a maximum number of sub-layers for a layer among the layers, the maximum number of sub-layers for the layer is derived based on the information included in the video parameter set.
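
The fallback rule in the claim above can be pictured with the short Python sketch below: if the VPS extension carries no per-layer maximum-sub-layer count for a layer, the value from the video parameter set is used instead. The dictionary-based "syntax structures" and field names are assumptions used only for illustration.

    def max_sublayers_for_layer(layer_id, vps, vps_extension):
        per_layer = vps_extension.get("max_sub_layers", {})
        if layer_id in per_layer:
            return per_layer[layer_id]        # value signalled in the VPS extension
        return vps["max_sub_layers"]          # derived from the VPS when absent

    # Example:
    # vps = {"max_sub_layers": 3}
    # vps_ext = {"max_sub_layers": {1: 2}}
    # max_sublayers_for_layer(0, vps, vps_ext) -> 3 ; layer 1 -> 2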

US Pat. No. 10,116,945

MOVING PICTURE ENCODING APPARATUS AND MOVING PICTURE ENCODING METHOD FOR ENCODING A MOVING PICTURE HAVING AN INTERLACED STRUCTURE

PANASONIC INTELLECTUAL PR...

1. A moving picture encoding apparatus which encodes a moving picture having an interlaced structure, the moving picture encoding apparatus comprising:a storage which stores fields as reference pictures; and
an encoder which encodes a current field as a B-picture, using a first reference picture list which includes only one field in a same parity as the current field, and a second reference picture list which includes only one field in an opposite parity to the current field,
wherein the one field included in the first reference picture list and the one field included in the second reference picture list are two fields located forward relative to the current field in display order.

US Pat. No. 10,116,944

VIDEO ENCODING DEVICE, VIDEO ENCODING METHOD, AND PROGRAM

NEC CORPORATION, Tokyo (...

1. A video encoding device comprising:first video encoding section, implemented by hardware including at least one processor, which encodes an input image to generate first coded data;
coded data transcoding section, implemented by the at least one processor, which transcodes the first coded data generated by the first video encoding section, to generate second coded data; and
second video encoding section, implemented by the at least one processor, which generates a prediction signal with regard to the input image based on the second coded data supplied from the coded data transcoding section,
wherein the first video encoding section comprises:
dividing section which divides the input image into a plurality of image areas; and
one or more encoding sections which perform encoding in units of blocks, each encoding corresponding to the image area in which there are a plurality of blocks, and
wherein the first video encoding section also encodes blocks that are included in image areas adjacent to the image area for which the encoding section performs encoding, together with the blocks in the image area for which the encoding section performs encoding.

US Pat. No. 10,116,943

ADAPTIVE VIDEO COMPRESSION FOR LATENCY CONTROL

NVIDIA CORPORATION, Sant...

1. A computer-implemented method for adaptively compressing video frames, the method comprising:encoding a first plurality of video frames based on a first video compression algorithm to generate first encoded video frames;
transmitting the first encoded video frames to a client device;
receiving a user input event;
switching from the first video compression algorithm to a second video compression algorithm in response to the user input event;
encoding a second plurality of video frames based on the second video compression algorithm to generate second encoded video frames;
transmitting the second encoded video frames to the client device;
determining that a threshold period of time has elapsed since receiving the user input event; and
in response to determining that the threshold period of time has elapsed, switching from the second video compression algorithm to the first video compression algorithm, wherein the first video compression algorithm requires less network bandwidth for transmitting data than the second video compression algorithm, and the first video compression algorithm results in greater latency when encoding data than the second video compression algorithm.
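
The switching policy in the claim above (default to a bandwidth-efficient algorithm, switch to a lower-latency one on user input, switch back after a timeout) can be sketched as a small state machine. The algorithm names and the 0.5-second threshold below are assumptions chosen only to make the sketch runnable.

    import time

    LOW_BANDWIDTH = "low_bandwidth_codec"   # less network bandwidth, higher encode latency
    LOW_LATENCY = "low_latency_codec"       # lower encode latency, more bandwidth
    THRESHOLD_S = 0.5                       # threshold period of time (assumed value)

    class AdaptiveEncoder:
        def __init__(self):
            self.algorithm = LOW_BANDWIDTH
            self.last_input_time = None

        def on_user_input(self, now=None):
            # switch to the low-latency algorithm in response to the user input event
            self.last_input_time = time.monotonic() if now is None else now
            self.algorithm = LOW_LATENCY

        def select_algorithm(self, now=None):
            # switch back once the threshold period has elapsed since the last input
            now = time.monotonic() if now is None else now
            if (self.algorithm == LOW_LATENCY and self.last_input_time is not None
                    and now - self.last_input_time >= THRESHOLD_S):
                self.algorithm = LOW_BANDWIDTH
            return self.algorithm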

US Pat. No. 10,116,941

INTER PREDICTION METHOD AND APPARATUS THEREFOR

LG Electronics Inc., Seo...

1. A video decoding apparatus, comprising:an entropy-decoder configured to receive information on a parallel merge level which indicates a size of a parallel merging unit region;
a predictor configured to generate a merging candidate list for a current block when a merge mode is applied to the current block, to derive motion information of the current block based on one of a plurality of merging candidates constituting the merging candidate list, to derive prediction samples of the current block based on the derived motion information; and
an adder configured to generate a reconstructed picture based on the prediction samples,
wherein the current block corresponds to a prediction unit (PU) belonging to the parallel merging unit region,
wherein the PU is partitioned from a coding unit (CU),
wherein for the PU, in the CU and the parallel merging unit region, spatial merge candidates which are identical to spatial merge candidates of a 2N×2N PU which has a same size as the parallel merging unit region are used for the merging candidate list,
wherein the spatial merge candidates for the PU are derived from a lower left corner neighboring block, a left neighboring block, an upper right corner neighboring block, an upper neighboring block, and an upper left corner neighboring block of the parallel merging unit region, and
wherein the parallel merging unit region is determined based on the parallel merge level, and the information on the parallel merge level is received through a picture parameter set.

US Pat. No. 10,116,940

METHOD FOR ENCODING VIDEO, METHOD FOR DECODING VIDEO, AND APPARATUS USING SAME

LG ELECTRONICS INC., Seo...

1. A video decoding method by a video decoder, comprising:receiving first flag information indicating whether a picture in a reference layer is not needed for an inter-layer prediction and second flag information indicating whether the reference layer is directly referred by a current layer;
decoding and storing pictures in the reference layer;
deriving an inter-layer reference picture for a current block from at least one of the decoded pictures in the reference layer based on the first flag information and the second flag information;
constructing a reference picture list comprising the inter-layer reference picture in the reference layer and a reference picture in the current layer;
deriving a predicted sample of the current block in the current layer based on the inter-layer reference picture comprised in the reference picture list; and
deriving a reconstructed sample of the current block based on the predicted sample and a residual sample of the current block,
wherein when the first flag information indicates that a specific picture in the reference layer is not needed for the inter-layer prediction, the specific picture is not comprised in the inter-layer reference picture set,
wherein when the second flag information indicates that the reference layer is not directly referred by the current layer, the reference layer is not used for the inter-layer prediction of the current layer, and
wherein the first flag information is received through a slice segment header, and in all slice segment headers of the picture in the reference layer, the value of the first flag information is set to the same value.

US Pat. No. 10,116,939

METHOD OF DERIVING MOTION INFORMATION

INFOBRIDGE PTE. LTD., Si...

1. A method of encoding video data in a merge mode, the method comprising:determining motion information of a current block;
generating a prediction block of the current block using the motion information;
generating a residual block using the current block and the prediction block;
transforming the residual block to generate a transformed block;
quantizing the transformed block using a quantization parameter and a quantization matrix to generate a quantized block;
scanning coefficient components of the quantized block using a diagonal scan;
entropy-coding the scanned coefficient components of the quantized block; and
encoding the motion information,
wherein encoding the motion information comprises the sub-steps of:
constructing a merge list using available spatial and temporal merge candidates;
selecting a merge predictor among merge candidates of the merge list; and
encoding a merge index specifying the merge predictor,
wherein when the current block is a second prediction unit partitioned by asymmetric partitioning, the spatial merge candidate corresponding to a first prediction unit partitioned by the asymmetric partitioning is not to be listed on the merge list, and
wherein the quantization parameter is determined per a quantization unit and is encoded using a quantization parameter predictor,
a minimum size of the quantization unit is adjusted by a picture parameter set,
if two or more quantization parameters are available among a left quantization parameter, an above quantization parameter and a previous quantization parameter of a current coding unit, the quantization parameter predictor is generated using first two available quantization parameters among the left quantization parameter, the above quantization parameter and the previous quantization parameter, and
if only one is available among the left quantization parameter, the above quantization parameter and the previous quantization parameter, the available quantization parameter is set as the quantization parameter predictor.
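
The quantization-parameter predictor rule at the end of the claim above can be expressed compactly: take the left, above, and previous QPs in that order, average the first two that are available, or use the single available one. The rounding of the average in the sketch below is an assumption; the availability order is taken from the claim.

    def qp_predictor(left_qp=None, above_qp=None, prev_qp=None):
        available = [q for q in (left_qp, above_qp, prev_qp) if q is not None]
        if len(available) >= 2:
            return (available[0] + available[1] + 1) >> 1   # average of first two available
        if len(available) == 1:
            return available[0]                              # only one candidate available
        return None   # the claim does not cover the no-candidate case

    # Example: qp_predictor(left_qp=None, above_qp=30, prev_qp=26) -> 28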

US Pat. No. 10,116,938

SYSTEM FOR CODING HIGH DYNAMIC RANGE AND WIDE COLOR GAMUT SEQUENCES

ARRIS Enterprises LLC, S...

1. A method of encoding a digital video, comprising:receiving a digital video data set including at least one of high dynamic range (HDR) and wide color gamut (WCG) video data;
converting a portion of the digital video data set from an input color space to an intermediate color space to generate intermediate color converted video data and generating metadata identifying the input color space, the intermediate color space and the portion of the digital video data set;
applying a compression transfer function to the intermediate color converted video data to generate compressed video data and generating metadata characterizing the compression transfer function and identifying the portion of the digital video data set;
converting the compressed video data from the intermediate color space to a final color space to generate final color converted video data and generating metadata identifying the intermediate color space, the final color space and the portion of the digital video data set;
identifying a characteristic of the portion of the digital video data set;
modifying a perceptual transfer function according to the identified characteristic;
applying the modified perceptual transfer function to the portion of the digital video data set to generate a perceptually modified portion of the digital video data set;
applying a perceptual normalization including at least one of a gain factor or an offset to the perceptually modified digital video data set to generate a perceptually normalized portion of the digital video data set;
encoding the perceptually normalized portion of the video data set to generate a bit stream;
combining the metadata identifying the input color space and the intermediate color space, the metadata characterizing the compression transfer function and the metadata identifying the final color space with the metadata that indicates the modification of the perceptual transfer function to generate combined metadata;
wherein the portion of the digital video data to which the perceptual transfer function is applied includes the final color converted video data; and
transmitting, to a decoder, the bit stream and metadata that indicates the modification of the perceptual transfer function, that identifies the perceptual normalization, and that identifies the portion of the video data set; wherein the transmitting transmits the bit stream and the combined metadata to the decoder.

US Pat. No. 10,116,937

ADJUSTING QUANTIZATION/SCALING AND INVERSE QUANTIZATION/SCALING WHEN SWITCHING COLOR SPACES

Microsoft Technology Lice...

1. A computing device comprising:one or more buffers configured to store an image or video; and
an image encoder or video encoder configured to perform operations comprising:
encoding units of the image or video to produce encoded data, including, when switching from a first color space to a second color space between two of the units, adjusting final quantization parameter (“QP”) values or intermediate QP values for color components of the second color space according to per component color space adjustment factors, wherein the first color space is RGB and the second color space is YCoCg, and wherein the per component color space adjustment factors adjust the final QP values or intermediate QP values for the color components of the second color space by offsets of −5, −3 and −5 for Y, Co and Cg components, respectively; and
outputting the encoded data as part of a bitstream.
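
The per-component adjustment recited above (offsets of −5, −3 and −5 for Y, Co and Cg after an RGB-to-YCoCg switch) is easy to sketch. The clamp to [0, 51] in the example below is an assumption made only to keep the result in a typical QP range; it is not part of the claim.

    YCOCG_QP_OFFSET = {"Y": -5, "Co": -3, "Cg": -5}   # offsets named in the claim

    def adjust_qp_for_ycocg(rgb_qp):
        # Return adjusted per-component QP values after an RGB -> YCoCg switch.
        return {comp: max(0, min(51, rgb_qp + off))
                for comp, off in YCOCG_QP_OFFSET.items()}

    # Example: adjust_qp_for_ycocg(30) -> {'Y': 25, 'Co': 27, 'Cg': 25}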

US Pat. No. 10,116,936

MOVING IMAGE CODING DEVICE, MOVING IMAGE DECODING DEVICE, MOVING IMAGE CODING METHOD, AND MOVING IMAGE DECODING METHOD

1. A moving image coding device that divides an image into MBs and codes the MBs, the moving image coding device having a memory and a processor, the processor comprising:a coarse search unit that calculates a moving amount and a moving direction of each of the MBs;
an MB parallel processing unit that performs preprocessing to code the image with respect to each of the MBs that are contained in an MB line constituting the image and for which the moving amount and the moving direction are calculated, and writes the resulting MB information in a storage unit in the processing order of the MBs;
a coding unit that reads out the MB information stored in the storage unit in a raster order and codes the MBs; and
an MB line parallel processing unit that configures the MBs arranged in a horizontal direction as an MB line, performs the preprocessing with respect to each of the MB lines, and includes a plurality of the MB parallel processing units;
wherein the moving image coding device is operable in two modes, switchable between a mode performed by one or more of the MB parallel processing units included in the MB line parallel processing unit and a mode performed by one or more of the MB parallel processing units included in a plurality of MB line parallel processing units.

US Pat. No. 10,116,935

IMAGE ENCODING METHOD, IMAGE DECODING METHOD, IMAGE ENCODING DEVICE, IMAGE DECODING DEVICE, AND IMAGE ENCODING/DECODING DEVICE

SUN PATENT TRUST, New Yo...

1. An image decoding method for decoding an image from a bitstream on a per block basis, the image decoding method comprising:predicting a current block in the image using a reference block different from the current block, to generate a prediction block; and
generating a reconstructed block using the prediction block,
wherein the generating includes:
first filtering for filtering a boundary between the reconstructed block and a decoded neighboring block neighboring the current block, using a first filter strength which is set using first prediction information for the prediction of the current block and second prediction information for prediction of the decoded neighboring block;
second filtering for filtering the boundary using a second filter strength; and
determining whether or not the boundary is a first boundary, the first boundary being at least one of a tile boundary and a slice boundary,
wherein the first filtering is in-loop filtering in a loop in which a filtered reconstructed block is used as a reference block for another block,
wherein the second filtering is post filtering outside the loop,
wherein the second filtering is performed without performing the first filtering when the determining determines that the boundary is the first boundary,
wherein the second filter strength is set based on supplemental information included in a header of the bitstream,
wherein the determining determines whether or not the boundary is the first boundary based on the supplemental information, and
wherein the supplemental information indicates (i) whether or not filtering the tile boundary is enabled and (ii) whether or not filtering the slice boundary is enabled.

US Pat. No. 10,116,934

IMAGE PROCESSING METHOD AND APPARATUS

HUAWEI TECHNOLOGIES CO., ...

1. An image processing method implemented by an encoder, the method comprising:acquiring N pieces of motion information from N adjacent image blocks adjacent to a current image block, wherein the N adjacent image blocks correspond to the N pieces of motion information, wherein the N pieces of motion information indicate N reference image blocks in a reference image of the current image block, wherein the N pieces of motion information correspond to the N reference image blocks, and wherein N is a positive integer;
determining candidate motion information from the N pieces of motion information according to a preset rule, wherein the candidate motion information comprises two or more pieces of information of the N pieces of motion information;
determining, in the reference image, a location range of a to-be-stored pixel according to the candidate motion information;
storing all pixels in the location range, wherein the location range covers two or more candidate reference image blocks, wherein the candidate reference image blocks comprise two or more image blocks of the N reference image blocks, and wherein the candidate reference image block is an image block corresponding to the candidate motion information;
reading the pixels in the location range; and
performing encoding processing on the current image block according to the pixels in the location range, to generate a target data stream.

US Pat. No. 10,116,933

METHOD OF LOSSLESS MODE SIGNALING FOR VIDEO SYSTEM WITH LOSSLESS AND LOSSY CODING

MEDIATEK INC., Hsin-Chu ...

1. A method of lossless mode signaling for a coding system supporting both lossy coding and lossless coding, wherein a picture is divided into multiple slices and each slice is divided into multiple coding units, the method comprising:receiving input data associated with a current picture;
if the lossless coding is allowed for the current picture, incorporating or parsing a first syntax element in a picture level to indicate whether a second syntax element is present in each slice for selecting lossy coding or lossless coding;
if the first syntax element indicates that the second syntax element is present, incorporating or parsing the second syntax element in each slice of the current picture to indicate whether a forced lossless coding mode is selected,
if the second syntax element indicates that the forced lossless coding mode is selected, encoding or decoding all coding units in the slice using lossless coding; and
if the second syntax element indicates that the forced lossless coding mode is not selected, encoding or decoding each coding unit in the slice according to a third syntax element indicating whether each coding unit is coded using lossless coding or not.
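
A minimal decoder-side sketch of the three-level signalling structure in the claim above is given below. The bit-reader interface, the default when the slice-level flag is absent, and the example bit values are assumptions; only the decision structure (picture-level presence flag, slice-level forced-lossless flag, per-CU flag) follows the claim.

    def parse_slice_lossless_modes(read_flag, num_cus, slice_flag_present):
        # read_flag(): returns the next one-bit syntax element from the bitstream
        # slice_flag_present: value of the picture-level (first) syntax element
        if slice_flag_present:
            forced_lossless = bool(read_flag())        # second syntax element
        else:
            forced_lossless = False                    # assumption: default to not forced
        if forced_lossless:
            return [True] * num_cus                    # every CU in the slice is lossless
        return [bool(read_flag()) for _ in range(num_cus)]   # third syntax element per CU

    # Example with a canned bitstream:
    # bits = iter([0, 1, 0, 1])
    # parse_slice_lossless_modes(lambda: next(bits), 3, slice_flag_present=True)
    # -> [True, False, True]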

US Pat. No. 10,116,932

IMAGE FILTER DEVICE, DECODING DEVICE, ENCODING DEVICE, AND DATA STRUCTURE

Sharp Kabushiki Kaisha, ...

1. An image filter device comprising:a deblocking filter that performs deblocking on first target pixels in side boundaries of a unit region and generates a deblocked image including the unit region;
filter circuitry that performs adaptive filtering on second target pixels in the unit region of the deblocked image, wherein the second target pixels are all pixels included in the unit region; and
reference region setting circuitry that sets a reference region to be referenced by the filter circuitry to calculate a pixel value of one of the second target pixels according to a position of the one of the second target pixels in the unit region;
wherein the reference region setting circuitry includes:
first setting circuitry that sets a position of an upstream edge of the reference region lower than or equal to a virtual boundary line, which separates the unit region into an upstream side and a downstream side, when the one of the second target pixels is on the downstream side, and
second setting circuitry that sets a position of a downstream edge of the reference region higher than the virtual boundary line, which separates the unit region into the upstream side and the downstream side, when the one of the second target pixels is on the upstream side; wherein
the deblocking filter, the filter circuitry, and the reference region setting circuitry are implemented by one or more processors.

US Pat. No. 10,116,930

ICON-BASED HOME CERTIFICATION, IN-HOME LEAKAGE TESTING, AND ANTENNA MATCHING PAD

Viavi Solutions, Inc., S...

1. A system for determining the magnitude of leakage in a subscriber's premises installation for a cable network that is configured to provide a signal level in a range of −5 dBmV to 0 dBmV, the system comprising:a signal generator configured to be secured to a suitable network port at a subscriber's premises to wiredly connect the signal generator to cable wiring in the subscriber's premises, the signal generator including a frequency source operable to generate an output signal in a range of 40 dB to 70 dB above the signal level provided by the cable network to supply the output signal to the cable wiring in the subscriber premises, the frequency source being shielded to prevent transmission of radiated frequency source oscillations, and
a signal level meter operable to be transported around the subscriber's premises and measure signal levels radiating from the subscriber's premises, the signal level meter including an output device configured to output the signal levels measured by the signal level meter,
wherein the signal generator is configured to supply the output signal through the suitable network port to the cable wiring at the subscriber's premises so that a high power offset is maintained when the signal generator is secured to the suitable network port and wiredly connected to the cable wiring, and
wherein the signal level meter is configured to measure the signal levels including a first signal level corresponding to the output signal of the signal generator.

US Pat. No. 10,116,929

MULTIMEDIA QUALITY MONITORING METHOD, AND DEVICE

Huawei Technologies Co., ...

1. A multimedia quality monitoring method, comprising:determining, by a multimedia quality monitoring apparatus, multimedia quality of multimedia due to compression of the multimedia according to video quality due to compression of video data of the multimedia and audio quality due to compression of audio data of the multimedia;
acquiring, by the multimedia quality monitoring apparatus, multimedia distortion quality corresponding to video distortion and/or audio distortion of the multimedia, wherein the multimedia distortion quality comprises multimedia distortion quality due to packet loss and/or multimedia distortion quality due to rebuffering;
and
determining, by the multimedia quality monitoring apparatus, quality of the multimedia according to the multimedia quality of the multimedia due to compression of the multimedia and the multimedia distortion quality;
wherein acquiring the multimedia distortion quality further comprises:
(a) acquiring the multimedia distortion quality due to packet loss, wherein acquiring the multimedia distortion quality due to packet loss comprises:
(1) determining video quality due to packet loss according to the video quality due to compression of the video data of the multimedia and a video packet loss rate and/or determining audio quality due to packet loss according to the audio quality due to compression of audio data of the multimedia and an audio packet loss rate;
(2) determining video distortion quality due to packet loss according to the video quality due to packet loss and the video quality due to compression of the video data of the multimedia and/or determining audio distortion quality due to packet loss according to the audio quality due to packet loss and the audio quality due to compression of audio data of the multimedia;
(3) determining a video packet loss distortion factor according to the video distortion quality due to packet loss and the video quality due to compression of the video data of the multimedia and/or determining an audio packet loss distortion factor according to the audio distortion quality due to packet loss and the audio quality due to compression of audio data of the multimedia;
(4) determining a multimedia packet loss distortion factor according to the video packet loss distortion factor and/or the audio packet loss distortion factor; and
(5) determining the multimedia distortion quality due to packet loss according to the multimedia packet loss distortion factor and the multimedia quality of the multimedia due to compression of the multimedia; and/or
(b) acquiring the multimedia distortion quality due to rebuffering according to a rebuffering parameter of the multimedia corresponding to a transmission process.

US Pat. No. 10,116,928

THREE-DIMENSIONAL (3D) DISPLAY SCREEN AND 3D DISPLAY DEVICE

Shanghai Tianma Micro-ele...

1. A three-dimensional (3D) display screen, comprising:a pixel array comprising m laterally displaced groups,
wherein:
a laterally displaced group in the m laterally displaced groups includes n rows of sub-pixel units arranged in an array and sequentially numbered as a 1st sub-pixel unit row to a nth sub-pixel unit row, the sub-pixel units in a same sub-pixel unit row are arranged in a first lateral direction, m is a positive integer larger than or equal to 1, and n is a positive integer larger than or equal to 2;
a sub-pixel unit in the n rows of sub-pixel units includes a plurality of light-shielding stripes arranged in parallel and has a length of L in the first lateral direction, the plurality of light-shielding stripes are disposed inside the sub-pixel unit, two adjacent light-shielding stripes have a gap of P in the first lateral direction, and P<L; in the laterally displaced group, along the first lateral direction, the nth sub-pixel unit row has a lateral displacement of P with respect to the 1st sub-pixel unit row, and an ith sub-pixel unit row in the n rows of sub-pixel units has a lateral displacement of P/n with respect to an (i−1)th sub-pixel unit row in the n rows of sub-pixel units, where i is a positive integer and 1<i≤n; and along the first lateral direction, the lateral displacement between any two sub-pixel unit rows in the pixel array is less than or equal to P.

US Pat. No. 10,116,927

METHOD FOR REPRODUCING IMAGE INFORMATION AND AUTOSTEREOSCOPIC SCREEN

Fraunhofer-Gesellschaft z...

1. A method for reproducing image information on an autostereoscopic screen, which has a pixel matrix with a plurality of pixels and also an optical grid arranged in front of the pixel matrix, wherein the plurality of pixels of the pixel matrix are arranged such that they form a plurality of columns arranged equidistantly side by side with a column direction that is vertical or inclined relative to a vertical direction, and wherein the optical grid has a group of strip-shaped structures oriented parallel to the plurality of columns and arranged equidistantly side by side and gives light originating from the plurality of pixels at least one defined propagation plane, which is spanned from a defined horizontal propagation direction and the column direction, wherein a period length (D) of the optical grid, the period length being defined by a lateral offset of adjacent strip-shaped structures, is greater by a factor n×Ln/(Ln+a) than a lateral offset (d) of directly adjacent columns, wherein “a” denotes an effective distance between the pixel matrix and the optical grid, Ln denotes a nominal viewing distance of the autostereoscopic screen, and n denotes an integer greater than two, wherein the method comprises:assigning an angle value and a location coordinate value to each column of the plurality of columns, wherein the angle value is defined as a measure for an angle between a horizontal reference direction and the defined horizontal propagation direction which is given to the light originating from the plurality of pixels of a respective column by the optical grid, and wherein the location coordinate value specifies a position, in a lateral direction, of the respective column;
for each column of the plurality of columns, calculating an extract of an image by image synthesis, wherein the image is a parallel projection of a three dimensional (3D) scene to be reproduced having a projection direction that is defined by the angle corresponding to the angle value assigned to the respective column, and wherein the extract is defined by a strip of the image that has a lateral position in the image corresponding to the location coordinate value assigned to the respective column; and
controlling the plurality of pixels of the pixel matrix in such a way that each column of the plurality of columns has written into it the extract calculated for the respective column.

US Pat. No. 10,116,926

3D SCANNING CONTROL APPARATUS BASED ON FPGA AND CONTROL METHOD AND SYSTEM THEREOF

SHENZHEN ESUN DISPLAY CO....

1. A 3D scanning control apparatus based on FPGA (Field Programmable Gate Array), for controlling a 3D scanner to scan, wherein the apparatus comprises:a first projection control module configured for controlling at least one structured light generation unit to project to an object;
a first image acquisition control module configured for controlling at least one shooting unit to capture at least one projection image of the object when the first projection control module is projecting;
a second projection control module configured for controlling at least another one structured light generation unit to project to the object for one more time;
a second image acquisition control module configured for controlling at least one corresponding shooting unit to capture the projection images of the object for one more time when the second projection control module is projecting; and
a data processing module configured for processing the captured projection images with at least one of the Bayer color rendition, color space conversion and phase unwrapping, by using algorithm in the FPGA;
a driver module coupled to the structured light generation units and the shooting units via corresponding third interfaces; and
an optimization module coupled to the shooting units via other corresponding third interfaces;
wherein, the driver module is configured for driving the structured light generation unit and the shooting unit to rotate an angle with the object as the axis and a second cycle as the time interval, until a circle is rotated; the optimization module is configured for providing a soft light environment when the shooting unit is capturing the projection image.

US Pat. No. 10,116,925

TIME-RESOLVING SENSOR USING SHARED PPD + SPAD PIXEL AND SPATIAL-TEMPORAL CORRELATION FOR RANGE MEASUREMENT

SAMSUNG ELECTRONICS CO., ...

15. An imaging unit comprising:a light source operative to project a laser pulse onto a three-dimensional (3D) object; and
an image sensor unit that includes:
a plurality of pixels arranged in a two-dimensional (2D) pixel array, wherein each pixel in at least one row of pixels in the 2D pixel array includes:
a pixel-specific plurality of Single Photon Avalanche Diodes (SPADs), wherein each SPAD is operable to convert luminance received in a returned pulse into a corresponding electrical signal, wherein the returned pulse results from reflection of the projected pulse by the 3D object,
a pixel-specific first control circuit coupled to the pixel-specific plurality of SPADs, wherein, for each SPAD receiving luminance in the returned pulse, the pixel-specific first control circuit is operable to process the corresponding electrical signal from the SPAD and generate a SPAD-specific output therefrom,
a pixel-specific device operable to store an analog charge, and
a pixel-specific second control circuit coupled to the pixel-specific first control circuit and the pixel-specific device, wherein the pixel-specific second control circuit is operable to initiate transfer of a pixel-specific first portion of the analog charge from the pixel-specific device, and terminate the transfer upon receipt of at least two SPAD-specific outputs from the pixel-specific first control circuit within a pre-defined time interval, and
a processing unit coupled to the 2D pixel array and operative to:
provide an analog modulating signal to the pixel-specific second control circuit in each pixel in the row of pixels to control the transfer of the pixel-specific first portion of the analog charge, and
determine a pixel-specific Time of Flight (TOF) value of the returned pulse based on the transfer of the pixel-specific first portion of the analog charge within the pre-defined time interval.

US Pat. No. 10,116,924

COLOR ANALYSIS AND CONTROL USING AN ELECTRONIC MOBILE DEVICE TRANSPARENT DISPLAY SCREEN

1. A method for comparing the image data of a predetermined object using an electronic mobile device, comprising the steps of:using at least one transparent display screen associated with the electronic mobile device for generating optical processing data;
storing said optical processing data, and optical processing instructions, and computer processor algorithms in a memory associated with the electronic mobile device;
operating a computer processor associated with the electronic mobile device and said memory for executing said optical processing instructions and computer processor algorithms in response to said optical processing data;
directing an optical lens of the electronic mobile device to capture an object image of an object for display on said at least one transparent display screen;
collecting a first set of optical processing data using data deriving from the capture of the object image, said first set of optical processing data comprising a first set of color image data;
displaying the object for display through a transparent portion of said at least one transparent display screen and generating therefrom a second set of optical processing data comprising a second set of image data;
receiving and storing in said memory said second set of image data associated with perceiving the object through the at least one transparent display screen;
executing instructions on the computer processor for determining image difference values between said first set of image data and said second set of image data; and
displaying on the at least one transparent display screen image difference values from the group consisting of color differences, texture differences, transparency differences, lighting differences, motion differences, focus differences and the like.

US Pat. No. 10,116,923

IMAGE PROCESSING APPARATUS, IMAGE PICKUP APPARATUS, IMAGE PROCESSING METHOD, AND NON-TRANSITORY COMPUTER-READABLE STORAGE MEDIUM FOR IMPROVING QUALITY OF IMAGE

CANON KABUSHIKI KAISHA, ...

1. An image processing apparatus comprising:a generator configured to generate difference information relating to a difference in a luminance value between a plurality of parallax images;
a gain distribution determiner configured to determine a gain distribution depending on a reduction rate distribution determined based on the plurality of parallax images and the difference information generated by the generator;
an intensity determiner configured to determine an intensity of an unnecessary component based on a product of the gain distribution and the difference information, the unnecessary component corresponding to a ghost or a flare; and
a reducer configured to generate an output image by reducing, using the intensity of the unnecessary component, the unnecessary component from a synthesized image obtained by synthesizing the plurality of parallax images.
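
Following the structure of the claim above, the per-pixel sketch below computes difference information between two parallax images, forms the unnecessary-component intensity as the product of a gain distribution and that difference, and subtracts it from the synthesized image. Using the reduction rate directly as the gain and averaging the two parallax images as the synthesis are assumptions made only for illustration.

    def reduce_unnecessary_component(parallax_a, parallax_b, reduction_rate):
        # All arguments are equally sized 2-D lists of luminance values.
        h, w = len(parallax_a), len(parallax_a[0])
        output = [[0.0] * w for _ in range(h)]
        for y in range(h):
            for x in range(w):
                diff = abs(parallax_a[y][x] - parallax_b[y][x])   # difference information
                gain = reduction_rate[y][x]                       # gain distribution (assumed rule)
                intensity = gain * diff                           # unnecessary component intensity
                synthesized = (parallax_a[y][x] + parallax_b[y][x]) / 2.0
                output[y][x] = max(0.0, synthesized - intensity)
        return output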

US Pat. No. 10,116,922

METHOD AND SYSTEM FOR AUTOMATIC 3-D IMAGE CREATION

Google LLC, Mountain Vie...

1. A method for image creation, the method comprising:receiving, by a system comprising a hardware processor, a first two-dimensional image;
comparing, by the system, the first two-dimensional image to a second two-dimensional image to determine whether the first two-dimensional image has a given similarity to the second two-dimensional image, wherein the given similarity is determined by combining at least the first two-dimensional image and the second two-dimensional image and determining whether a desired stereoscopic effect resulting from the combined first two-dimensional image and second two-dimensional image falls within a range of the desired stereoscopic effect and whether an image composition of the combined first two-dimensional image and the second two-dimensional image has changed; and
in response to the comparison determining that the first two-dimensional image has at least the given similarity to the second two-dimensional image based on the desired stereoscopic effect falling within the range of the desired stereoscopic effect and based on the image composition, generating, by the system, a three-dimensional image that combines at least the first two-dimensional image and the second two-dimensional image.

US Pat. No. 10,116,920

BALANCING COLORS IN A SCANNED THREE-DIMENSIONAL IMAGE

FARO TECHNOLOGIES, INC., ...

1. A method of optically scanning and measuring a scene, the method comprising:providing a first scanner, the scanner including a first light emitter for emitting a first light onto the scene, a first light receiver for receiving a first portion of the first light from the scene, and a first processor, the first scanner having a first angle measuring device, a second angle measuring device and a distance meter;
providing a second scanner, the second scanner including a second light emitter for emitting a second light onto the scene, a second light receiver for receiving a portion of the second light from the scene, and a second processor;
measuring with a first scanner in a first scanner location three-dimensional (3D) coordinates and a color for each of a plurality of first object points in the scene based at least in part on the emitting of the first light, an angle measured by the first angle measuring device, an angle measured by the second angle measuring device and a receiving of the first portion with the distance meter;
measuring with the second scanner in a second scanner location 3D coordinates and a color for each of a plurality of second object points in the scene based at least in part on the emitting of the second light and the receiving of the second portion;
selecting a plurality of areas within the scene, each area being defined by a plurality of cells and including at least one first object point from the first plurality of object points and further including at least one second object point from the second plurality of object points;
determining an adapted second color for each second object point, wherein in each of the plurality of areas the adapted second color is based at least in part on a statistical distribution of the colors of the at least one first object point in the area;
storing the 3D coordinates and the color for each first object point; and
storing the 3D coordinates and the adapted second color for each second object point.
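
One way to picture the per-area colour adaptation in the claim above is simple mean matching: within each selected area, shift the second scanner's colours so that their mean matches the mean of the first scanner's colours in the same area. The claim only requires the adaptation to be based on the statistical distribution of the first scanner's colours; mean matching and the 0–255 clamp below are illustrative assumptions.

    def adapt_second_colors(first_colors, second_colors):
        # first_colors, second_colors: lists of (r, g, b) tuples within one area.
        def mean(colors, c):
            return sum(col[c] for col in colors) / len(colors)
        offsets = [mean(first_colors, c) - mean(second_colors, c) for c in range(3)]
        return [tuple(min(255.0, max(0.0, col[c] + offsets[c])) for c in range(3))
                for col in second_colors]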

US Pat. No. 10,116,919

METHOD AND ARRANGEMENT FOR ESTIMATING AT LEAST ONE CROSS-CHANNEL COLOUR MAPPING MODEL FROM A SET OF TUPLES OF CORRESPONDING COLOURS RELATIVE TO AT LEAST TWO IMAGES

THOMSON LICENSING, Issy-...

1. A method for compensation of colour differences between at least two images imaging a same scene, the colours of which are represented according to m colour channels, comprising:extracting from said images a set of tuples of corresponding colours;
estimating from said set of tuples of corresponding colours a channel-wise colour mapping model for each of said m colour channels;
selecting within said set of tuples of corresponding colours at least one intermediate tuple having colours with a difference to said estimated channel-wise colour mapping model that are smaller than a determined threshold;
estimating from said at least one selected intermediate tuple of corresponding colours at least one cross-channel colour mapping model for at least one of said m colour channels;
generating a final set of final tuples of corresponding colours from said at least one selected intermediate tuple of corresponding colours such that said final tuples have colors with a difference to said estimated cross-channel colour mapping model that are smaller than a determined threshold; and
compensating colour difference between said images based on said final set of final tuples of corresponding colours.
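
The intermediate-tuple selection step of the claim above can be sketched as follows: fit a channel-wise mapping, then keep only the tuples whose per-channel mapping error stays below a threshold; those inliers would then feed the cross-channel estimate. Using a 1-D linear model per channel is an assumption; the claim does not prescribe the model form.

    def fit_channel(xs, ys):
        # Least-squares fit of y = a*x + b for one colour channel.
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        var = sum((x - mx) ** 2 for x in xs) or 1.0
        a = cov / var
        return a, my - a * mx

    def select_intermediate_tuples(tuples, threshold):
        # tuples: list of (source_rgb, target_rgb) pairs; both are 3-tuples.
        models = [fit_channel([s[c] for s, _ in tuples], [t[c] for _, t in tuples])
                  for c in range(3)]
        def error(s, t, c):
            a, b = models[c]
            return abs(a * s[c] + b - t[c])
        return [(s, t) for s, t in tuples
                if all(error(s, t, c) < threshold for c in range(3))]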

US Pat. No. 10,116,918

DISPARITY IMAGE GENERATING DEVICE, DISPARITY IMAGE GENERATING METHOD, AND IMAGE

TOYOTA JIDOSHA KABUSHIKI ...

1. A disparity image generating device comprising:a disparity image acquiring unit configured to acquire chronologically consecutive first and second disparity images based on an imaging result of an environment around a vehicle, the first disparity image being a disparity image acquired by the disparity image acquiring unit at a first time, the second disparity image being a disparity image acquired by the disparity image acquiring unit at a second time which is a time after the first time;
a first correcting unit configured to optimize a disparity value of a first target pixel from among pixels configuring the first disparity image using semi-global matching, based on a disparity value of a pixel configuring at least a part of a first pixel route which is in a first pixel region configured with a plurality of pixels around the first target pixel, the first pixel route being a pixel route in at least one direction from the first target pixel toward the first pixel region;
a second correcting unit configured to optimize a disparity value of a second target pixel from among pixels configuring the second disparity image using the semi-global matching, based on a disparity value of a pixel configuring at least a part of a second pixel route which is in a second pixel region configured with a plurality of pixels around the second target pixel, the second pixel route being a pixel route in at least one direction from the second target pixel toward the second pixel region, the second pixel route being a pixel route in a direction approximately opposite to a direction of the first pixel route, the second target pixel being positioned at a position corresponding to the first target pixel; and
a disparity image generating unit configured to calculate a desired disparity image, based on a comparison between the first disparity image optimized by the first correcting unit and the second disparity image optimized by the second correcting unit.

US Pat. No. 10,116,917

IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND STORAGE MEDIUM

CANON KABUSHIKI KAISHA, ...

1. An image processing apparatus that corrects a depth image representing information about a depth to a subject in a scene, which is the same scene of a plurality of images obtained by photographing the same subject from different viewpoints, the image processing apparatus comprising:a unit configured to determine a pixel of interest in a first image that is taken to be a reference among the plurality of images and peripheral pixels existing within a predetermined region according to the pixel of interest, the peripheral pixels being pixels for which a weight is to be derived;
an evaluation value derivation unit configured to derive a matching evaluation value between the plurality of images for the respective peripheral pixels;
a weight determination unit configured to determine a weight for the respective peripheral pixels in the correction based on the evaluation value; and
a correction unit configured to correct a pixel value of the pixel of interest in the depth image by using the weight and a pixel value of the peripheral pixels,
wherein the evaluation value derivation unit specifies, for each peripheral pixel, a pixel corresponding to the pixel of interest in a second image among the plurality of images, the second image being different from the first image, by using a depth of the respective peripheral pixels in the depth image and derives the matching evaluation value of each peripheral pixel based on the pixel value of the pixel of interest and the pixel value of the specified corresponding pixel.

US Pat. No. 10,116,916

METHOD FOR DATA REUSE AND APPLICATIONS TO SPATIO-TEMPORAL SUPERSAMPLING AND DE-NOISING

NVIDIA CORPORATION, Sant...

1. A method, comprising:generating a current frame of image data in a memory; and
for each pixel in the current frame of image data:
sampling a resolved pixel color for a corresponding pixel in a previous frame of image data stored in the memory;
adjusting the resolved pixel color based on a statistical distribution of color values for a plurality of samples in the neighborhood of the pixel in the current frame of image data to generate an adjusted pixel color, comprising:
calculating a mean color value based on the color values for a plurality of samples in the neighborhood of the pixel;
calculating a variance for each color component based on the color values for the plurality of samples in the neighborhood of the pixel; and
generating an axis-aligned bounding box (AABB) based on the mean color value and a standard deviation from the mean color value, wherein the standard deviation from the mean color value is calculated based on the variance, for each color component; and
blending a color value for the pixel in the current frame of image data with the adjusted pixel color to generate a resolved pixel color for the pixel in the current frame of image data.
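
The history-adjustment step in the claim above is a well-known temporal-antialiasing pattern: build a per-component bounding interval from the neighbourhood mean and standard deviation, clamp the previous frame's resolved colour to it, then blend with the current sample. The sketch below is a minimal per-pixel illustration; the blend weight of 0.1 and the use of exactly one standard deviation are assumptions, not values taken from the patent.

    import math

    def resolve_pixel(history_rgb, neighborhood_rgb, current_rgb, alpha=0.1):
        adjusted = []
        for c in range(3):
            samples = [s[c] for s in neighborhood_rgb]
            mean = sum(samples) / len(samples)
            var = sum((s - mean) ** 2 for s in samples) / len(samples)
            std = math.sqrt(var)
            lo, hi = mean - std, mean + std              # one axis of the AABB
            adjusted.append(min(max(history_rgb[c], lo), hi))   # adjusted pixel color
        # blend the current sample with the adjusted history colour (resolved pixel color)
        return [alpha * current_rgb[c] + (1.0 - alpha) * adjusted[c] for c in range(3)]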

US Pat. No. 10,116,914

STEREOSCOPIC DISPLAY

3DI LLC, Burke, VA (US)

1. An auto-stereoscopic image display device comprising:at least one sensor to track positions of eyes, based on facial recognition, in relation to the stereoscopic device;
a processor configured to map coordinates of a three-dimensional (3D) virtual object generated by the stereoscopic display device, wherein the 3D virtual object has a location in the physical space in front of and relative to the display device;
an image display panel configured to
display separate pairs of first and second stereoscopic images of the 3D virtual object displayed to the eyes for each viewer of the stereoscopic display device so that the 3D virtual object is seen by each viewer in the same physical location, wherein the first and the second stereoscopic images for each viewer are based upon viewpoint perspectives of an angle and distance of the perspective location of the eyes of each viewer as detected by the at least one sensor,
adjust at least two layers of blocking electronically configurable liquid crystal material in a repeating pattern of closed geometric shapes to block light along at least two axes, wherein each of the first and the second stereoscopic images for each viewer are directed to the left and right eyes of each viewer by passing through the layers of blocking electronically configurable liquid crystal material such that a different repeating pattern of closed geometric shapes for layers of material is configured for each individual user independent of a tilt angle of said user's head, and
block light from the first and the second stereoscopic images so it may not be seen outside of the corresponding viewpoint in both an X and a Y axis, wherein a Z axis is perpendicular to the stereoscopic display device,
so that another viewer does not see the pairs of the first and the second stereoscopic images of the 3D virtual object when the another viewer is located above, below, left, or right of the corresponding viewpoint,
wherein the 3D virtual object is viewable by a plurality of viewers, as shown by the separate first and the second stereoscopic images received at individual viewer as configured by the at least two layers of blocking electronically configurable liquid crystal material arranged in repeating patterns of closed geometric shapes,
wherein a hand, a finger, or a pointer interacts with the 3D virtual object when detected near the physical location of the 3D virtual object as shown by the first and the second stereoscopic images for each viewer such that the first and the second stereoscopic images are updated for each viewer based on the interaction with the 3D virtual object and the angle and distance of the perspective locations of the eyes of the respective viewer.

US Pat. No. 10,116,913

3D VIRTUAL REALITY MODEL SHARING AND MONETIZATION ELECTRONIC PLATFORM SYSTEM

DOUBLEME, INC, San Jose,...

1. A three-dimensional body double-generating, social sharing, and monetization electronic system comprising:a HoloPortal electronic system that incorporates a dedicated physical studio space including a center stage, a plurality of stationary cameras surrounding the center stage, and a 3D reconstruction electronic system, which is configured to capture, calculate, reconstruct, and generate graphical transformation of a target object to create a 3D body double model from pre-calibrated image sources from the plurality of stationary cameras;
a HoloCloud electronic system comprising uncalibrated portable video recording devices positioned at multiple-angle views around the target object to generate uncalibrated raw multiple-angle video data streams, a cloud computing resource containing a scalable number of graphics processing units (GPUs) that receive the uncalibrated raw multiple-angle video data streams from the uncalibrated portable video recording devices, a pre-processing module in the cloud computing resource that calibrates temporal, spatial, and photometrical variables deduced from the uncalibrated raw multiple-angle video data streams as a post-capture process, which in turn generates background 3D geometry and 360-degree virtual reality videos, and a 3D reconstruction module in the cloud computing resource for providing depth map computations, voxel grid reconstructions, and deformed mesh generations for creation of another 3D body double model that resembles the target object;
a 3D model and content database configured to store the 3D body double model created from the HoloPortal electronic system or the HoloCloud electronic system;
an electronic 3D content sharing software executed on a computer server connected to the 3D model and content database, wherein the electronic 3D content sharing software configures the computer server to upload, list, transmit, and share 3D model animations and 3D contents that are created from the HoloPortal electronic system and the HoloCloud electronic system; and
a client-side 3D content viewer and management user interface executed on a notebook computer, a desktop computer, a mobile communication device, or a web server, wherein the client-side 3D content viewer and management user interface is configured to purchase, sell, transmit, receive, or playback a 3D content incorporating the 3D body double model via the electronic 3D content sharing software and the 3D model and content database.

US Pat. No. 10,116,912

METHOD OF DISPLAYING AN IMAGE AND DISPLAY DEVICE FOR PERFORMING THE SAME

Samsung Display Co., Ltd....

1. A method of displaying an image, comprising:receiving image data for a content image;
determining a modulation region and a peripheral region in the content image based on at least one of a first position derived from a mouse device and a second position derived from an eye detecting device;
generating a left-eye content image and a right-eye content image based on the image data for the content image such that the modulation region has a three-dimensional depth;
displaying the left-eye content image and the right-eye content image; and
periodically changing the three-dimensional depth of the modulation region by changing a modulation distance between the modulation region in the left-eye content image and the modulation region in the right-eye content image based at least in part on a periodic modulation reference timing.
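The periodic depth change recited above amounts to varying the horizontal offset (the modulation distance) between the copies of the modulation region in the left-eye and right-eye images on a fixed timing. A minimal Python sketch of that idea follows; the sinusoidal profile and all parameter values are illustrative assumptions, not taken from the patent:

    import math

    def modulation_distance_px(t_seconds, period_s=2.0, max_distance_px=8.0):
        # Modulation distance driven by a periodic modulation reference timing.
        # The sinusoidal profile and these constants are assumptions for illustration.
        phase = 2.0 * math.pi * (t_seconds % period_s) / period_s
        return max_distance_px * (0.5 + 0.5 * math.sin(phase))  # never negative

    def region_offsets(t_seconds):
        # Horizontal offsets applied to the modulation region in the left-eye and
        # right-eye content images; their difference is the modulation distance.
        d = modulation_distance_px(t_seconds)
        return -d / 2.0, +d / 2.0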

US Pat. No. 10,116,911

REALISTIC POINT OF VIEW VIDEO METHOD AND APPARATUS

QUALCOMM Incorporated, S...

1. A method of providing video corresponding to a dynamic and arbitrary viewing position, the method comprising:receiving, at a video server from at least one camera, image data representing multiple views of a scene, each view having a capture position identifying a capture angle and a capture distance of a camera capturing image data for the view;
receiving, at the video server from a viewing device for presenting the video, a server capability request, wherein the server capability request is received before providing image data of the scene to the viewing device;
transmitting, from the video server to the viewing device for presenting the video, in response to the server capability request, server capability information indicating that the video server can generate a video data stream corresponding to a requested viewing position;
receiving, at the video server, a request for the scene from the viewing device for presenting the video, the request including a viewing position relative to the viewing device of a viewer within a viewing area of the viewing device detected by a position detector coupled with the viewing device;
determining, by the video server, that the multiple views do not include a view associated with a capture position aligned with the viewing position;
identifying, by the video server, a first view of the multiple views of the scene and a second view of the multiple views of the scene, said identifying based on a comparison of the viewing position and the capture position of each view included in the multiple views, and wherein the first view is captured from a first capture position and the second view is captured from a second capture position, and wherein the viewing position is between the first capture position and the second capture position; and
generating, by the video server, an output stream including first image data for the first view and second image data for the second view for transmission to the viewing device, wherein a three-dimensional image of the scene is formed from a combination of the first image data with the second image data.
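When no captured view is aligned with the requested viewing position, the server identifies the two capture positions that bracket it. A minimal Python sketch of that selection step is shown below; the view records, identifiers, and angles are hypothetical and the sketch is an illustration rather than the claimed implementation:

    def pick_bracketing_views(views, viewing_angle_deg):
        # views: list of dicts like {"id": ..., "angle_deg": ...} describing capture positions.
        # Returns the pair of views whose capture angles bracket the requested angle.
        ordered = sorted(views, key=lambda v: v["angle_deg"])
        for lo, hi in zip(ordered, ordered[1:]):
            if lo["angle_deg"] <= viewing_angle_deg <= hi["angle_deg"]:
                return lo, hi
        return ordered[0], ordered[-1]  # requested angle lies outside the captured span

    # Example: a 25-degree viewing position falls between the 10- and 40-degree captures.
    views = [{"id": "camA", "angle_deg": 10.0},
             {"id": "camB", "angle_deg": 40.0},
             {"id": "camC", "angle_deg": 70.0}]
    first_view, second_view = pick_bracketing_views(views, 25.0)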

US Pat. No. 10,116,909

DETECTING A VERTICAL CUT IN A VIDEO SIGNAL FOR THE PURPOSE OF TIME ALTERATION

PRIME IMAGE DELAWARE, INC...

1. A method, comprising:receiving, in real-time, a video program segment having a sequence of digital video images, each digital video image having a plurality of multi-bit pixels;
generating, for each multi-bit pixel, a single-bit indicator that is set when the pixel is active and cleared when the pixel is not active;
counting the single-bit indicators that are set to represent active pixels in each one of adjacent frames of the sequence of digital video images, wherein a vertical cut is not detected when the count between adjacent frames is approximately the same;
calculating a percentage of change value between adjacent frames when the count between adjacent frames is not approximately the same;
comparing the percentage of change value to a positive threshold value and a negative threshold value, wherein a positive change bit is set when the percentage of change value exceeds the positive threshold value, a negative change bit is set when the percentage of change value exceeds the negative threshold value, and a no change bit is set when the percentage of change value does not exceed the positive threshold value or the negative threshold value;
analyzing a pattern of the positive change bits, the negative change bits, and the no change bits over a plurality of sequential digital video images;
determining that a vertical cut has occurred in the sequence of digital video images when the pattern of the positive change bits, negative change bits, and no change bits matches a pre-defined pattern; and
adding or removing individual frames in real-time at the location of the vertical cut to alter a duration of the video program segment.
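The claim walks through a concrete pipeline: count active pixels per frame, classify the frame-to-frame change against positive and negative thresholds, and match the resulting bit pattern. A compact Python sketch of that pipeline follows; the threshold values and the example pattern are assumptions for illustration only:

    def classify_change(prev_count, curr_count, pos_thresh=0.10, neg_thresh=0.10, same_tol=0.02):
        # Compare active-pixel counts of adjacent frames and return one of
        # '+', '-', '0' (positive change, negative change, no change).
        if prev_count == 0:
            return "0"
        change = (curr_count - prev_count) / prev_count
        if abs(change) <= same_tol:
            return "0"
        if change > pos_thresh:
            return "+"
        if change < -neg_thresh:
            return "-"
        return "0"

    def detect_vertical_cut(active_pixel_counts, pattern="+-"):
        # Scan the per-frame active-pixel counts and report the index where the
        # change-bit pattern matches the pre-defined pattern (None if absent).
        bits = "".join(classify_change(a, b)
                       for a, b in zip(active_pixel_counts, active_pixel_counts[1:]))
        idx = bits.find(pattern)
        return idx if idx >= 0 else None

    # Example: a jump up then back down in the active-pixel count marks a cut.
    print(detect_vertical_cut([1000, 1005, 1400, 1000, 995]))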

US Pat. No. 10,116,907

METHODS, SYSTEMS AND APPARATUSES FOR OPTICALLY ADDRESSED IMAGING SYSTEM

THE BOEING COMPANY, Chic...

1. A method of addressing a projection system comprising the steps of:positioning a plasma-containing projection device at a predetermined location;
positioning an electro-optical device at a predetermined location relative to the plasma-containing projection device, the electro-optical device operative to generate a write beam;
activating the projection device by applying a voltage across the plasma-containing device to generate plasma in the plasma-containing device;
generating the write beam;
directing the write beam to the plasma-containing projection device; and
exclusively optically addressing information to the plasma-containing projection device via the write beam;
wherein the write beam is operative to cause a shift in the value of an index of refraction of a material in the plasma-containing projection device to thereby generate an image projected by the plasma-containing projection device.

US Pat. No. 10,116,905

SYSTEM AND METHOD OF VIRTUAL ZONE BASED CAMERA PARAMETER UPDATES IN VIDEO SURVEILLANCE SYSTEMS

HONEYWELL INTERNATIONAL I...

1. A method comprising:a processor of a surveillance system recording first video with a first level of a picture quality for a first camera;
the processor recording second video with the first level of the picture quality for a second camera;
the processor detecting a selection of a first portion of a secured area, wherein the selection of the first portion of the secured area is received via an operator drawing a shape on a diagram of the secured area displayed on a user interface, and wherein the processor detects a second portion of the secured area outside of the first portion of the secured area as an unselected zone;
the processor identifying the first camera within the first portion of the secured area;
the processor recording the first video with a second level of the picture quality for the first camera for a predetermined time period, wherein the second level of the picture quality includes increased video quality relative to the first level of the picture quality by increasing image resolution, increasing frames per second, decreasing a group of pictures (GOP) value, decreasing a compression ratio, or decreasing a bit rate;
the processor identifying the second camera within the unselected zone;
the processor recording the second video with a third level of the picture quality for the second camera for the predetermined time period, wherein the third level of the picture quality includes decreased video quality relative to the first level of the picture quality by decreasing the image resolution, decreasing the frames per second, increasing the GOP value, increasing the compression ratio, or increasing the bit rate; and
the processor recording the first video with the second level of the picture quality for the first camera for the predetermined time period concurrently with recording the second video with the third level of the picture quality for the second camera for the predetermined time period,
wherein the first video with the first level of the picture quality and the second video with the first level of the picture quality combined does not exceed a predetermined imposed bandwidth constraint, and
wherein the first video with the second level of the picture quality and the second video with the third level of the picture quality combined does not exceed the predetermined imposed bandwidth constraint.
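In practice this amounts to assigning one parameter set to cameras inside the drawn zone and a reduced set to cameras in the unselected zone, while keeping the combined load under the bandwidth limit. A small Python sketch of that bookkeeping is below; the parameter values, the bits-per-pixel bandwidth model, and the limit are assumptions, not values from the patent:

    def recording_params(in_selected_zone):
        # Second level (raised quality) inside the drawn zone, third level (lowered
        # quality) in the unselected zone; values are illustrative only.
        if in_selected_zone:
            return {"resolution": (1920, 1080), "fps": 30, "gop": 15}
        return {"resolution": (640, 360), "fps": 8, "gop": 60}

    def estimated_kbps(cfg, bits_per_pixel=0.05):
        # Crude bandwidth estimate from resolution and frame rate (assumed model).
        w, h = cfg["resolution"]
        return w * h * cfg["fps"] * bits_per_pixel / 1000.0

    configs = [recording_params(True), recording_params(False)]
    assert sum(estimated_kbps(c) for c in configs) <= 5000  # illustrative imposed constraint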

US Pat. No. 10,116,904

FEATURES IN VIDEO ANALYTICS

HONEYWELL INTERNATIONAL I...

1. A video analytics function for streaming video from a video source arranged to monitor a field of view (FOV) that modifies a compression level of an object of interest (“object”) within the FOV, the video analytics function embodied as a set of instructions on a non-transitory computer readable medium, the video analytics function executable by a computer and implementing the following steps:reconstructing the FOV comprising the streaming video for viewing at an end-user interface;
receiving end-user commands at the end-user interface to define an object field encompassing the object within the FOV based on a monitoring priority for the object;
defining the compression level for the object including partial compression or full compression that fully masks the object;
compressing the streaming video within the object field according to the compression level;
monitoring the FOV of the streaming video;
analyzing first data associated with the FOV of the streaming video for a detectable event including movement and a direction of a person in the FOV; and
automatically decreasing the compression level of the object field in response to a detected event, wherein the detected event includes the movement of the person within the object field.

US Pat. No. 10,116,902

PROGRAM SEGMENTATION OF LINEAR TRANSMISSION

Comcast Cable Communicati...

1. A method comprising:determining, by a computing device and based on content scheduling information associated with a media stream:
content from the media stream, wherein the content comprises non-commercial content and commercial content; and
a content type associated with the non-commercial content;
determining, based on the content type, one or more expected visual elements corresponding to the content type;
determining, based on a comparison between the one or more expected visual elements and the content from the media stream, a non-commercial portion of the content from the media stream;
determining that a quantity of repeating elements in a second portion of the content from the media stream satisfies a threshold, wherein the second portion is different from the non-commercial portion; and
storing, after determining that the quantity satisfies the threshold, an updated version of the content from the media stream, wherein the updated version omits one or more of the repeating elements.
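One way to picture this flow: frames carrying the visual elements expected for the scheduled content type form the non-commercial portion, and elements that repeat often enough in the remaining portion are dropped from the stored copy. The Python sketch below is an illustration only; the element labels, the set-based frame model, and the threshold are assumptions:

    from collections import Counter

    def split_content(frames, expected_elements):
        # frames: list of sets of detected visual-element labels per frame.
        # Frames containing at least one expected element form the non-commercial portion.
        noncommercial, other = [], []
        for f in frames:
            (noncommercial if f & expected_elements else other).append(f)
        return noncommercial, other

    def omit_repeating(other_frames, threshold=3):
        # If an element repeats in the second portion at least `threshold` times,
        # drop the frames containing it from the stored version.
        counts = Counter(e for f in other_frames for e in f)
        repeating = {e for e, n in counts.items() if n >= threshold}
        if not repeating:
            return other_frames
        return [f for f in other_frames if not (f & repeating)]

    # Example: "scoreboard" is expected for a sports broadcast; "ad_logo" repeats.
    frames = [{"scoreboard"}, {"ad_logo"}, {"scoreboard"}, {"ad_logo"}, {"ad_logo"}]
    show, rest = split_content(frames, {"scoreboard"})
    kept = omit_repeating(rest)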

US Pat. No. 10,116,900

METHOD AND APPARATUS FOR INITIATING AND MANAGING CHAT SESSIONS

APPLE INC., Cupertino, C...

1. A machine-implemented method performed by at least one machine for initiating a video chat session, the method comprising:in response to a request for starting a single group video chat among a plurality of members, determining whether all members have a chat service account with the same chat service provider;
initiating multiple group video chats among the members in response to determining that not all of the plurality of members have a chat service account with the same chat service provider, wherein each member has at least one chat service account to participate in at least one of the multiple group video chats; and
after the multiple group video chats have started, merging the multiple group video chats into the single group video chat using communication among the members of the multiple group video chats, without involving at least one chat server associated with the chat service provider of at least one of the plurality of members.

US Pat. No. 10,116,899

METHOD AND APPARATUS FOR FACILITATING SETUP, DISCOVERY OF CAPABILITIES AND INTERACTION OF ELECTRONIC DEVICES

LOGITECH EUROPE, S.A., L...

1. A system for configuring and/or controlling one or more electronic devices, comprising:a beacon generation system that comprises:
a first processor;
a wireless transceiver that is configured to transmit a beacon signal that comprises beacon information; and
non-volatile memory having the beacon information stored therein, and also a number of instructions which, when executed by the first processor, cause the beacon generation system to perform operations comprising:
receive an input from a first electronic device or a user;
wirelessly transmit the beacon information to the first electronic device after receiving the input from the first electronic device or the user,
wherein the beacon information includes information that is used by a software application running on the first electronic device to:
select a second electronic device out of a plurality of external electronic devices; and
initiate communication with the second electronic device.

US Pat. No. 10,116,898

INTERFACE FOR A VIDEO CALL

FACEBOOK, INC., Menlo Pa...

1. A method, comprising:displaying a full-sized interface for a video call on a display associated with a first participant in the video call, wherein the display is a touch interface;
displaying a reduced-size interface for the video call on a portion of the display associated with the first participant in the video call, the portion being smaller than an entirety of the display, the interface comprising a main window displaying a current relevant video communication in the video call and a roster of additional participants in the video call;
registering a haptic contact initiation signal at a first location on the display in the portion of the display comprising the interface;
registering a haptic contact release signal at a second location on the display; and
moving the interface for the video call based on a difference between the first location and the second location;
receiving an instruction to display a second video communication associated with a second participant that is identified as a previous relevant video communication in the video call while a first video communication associated with the first participant is flagged as the current relevant video communication; and
displaying the second video communication in the main window of the interface.
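The drag behaviour recited in the middle of the claim reduces to translating the reduced-size interface by the vector between the haptic contact initiation and release points. The small Python sketch below illustrates that arithmetic with hypothetical coordinates:

    def move_interface(origin_xy, press_xy, release_xy):
        # Move the reduced-size call interface by the vector between the haptic
        # contact initiation and release points (a drag gesture).
        dx = release_xy[0] - press_xy[0]
        dy = release_xy[1] - press_xy[1]
        return (origin_xy[0] + dx, origin_xy[1] + dy)

    # Dragging from (100, 200) to (160, 260) shifts the interface by (+60, +60).
    print(move_interface((20, 40), (100, 200), (160, 260)))   # -> (80, 100)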

US Pat. No. 10,116,897

PHOTOMETRIC STABILIZATION FOR TIME-COMPRESSED VIDEO

Adobe Systems Incorporate...

1. In a digital medium environment to reduce at least some photometric characteristic changes of time-compressed video, a method implemented by a computing device, the method comprising:determining, by the computing device, correspondences of pixels in adjacent frames of a time-compressed video;
determining, by the computing device, photometric transformations between the adjacent frames of the time-compressed video, the photometric transformations describing how photometric characteristics of the correspondences change between the adjacent frames;
computing, by the computing device, a measure of photometric similarity between the adjacent frames based on the photometric characteristics of the correspondences;
computing, by the computing device, filters for smoothing photometric characteristic changes across the time-compressed video as combinations of the determined photometric transformations by combining the determined photometric transformations according to weights indicating that photometric transformations between similar frames of the time-compressed video, as indicated by the measure of photometric similarity, influence the filters more than the photometric transformations between less similar frames; and
generating, by the computing device, digital content comprising photometrically stabilized time-compressed video, in part, by using the computed filters to smooth the photometric characteristic changes.
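The filtering step combines the per-frame photometric transformations, weighting transformations from photometrically similar frames more heavily. A minimal Python sketch using a gain-only transformation model and Gaussian distance weighting (both simplifying assumptions) follows:

    import numpy as np

    def smoothing_filters(transforms, similarity, sigma=2.0):
        # transforms: array of shape (N,), a per-frame photometric gain.
        # similarity: array of shape (N, N), photometric similarity in [0, 1].
        # Each filter is a weighted average of nearby gains, where a frame's weight
        # grows with its similarity to the frame being filtered.
        n = len(transforms)
        filters = np.empty(n)
        idx = np.arange(n)
        for i in range(n):
            w = similarity[i] * np.exp(-0.5 * ((idx - i) / sigma) ** 2)
            filters[i] = np.sum(w * transforms) / np.sum(w)
        return filters

    def stabilize(frames, filters):
        # Apply the smoothed gains to damp abrupt brightness changes (8-bit frames assumed).
        return [np.clip(f * g, 0, 255) for f, g in zip(frames, filters)]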

US Pat. No. 10,116,895

SIGNAL DISPLAY OUTPUT METHOD, APPARATUS, AND SYSTEM

Huawei Technologies Co., ...

1. A signal display output method, comprising:receiving, by a TV box expansion device, a radio television signal by using a radio frequency port, wherein the radio television signal comprises a first television signal and a second television signal;
performing, by the TV box expansion device, demodulation processing on the radio television signal to obtain a to-be-decoded digital signal, comprising:
performing, by the TV box expansion device, demodulation processing on the first television signal to obtain a to-be-decoded first digital signal, and performing demodulation processing on the second television signal to obtain a to-be-decoded second digital signal;
sending, by the TV box expansion device, the to-be-decoded digital signal to an Internet Protocol (IP) TV box for decoding processing on the to-be-decoded digital signal to obtain a decoded digital signal for display output, comprising:
sending, by the TV box expansion device, the to-be-decoded first digital signal and the to-be-decoded second digital signal to the Internet Protocol (IP) TV box; and
receiving and storing, by the TV box expansion device, a decoded second digital signal sent by the IP TV box.

US Pat. No. 10,116,893

SELECTIVELY CONTROLLING A DIRECTION OF SIGNAL TRANSMISSION USING ADAPTIVE AUGMENTED REALITY

HIGHER GROUND LLC, Palo ...

1. A method comprising:(a) determining, by a communication device having an antenna, a direction to an intended transceiver, the direction being substantially in an actual direction to the intended transceiver;
(b) determining, by the communication device, a desired direction that is different from the direction to the intended transceiver;
(c) determining, by the communication device, an anticipated direction to the desired direction;
(d) determining, by the communication device and based on the desired direction, parameters of expected energy values corresponding to a plurality of pre-defined antenna directions of the antenna around the desired direction;
(e) receiving, by the communication device using the antenna, a plurality of measured energy values corresponding to a plurality of antenna directions of the antenna around the anticipated direction;
(f) calculating, by the communication device, a directional offset using the parameters of expected energy values and the plurality of measured energy values;
(g) generating, by the communication device, an updated anticipated direction by updating the anticipated direction based on the calculated directional offset; and
(h) repeating steps (e) through (g) using the updated anticipated direction as the anticipated direction.
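Steps (e) through (h) describe a closed loop: sample energies around the anticipated direction, estimate the directional offset, update the anticipated direction, and repeat. A toy Python version of that loop is sketched below; the Gaussian beam model, the sampled directions, and the centroid-based offset estimate are all illustrative assumptions rather than the claimed method:

    import numpy as np

    REL_DIRS_DEG = np.linspace(-15.0, 15.0, 7)   # pre-defined antenna directions around a pointing

    def beam_energy(offsets_deg, width_deg=10.0):
        # Assumed Gaussian beam profile: energy versus angular offset from the true direction.
        return np.exp(-0.5 * (np.asarray(offsets_deg) / width_deg) ** 2)

    def directional_offset(measured):
        # Step (f): estimate how far the anticipated direction is from the energy peak,
        # here as the energy-weighted centroid of the sampled directions.
        return float(np.sum(REL_DIRS_DEG * measured) / np.sum(measured))

    def track(anticipated_deg, true_deg, steps=4):
        # Steps (e)-(h): measure around the anticipated direction, compute the offset,
        # update the anticipated direction, and repeat.
        for _ in range(steps):
            measured = beam_energy(anticipated_deg + REL_DIRS_DEG - true_deg)  # step (e)
            anticipated_deg += directional_offset(measured)                    # steps (f)-(g)
        return anticipated_deg

    print(track(anticipated_deg=0.0, true_deg=6.0))   # converges toward 6 degrees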

US Pat. No. 10,116,892

BITLINE BOOST FOR FAST SETTLING WITH CURRENT SOURCE OF ADJUSTABLE BIAS

OmniVision Technologies, ...

1. A fast settling output line circuit, comprising: a photodiode (PD) (202) adapted to accumulate image charges in response to incident light;
at least one transfer (TX) transistor (204) coupled between the PD (202) and a floating diffusion (FD) (208) to transfer the image charges from the PD (202) to the floating diffusion (FD) (208), wherein a transfer (TX) gate voltage (206) controls transmission of the image charges from a TX receiving terminal (207) of the TX transistor to the FD (208);
a reset (RST) transistor (210) coupled to supply a reset FD voltage (VRFD) to the FD (208), wherein a reset (RST) gate voltage (212) controls the RST transistor;
a source follower (SF) transistor (216) coupled to receive a voltage of the FD (208) from a SF gate terminal and provide an amplified signal to a SF source terminal (218);
a bitline enable transistor (226) coupled to link between a bitline (224) and a bitline source node (BLSN) (230), wherein a bitline enable voltage (228) controls the bitline enable transistor (226);
a current source generator (231) coupled to connect between the BLSN (230) and a ground (AGND), wherein the current source generator (231) sinks adjustable current from the BLSN (230) to the AGND through a cascode transistor (232) and a bias transistor (242) controlled by a cascode control voltage (234) and a bias control voltage (244);
a cascode hold capacitor (250) coupled between the cascode control voltage (234) and the AGND;
a bias hold capacitor (252) coupled between the bias control voltage (244) and the AGND; and
a bias boost driver (255) coupled to control the cascode control voltage (234) and the bias control voltage (244).

US Pat. No. 10,116,891

IMAGE SENSOR HAVING STACKED IMAGING AND DIGITAL WAFERS WHERE DIGITAL WAFER HAS STACKED CAPACITORS AND LOGIC CIRCUITRY

1. An electronic device, comprising:a first integrated circuit die having formed therein at least one photodiode, read circuitry for the at least one photodiode, and readout circuitry for the first integrated circuit die, wherein the read circuitry has an input coupled to the at least one photodiode and an output, wherein the readout circuitry has an input coupled to the output of the read circuitry and an output;
a second integrated circuit die in a stacked arrangement with the first integrated circuit die and having formed therein at least one storage capacitor associated with the at least one photodiode; and
an interconnect between the first and second integrated circuit dies for coupling the output of the read circuitry to the at least one storage capacitor;
wherein the output of the readout circuitry provides for readout of data stored in the at least one storage capacitor.

US Pat. No. 10,116,889

IMAGE SENSOR WITH TWO-DIMENSIONAL SPLIT DUAL PHOTODIODE PAIRS

OmniVision Technologies, ...

1. An image sensor, comprising:an array of split dual photodiode (DPD) pairs arranged into a plurality of first groupings and a plurality of second groupings, wherein each first grouping of the array of split DPD pairs consists entirely of either first-dimension split DPD pairs or entirely of second-dimension split DPD pairs, wherein each first grouping of the array of split DPD pairs consisting of the first-dimension split DPD pairs is adjacent to an other first grouping of the array of split DPD pairs consisting of the second-dimension split DPD pairs, wherein the first-dimension is orthogonal to the second-dimension, wherein each one of the split DPD pairs is coupled to sense both phase information and image information from incident light;
a plurality of floating diffusion (FD) regions arranged in each first grouping of the split DPD pairs; and
a plurality of transfer transistors, wherein each one of the plurality of transfer transistors is coupled to a respective photodiode of a respective split DPD pair, and is coupled between the respective photodiode and a respective one of the plurality of FD regions.

US Pat. No. 10,116,888

EFFICIENT METHOD AND SYSTEM FOR THE ACQUISITION OF SCENE IMAGERY AND IRIS IMAGERY USING A SINGLE SENSOR

Eyelock LLC, New York, N...

1. A method of processing images acquired using a single image sensor, comprising:acquiring, by an image sensor, a first image of an object at a predetermined distance with respect to the image sensor, the first image including a time-invariant signal component corresponding to the object, and a random time-varying signal component introduced by one or more noise sources; and
processing the first image according to a selection from a first mode and a second mode, comprising:
if the first mode is selected, acquiring the first image using a first portion of the image sensor, and processing the first image acquired via the first portion by retaining signals from the time-invariant and time-varying signal components with spatial frequencies at and below a threshold spatial frequency predetermined for iris recognition, wherein the object comprises an iris; and
if the second mode is selected, acquiring the first image using a second portion of the image sensor, and processing the first image acquired via the second portion by reducing at least a portion of the signals from the time-invariant and time-varying signal components with spatial frequencies at or below the threshold spatial frequency.
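The two modes differ in whether spatial frequencies at or below a threshold are retained (iris mode) or reduced (scene mode). A small FFT-based Python sketch of that distinction follows; the ideal low-pass shape, the threshold, and the attenuation factor are assumptions for illustration:

    import numpy as np

    def radial_frequency_grid(shape):
        # Normalized radial spatial frequency (0 at DC, 0.5 at Nyquist) per FFT bin.
        fy = np.fft.fftfreq(shape[0])[:, None]
        fx = np.fft.fftfreq(shape[1])[None, :]
        return np.sqrt(fy ** 2 + fx ** 2)

    def process_first_mode(image, f_thresh=0.15):
        # Iris mode: retain only spatial frequencies at and below the threshold.
        spec = np.fft.fft2(image)
        spec[radial_frequency_grid(image.shape) > f_thresh] = 0.0
        return np.real(np.fft.ifft2(spec))

    def process_second_mode(image, f_thresh=0.15, attenuation=0.25):
        # Scene mode: reduce the components at or below the threshold instead.
        spec = np.fft.fft2(image)
        low = radial_frequency_grid(image.shape) <= f_thresh
        spec[low] *= attenuation
        return np.real(np.fft.ifft2(spec))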

US Pat. No. 10,116,887

SOLID-STATE IMAGING DEVICE AND CAMERA

PANASONIC INTELLECTUAL PR...

1. A solid-state imaging device, comprising:a plurality of pixel circuits arranged in rows and columns;
a plurality of unit power supply circuits that generate a second power supply voltage from a first power supply voltage based on a reference voltage and supply the second power supply voltage to amplifier transistors provided in the plurality of pixel circuits; and
a regulator circuit that generates the reference voltage that is constant,
wherein each of the plurality of unit power supply circuits is provided for a corresponding one of the columns of the plurality of pixel circuits or for a corresponding one of the pixel circuits, and supplies the second power supply voltage to the amplifier transistors in the pixel circuits that belong to the corresponding one of the columns or to the amplifier transistor in the corresponding one of the pixel circuits.

US Pat. No. 10,116,886

DEVICE AND METHOD FOR DIRECT OPTICAL IMAGE CAPTURE OF DOCUMENTS AND/OR LIVE SKIN AREAS WITHOUT OPTICAL IMAGING ELEMENTS

JENETRIC GmbH, Jena (DE)...

1. A device for direct optical recording of a security-related object without optically imaging elements, the device comprising:a placement surface for depositing the object, and a sensor layer disposed under the object on a substrate layer transparent at least in a visible wavelength range;
the sensor layer having light-sensitive elements in a two-dimensional pixel grid and being disposed in a layer body with a circuitry based on thin film transistor (TFT) electronics;
a light source being a primary light-emitting layer for illuminating the object with at least light portions of the primary light-emitting layer from a direction of the sensor layer through the placement surface, wherein all layers of the layer body disposed between the primary light-emitting layer and the placement surface transmit at least portions of light in the visible wavelength range;
the light-sensitive elements of the sensor layer being disposed at a distance of less than a mean pixel spacing from the object on the placement surface, the mean pixel spacing being defined by the two dimensional pixel grid;
the light sensitive elements each having a control unit disposed within the sensor layer for controlling an exposure time to obtain an image captured with a predefined exposure time;
a shutter for changing the exposure time by changing a shutter setting of the light sensitive elements in the sensor layer if an overexposure or underexposure has been determined;
a storage for storing the image and for storing a resulting image when no further change of the exposure time is needed; and
an internal computing device for analyzing the image at least for overexposure or underexposure, for determining whether a further iteration is needed to change the exposure time, and for further evaluating illumination intensity and adapting the illumination intensity of the primary light-emitting layer below the placement surface if an underexposure or overexposure of the object is determined;
wherein the security-related object is selected from the groups consisting of personal identification documents, passports or driver's licenses and single fingerprints, multiple fingerprints and handprints.

US Pat. No. 10,116,882

DISPLAY APPARATUS FOR SUPERIMPOSING AND DISPLAYING IMAGES

CASIO COMPUTER CO., LTD.,...

1. A display apparatus comprising:a display unit; and
a processor that is configured to:
perform control for superimposing and displaying a plurality of images in the display unit such that at least one of the plurality of images can be observed through one or more other images distinguishably;
designate one or more of the plurality of images; and
detect a user manipulation performed for the plurality of images,
wherein the processor performs control for changing the designated one or more images spatially or temporally according to the detected user manipulation while keeping the plurality of images superimposed and displayed.

US Pat. No. 10,116,881

IMAGE APPARATUS AND METHOD FOR RECEIVING VIDEO SIGNAL IN MULTIPLE VIDEO MODES

SAMSUNG ELECTRONICS CO., ...

1. A video signal processing apparatus comprising:a video signal input unit including a plurality of video input terminals that includes a first video input terminal for receiving a plurality of types of video signals and a second video input terminal for receiving one type of video signals; and
a signal processing unit configured to:
determine whether a first video signal is received via the first video input terminal,
determine whether a second video signal is received via the second video input terminal,
in response to the second video signal being received via the second video input terminal while the first video signal is being received via the first video input terminal, process the first and second video signals received via the first video input terminal and the second video input terminal based on an automatically determined first video mode corresponding to a first type of the plurality of types of video signals, and
in response to the second video signal not being received via the second video input terminal while the first video signal is being received via the first video input terminal, process the first video signal received via the first video input terminal based on an automatically determined second video mode corresponding to a second type of the plurality of types of video signals.

US Pat. No. 10,116,879

METHOD AND APPARATUS FOR OBTAINING AN IMAGE WITH MOTION BLUR

Alcatel Lucent, Boulogne...

1. Method for obtaining an image containing a portion with motion blur, comprising:controlling at least one camera to take a first, second and third picture in a determined order of an object and a background, such that said first picture is taken with a first exposure time, said second picture with a second exposure time, and said third picture with a third exposure time, said second exposure time being longer than said first and said third exposure time, such that said second picture contains a blurred image of the background and/or the object if said object and/or said background is moving with respect to said at least one camera;
generating a final image containing at least a portion of said blurred image of the second picture as well as a portion derived from said first and/or third picture using said first, second and third picture,
wherein generating of the final image comprises:
using the first and the third picture to determine a shape and a position of the object in said first and said third picture;
isolating the at least a portion of the blurred image from the second picture, using the position and shape of the object in the first and third picture; and
combining the isolated at least a portion of the blurred image with a portion derived from the first and/or third picture to obtain the final image.
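The generation step uses the two short exposures to locate the moving object and then pastes the blurred rendering of that object from the long (second) exposure into an otherwise sharp frame. A rough Python sketch of that compositing, using a simple frame-difference mask and a crude dilation (both simplifying assumptions), is shown below:

    import numpy as np

    def object_mask(pic_short_a, pic_short_b, thresh=25):
        # Pixels that change noticeably between the two short-exposure pictures
        # (first and third) are treated as belonging to the moving object's path.
        diff = np.abs(pic_short_a.astype(np.int16) - pic_short_b.astype(np.int16))
        return (diff.max(axis=-1) if diff.ndim == 3 else diff) > thresh

    def compose_final(pic_first, pic_long, pic_third, dilate_px=2):
        # Keep the blurred object from the long (second) exposure, take the rest
        # of the scene from a short exposure.
        mask = object_mask(pic_first, pic_third)
        for _ in range(dilate_px):   # crude dilation so the mask covers the blur trail
            mask = mask | np.roll(mask, 1, 0) | np.roll(mask, -1, 0) \
                        | np.roll(mask, 1, 1) | np.roll(mask, -1, 1)
        out = pic_first.copy()
        out[mask] = pic_long[mask]
        return out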

US Pat. No. 10,116,878

METHOD FOR PRODUCING MEDIA FILE AND ELECTRONIC DEVICE THEREOF

SAMSUNG ELECTRONICS CO., ...

1. A method for producing a media file in an electronic device, the method comprises:detecting an event during recording of media frames;
determining at least one effect to be applied on the media frames;
applying the determined effect on at least one of at least one first media frame from a first set of the media frames and at least one second media frame from a second set of the media frames; and
generating a media file comprising the first and second sets of the media frames.

US Pat. No. 10,116,877

IMAGE PROCESSING APPARATUS AND IMAGE PROCESSING METHOD

Canon Kabushiki Kaisha, ...

1. An image processing apparatus comprising:one or more processors; and
a memory storing instructions which, when the instructions are executed by the one or more processors, cause the image processing apparatus to function as:
an obtaining unit configured to obtain a first image and a second image;
a determination unit configured to determine a partial area of the first image as a composite area to be combined with the second image; and
a combining unit configured to combine the second image with the composite area,
wherein the determination unit determines the composite area based on distance information with regard to a plurality of partial areas of the first image, and
wherein the determination unit further sets a prohibited area in the first image, and does not set, among the plurality of partial areas, a partial area that overlaps the prohibited area as the composite area.

US Pat. No. 10,116,875

IMAGE PICKUP APPARATUS AND METHOD FOR CONTROLLING THE SAME TO PREVENT DISPLAY OF A THROUGH IMAGE FROM BEING STOPPED WHEN A SHUTTER UNIT IS NOT COMPLETELY OPENED

Olympus Corporation, Tok...

1. An image pickup apparatus comprising:an image pickup device including an imaging plane on which imaging pixels are arranged;
a shutter unit which adjusts an amount of light incident upon the imaging plane;
an image pickup control unit which drives the shutter unit and picks up a still image by the image pickup device, captures a first through image by the image pickup device when the shutter unit is opened, and picks up a second through image including a light-shielded area by the image pickup device, the light-shielded area being formed by shielding part of light incident upon the imaging plane by the shutter unit when the shutter unit is partly light-shielded; and
a display control unit which causes a display device to display a through image using at least the first through image and the second through image,
wherein the display control unit superimposes a superimposing image on the light-shielded area of the second through image to cause the display device to display a through image based on the second through image on which the superimposing image is superimposed,
wherein the display control unit includes an advice display unit which superimposes an advice display on the second through image as the superimposing image during a period from when the still image is completely picked up until at least the shutter unit is opened.

US Pat. No. 10,116,874

ADAPTIVE CAMERA FIELD-OF-VIEW

MICROSOFT TECHNOLOGY LICE...

1. A display device, comprising:a display;
a movable mount;
a camera having an optical field-of-view;
an orientation sensor; and
a controller configured to receive image output from the camera, select, based on the image output, a first clipped field-of-view of the camera to thereby capture a target within the first clipped field-of-view, and in response to a change in an orientation of the camera identified by output from the orientation sensor, select, based on the image output and the output from the orientation sensor, a second clipped field-of-view to thereby capture the target within the second clipped field-of-view, the first and second clipped field-of-views being subsets of the optical field-of-view and being angularly offset from each other.

US Pat. No. 10,116,873

SYSTEM AND METHOD TO ADJUST THE FIELD OF VIEW DISPLAYED ON AN ELECTRONIC MIRROR USING REAL-TIME, PHYSICAL CUES FROM THE DRIVER IN A VEHICLE

Ambarella, Inc., Santa C...

1. An apparatus comprising:a first sensor configured to generate a first video signal based on a targeted view from a vehicle;
a second sensor configured to generate a second video signal based on a targeted view of a driver; and
a processor configured to (A) receive said first video signal, (B) receive said second video signal, (C) determine a field of view to present to said driver, (D) generate a third video signal and (E) present said third video signal to an electronic mirror configured to show said field of view, wherein (a) said field of view is determined based on (i) a body position of said driver extracted from said second video signal by determining a distance from said second sensor and (ii) said first video signal, (b) said distance from said second sensor is based on a comparison of a number of pixels of a known object in a first video frame showing an interior of said vehicle without said driver and a second video frame of said interior of said vehicle with said driver, (c) said field of view displayed on said electronic mirror is configured to emulate a view from a reflective mirror as seen from a point of view of said driver and (d) said electronic mirror implements at least one of a rear view mirror and a side view mirror for said vehicle.
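Element (b) estimates the driver's distance by comparing how many pixels of a known interior object are visible in an empty-cabin reference frame versus when the driver occludes it, and the displayed field of view follows from that distance. A toy Python sketch is given below; the linear occlusion-to-distance mapping, the field-of-view presets, and all constants are assumptions for illustration, not the claimed method:

    def estimate_distance(ref_pixel_count, observed_pixel_count,
                          min_dist_m=0.45, max_dist_m=0.85):
        # The closer the driver sits to the in-cabin sensor, the larger the fraction
        # of the known object that is occluded; map that fraction to a distance.
        occluded = 1.0 - observed_pixel_count / ref_pixel_count
        occluded = min(max(occluded, 0.0), 1.0)
        return max_dist_m - occluded * (max_dist_m - min_dist_m)

    def choose_field_of_view(distance_m, near_m=0.55, far_m=0.75):
        # Map the estimated distance to a field-of-view preset for the electronic mirror.
        if distance_m < near_m:
            return "wide"     # driver leaning toward the mirror
        if distance_m > far_m:
            return "narrow"   # driver leaning back
        return "normal"

    # 12,000 px of a known object in the empty-cabin frame, 9,500 px with the driver present.
    print(choose_field_of_view(estimate_distance(12000, 9500)))   # -> "narrow" here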

US Pat. No. 10,116,872

IMAGE CAPTURING APPARATUS, METHOD, AND PROGRAM WITH OPERATION STATE DETERMINATION BASED UPON ANGULAR VELOCITY DETECTION

Sony Corporation, Tokyo ...

1. An image capturing apparatus comprising:an angular velocity detection unit configured to respectively detect angular velocities of movement of the image capturing apparatus at a plurality of times;
an operation determination unit configured to determine a panning operation state of the image capturing apparatus based on the detected angular velocities at the plurality of times, the determined panning operation state being one of a plurality of predetermined classifications of panning operation states; and
a zoom control unit configured to perform zoom control based on the determined panning operation state.

US Pat. No. 10,116,871

TUNNEL LINING SURFACE INSPECTION SYSTEM AND VEHICLE USED FOR TUNNEL LINING SURFACE INSPECTION SYSTEM

WEST NIPPON EXPRESSWAY EN...

1. A tunnel lining surface inspection system wherein, while a vehicle is travelling in a tunnel, a tunnel lining surface image is photographed and is processed into an image used for inspecting the tunnel lining surface, the system comprising:a plurality of line sensors mounted in the vehicle, having a photography range of one side face in both side faces of the tunnel lining surface, which photograph images of each area along a circumferential direction of the tunnel lining surface,
a fixing member mounted in a lodging space of the vehicle, on which the plurality of line sensors are arranged along the circumferential direction of the tunnel lining surface and fixed so that the one side face in the both side faces of the tunnel lining surface can be photographed,
a drive axis mounted in the fixing member for fixing the plurality of line sensors to a first photography position where one side face in the both side faces of the tunnel lining surface can be photographed and for fixing the plurality of line sensors to a second photography position where the other side face in the both side faces of the tunnel lining surface can be photographed, which rotates the fixing member in the circumferential direction of the tunnel lining surface,
a first image processing unit capturing imaging data having been photographed by the plurality of line sensors, and
a second image processing unit processing the imaging data having been captured in the first image processing unit, wherein
the first image processing unit, while the plurality of line sensors are fixed in the first photography position after the drive axis is driven to the left and the fixing member is rotated to the left side in the circumferential direction of the tunnel lining surface, performs processing of capturing first imaging data photographed by the plurality of line sensors, showing one side face in the both side faces of the tunnel lining surface, and, while the plurality of line sensors are fixed in the second photography position after the drive axis is driven to the right and the fixing member is rotated to the right side in the circumferential direction of the tunnel lining surface, performs processing of capturing second imaging data photographed by the plurality of line sensors, showing the other side face in the both side faces of the tunnel lining surface, and
the second image processing unit performs processing of selecting the imaging data forming the identical span of the tunnel lining surface in the first imaging data and the second imaging data according to each span of the tunnel lining surface, and performs image synthesis processing to obtain the images showing both side faces of the tunnel lining surface according to each span of the tunnel lining surface.

US Pat. No. 10,116,870

SINGLE CAMERA VISION SYSTEM FOR LOGISTICS APPLICATIONS

Cognex Corporation, Nati...

1. A vision system for acquiring images of features of objects of varying height passing under a camera field of view in a transport direction comprising:a camera with an image sensor defining a height:width aspect ratio of at least 1:4;
a lens assembly comprising a front lens group and a rear lens group, the front lens group including a front convex lens and a rear composite lens, the rear lens group comprising a variable lens element, the lens assembly being in optical communication with the image sensor and having an adjustable viewing angle at constant magnification within a predetermined range of working distances;
a distance sensor that measures a distance between the camera and at least a portion of the object; and
an adjustment module that adjusts the viewing angle based upon the distance.

US Pat. No. 10,116,869

IMAGE PICKUP APPARATUS AND DISPLAY CONTROL METHOD

Sony Corporation, (JP)

1. An image processing apparatus comprising:circuitry configured to:
detect an edge of an input image; and
control display of an output image based on the input image and a highlight signal, in which the highlight signal is generated based on the detected edge of the input image and the highlight signal is displayed in a color set for a predetermined range of edge levels within which a detection level of the detected edge falls.

US Pat. No. 10,116,866

STABILIZATION OF LOW-LIGHT VIDEO

Facebook, Inc., Menlo Pa...

1. A method comprising:by a computing device, determining a first maximum exposure time for capturing one or more image frames of a video clip, wherein the first maximum exposure time is based on a first amount of motion of the computing device and a first light level;
by the computing device, initiating capture of the image frames, wherein each of the captured image frames has an exposure time that is less than or equal to the first maximum exposure time;
by the computing device, while the capture of the image frames is in progress, determining a second amount of motion of the computing device and a second light level; and
by the computing device, determining whether to adjust the first maximum exposure time to a second maximum exposure time based on the second amount of motion and the second light level.
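The maximum exposure time is a function of how much the device is moving and how dark the scene is, recomputed while capture is in progress. One possible shape of that function is sketched in Python below; the constants, the hysteresis, and the specific formula are illustrative assumptions rather than the claimed method:

    def max_exposure_ms(motion_deg_per_s, light_lux,
                        frame_time_ms=33.0, min_ms=4.0,
                        motion_ref_dps=10.0, low_light_lux=10.0):
        # Faster motion lowers the cap (to limit motion blur); in low light the cap
        # is relaxed part of the way back up (accepting some blur to cut noise).
        cap = frame_time_ms / (1.0 + motion_deg_per_s / motion_ref_dps)
        if light_lux < low_light_lux:                     # low-light relaxation
            cap = cap + 0.5 * (frame_time_ms - cap)
        return max(min_ms, min(frame_time_ms, cap))

    def maybe_adjust(current_max_ms, motion_deg_per_s, light_lux, hysteresis_ms=2.0):
        # While capture is in progress, recompute the cap from the new motion and
        # light readings and decide whether the change is large enough to apply.
        new_max_ms = max_exposure_ms(motion_deg_per_s, light_lux)
        return new_max_ms if abs(new_max_ms - current_max_ms) > hysteresis_ms else current_max_ms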

US Pat. No. 10,116,864

IMAGING APPARATUS, IMAGING DISPLAY CONTROL METHOD, AND PROGRAM

Sony Corporation, Tokyo ...

1. An image processing apparatus for controlling an image capturing apparatus, the image processing apparatus comprising:a memory; and
a processor configured to
control, during a capturing operation of images by the image capturing apparatus, display of an area indication indicating a range of an area for moving the image capturing apparatus, at least part of the images captured within the range being used for generating a synthetic image having a field of view wider than that of the images,
control, during the capturing operation, display of a reference position indication indicating a position within the range of an identified subject identified by user operation with the area indication associated with the synthetic image, and
display an instruction indicating a direction that the image capturing apparatus should be moved based on the position of the subject.

US Pat. No. 10,116,863

SCANNING WITH FRAME AVERAGING

Goodrich Corporation, Ch...

1. A method of obtaining image data comprising:scanning an imaging area with an imaging device while obtaining multiple overlapping images of the imaging area; and
transforming the overlapping images by performing frame averaging on the overlapping images to produce at least one enhanced image of the imaging area, wherein transforming the overlapping images by performing frame averaging is performed automatically at a coarse level to produce the at least one enhanced image, and further comprising:
transforming the overlapping images by performing super resolution frame averaging on at least one portion of the overlapping images to produce at least one super resolution image of the imaging area, wherein the at least one super resolution image has a finer sampling than the at least one enhanced image.
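Frame averaging at the coarse level and shift-and-add super resolution on a finer grid can be illustrated as follows; this Python sketch assumes grayscale frames that are already registered and whose sub-pixel shifts are known, both of which are simplifying assumptions:

    import numpy as np

    def frame_average(aligned_frames):
        # Coarse enhancement: average co-registered overlapping frames to reduce noise.
        return np.mean(np.stack(aligned_frames), axis=0)

    def super_resolve(frames, shifts, scale=2):
        # Shift-and-add sketch: place each frame onto a grid `scale` times finer,
        # offset by the sub-pixel part of its known shift, and average.
        h, w = frames[0].shape
        acc = np.zeros((h * scale, w * scale))
        cnt = np.zeros_like(acc)
        for frame, (dy, dx) in zip(frames, shifts):
            oy, ox = int(round(dy * scale)) % scale, int(round(dx * scale)) % scale
            acc[oy::scale, ox::scale] += frame
            cnt[oy::scale, ox::scale] += 1.0
        return acc / np.maximum(cnt, 1.0)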

US Pat. No. 10,116,862

IMAGING APPARATUS

OLYMPUS CORPORATION, Tok...

1. An image generation apparatus comprising:a first imaging circuit that acquires first image data;
a second imaging circuit that acquires second image data;
a control circuit that searches a region corresponding to the first image data from the second image data;
a designating circuit that limits a region in the second image data corresponding to the first image data by a touch operation designating a limited region in the second image data corresponding to the first image data; and
a communication circuit that is provided in the second imaging circuit, transmits, upon receipt of an information acquiring operation, information obtained by analyzing the limited region or the corresponding region in the second image data to a server, and receives information relating to the first image data from the server.

US Pat. No. 10,116,861

GUIDED IMAGE CAPTURE USER INTERFACE

Ricoh Company, Ltd., Tok...

1. A computer-implemented method comprising:generating a first user interface configured to receive and present product information for an item including dimensions of the item;
receiving a first image;
generating a second user interface to present a template, the template including a bounding box sized to match the dimensions of the item, the second user interface configured to present the bounding box overlaid over a second image;
receiving input to capture a portion of the second image within the bounding box;
responsive to the input to capture the portion of the second image, generating a third user interface to present the first image and the captured portion of the second image as variants of a face of the item; and
storing the captured portion of the second image as a variant of the face of the item and the information of the item in a database.

US Pat. No. 10,116,860

IMAGING OPERATION GUIDANCE DEVICE AND IMAGING OPERATION GUIDANCE METHOD

OLYMPUS CORPORATION, Tok...

1. An imaging operation guidance device, comprising:an image sensor that obtains a current image;
an attitude sensor that measures motion of the image sensor;
a memory that stores at least one previous image and an operation history for the image sensor; and
a controller that is communicatively coupled to the image sensor, the attitude sensor and the memory, wherein the controller:
stores measurements from the attitude sensor in the memory,
identifies an object of interest that is located in the at least one previous image that is missing from the current image, and
determines guidance instructions for obtaining a future image based on the operation history and the measurements from the attitude sensor, wherein the guidance instructions are determined to restore the object of interest to the future image.

US Pat. No. 10,116,859

IMAGE PROCESSING APPARATUS AND IMAGE PROCESSING METHOD THAT PRESENT ASSIST INFORMATION TO ASSIST PHOTOGRAPHING

OLYMPUS CORPORATION, Tok...

1. An image processing apparatus comprising:a display;
a memory; and
a hardware processor which, under control of a program stored in the memory, controls execution of processes comprising:
an image acquisition process which acquires image data;
a photographic information acquisition process which acquires photographic information concerning the image data;
a scene/subject discrimination process which discriminates a photographic scene or a subject in the image data based on the photographic information;
an assist information retrieval process which retrieves assist information concerning a lens in accordance with a discrimination result of the photographic scene or the subject;
a lens information acquisition process which acquires lens information which is information indicating a relationship between a corresponding lens and a user;
an assist information priority setting process which sets a priority of pieces of assist information to be displayed on the display in accordance with the acquired lens information; and
a display process which displays the retrieved assist information on the display,
wherein the assist information comprises lens-related assist information which includes at least one of a sample image showing an example corresponding to the lens, a type of the lens used to acquire the sample image, a specification of the lens used to acquire the sample image, and a setting of the lens used to acquire the sample image,
wherein the lens information includes at least one of information indicating whether the corresponding lens is mounted in an imaging apparatus which acquires the image data, information indicating that the corresponding lens has been mounted in the imaging apparatus which acquires the image data, and information indicating whether the user possesses the corresponding lens, and
wherein the lens information further includes information indicative of a time of purchasing the corresponding lens, information indicative of a time of mounting the corresponding lens in the imaging apparatus for a first time, and information indicative of a number of pieces of image data acquired by using the corresponding lens.

US Pat. No. 10,116,858

IMAGING APPARATUS, CONTROL METHOD THEREOF, PROGRAM, AND RECORDING MEDIUM

Canon Kabushiki Kaisha, ...

1. An imaging apparatus comprising:an optical system that includes a focus adjustment lens that operates to move forward and backward in an optical axis direction in a predetermined movable area;
an imaging element that has an imaging plane capable of being curved and that captures an image of a subject formed via the optical system;
an evaluation unit that determines an evaluation value indicating a degree of in-focus of an image signal output from the imaging element based on the image signal;
an adjustment unit that adjusts, based on the evaluation value, a position of the focus adjustment lens to, among positions in the predetermined movable area, a position with the highest evaluation value; and
a control unit that performs control of the curvature of the imaging plane for correcting an image plane curve in the optical system and that performs control of the curvature of the imaging plane for bringing the image signal into focus, wherein
in a case where the adjustment unit adjusts the position of the focus adjustment lens to an end portion of the predetermined movable area, the control unit performs the control of the curvature of the imaging plane for bringing the image signal into focus on a priority basis.

US Pat. No. 10,116,857

FOCUS ADJUSTMENT APPARATUS, CONTROL METHOD OF FOCUS ADJUSTMENT APPARATUS, AND IMAGING APPARATUS

Canon Kabushiki Kaisha, ...

1. A focus adjustment apparatus comprising:an imaging unit configured to convert light from an optical system to an electric signal by photoelectric conversion and output an image signal for imaging and a pair of parallax image signals in a focus detection area;
a focus detection unit configured to detect a defocus amount using the pair of parallax image signals;
a control unit configured to control adjustment of a focus position of the optical system based on the defocus amount;
a first determination unit configured to determine whether the imaging unit is imaging a subject with a repetitive pattern in the focus detection area; and
a second determination unit configured to determine whether a degree of image blurring is equal to or more than a predetermined degree of blurring using at least one of the image signal for imaging and the pair of parallax image signals, wherein
when the first determination unit determines that the imaging unit is imaging a subject with a repetitive pattern in the focus detection area and the second determination unit determines that the degree of image blurring is equal to or more than the predetermined degree of blurring, the control unit moves a focus lens in the optical system to acquire a new defocus amount.

US Pat. No. 10,116,856

IMAGING APPARATUS AND IMAGING METHOD FOR CONTROLLING A DISPLAY WHILE CONTINUOUSLY ADJUSTING FOCUS OF A FOCUS LENS

Olympus Corporation, Tok...

1. An imaging apparatus that carries out a focus adjustment operation by moving a focus lens based on an image signal of an image sensor for forming a subject image, comprising:a focus controller that generates an evaluation value by extracting given signal components from the image signal, and carries out focus adjustment by calculating position of the focus lens where the evaluation value becomes a peak;
a display that displays an image based on image data generated from the image signal of the image sensor; and
a controller that executes to display an image using the display by generating image data based on an image signal that has been acquired during a focus adjustment operation where continuous focus adjustment is executed by the focus controller, wherein
the controller, as initial image display after commencement of the continuous focus adjustment operation, executes display using the display based on image data acquired when a movement position of the focus lens is within a predetermined vicinity of a predicted in-focus position that is based on a history of at least one past in-focus position of the focus lens when an in-focus position was reached in the past, from among image data that has been acquired during the focus adjustment operation, and
from commencement of the continuous focus adjustment operation until the movement position of the focus lens is within the predetermined vicinity of the predicted in-focus position, an image based on image data generated from the image signal is not displayed on the display.
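The display gating described here hinges on predicting the in-focus position from the history of past in-focus positions and only showing frames captured once the lens is within a predetermined vicinity of that prediction. A tiny Python sketch follows, using a plain average as the illustrative predictor and arbitrary example values:

    def predicted_in_focus(history):
        # Predict the in-focus lens position from past in-focus positions
        # (a plain average is assumed here for illustration).
        return sum(history) / len(history)

    def should_display(lens_pos, history, vicinity=5.0):
        # During continuous AF, only show live images once the focus lens is within
        # the predetermined vicinity of the predicted in-focus position.
        return abs(lens_pos - predicted_in_focus(history)) <= vicinity

    # With past in-focus positions around 120 steps, a frame captured at lens
    # position 150 is suppressed and a frame captured at 118 is displayed.
    print(should_display(150.0, [118.0, 122.0]), should_display(118.0, [118.0, 122.0]))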

US Pat. No. 10,116,855

AUTOFOCUS METHOD FOR MICROSCOPE AND MICROSCOPE COMPRISING AUTOFOCUS DEVICE

CARL ZEISS MICROSCOPY GMB...

1. A microscope for imaging a sample, the microscope comprising:an image detector,
an objective, which has a focal plane lying in a sample space and images the sample space onto the image detector, wherein the position of the focal plane in the sample space is adjustable, and
an autofocus device having:
a light modulator which is adapted to generate a luminous modulation object that is intensity-modulated periodically along one direction and to additionally generate a luminous comparison object which extends along the direction of the modulation object,
an autofocus illumination optical unit which projects the modulation object and the comparison object to the sample space such that a projection of the modulation object and a projection of the comparison object are formed in the sample space,
a separate autofocus camera,
an autofocus imaging optical unit which images the projection of the modulation object and the projection of the comparison object onto the separate autofocus camera,
a control device which receives signals of the separate autofocus camera and is adapted:
to determine from the signals of the autofocus camera an intensity distribution which the projection of the image of the modulation object has along the direction, and an intensity distribution, which the projection of the image of the comparison object has along the direction, and
to evaluate the intensity distribution of the image of the projection of the comparison object, and to generate a corrected intensity distribution of the image of the projection of the modulation object based on the evaluated intensity distribution, in which corrected intensity distribution effects of reflectivity variations in the sample space are reduced or eliminated,
wherein the control device is further adapted to generate a focus control signal based on the corrected intensity distribution, which focus control signal defines the adjustment of the location of the focal plane when imaging the sample to the image detector.

US Pat. No. 10,116,854

PHOTOELECTRIC CONVERSION APPARATUS, SWITCHING AN ELECTRIC PATH BETWEEN A CONDUCTIVE STATE AND A NON-CONDUCTIVE STATE

1. A photoelectric conversion apparatus, comprising:a sensor cell unit comprising a photoelectric conversion unit, an amplification unit, a select switch, and a reset switch, the amplification unit comprising an input node and an output node;
an output line;
a signal processing unit; and
a control unit,
wherein the output node is electrically connected to the signal processing unit via the select switch and via the output line in this order,
wherein an electrical path between the output node and the output line is switched between a conductive state and a non-conductive state by the select switch,
wherein the input node is electrically connected to the photoelectric conversion unit, and is electrically connected to the signal processing unit via the reset switch and via the output line in this order,
wherein an electric path between the input node and the output line is switched between a conductive state and a non-conductive state by the reset switch,
wherein the control unit is configured to control the select switch to be in a conductive state in a period in which the reset switch is in a conductive state, and
wherein the sensor cell unit further comprises a switch, and a capacitance element electrically connected to the input node via the switch.

US Pat. No. 10,116,852

CONTROL DEVICE, CONTROL SYSTEM, CONTROL METHOD AND PROGRAM

Sony Corporation, Tokyo ...

1. A remote camera control device comprising:a communication circuit configured to transmit an operation request to an external camera device, and to selectively transmit a sensor information to the external camera device; and
a control circuit configured to access a product information of the external camera device, and in a case that the external device does not include a local sensor, cause the communication circuit to transmit the sensor information to the external camera device.

US Pat. No. 10,116,851

OPTIMIZED VIDEO DENOISING FOR HETEROGENEOUS MULTISENSOR SYSTEM

SAGEM DEFENSE SECURITE, ...

1. A method for temporal denoising of a sequence of images, said method comprising:/a/ capturing, by a first sensor, a sequence of first images corresponding to a given scene, each first image being divided into elements each associated with a corresponding area of said first image,
/b/ capturing, by a second sensor of a type different from the type of the first sensor, a sequence of second images corresponding to said given scene, each second image corresponding to a first image, each second image being divided into elements each associated with a corresponding area of said second image, each pair of element and associated area of the second image corresponding to a pair of element and associated area of the corresponding first image, and
/c/ obtaining, by calculation circuitry, a first sequence of images derived from the sequence of first images and a second sequence of images derived from the sequence of second images,
/d/ obtaining, by the calculation circuitry, for each area of each of the images of the first and second sequences of images, an associated weight,
/e/ obtaining, by the calculation circuitry, a first weighted sequence of images, in which each element of each image is equal to the corresponding element of the first sequence of images weighted by the weight associated with the area associated with said corresponding element, and a second weighted sequence of images, in which each element of each image is equal to the corresponding element of the second sequence of images weighted by the weight associated with the area associated with said corresponding element,
/f/ obtaining, by the calculation circuitry, a sequence of enhanced images resulting from combining sequences of images comprising the first weighted sequence of images and the second weighted sequence of images,
/g/ obtaining, by the calculation circuitry, a motion estimation based on the obtained sequence of enhanced images,
/h/ obtaining, by the calculation circuitry, based on the calculated motion estimation, a spatial alignment of the images of a sequence of images to be displayed derived from sequences of images corresponding to the given scene and comprising the sequence of first images and the sequence of second images,
/i/ a temporal denoising, by the calculation circuitry, based on the determined spatial alignment of the sequence of images to be displayed.
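
Illustrative sketch (not part of the claim): a minimal rendering of steps /d/ through /f/, in which each element of the two derived sequences is weighted by a per-area weight and the weighted sequences are combined into an enhanced sequence. NumPy is used, and the uniform weights are placeholders.

import numpy as np

def combine_weighted(seq_a, seq_b, weights_a, weights_b):
    """Combine two image sequences element-wise using per-area weights."""
    return [w_a * img_a + w_b * img_b
            for img_a, img_b, w_a, w_b in zip(seq_a, seq_b, weights_a, weights_b)]

# Two 2-frame sequences of 4x4 images from two different sensors, combined
# with uniform weights 0.6 and 0.4.
rng = np.random.default_rng(0)
seq_a = [rng.random((4, 4)) for _ in range(2)]
seq_b = [rng.random((4, 4)) for _ in range(2)]
weights_a = [np.full((4, 4), 0.6) for _ in range(2)]
weights_b = [np.full((4, 4), 0.4) for _ in range(2)]
enhanced = combine_weighted(seq_a, seq_b, weights_a, weights_b)
print(len(enhanced), enhanced[0].shape)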

US Pat. No. 10,116,850

METHOD AND AN ELECTRONIC DEVICE FOR AUTOMATICALLY CHANGING SHAPE BASED ON AN EVENT

Samsung Electronics Co., ...

1. A method for automatically changing a shape of a flexible electronic device, the method comprising:identifying, by the flexible electronic device, at least one event triggered in the flexible electronic device; and
changing, by the flexible electronic device, the shape of a surface of the flexible electronic device, according to the at least one identified event,
wherein the changing of the shape of the surface of the flexible electronic device comprises changing, if the at least one event is associated with at least one camera, the shape of the flexible electronic device such that the at least one camera is positioned to at least one side of the flexible electronic device, according to the at least one event associated with the at least one camera.

US Pat. No. 10,116,848

ILLUMINATION AND IMAGING SYSTEM FOR IMAGING RAW SAMPLES WITH LIQUID IN A SAMPLE CONTAINER

Screen Holdings Co., Ltd....

1. An imaging apparatus that images a raw sample as an imaging object carried together with liquid in a sample container, the apparatus comprising:a holder that holds the sample container;
an imaging optical system, arranged to face the sample container held by the holder, that has an object-side hypercentric property;
an imaging element that images an image of the imaging object focused by the imaging optical system; and
an illuminator that illuminates the imaging object from a side opposite to the imaging optical system across the sample container held by the holder, wherein:
the illuminator includes a light source and an illumination optical system that causes light emitted from the light source to be incident on a sample surface where the imaging object is present;
the illumination optical system has an optical axis coaxial with that of the imaging optical system and an exit pupil position located between the illumination optical system and the imaging optical system;
the holder arranges the sample surface between the exit pupil position and the imaging optical system;
the sample container contains a well with a bottom surface having optical transparency;
the well carries the raw sample as the imaging object together with the liquid;
a size of an imaging field of view of the imaging apparatus is smaller than a size of the bottom surface of the well; and
the imaging field of view covers only a central area of the well, the central area being distant from a peripheral edge of the well.

US Pat. No. 10,116,847

IMAGING UNIT AND IMAGING APPARATUS

PANASONIC INTELLECTUAL PR...

1. An imaging apparatus comprising:a main body,
a first camera fixedly arranged on the main body;
an opening-closing unit provided rotatably with respect to the main body via a first hinge unit;
a second camera mounted on the opposite side of the opening-closing unit from the first hinge unit; and
a second hinge unit for rotatably supporting the imaging direction of the second camera, wherein
a body surface of the second camera is uneven, and
the rotation direction of the first hinge unit and the rotation direction of the second hinge unit are the same rotation direction at the time of starting shooting with the second camera.

US Pat. No. 10,116,846

GAZE BASED DIRECTIONAL MICROPHONE

Tobii AB, Danderyd (SE)

1. A system for converting sounds to electrical signals, the system comprising:a gaze tracking device, wherein the gaze tracking device determines a gaze direction of a user;
a microphone; and
a processor configured to:
select a first sound reproduction unit to reproduce sound received by the microphone when the first sound reproduction unit is within the gaze direction of the user and to not select a second sound reproduction unit to reproduce the sound received by the microphone when the second sound reproduction unit is outside the gaze direction of the user; and
select the second sound reproduction unit to reproduce the sound received by the microphone when the second sound reproduction unit is within the gaze direction of the user and to not select the first sound reproduction unit to reproduce the sound received by the microphone when the first sound reproduction unit is outside the gaze direction of the user.
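
Illustrative sketch (not part of the claim): the selection logic above reduced to choosing whichever sound reproduction unit lies within the user's gaze direction. Representing units by angles and using a fixed angular tolerance are assumptions.

def select_unit(gaze_angle_deg, unit_angles_deg, tolerance_deg=15.0):
    """Return the index of the unit within the gaze direction, or None."""
    for index, angle in enumerate(unit_angles_deg):
        if abs(angle - gaze_angle_deg) <= tolerance_deg:
            return index
    return None

# Unit 0 at 0 degrees, unit 1 at 90 degrees; a gaze at 85 degrees selects unit 1,
# and the unit outside the gaze direction is not selected.
print(select_unit(85.0, [0.0, 90.0]))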

US Pat. No. 10,116,845

IMAGING DEVICE

Ricoh Company, Ltd., Tok...

1. An imaging device comprising:an imaging unit having an imager configured to image a subject, and a holder configured to hold the imager at one end thereof;
a housing including a recess formed in a first surface thereof, and configured to house the imaging unit, the housing being a housing for a video conferencing device; and
a hinge having a hinge member housed in the recess pivotally coupled to the housing around an axle extending approximately in parallel with the first surface inside the recess of the housing, wherein
the imaging unit pivots around the axle via the hinge between a housing position at which the imaging unit is housed inside the recess of the housing and a projecting position at which the imaging unit is projected from the recess of the housing,
wherein:
the imager includes an imaging element having a rectangular shape with a 16:9 aspect ratio, a lens configured to introduce external light into the imaging element, and a lens hood mounted at an outer periphery of the lens, the imaging element disposed inside of the housing,
the lens hood projects from a surface of the lens by a distance to allow the imager to introduce light for imaging a subject from the lens into the imaging element and to block unnecessary light introduced from the lens into the imaging element,
a shape of the lens hood is substantially rectangular, and has an aspect ratio substantially the same as the aspect ratio of the imaging element, and
the substantially rectangular shape of the lens hood has substantially the same shape as the rectangular shape of the imaging element in both a horizontal and a vertical dimension.

US Pat. No. 10,116,844

CAMERA MODULE HAVING BASE WITH METAL SUBSTRATE, CONDUCTIVE LAYERS AND INSULATION LAYERS

TDK TAIWAN CORP., Yangme...

1. A camera module, comprising:a lens driving mechanism;
a lens unit, disposed on the lens driving mechanism;
a circuit board, comprising:
a metal member;
a metal wire;
an insulation layer, disposed between the metal member and the metal wire; and
an image sensor, disposed on the circuit board and electrically connected to the metal wire, wherein the lens driving mechanism can drive the lens unit to move relative to the image sensor, and the image sensor can catch the light through the lens unit; and
a base, disposed between the image sensor and the lens unit, comprising:
a metal substrate;
a first conductive layer, electrically connected to the lens driving mechanism; and
a first insulation layer, disposed between the metal substrate and the first conductive layer.

US Pat. No. 10,116,843

ELECTRICAL BRACKET AND CIRCUIT CONDUCTING METHOD FOR CAMERA MODULE

Ningbo Sunny Opotech Co.,...

1. A camera module, comprising:an optical lens;
a photosensitive sensor which comprises a plurality of photosensitive sensor guides, wherein said optical lens is located along a photosensitive path of said photosensitive sensor;
a circuit board which comprises a plurality of circuit board guides;
an electrical bracket coupled on said circuit board;
a plurality of first connection units pre-formed on surfaces of said electrical bracket at predetermined locations, wherein when said photosensitive sensor is coupled at said electrical bracket, said photosensitive sensor guides are aligned with and electrically connected to said first connection units respectively to electrically connect said electrical bracket with said photosensitive sensor;
a plurality of second connection units pre-formed on said surfaces of said electrical bracket at predetermined locations, wherein when said electrical bracket is coupled on said circuit board, said circuit board guides are aligned with and electrically connected to said second connection units respectively to electrically connect said electrical bracket with said circuit board, such that said electrical bracket forms an electrical connection means for electrically connecting said photosensitive sensor with said circuit board; and
a driver, which comprises a plurality of driver guides, arranged for operatively coupling with said optical lens, wherein when said driver is coupled at said electrical bracket, said first connection units are aligned with and electrically connected to said driver guides respectively to electrically connect said electrical bracket with said driver, such that said electrical bracket not only forms an assembling means for connecting said driver with said optical lens but also forms said electrical connection means for electrically connecting said photosensitive sensor and said driver with said circuit board.

US Pat. No. 10,116,842

GATHERING RANGE AND DIMENSIONAL INFORMATION FOR UNDERWATER SURVEYS

CATHX RESEARCH LTD., Cou...

1. An underwater survey system for gathering range and 3D dimensional information of subsea objects, the system comprising:a camera configured to capture images of a subsea scene; and
one or more reference projection light sources configured to project one or more structured light beams;
the camera configured to capture a sequence of images of each of a plurality of fields of view within the scene, where each of the plurality of fields of view of the scene is illuminated by one or more of the light sources, and wherein the camera and light sources are synchronized so that each time an image is acquired, a specific configuration of light source parameters and camera parameters is used;
the one or more reference projection light sources having a fixed distance from the camera and a fixed orientation in relation to the camera.

US Pat. No. 10,116,840

ARRAY CAMERA, ELECTRICAL DEVICE, AND METHOD FOR OPERATING THE SAME

LG ELECTRONICS INC., Seo...

1. A method for operating an array camera comprising a plurality of camera modules, the method comprising:acquiring images through the camera modules;
when a size of a first object present in the acquired images is equal to or greater than a predetermined size, extracting a first image acquired by a first camera module and a second image acquired by a second camera module, the first camera module and the second camera module being two adjacent camera modules selected from among the plurality of camera modules;
calculating first distance information regarding the first object based on the first image and the second image; and
when a size of a second object present in the acquired images is less than the predetermined size, extracting a third image acquired by a third camera module and a fourth image acquired by a fourth camera module, the third camera module and the fourth camera module being two spaced apart camera modules selected from among the plurality of camera modules;
calculating second distance information regarding the second object based on the third image and the fourth image.
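
Illustrative sketch (not part of the claim): choosing adjacent camera modules for large objects and spaced-apart modules for small ones, then estimating distance from the resulting disparity. The module indices, baselines, focal length and threshold are assumptions; the pinhole-stereo relation Z = f * B / d stands in for the distance calculation.

def select_pair(object_size_px, size_threshold_px):
    """Return indices of the camera pair to use for an object of this size."""
    if object_size_px >= size_threshold_px:
        return (0, 1)   # adjacent modules: short baseline
    return (0, 3)       # spaced-apart modules: long baseline

def estimate_distance(disparity_px, baseline_m, focal_px):
    """Pinhole-stereo distance estimate Z = f * B / d."""
    return focal_px * baseline_m / disparity_px

pair = select_pair(object_size_px=120, size_threshold_px=100)
print(pair, estimate_distance(disparity_px=8.0, baseline_m=0.02, focal_px=800.0))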

US Pat. No. 10,116,839

METHODS FOR CAMERA MOVEMENT COMPENSATION FOR GESTURE DETECTION AND OBJECT RECOGNITION

Atheer Labs, Inc., Mount...

1. A method, comprising:receiving a video stream comprised of a sequential series of frames from a camera, wherein the video stream is captured at a frame rate;
receiving motion data from a motion sensor that is physically associated with the camera to detect motion of the camera, wherein the motion data is captured at a sampling rate;
associating a first frame of the sequential series of frames with a portion of the motion data that is captured approximately contemporaneously with the first frame, the portion of the motion data indicative of an amount of movement of the camera when the camera captured the first frame;
when the sampling rate is greater than the frame rate, aggregating a first frame sample of the motion data and a second sample of the motion data captured between the first frame of the sequential series of frames and a second frame of the sequential series of frames to obtain an aggregated movement value representative of the motion of the camera when the camera captured the first frame;
comparing the aggregated movement value with a first threshold for the amount of movement of the camera;
when the aggregated movement value does not exceed the first threshold, accepting the first frame from the video stream; and
when the aggregated movement value exceeds the first threshold, rejecting the first frame from the video stream.
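
Illustrative sketch (not part of the claim): aggregating the motion samples captured between two frames and accepting or rejecting the frame against a threshold. Summing sample magnitudes is an assumed aggregation; the threshold value is a placeholder.

def aggregate_motion(samples):
    """Aggregate the motion-sensor samples captured between two frames."""
    return sum(abs(sample) for sample in samples)

def accept_frame(samples, threshold):
    """Accept the frame only if the aggregated movement does not exceed the threshold."""
    return aggregate_motion(samples) <= threshold

# Sampling rate above the frame rate: several gyro samples map to one frame.
print(accept_frame([0.01, 0.02, 0.015], threshold=0.1))  # True  -> frame accepted
print(accept_frame([0.20, 0.30], threshold=0.1))         # False -> frame rejected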

US Pat. No. 10,116,838

METHOD AND APPARATUS FOR PROVIDING SIGNATURES OF AUDIO/VIDEO SIGNALS AND FOR MAKING USE THEREOF

GRASS VALLEY CANADA, Tor...

1. A method for setting a signal delay based on generated video signatures representative of a content of a video signal, the method comprising:for each of a first video signal and second video signal comprising the first signal after at least one transmission operation:
selecting, by a signature extraction unit, a first subset of pixels of a first image of the video signal and a corresponding second subset of pixels of a second image of the video signal, each of the first subset and second subset excluding one or more pixels of the corresponding image,
incrementing, by a comparator of the signature extraction unit for each pixel of the first subset of pixels, a counter value responsive to a difference between pixel data of a pixel of the first subset of pixels and pixel data of a corresponding pixel of the second subset of pixels exceeding a threshold,
dividing, by the signature extraction unit, the counter value by a value proportional to the number of the plurality of pixels, and
generating, by the signature extraction unit, a video signature comprising the divided counter value;
identifying a delay between the first video signal and second video signal based on a comparison of the video signature of the first video signal and the video signature of the second video signal; and
automatically setting a signal delay based on the identified delay.
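
Illustrative sketch (not part of the claim): the signature is the fraction of compared pixels whose frame-to-frame change exceeds a threshold, and the delay is the shift that best aligns the two signature streams. The squared-error alignment search is an assumption; NumPy arrays stand in for pixel data.

import numpy as np

def frame_signature(prev_frame, curr_frame, diff_threshold):
    """Fraction of pixels whose change between two images exceeds the threshold."""
    changed = np.abs(curr_frame.astype(int) - prev_frame.astype(int)) > diff_threshold
    return changed.sum() / changed.size

def identify_delay(sig_a, sig_b, max_delay):
    """Return the shift of sig_b that best matches sig_a (smallest squared error)."""
    best_delay, best_error = 0, float("inf")
    for delay in range(max_delay + 1):
        length = min(len(sig_a), len(sig_b) - delay)
        error = sum((sig_a[i] - sig_b[i + delay]) ** 2 for i in range(length))
        if error < best_error:
            best_delay, best_error = delay, error
    return best_delay

rng = np.random.default_rng(1)
frames = [rng.integers(0, 256, (8, 8)) for _ in range(12)]
signatures = [frame_signature(frames[i], frames[i + 1], 30) for i in range(len(frames) - 1)]
delayed = [0.0, 0.0] + signatures   # the second signal lags by two frames
print(identify_delay(signatures, delayed, max_delay=5))  # expected: 2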

US Pat. No. 10,116,837

SYNCHRONIZED LOOK-UP TABLE LOADING

Hewlett-Packard Developme...

1. A printing device comprising:a processor to process a print job that is received from a computing device;
processor memory operatively connected to the processor and comprising multiple buffers, each buffer to store a look-up table;
additional memory configured to store a plurality of look-up tables for processing the print job; and
a memory controller operatively connected to the additional memory, the memory controller to:
in response to processing of the print job reaching a buffer trigger row of the print job, use look-up metadata stored in the additional memory to identify a next look-up table from among the plurality of look-up tables, wherein the processing of the print job is performed using an initial look-up table of the plurality of look-up tables;
dynamically load the next look-up table into a next buffer of the processor memory while the processor continues to process the print job using the initial look-up table in a current buffer of the processor memory; and
continue processing the print job using the next look-up table after a target row of the print job is reached.
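
Illustrative sketch (not part of the claim): double-buffered look-up-table loading in which the next table is loaded into the idle buffer when processing reaches a trigger row and becomes active at a target row. The row numbers and table contents are placeholders.

def process_rows(rows, luts, trigger_row, target_row):
    buffers = [luts[0], None]          # buffer 0 holds the initial look-up table
    active, next_lut_index = 0, 1
    output = []
    for row_number, row in enumerate(rows):
        if row_number == trigger_row and next_lut_index < len(luts):
            buffers[1 - active] = luts[next_lut_index]   # dynamic load into idle buffer
        if row_number == target_row and buffers[1 - active] is not None:
            active = 1 - active                          # switch to the next table
        output.append([buffers[active][value] for value in row])
    return output

identity = list(range(4))
inverted = list(reversed(range(4)))
rows = [[0, 1, 2, 3]] * 4
print(process_rows(rows, [identity, inverted], trigger_row=1, target_row=2))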

US Pat. No. 10,116,835

INFORMATION PROCESSING APPARATUS AND METHOD THAT MANAGE LOG INFORMATION

Ricoh Company, Ltd., Tok...

1. An information processing apparatus, comprising:a first memory; and
a processor coupled to the first memory, and configured to
obtain log information related to a job having been executed in response to an instruction by a user, the log information including a management code selected by the user and user identification information of the user;
modify the obtained log information by modifying the user identification information included in the obtained log information such that the user is not specified by the log information; and
store the modified log information in a second memory,
wherein the modified user identification information is included in the modified log information stored in the second memory, and
wherein the user identification information that is not modified is not included in the modified log information stored in the second memory.
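
Illustrative sketch (not part of the claim): modifying the user identification in an obtained log entry so that the user can no longer be identified from the stored log, while the management code is kept. Salted hashing is an assumed way of performing the modification.

import hashlib

def modify_log(log_entry, salt):
    modified = dict(log_entry)
    digest = hashlib.sha256((salt + log_entry["user_id"]).encode()).hexdigest()
    modified["user_id"] = digest[:12]   # modified identifier replaces the original
    return modified

log = {"job": "print-0031", "management_code": "MKT", "user_id": "alice"}
print(modify_log(log, salt="device-secret"))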

US Pat. No. 10,116,833

IMAGE PROCESSING DEVICE, IMAGE PROCESSING METHOD AND PROGRAM FOR ANIMATION DISPLAY

Sony Semiconductor Soluti...

1. An image processing device comprising:a memory unit storing image data;
a reduction scaler unit configured to reduce image data of an input image or maintain a current size of the image data, and store the image data into the memory unit; and
an enlargement scaler unit configured to enlarge the image data stored in the memory unit or maintain a current size of the image data, and output the image data as image data of an output image,
wherein
the reduction scaler unit converts a resolution of the input image to an intermediate resolution in accordance with first parameters related to an image to be supplied from the enlargement scaler unit, the intermediate resolution being a resolution for performing writing on the memory unit, and
the enlargement scaler unit converts the intermediate resolution of a memory-held image read from the memory unit to a resolution of the output image, in accordance with second parameters related to an image to be supplied from the reduction scaler unit.

US Pat. No. 10,116,832

INFORMATION PROCESSING DEVICE, CONTROL METHOD, AND RECORDING MEDIUM

Canon Kabushiki Kaisha, ...

1. A control method of an information processing device that communicates with a communication device and includes at least one processor configured to execute the control method, the method comprising:accepting a predetermined operation by a user;
not executing control to execute newly transmission processing for transmitting wirelessly, to the communication device by a first communication standard, information about an external device outside the communication device and outside the information processing device, and communicating with the communication device via the external device in a case where the predetermined operation is accepted, in a state that the external device is connected to the information processing device by a second communication standard different from the first communication standard and the communication device is connected to the external device by the second communication standard, and
communicating with the communication device via the external device after the control to execute newly the transmission processing is executed based on the predetermined operation in a case where the predetermined operation is accepted, in a state that the communication device is not connected to the external device by the second communication standard, the external device being connected to the information processing device by the second communication standard,
wherein after the control to execute newly the transmission processing is executed, the communication device connects to the external device by the second communication standard based on the information about the external device, the information being transmitted to the communication device as a result of the transmission processing being executed newly.

US Pat. No. 10,116,831

MANAGEMENT SERVER CONFIGURED TO EXTRACT INFORMATION INDICATING AN AVAILABILITY OF AN IDENTIFIED IMAGE FORMING APPARATUS, INFORMATION PROCESSING METHOD, SYSTEM AND RECORDING MEDIUM

Ricoh Company, Ltd., Tok...

1. A management server comprising:a memory and a processor, the memory containing computer readable code that, when executed by the processor, configures the processor to,
authenticate a user of at least one image forming apparatus based on information on the user from an information processing apparatus,
accumulate print data from the information processing apparatus,
acquire availability information and history information from the at least one image forming apparatus, the availability information indicating whether the at least one image forming apparatus is online and idle, and the history information indicating a tally of past usage of the at least one image forming apparatus by the user,
generate a preferred list of preferred image forming apparatuses from among a plurality of image forming apparatuses connected to the management server based on the availability information and the history information,
acquire device information from the preferred image forming apparatuses,
transmit the device information to the information processing apparatus prior to receiving a printing request to print the accumulated print data such that the user is provided with the device information of the preferred image forming apparatuses prior to executing location-free (LF) printing from a user interface of one of the plurality of image forming apparatuses, and
perform the location-free (LF) printing by transmitting the accumulated print data to the one of the plurality of image forming apparatuses in response to receipt of the printing request from the one of the plurality of image forming apparatuses.

US Pat. No. 10,116,830

DOCUMENT DATA PROCESSING INCLUDING IMAGE-BASED TOKENIZATION

ACCENTURE GLOBAL SOLUTION...

1. A system to perform image-based tokenization, the system comprising:a memory storing a processing application comprising machine readable instructions;
a processor that executes the stored processing application to:
prompt a user via a display to capture a first image of a physical item;
control an imaging sensor to capture the first image; and
measure pixel parameters of pixels in a region of the captured first image to identify a unique feature of the physical item;
a tokenization server to:
receive the unique feature identified by the processing application;
apply the unique feature to a cryptographic function to generate a token for the physical item;
capture a second image of the physical item,
the physical item rendered unusable for an intended purpose after a modification of the physical item;
verify whether the physical item is rendered unusable for the intended purpose based on the captured second image of the physical item; and
in response to the verifying that the physical item is rendered unusable for the intended purpose, release the token for use by the user; and
a token database to store the token and information related to the physical item.
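
Illustrative sketch (not part of the claim): measuring a feature in a region of the captured image and applying a cryptographic function to it to generate a token. The mean-intensity feature and the use of SHA-256 are assumptions standing in for the system's actual feature and cryptographic function.

import hashlib

def measure_unique_feature(pixels, region):
    """Toy feature: mean intensity of the (row, col, height, width) region."""
    row, col, height, width = region
    values = [pixels[i][j] for i in range(row, row + height)
              for j in range(col, col + width)]
    return sum(values) / len(values)

def generate_token(unique_feature):
    """Apply a cryptographic hash to the measured feature to produce a token."""
    return hashlib.sha256(repr(unique_feature).encode()).hexdigest()

pixels = [[10, 12, 11], [13, 200, 14], [12, 11, 10]]
feature = measure_unique_feature(pixels, region=(0, 0, 2, 2))
print(generate_token(feature)[:16])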

US Pat. No. 10,116,829

INFORMATION PROVIDING SYSTEM BY DATA RELAYING APPLICATION

STAR MICRONICS CO., LTD.,...

1. An information providing system using a data relaying application comprising a printing application which receives a first data generated by another application executed by a mobile, converts the first data into a second data for printing, and outputs the second data to a printer, the information providing system comprising:an application activating unit which issues an application binding command to activate the printing application in response to a print instruction given by a user of the mobile, the application binding command designating the printing application and including a predetermined information acquiring command designated according to an information acquisition parameter set by the user of the mobile;
a printing execution controlling unit of the printing application which controls execution of printing by the printer;
a print result information acquiring unit of the printing application which acquires a print result information from the printer, the print result information representing success or failure of the execution of printing;
an additional information acquiring unit of the printing application which acquires an additional information on at least one of the printer and the printing application according to the predetermined information acquiring command included in the application binding command; and
an information providing unit of the printing application which provides the mobile with the print result information acquired by the print result information acquiring unit and the additional information acquired by the additional information acquiring unit by displaying the print result information and the additional information together on a screen of the mobile.

US Pat. No. 10,116,828

IMAGE COMMUNICATION APPARATUS, CONTROL METHOD THEREOF, AND NON-TRANSITORY COMPUTER-READABLE STORAGE MEDIUM

CANON KABUSHIKI KAISHA, ...

1. An image communication apparatus capable of image communication to an external line and an extension line, the apparatus comprising:a memory device that stores a set of instructions; and
at least one processor that executes the instructions to:
designate a transmission destination of image data,
append an external line number if the designated transmission destination is the external line,
transmit the image data in accordance with one of the designated transmission destination and the transmission destination to which the external line number is appended,
when a transmission of the image data is performed in accordance with the transmission destination to which the external line number is appended, individually record, as history information for the transmission, the designated transmission destination and the external line number,
set a number as the external line number;
register the transmission destination included in the history information to an address book if the external line number included in the history information and the set number match, and
display that the external line number is changed if the external line number included in the history information and the set number do not match when the history information is selected for the address book.

US Pat. No. 10,116,827

IMAGE FORMING APPARATUS

KYOCERA Document Solution...

1. An image forming apparatus comprising:a reading section configured to read a plurality of images from a document; and
an image forming section configured to form the plurality of read images on a sheet, wherein
the plurality of images include a first image having a first color and a second image having a second color that is different from the first color,
the image forming section forms the first image on a first main side of the sheet and the second image on a second main side of the sheet, the first main side being one of two opposite sides of the sheet, the second main side being the other of the two opposite sides of the sheet,
the first image shows a question,
the second image shows an answer to the question,
when the sheet is folded such that a part of the sheet covers the second image, the second image is visible at the first main side through the part of the sheet,
the image forming section forms the first image on the first main side and the second image on the second main side in such a manner that the first image and the second image visible at the first main side through the part of the sheet when the sheet is folded such that the part of the sheet covers the second image do not overlap each other and form the same content as the plurality of images, and
the image forming section forms a third image in a region of the first main side to prevent a mirror image of the second image from being visible at the first main side through the sheet when the sheet is not folded, the region of the first main side overlapping a region where the second image is formed, the third image covering and hiding the entirety of the mirror image of the second image.

US Pat. No. 10,116,826

METHOD AND APPARATUS FOR AUTOMATICALLY RESUMING A PRINT JOB FROM PORTABLE MEMORY DEVICE

Xerox Corporation, Norwa...

1. A method for automatically printing a document in a document printing system, comprising:detecting, by a processing device of a print device, a trigger event by determining that a portable memory device has become communicatively connected to a port of the print device;
upon detecting the trigger event, by the processing device:
accessing a document file stored in the portable memory device, wherein the document file comprises a digital representation of a document to be printed,
detecting whether a configuration file associated with the document file is stored in the portable memory device,
if the configuration file exists in the portable memory device, automatically printing the document file by:
determining that the configuration file contains information about an interrupted print job of the document file,
extracting, from the configuration file, at least a page number of the document at which an interruption of the interrupted print job occurred, and
causing a print engine of the print device to automatically resume the interrupted print job from the page number of the document at which the interruption occurred.
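
Illustrative sketch (not part of the claim): resuming an interrupted print job from a configuration file found alongside the document on a portable memory device. The JSON layout, file names and page fields are assumptions; a real device would use its own format.

import json
import os
import tempfile

def resume_print_job(mount_point, document_name, print_page):
    config_path = os.path.join(mount_point, document_name + ".cfg")
    document_path = os.path.join(mount_point, document_name)
    if not os.path.exists(config_path):
        return None                      # no configuration file: normal print flow
    with open(config_path) as config_file:
        config = json.load(config_file)
    if not config.get("interrupted"):
        return None
    resume_page = config["interrupted_at_page"]
    total_pages = config.get("total_pages", resume_page)
    for page in range(resume_page, total_pages + 1):
        print_page(document_path, page)  # resume from the page of interruption
    return resume_page

# Example against a temporary directory standing in for the portable device.
with tempfile.TemporaryDirectory() as usb:
    with open(os.path.join(usb, "report.pdf.cfg"), "w") as f:
        json.dump({"interrupted": True, "interrupted_at_page": 4, "total_pages": 6}, f)
    open(os.path.join(usb, "report.pdf"), "w").close()
    resume_print_job(usb, "report.pdf", lambda doc, page: print("printing", doc, "page", page))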

US Pat. No. 10,116,824

METHOD AND IMAGE FORMING APPARATUS FOR GENERATING WORKFLOW OF IMAGE FORMING JOB

S-Printing Solution Co., ...

1. A method of generating a workflow of an image forming job, the method comprising:providing a first list of selectable first functions;
receiving a user input for selecting a first function from the first list;
running an application for executing the selected first function to provide a user interface (UI) for receiving setting values for the selected first function;
storing the received setting values for the selected first function;
determining output data of the selected first function;
determining, based on the output data of the selected first function, a second list of selectable second functions that are continuously executable to the selected first function;
providing the second list;
receiving a user input for selecting a second function from the second list; and
generating a workflow to sequentially execute the selected first function based on the received setting values for the selected first function and the selected second function,
wherein the second list of selectable second functions is determined based on whether input data of the second functions corresponds to the output data of the selected first function.
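
Illustrative sketch (not part of the claim): building the second list from only those functions whose input data type matches the output data type of the selected first function, then assembling the two-step workflow. Function names, data types and setting values are placeholders.

FUNCTIONS = {
    "scan":  {"output": "image"},
    "ocr":   {"input": "image", "output": "text"},
    "print": {"input": "image"},
    "email": {"input": "text"},
}

def second_list_for(first_function):
    """Functions whose input corresponds to the first function's output."""
    output_type = FUNCTIONS[first_function]["output"]
    return [name for name, spec in FUNCTIONS.items()
            if spec.get("input") == output_type]

def build_workflow(first_function, first_settings, second_function):
    return [(first_function, first_settings), (second_function, {})]

print(second_list_for("scan"))                      # ['ocr', 'print']
print(build_workflow("scan", {"dpi": 300}, "ocr"))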

US Pat. No. 10,116,823

CLEANING DEVICE THAT REMOVES TONER AND PAPER POWDER, AND IMAGE FORMING APPARATUS

KYOCERA Document Solution...

1. A cleaning device comprising:a removal roller rotating around a first rotary shaft extending widthwise of an image carrier while making contact with the image carrier to remove a toner and a paper powder remaining on the image carrier;
a collecting roller making contact with the removal roller while rotating around a second rotary shaft parallel to an axial direction of the first rotary shaft to collect the toner and the paper powder on the removal roller;
a blade extending in parallel to an axial direction of the second rotary shaft, the blade making contact with the collecting roller to scrape off the toner and the paper powder on the collecting roller; and
a toner storage section being partitioned from the removal roller and the collecting roller by a seal extending in parallel to the first rotary shaft and the second rotary shaft, the toner storage section storing the toner and the paper powder collected by the collecting roller and scraped off by the blade, wherein
the removal roller and the collecting roller have no relationship such that a rotation speed or a diameter of one of the removal roller and the collecting roller is an integral multiple of a rotation speed or a diameter of another one of the removal roller and the collecting roller,
provided on an outer circumferential surface of the collecting roller in a circumferential direction of the collecting roller are: a first outer circumferential region having a predefined first surface roughness and extending in the axial direction of the second rotary shaft; and a second outer circumferential region having a greater predefined second surface roughness than the first surface roughness and extending in the axial direction of the second rotary shaft, and
a width of the second outer circumferential region in the circumferential direction is smaller than a width of the first outer circumferential region in the circumferential direction.

US Pat. No. 10,116,822

OPTICAL SCANNING DEVICE AND IMAGE FORMING APPARATUS INCLUDING THE SAME

KYOCERA DOCUMENT SOLUTION...

1. An optical scanning device comprising:a housing having light emitting ports extending in a predetermined direction;
a transparent cover that closes the light emitting ports;
a cleaning member that slidably contacts with a surface of the transparent cover to clean the surface;
a holding member that holds the cleaning member; and
a movement mechanism that allows the holding member to reciprocally move along the transparent cover in the predetermined direction,
wherein the holding member has an inside/outside double structure including an inner boss member that receives power from the movement mechanism and an outer boss member that internally receives the inner boss member and is longer than the inner boss member, and
the outer boss member reaches a moving end and stops earlier than the inner boss member, and subsequently the inner boss member moves in the outer boss member, reaches the moving end and stops.

US Pat. No. 10,116,821

IMAGE FORMING APPARATUS WHICH CAN REDUCE POWER CONSUMPTION

Konica Minolta, Inc., Ch...

1. An image forming apparatus comprising:a hardware circuit for image forming, which includes an image forming unit to form images and an image forming control unit to control the image forming unit, and
a hardware circuit for communication, which includes a communication unit to perform communication with external devices and a communication control unit to control the communication unit, wherein
both the circuit for image forming and the circuit for communication have a common IP (Internet Protocol) address as an IP address published to users of the image forming apparatus, and
the circuit for communication further includes an electric power control unit to control electric power supply to the circuit for image forming and electric power supply to the circuit for communication, being independent of each other.

US Pat. No. 10,116,820

IMAGE FORMING APPARATUS, METHOD FOR CONTROLLING SAME, AND STORAGE MEDIUM

Canon Kabushiki Kaisha, ...

1. An image forming apparatus comprising:a processor; and
a memory storing instructions, when executed by the processor, causing the image forming apparatus to function as:
an input unit configured to input image data;
a printing unit configured to print an image based on the image data input by the input unit;
a control unit configured to determine if the image forming apparatus is operating in a first mode or a second mode,
wherein if the control unit determines the image forming apparatus is operating in a first mode then perform control to print by the printing unit an image generated from the image data input by the input unit, and
wherein if the control unit determines the image forming apparatus is operating in a second mode then print by the printing unit an image obtained by adding a predetermined pattern image to the image generated from the image data input by the input unit; and
an operation unit including a display and accepting unit,
wherein if the image forming apparatus operates in the second mode, display a confirmation screen to a user prior to printing to accept selection regarding whether to perform printing in the second mode in response to operation performed to start printing the image.

US Pat. No. 10,116,819

DOCUMENT CONVEYING APPARATUS

PFU LIMITED, Kahoku-Shi,...

1. A document conveying apparatus comprising:a document tray;
a driving module for generating a first driving force;
a first conveying roller for conveying a document stacked at a lowermost position, which is one of a plurality of documents stacked on the document tray;
a second conveying roller, provided at a downstream side with respect to the first conveying roller in a document conveying direction for conveying said document stacked at the lowermost position;
a separation roller provided at the downstream side with respect to the second conveying roller in the document conveying direction for separating the document from the plurality of stacked documents;
a driving force transmission mechanism for transmitting the first driving force to a driving shaft of the first conveying roller, a driving shaft of the second conveying roller, and a driving shaft of the separation roller;
a first blocking mechanism provided between the first conveying roller and the driving shaft of the first conveying roller for blocking a second driving force transmitted to the first conveying roller by the driving shaft of the first conveying roller so that the second driving force is not transmitted to the first conveying roller, after a rear edge of the document conveyed by the first conveying roller passes the first conveying roller and a next document to be subsequently conveyed comes into contact with the first conveying roller; and
a third conveying roller provided at the downstream side with respect to the separation roller in the document conveying direction, wherein
a period of time for blocking the second driving force is set to be equal to or less than a period of time for conveying the document for a distance between the separation roller and the third conveying roller.

US Pat. No. 10,116,818

INFORMATION PROCESSING APPARATUS WITH OPERATION UNIT, CONTROL METHOD THEREFOR, AND STORAGE MEDIUM STORING CONTROL PROGRAM THEREFOR

CANON KABUSHIKI KAISHA, T...

1. An image processing apparatus comprising:a scanner scanning a document and generating image data;
a display displaying a first display area for selecting an image processing function to be executed for the image data generated by the scanner;
a memory storing instructions; and
at least one processor that executes the instructions causing the image processing apparatus to:
display a plurality of standard icons corresponding to a plurality of image processing functions in the first display area;
in a case where a plurality of extension applications are installed and a total number of the plurality of standard icons and a plurality of additional icons corresponding to the plurality of extension applications does not exceed a display upper limit of the first display area, display the plurality of standard icons and the plurality of the additional icons in the first display area; and
in a case where the plurality of extension applications are installed and the total number of the plurality of standard icons and a plurality of additional icons corresponding to the plurality of extension applications exceeds the display upper limit of the first display area:
display the plurality of standard icons and a predetermined icon in the first display area; and
display a second display area on the display, in which the plurality of additional icons corresponding to the plurality of extension applications are arranged, when the predetermined icon is selected from among the icons in the first display area.

US Pat. No. 10,116,817

IMAGE FORMING APPARATUS AND IMAGE FORMING SYSTEM INCORPORATING SAME

RICOH COMPANY, LTD., Tok...

1. An image forming apparatus comprising:a display including a touch panel display screen to display a preview image before an image is formed on a recording medium;
an operation position detector to detect a series of operation positions on the touch panel display screen displaying the preview image, the detected series of operation positions forming a handwritten additional image;
a display controller to display on the display screen, a composite image including both the preview image and the handwritten additional image superimposed on the preview image; and
an image forming unit to form, on the recording medium, a post-addition image corresponding to the composite image, including both the preview image and the handwritten additional image, displayed on the display screen,
wherein each of a vertical length and a horizontal length of the display screen is equal to or greater than a length of a long side of a maximum size recording medium on which an image is to be formed by the image forming unit.

US Pat. No. 10,116,816

INFORMATION PROCESSING APPARATUS THAT PERFORMS TWO SEPARATE AND DIFFERENT SEARCH OPERATIONS FOR A DEVICE, INFORMATION PROCESSING METHOD, AND NON-TRANSITORY COMPUTER-READABLE STORAGE MEDIUM

Canon Kabushiki Kaisha, ...

1. An information processing apparatus connected to an external access point, the information processing apparatus comprising:one or more processors operating to:
cause a first search to be performed so that a first device that is not in a state of being connected to the external access point and that has a function of an access point is searched for;
cause a second search to be performed so that a second device that is already in a state of being connected to the external access point is able to be searched for, wherein the second search is a different search operation than the first search; and
cause a display unit to display first information regarding the first device found by the first search and second information regarding the second device found by the second search,
wherein, in a case where the first information displayed on the display unit is designated, processing for connecting the first device to the external access point is performed based on the designation of the first information,
wherein, in a case where the second information displayed on the display unit is designated, processing for connecting the second device to the external access point is not performed based on the designation of the second information, and
wherein the external access point is provided outside of the information processing apparatus, the first device, and the second device.

US Pat. No. 10,116,813

COMPOSITE APPARATUS

Konica Minolta, Inc., To...

1. A composite apparatus comprising:a first apparatus and a second apparatus that operate independently of each other, the first apparatus and the second apparatus each comprising:
a display memory that stores display data; and
a drawing processor;
a single console display that is shared by the first apparatus and the second apparatus and displays the display data upon an instruction by the drawing processor of the first or second apparatus;
a selector that selectively connects the drawing processor of the first or second apparatus to the single console display; and
a switch processor that receives a connection request from the first or second apparatus, wherein the connection request includes a request to connect the drawing processor of the first or second apparatus to the single console display and to instruct the selector to connect the drawing processor of either the first or second apparatus to the single console display,
wherein, while connected to the single console display, the drawing processor of either the first or second apparatus that issued the connection request instructs the single console display to display the display data, and
wherein the first apparatus and second apparatus operate independently of each other while sharing the single console display to display the display data.

US Pat. No. 10,116,812

IMAGE FORMING APPARATUS, METHOD FOR CONTROLLING THE SAME, AND NON-TRANSITORY COMPUTER-READABLE DATA RECORDING MEDIUM HAVING CONTROL PROGRAM STORED THEREON

KONICA MINOLTA, INC., Ch...

1. An image forming apparatus, comprising:a display; and
a hardware processor configured to:
accept an operation indicating that display of an image currently displayed on said display is unnecessary, wherein said operation includes at least an input to close said currently displayed image;
generate, based on said operation, a menu showing image candidates to which transition from said currently-displayed image can be made;
display said generated menu on said currently-displayed image;
accept an operation for selecting a particular image from said candidates shown in said generated menu displayed on said currently-displayed image; and
display said selected particular image on said display based on said operation for selecting.

US Pat. No. 10,116,809

IMAGE PROCESSING APPARATUS, CONTROL METHOD, AND COMPUTER-READABLE STORAGE MEDIUM, WHICH OBTAINS CALIBRATION IMAGE INFORMATION WITH WHICH TO CORRECT IMAGE DATA

Canon Kabushiki Kaisha, ...

1. An image processing apparatus comprising:an image capturing unit configured to capture an image of a document placed on a document board;
a processor; and
a memory storing instructions, when executed by the processor, causing the apparatus to function as:
a determination unit configured to determine a correction parameter for correcting a first image of the document placed on the document board, the first image being captured by the image capturing unit, using a value corresponding to each pixel in a second image that is captured by imaging the document board by the image capturing unit; and
a correction unit configured to correct the first image of the document placed on the document board, the first image being captured by the image capturing unit, using the correction parameter determined by the determination unit,
wherein the determination unit modifies the parameter by modifying a value corresponding to each pixel in a first region containing an edge portion extracted based on an edge extraction filter from the second image of the document board using a value corresponding to each pixel surrounding the first region,
wherein in a case where the first region surrounding the edge portion is larger than a predetermined size, the determination unit is configured to change a coefficient of the edge extraction filter, extract the edge portion from the second image of the document board, and determine, as a second region, a region surrounding an edge portion extracted using the changed coefficient.

US Pat. No. 10,116,808

MOVING AMOUNT DETECTOR AND IMAGE FORMING APPARATUS INCLUDING THE SAME

KONICA MINOLTA, INC., Ch...

1. A moving amount detector that sets a movable member included in a device or an object conveyed by the device as a detection target and detects a moving amount of the detection target, the moving amount detector comprising:an imaging unit that repeatedly captures a series of images of the detection target at a constant sampling period while the detection target moves; and
a hardware processor configured to function as a moving amount calculating unit that selects every Nth image of the series of images and compares each pair of adjacent selected images with each other from among the series of images of the detection target captured by the imaging unit;
wherein N is determined based on an intended moving speed of the detection target; and
the moving amount calculating unit calculates a moving amount of the detection target based on a movement of the detection target during a time period between when the two compared images were taken.
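
Illustrative sketch (not part of the claim): selecting every Nth captured image and estimating the movement between each pair of selected images. The integer shift search is an assumption standing in for the detector's comparison, and N would in practice be chosen from the intended moving speed.

import numpy as np

def estimate_shift(img_a, img_b, max_shift=5):
    """Integer horizontal shift (pixels) that best aligns img_b to img_a."""
    best_shift, best_error = 0, float("inf")
    for shift in range(-max_shift, max_shift + 1):
        error = np.mean((img_a - np.roll(img_b, shift, axis=1)) ** 2)
        if error < best_error:
            best_shift, best_error = shift, error
    return best_shift

def moving_amount(images, n, sampling_period_s):
    selected = images[::n]                 # every Nth image of the series
    shifts = [estimate_shift(a, b) for a, b in zip(selected, selected[1:])]
    # Signed speed in pixels per second between each pair of compared images.
    return [shift / (n * sampling_period_s) for shift in shifts]

base = np.tile(np.arange(16.0), (8, 1))
images = [np.roll(base, k, axis=1) for k in range(6)]   # 1 px shift per frame
print(moving_amount(images, n=2, sampling_period_s=0.01))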

US Pat. No. 10,116,807

METHOD AND APPARATUS FOR MANAGING SUBSCRIPTION TO POLICY COUNTERS

Telefonaktiebolaget LM Er...

1. A method, performed in a Policy and Charging Rules Function (PCRF), for managing subscription to policy counters maintained at an Online Charging System (OCS), wherein the PCRF is operable to communicate with the OCS over an Sy reference point, the method comprising:receiving a Multiple Users subscription trigger from a network operator, the Multiple Users subscription trigger identifying a reference network policy and a subject network policy; and
sending a Spending Limit Request (SLR) command to the OCS, the SLR command specifying an identifier of a subject policy counter for the subject network policy and specifying application of the SLR command with respect to the subject policy counter to all ongoing Sy sessions between the PCRF and the OCS which already include a subscription to a policy counter for the reference network policy.

US Pat. No. 10,116,806

BANDWIDTH AWARE NETWORK STATISTICS COLLECTION

QUALCOMM Innovation Cente...

1. A method of controlling data usage statistics in a computing device, comprising:suppressing, via a minimum window component of the computing device, triggering data usage stats collection during a minimum window;
performing, via a network status component of the computing device, at least one instance of data usage stats collection after termination of the minimum window;
incrementally decreasing, via a minimum window adjustment function of the minimum window component, the minimum window as data usage approaches a warning limit;
wherein the minimum window is a function of (1) a communications channel link speed, and (2) a proximity of data usage to the warning limit;
wherein the triggering is caused by either expiration of a timer or data usage that meets a buffer threshold, and wherein a length of the timer and a size of the buffer threshold are based on the communication channel link speed; and
wherein the buffer threshold is a function of the proximity of the data usage to the warning limit.
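
Illustrative sketch (not part of the claim): a minimum-window function that depends on the link speed and on how close data usage is to the warning limit, shrinking the window as usage approaches the limit. The particular scaling is an assumption; the claim only requires the dependence, not this formula.

def minimum_window_seconds(base_window_s, link_speed_mbps, reference_speed_mbps,
                           data_used, warning_limit):
    proximity = min(data_used / warning_limit, 1.0)              # 0 far .. 1 at the limit
    speed_factor = max(min(link_speed_mbps / reference_speed_mbps, 1.0), 0.1)
    # Usage near the limit and faster links both shorten the suppression window.
    return base_window_s * (1.0 - proximity) / speed_factor

print(minimum_window_seconds(60.0, 50.0, 100.0, data_used=200, warning_limit=1000))
print(minimum_window_seconds(60.0, 50.0, 100.0, data_used=900, warning_limit=1000))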

US Pat. No. 10,116,805

APPARATUSES AND METHODS FOR DETERMINING USAGE OF A WIRELESS COMMUNICATION SERVICE

10. A method comprising:receiving user input at a user interface displayed by a wireless device, the wireless device configured to access a communication service, wherein the user input designates a user profile; and
after receiving the user input, receiving a selection at the wireless device to initiate a session of the communication service, and responsive to the selection:
generating, at the wireless device, a message associated with the session of the communication service, wherein the message includes a particular identifier of the user profile, wherein the particular identifier indicates that the session is to be billed to a first billing account of a plurality of billing accounts associated with the wireless device, each billing account of the plurality of billing accounts associated with a respective identifier; and
transmitting the message from the wireless device via a wireless network to a network element, wherein the message is configured to instruct the network element to initiate the session and to cause the session to be billed to the first billing account based on the particular identifier in the message.
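
Illustrative sketch (not part of the claim): composing the session-initiation message so that it carries the identifier of the designated user profile, which in turn maps to one of several billing accounts for the device. The dictionary layout and account mapping are assumptions.

BILLING_ACCOUNTS = {"work": "ACCT-001", "personal": "ACCT-002"}

def build_session_message(device_id, user_profile, service):
    """Message instructing the network element which account to bill the session to."""
    return {
        "device": device_id,
        "service": service,
        "profile_id": user_profile,                  # identifier of the user profile
        "bill_to": BILLING_ACCOUNTS[user_profile],   # billing account for that profile
    }

print(build_session_message("dev-42", "work", "voice"))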

US Pat. No. 10,116,804

SYSTEMS AND METHODS FOR POSITIONING A USER OF A HANDS-FREE INTERCOMMUNICATION

Elwha LLC, Bellevue, WA ...

1. A hands-free intercommunication system for automatically connecting a user to an entity of interest, the system comprising:a user-tracking sensor that determines a location of the user;
a directional microphone that measures vocal emissions by the user, wherein the measured vocal emissions include identifying the entity of interest with which the user would like to communicate;
a communication interface that communicatively couples the directional microphone and a directional sound emitter to a communication device of the entity of interest, wherein the communication interface determines whether to couple the communication device of the entity of interest to the user based on the location of the user; and
a directional sound emitter that delivers audio received at the communication device of the entity of interest to the user, wherein the directional sound emitter emits audio received from the entity of interest using a plurality of inaudible ultrasonic sound waves that frequency convert to produce audible audio corresponding to the received audio from the entity of interest for the user at the location of the user.

US Pat. No. 10,116,802

IP CARRIER PEERING

1. A system to interconnect carrier communication systems, the system comprises:a communication client, the communication client configured to:
receive a request, including an e.164 number, to connect an IP (Internet protocol) call from equipment of a first carrier to equipment of a second carrier;
modify a query to a private ENUM (tElephone NUmber Mapping) to include an intercarrier ENUM apex-based domain with an associated DNS (domain name server) forwarding zone, wherein the associated DNS forwarding zone includes a primary internet address of a tier 2 ENUM of the second carrier;
automatically forward the modified query to the equipment of the second carrier to retrieve a routing record from the second carrier; and
route the IP call to the equipment of the second carrier using the routing record.
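As an illustrative aside (not from the patent), the private-ENUM query name the claim modifies can be pictured with a short Python sketch: the E.164 digits are reversed, dot-separated, and suffixed with an intercarrier apex-based domain. The apex domain and telephone number below are hypothetical, and actual retrieval of the routing record would be done with a DNS library against the forwarding zone.

def enum_query_name(e164_number: str, apex: str = "e164.peering.example") -> str:
    """Reverse the E.164 digits, dot-separate them, and append the
    intercarrier ENUM apex-based domain (apex value is hypothetical)."""
    digits = [c for c in e164_number if c.isdigit()]
    return ".".join(reversed(digits)) + "." + apex

print(enum_query_name("+15551234567"))
# 7.6.5.4.3.2.1.5.5.5.1.e164.peering.example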

US Pat. No. 10,116,801

CONFERENCE CALL PLATFORM CAPABLE OF GENERATING ENGAGEMENT SCORES

Shoutpoint, Inc., Newpor...

1. A conference call management system, comprising:a call processing system comprising one or more computing devices, said call processing system comprising telecommunication hardware configured to initiate and process telephonic calls, including conference calls, and comprising a processor and a memory, said call processing system programmed with at least:
a conference call management module that provides functionality for initiating a conference call and for enabling conference call participants to interactively participate on the conference call, said conference call management module configured to monitor, and maintain participant-specific records of, the interactive participation by the participants;
a scoring module configured to use at least the participant-specific records of interactive participation to generate participant-specific engagement scores reflective of levels of engagement of the participants on the conference call; and
a ranking module configured to rank participant-submitted requests for consideration based on the participant-specific engagement scores.
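A minimal Python sketch, for illustration only, of the scoring and ranking steps: participant-specific records of interactive participation are reduced to engagement scores, and participant-submitted requests are ranked by those scores. The record fields and weights are assumptions, not the patent's formula.

def engagement_score(record: dict) -> float:
    """Weighted sum of tracked interactive actions (weights are assumptions)."""
    return (2.0 * record.get("questions_submitted", 0)
            + 1.0 * record.get("polls_answered", 0)
            + 0.1 * record.get("minutes_on_call", 0))

def rank_requests(requests: list[dict], records: dict[str, dict]) -> list[dict]:
    """Order participant-submitted requests by the submitter's engagement score."""
    return sorted(requests,
                  key=lambda r: engagement_score(records.get(r["participant_id"], {})),
                  reverse=True)

records = {"p1": {"questions_submitted": 3}, "p2": {"polls_answered": 1}}
print(rank_requests([{"participant_id": "p2"}, {"participant_id": "p1"}], records))
# [{'participant_id': 'p1'}, {'participant_id': 'p2'}]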

US Pat. No. 10,116,799

ENHANCING WORK FORCE MANAGEMENT WITH SPEECH ANALYTICS

1. A method for generating an agent work schedule, the method comprising:performing, by a speech or text analytics module hosted on a processor, analytics on a plurality of recorded interactions with a plurality of contact center agents;
detecting, based on the analytics, specific utterances in the recorded interactions;
classifying, on the processor, the recorded interactions into a first plurality of interaction reasons and a first plurality of interaction resolution statuses, wherein the classifying is based on the detected specific utterances;
computing, on the processor, based on the classifying of the recorded interactions, a first agent effectiveness of a first agent and a second agent effectiveness of a second agent of the plurality of agents, wherein the first agent effectiveness and the second agent effectiveness correspond to an interaction reason of the first interaction reasons, the first agent effectiveness being higher than the second agent effectiveness;
forecasting, on the processor, a demand of the contact center agents for a first time period for handling interactions classified with the interaction reason;
generating, on the processor, the agent work schedule for the first time period based on the forecasted demand and the first agent effectiveness and the second agent effectiveness, wherein the agent work schedule includes a first number of agents scheduled to work during the first time period that is larger than a second number of agents scheduled to work during the first time period, the first number of agents including the first agent with the first agent effectiveness, and the second number of agents including the second agent with the second agent effectiveness;
detecting an interaction having the interaction reason during the first time period;
routing, by an electronic switch, the detected interaction to a particular agent selected from the first and second number of agents;
analyzing, on the processor, a second plurality of recorded interactions, the analyzing including classifying the second plurality of recorded interactions into a second plurality of interaction reasons and a second plurality of interaction resolution statuses; and
forecasting, on the processor, a demand of the contact center agents for a second time period for handling the second interaction reasons without forecasting a demand for handling an obsolete interaction reason included in the first plurality of interaction reasons, the second time period being different from the first time period.
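To make the effectiveness computation concrete, here is a small Python sketch using an assumed metric (not the patent's): per-agent effectiveness for one interaction reason is taken as the share of that agent's interactions with that reason whose resolution status is resolved.

def agent_effectiveness(interactions: list[dict], agent_id: str, reason: str) -> float:
    """Share of the agent's interactions with the given reason that were resolved."""
    relevant = [i for i in interactions
                if i["agent_id"] == agent_id and i["reason"] == reason]
    if not relevant:
        return 0.0
    resolved = sum(1 for i in relevant if i["resolution_status"] == "resolved")
    return resolved / len(relevant)

calls = [{"agent_id": "a1", "reason": "billing", "resolution_status": "resolved"},
         {"agent_id": "a1", "reason": "billing", "resolution_status": "unresolved"},
         {"agent_id": "a2", "reason": "billing", "resolution_status": "resolved"}]
print(agent_effectiveness(calls, "a1", "billing"), agent_effectiveness(calls, "a2", "billing"))
# 0.5 1.0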

US Pat. No. 10,116,798

QUEUEING COMMUNICATIONS FOR A CONTACT CENTER

Noble Systems Corporation...

1. A method for routing a communication in a contact center comprising:deriving a communication value distribution by a computer processor from communication values for a set of communications to which a treatment was applied, the treatment being from a plurality of treatments supported by the contact center in which each treatment in the plurality of treatments (1) is applicable to at least one of a reason and an opportunity for conducting a communication with a remote party and (2) comprises a plurality of sub-queues;
deriving a value range for each sub-queue of the plurality of sub-queues for the treatment by the computer processor based on the communication value distribution and a percentage of communication volume to be handled by the sub-queue; and
assigning a number of agents to at least one sub-queue of the plurality of sub-queues for the treatment by the computer processor to handle communications placed in the at least one sub-queue, the number of agents is based on the percentage of communication volume to be handled by the at least one sub-queue and a service level requirement identifying a level of service that is to be maintained by the number of agents, wherein the communication is placed in the at least one sub-queue and connected to an agent in the number of agents based on a communication value determined for the communication falling within the value range derived for the at least one sub-queue.
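A short Python sketch, under assumed semantics, of deriving a value range per sub-queue from the communication value distribution and each sub-queue's target share of communication volume; the empirical distribution is simply cut at the cumulative-share boundaries.

def sub_queue_ranges(values, volume_shares):
    """values: observed communication values for a treatment.
    volume_shares: fraction of volume per sub-queue (summing to 1.0),
    ordered from lowest- to highest-value sub-queue.
    Returns one (low, high) value range per sub-queue."""
    ordered = sorted(values)
    n = len(ordered)
    ranges, start, cum = [], 0, 0.0
    for share in volume_shares:
        if start >= n:
            break
        cum += share
        end = min(n, max(start + 1, round(cum * n)))
        ranges.append((ordered[start], ordered[end - 1]))
        start = end
    return ranges

print(sub_queue_ranges([10, 20, 30, 40, 50, 60, 70, 80, 90, 100], [0.6, 0.4]))
# [(10, 60), (70, 100)]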

US Pat. No. 10,116,797

TECHNIQUES FOR BENCHMARKING PAIRING STRATEGIES IN A CONTACT CENTER SYSTEM

Afiniti Europe Technologi...

1. A method for benchmarking pairing strategies in a contact center system comprising:cycling, by at least one computer processor communicatively coupled to and configured to operate in the contact center system, among at least two pairing strategies, wherein the cycling comprises establishing, by a routing engine of the contact center system, a connection between communication equipment of a contact and communication equipment of an agent based upon at least one pairing strategy of the at least two pairing strategies;
determining, by the at least one computer processor, a differential value attributable to the at least one pairing strategy of the at least two pairing strategies;
determining, by the at least one computer processor, a difference in performance between the at least two pairing strategies, wherein the difference in performance provides an indication that pairing contacts and agents using a first pairing strategy of the at least two pairing strategies results in a performance gain for the contact center system attributable to the first pairing strategy, wherein the difference in performance also provides an indication that optimizing performance of the contact center system is realized using the first pairing strategy instead of another of the at least two pairing strategies; and
outputting, by the at least one computer processor, the difference in performance between the at least two pairing strategies for benchmarking the at least two pairing strategies.
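For illustration only, a Python sketch of the cycling-and-benchmarking idea: contacts are alternately handled under each pairing strategy, outcomes are accumulated per strategy, and the difference between mean outcomes is reported. The strategies below are stand-ins that return a simulated outcome directly, an assumption made to keep the sketch self-contained.

import itertools
import statistics

def benchmark(contacts, strategies: dict) -> dict:
    """Cycle contacts across strategies and return each strategy's mean outcome."""
    cycle = itertools.cycle(strategies.items())
    outcomes = {name: [] for name in strategies}
    for contact in contacts:
        name, handle = next(cycle)            # alternate strategies contact by contact
        outcomes[name].append(handle(contact))
    return {name: statistics.mean(vals) for name, vals in outcomes.items() if vals}

means = benchmark(range(1000),
                  {"behavioral_pairing": lambda c: 1.0 if c % 3 else 0.0,  # stand-in outcome
                   "fifo":               lambda c: 0.5})                   # stand-in outcome
print(means, means["behavioral_pairing"] - means["fifo"])  # difference in performance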

US Pat. No. 10,116,794

DETERMINING AN ACTIVE STATION BASED ON MOVEMENT DATA

1. A method for determining an active contact center station for an agent in a contact center system, wherein the contact center system comprises a plurality of contact center stations, based on sensor data, the method comprising the steps of:receiving, by a processor of the contact center system, movement data from a mobile device associated with the agent;
matching, by the processor of the contact center system, the movement data from the mobile device associated with the agent with a previously stored pattern of movement associated with one of the plurality of contact center stations associated with the agent; and
automatically updating, by the processor of the contact center system, one of the plurality of contact center stations to active, wherein the update is based on the movement data and matched pattern of movement, and wherein the agent is not logged into the contact center system.
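A minimal Python sketch (the matching rule is an assumption, not the patent's) of matching mobile-device movement data against previously stored per-station movement patterns and reporting which station should be marked active.

import math

def match_station(movement: list[float], patterns: dict[str, list[float]],
                  max_distance: float = 1.0):
    """Return the station whose stored pattern is closest to the movement data,
    or None if nothing is within max_distance (threshold is assumed)."""
    best_station, best_distance = None, float("inf")
    for station_id, pattern in patterns.items():
        n = min(len(movement), len(pattern))
        distance = math.sqrt(sum((movement[i] - pattern[i]) ** 2 for i in range(n)))
        if distance < best_distance:
            best_station, best_distance = station_id, distance
    return best_station if best_distance <= max_distance else None

patterns = {"station-7": [0.1, 0.4, 0.2], "station-9": [1.2, 0.9, 1.1]}
print(match_station([0.2, 0.5, 0.1], patterns))   # station-7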

US Pat. No. 10,116,793

METHOD AND SYSTEM FOR LEARNING CALL ANALYSIS

1. A method for communication learning in a telecommunication system, wherein the telecommunication system comprises at least an automated dialer, a telephony service module, a database, and a media server operatively coupled over a network for exchange of data there between, the method comprising the steps of:a. selecting, by the automated dialer, a contact from the database, the contact being associated with a telephone number and one or more acoustic fingerprints;
b. retrieving, by the telephony service module, from the database, the one or more acoustic fingerprints and the telephone number associated with the contact;
c. initiating, by the automated dialer, a communication with the contact based on the telephone number, the communication generating audio;
d. analyzing, by the media server, the audio for matches to any of the one or more acoustic fingerprints, wherein matches are not identified;
e. routing, via an electronic routing device by the telephony service module, the communication to an agent device associated with an agent for determining whether or not the communication comprises a speech recording;
f. receiving, from the agent device, a signal indicating the communication comprises a speech recording;
g. requesting, by the automated dialer, new acoustic fingerprints from the media server for the speech recording and associating the new acoustic fingerprints with the contact in the database; and
h. disconnecting the communication with the contact after receiving the signal indicating the communication comprises the speech recording.

US Pat. No. 10,116,791

METHODS AND APPARATUS FOR TRANSMITTING DATA

Samsung Electronics Co., ...

1. A method of transmitting data performed by an apparatus, the method comprising:receiving a request for a call signal, from a sender device to a receiver device, including sender information and receiver information associated with the call signal, from the sender device;
confirming a relationship between the sender and the receiver that exists in at least one external server, based on the received sender information and the receiver information;
requesting content associated with the sender which is uploaded on the at least one external server to which the sender is subscribed based on the relationship between the sender and the receiver, to the at least one external server;
receiving the requested content from the at least one external server; and
transmitting the call signal together with the received content, to the receiver device,
wherein the content is displayed on the receiver device while the call signal is being output on the receiver device, and
wherein the sender and the receiver are filtered based on an order of call frequency.

US Pat. No. 10,116,790

METHOD, SYSTEM AND APPARATUS FOR COMMUNICATING DATA ASSOCIATED WITH A USER OF A VOICE COMMUNICATION DEVICE

BCE INC., Verdun (CA)

1. A method executable by a server within a communication system, the method comprising:receiving a first identifier associated with a first communication device further to a connection request by the first communication device;
determining a second identifier of a second communication device based on the first identifier;
establishing a connection with the second communication device using the second identifier;
receiving data from the second communication device over the established connection;
identifying a profile for a user of the first communication device based on the received data; and
authenticating the user of the first communication device based on comparison of additional information obtained from the first communication device provided from the user to information contained in the identified profile,
wherein the connection request is for an outbound call, wherein the method further comprises authorizing the outbound call based on at least one of the received data and destination information,
wherein the received data comprises a user identifier and the destination information comprises a destination telephone number associated with a destination device; and wherein authorizing the outbound call comprises accessing a database comprising a list of user identifiers with one or more allowed destination telephone numbers corresponding to each user identifier; and confirming that the user of the first device is authorized to place the outbound call to the destination device.
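A small Python sketch, with an assumed table layout, of the authorization step described last: the user identifier received from the second device is looked up against a list of allowed destination telephone numbers before the outbound call is permitted.

ALLOWED_DESTINATIONS = {                           # hypothetical authorization table
    "user-17": {"+15550001111", "+15550002222"},
}

def authorize_outbound(user_id: str, destination_number: str) -> bool:
    """Confirm the user may place an outbound call to the destination number."""
    return destination_number in ALLOWED_DESTINATIONS.get(user_id, set())

print(authorize_outbound("user-17", "+15550001111"))  # True
print(authorize_outbound("user-17", "+15559999999"))  # False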

US Pat. No. 10,116,786

APPARATUS FOR CONTROLLING A MULTIMEDIA MESSAGE IN A USER EQUIPMENT OF A WIRELESS COMMUNICATION SYSTEM AND METHOD THEREOF

LG ELECTRONICS INC., Seo...

1. A mobile terminal for controlling at least two message interfaces, comprising:a touchscreen; and
a controller configured to:
cause the touchscreen to display a first message interface displaying messages transmitted from the mobile terminal to a first device and displaying messages received at the mobile terminal from the first device, wherein the messages of the first message interface are enumerated in a chat format in accordance with a time sequence;
cause the touchscreen to display a second message interface displaying messages transmitted from the mobile terminal to a second device and displaying messages received at the mobile terminal from the second device, wherein each of the first and second message interfaces is an individual message window;
cause the touchscreen to display in a queue region a first item representative of content associated with a selected message displayed in the first message interface; and
cause the touchscreen to display in the queue region a second item representative of content associated with a selected message displayed in the second message interface,
wherein the queue region is displayed to be adjacent to the first and second message interfaces,
wherein the first and second message interfaces are each independently scrollable in first and second opposing directions,
wherein the first and second items in the queue region are displayed chronologically according to when they are copied from a respective one of the first or second message interface to the queue region, regardless of which of the first or second message interface they are copied from,
wherein the first item displayed in the queue region includes a text of the selected message of the first message interface, and
wherein the second item displayed in the queue region includes a text of the selected message of the second message interface.

US Pat. No. 10,116,784

CAMERA CAPABLE OF COMMUNICATING WITH OTHER COMMUNICATION DEVICE

NIKON CORPORATION, Tokyo...

1. A cellular phone capable of telephone-calling with an external device, the cellular phone comprising:an antenna by which the cellular phone communicates with the external device;
a lens;
an image sensor that outputs an image signal from an image formed on the image sensor by the lens;
a display;
a loudspeaker; and
a processor electrically connected to the antenna, the image sensor, the display and the loudspeaker, wherein:
the processor controls the display to display an announcement of an incoming call from the external device after receiving a calling signal via the antenna, and
in a case that the calling signal is received during operation of the image sensor, the processor permits communication between the cellular phone and the external device via the antenna and using the loudspeaker and a microphone of the cellular phone after the announcement of the incoming call is displayed by the display and after the processor receives an instruction from an input device of the cellular phone to allow starting of the telephone-calling with the external device.

US Pat. No. 10,116,783

PROVIDING AND USING A MEDIA CONTROL PROFILE TO MANIPULATE VARIOUS FUNCTIONALITY OF A MOBILE COMMUNICATION DEVICE

1. A mobile communication device comprising:a processor; and
a memory storing instructions that, when executed by the processor, cause the processor to perform operations comprising
sending, to a network node via a communications network, a query for a media control profile associated with the mobile communication device,
in response to the query, receiving, from the network node via the communications network, the media control profile associated with the mobile communication device, wherein the media control profile comprises a first audible volume setting assigned to a first calling party and a second audible volume setting assigned to a second calling party, and wherein the first audible volume setting is different from the second audible volume setting,
changing a functionality of the mobile communication device to comply with the media control profile,
in response to receiving an incoming call from the first calling party, altering, in compliance with the media control profile, a volume of a media file playing on the mobile communication device to be in accordance with the first audible volume setting assigned to the first calling party as set forth in the media control profile while playing an audible notification of the incoming call from the first calling party, and
in response to receiving an incoming call from the second calling party, altering, in compliance with the media control profile, the volume of the media file playing on the mobile communication device to be in accordance with the second audible volume setting assigned to the second calling party as set forth in the media control profile while playing an audible notification of the incoming call from the second calling party.
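As an illustrative Python sketch (the profile structure is assumed), the per-caller behavior amounts to a lookup: the media control profile maps each calling party to an audible volume setting, and the media file's volume is altered to that setting while the incoming-call notification plays.

def notification_volume(media_control_profile: dict, calling_party: str,
                        current_volume: int) -> int:
    """Return the volume to use while the incoming-call notification plays."""
    return media_control_profile.get(calling_party, current_volume)

profile = {"+15550001111": 80, "+15550002222": 20}       # hypothetical settings
print(notification_volume(profile, "+15550001111", 50))  # 80
print(notification_volume(profile, "+15550002222", 50))  # 20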

US Pat. No. 10,116,782

TELEPHONE DEVICE AND MOBILE-PHONE LINKING METHOD

PANASONIC INTELLECTUAL PR...

1. A telephone device, comprising:a landline telephone line interface unit;
a master-device control unit that controls the telephone device;
a short-distance wireless communication control unit that controls short-distance wireless communication of data with a mobile-phone;
an audio speaker; and
an audio processing unit, which, in operation, receives audio data from the mobile-phone using the short-distance wireless communication and causes the audio speaker to perform music playback by outputting the audio data from the mobile phone,
wherein, when the master-device control unit detects a caller operation of placing a call to a mobile-phone network by the telephone device during output of the audio data from the audio speaker, the master-device control unit notifies the short-distance wireless communication control unit of information on the caller operation of placing the call to the mobile phone network, and in response to the short-distance wireless communication control unit receiving the notification on the caller operation of placing the call to the mobile phone network, the short-distance wireless communication control unit starts processing that releases a radio resource for communicating the audio data, used for the music playback, from the mobile phone to the telephone device and sets, for the call, a radio resource for an audio path between the mobile phone and the telephone device.

US Pat. No. 10,116,781

METHOD, DEVICE AND COMPUTER-READABLE MEDIUM FOR CONTROLLING A DEVICE

XIAOMI INC., Beijing (CN...

1. A method for controlling a device, applied to a control device, the method comprising:receiving an identifier display instruction, the identifier display instruction being generated when a lock screen of the control device is touched along a predetermined path;
acquiring a device identifier of a corresponding controlled device according to log-in status of a user account on the control device, wherein acquiring a device identifier of a corresponding controlled device according to log-in status of a user account on the control device comprises:
transmitting a first request for acquiring an identifier to a router connected to the control device, the first request for acquiring an identifier being used to trigger the router to feed back a device identifier of each controlled device connected to the router;
receiving the device identifier fed back by the router;
transmitting a second request for acquiring an identifier to a cloud server if the user account has logged-in on the control device, the second request for acquiring an identifier being used to trigger the cloud server to feed back a device identifier of each controlled device bound to the user account; and
receiving the device identifier fed back by the cloud server;
performing a duplication removing operation to the device identifier fed back by the router and the device identifier fed back by the cloud server;
displaying, after the duplication removing operation, the acquired device identifier of each controlled device on the lock screen; and
transmitting a control instruction to a controlled device corresponding to a selected device identifier after the selected device identifier is determined.
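A brief Python sketch of the duplication-removing operation: the device identifiers fed back by the router and those fed back by the cloud server are merged, with each identifier kept once in first-seen order; the identifier values are hypothetical.

def merge_device_identifiers(from_router: list[str], from_cloud: list[str]) -> list[str]:
    """Combine both identifier lists, removing duplicates while preserving order."""
    seen, merged = set(), []
    for device_id in from_router + from_cloud:
        if device_id not in seen:
            seen.add(device_id)
            merged.append(device_id)
    return merged

print(merge_device_identifiers(["lamp-01", "tv-02"], ["tv-02", "fan-03"]))
# ['lamp-01', 'tv-02', 'fan-03']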

US Pat. No. 10,116,778

MOBILE TERMINALS AND COMBINED TERMINAL EQUIPMENT

ZHEJIANG GEELY HOLDING GR...

1. A mobile terminal for splicing a plurality of said mobile terminals into a combined terminal device, said mobile terminal comprising:a display screen;
a first side surface and a second side surface located at two opposing sides of the display screen;
a first conductive contact arranged at said first side surface;
a second conductive contact arranged at said second side surface, wherein said first and second side surfaces are planes parallel to each other;
a first magnetic adsorbing element arranged at said first side surface and a second magnetic adsorbing element arranged at said second side surface, wherein the positions of said first and second magnetic adsorbing elements are arranged such that: when the other one of said mobile terminals and a current one of said mobile terminals are spliced, said first magnetic adsorbing element at said first side surface of the current one of said mobile terminals and said second magnetic adsorbing element at said second side surface of the other one of said mobile terminals can attract each other, so that said first side surface of the current one of said mobile terminals and said second side surface of the other one of said mobile terminals are bonded in alignment with each other,
wherein the positions of the first and second conductive contacts are arranged such that: when said first side surface of the current one of said mobile terminals and said second side surface of the other one of said mobile terminals are bonded in alignment with each other, said first conductive contact at said first side surface of the current one of said mobile terminals and said second conductive contact at said second side surface of the other one of said mobile terminals can make electrical contact with each other,
wherein at least one of said first conductive contact and said second conductive contact is made of an elastic material or biased by a spring so as to be able to move in a direction perpendicular to the side surface where it is located, said first conductive contact is composed of a plurality of bow-shaped metal sheets, said second conductive contact is recessed into said second side surface; and
said first conductive contact of the current one of said mobile terminals and said second conductive contact of the other one of said mobile terminals abut each other, so that said second side surface of the other one of said mobile terminals and said first side surface of the current one of said mobile terminals are bonded in alignment with each other by deforming said first conductive contact in the direction perpendicular to the side surface where it is located.

US Pat. No. 10,116,777

MOBILE TERMINAL

LG Electronics Inc., Seo...

1. A mobile terminal comprising:a frame including a front surface in which a display device is provided;
a window disposed on a surface of the display device; and
a front case configured to cover a predetermined area of the window,
wherein the window includes:
a first window layer having a front surface and a rear surface, the front surface being exposed to outside of the mobile terminal;
a second window layer, larger than the first window layer, and the second window layer having a front surface and a rear surface, wherein the front surface of the second window layer includes a first area disposed to face the rear surface of the first window layer and a second area, the front case to cover the second area of the front surface of the second window layer around the first area, and the rear surface of the second window layer to face a surface of the display device; and
an optical clear adhesive (OCA) provided between the first window layer and the second window layer;
wherein a thickness of the first window layer is approximately twice a thickness of the second window layer.

US Pat. No. 10,116,774

HARDWARE PROTOCOL STACK WITH USER-DEFINED PROTOCOL APPLIED THERETO AND METHOD FOR APPLYING USER-DEFINED PROTOCOL TO HARDWARE PROTOCOL STACK

LSIS CO., LTD., Anyang-s...

1. A hardware protocol stack to which a user-defined protocol is applied, comprising:a register unit in which header information is stored;
a comparison unit configured to compare header information of a received frame with the header information stored in the register unit to determine whether the header information of the received frame matches the stored header information;
an interface logic unit configured to determine a process of the received frame on the basis of a comparison result of the comparison unit; and
a logic process unit configured to process data of the received frame based on a logic according to the header information when the frame process method, which is determined in the interface logic unit according to the header information that is stored in the register unit and matched to the header information of the received frame, is a processing of a frame,
wherein the logic according to the header information includes a unit designation of the data according to the header information;
wherein the unit designation of the data is performed such that the logic process unit sets a basic offset and a size unit of the data when receiving a request for writing payload data in a specific region of the data and then stores the payload at the basic offset by expanding the payload to correspond to the set size unit of the data.
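For illustration only (the header layout and values are assumptions), the comparison step can be pictured in a few lines of Python: the received frame's header is compared with the header information held in the register unit, and only matching frames are handed on for processing.

REGISTERED_HEADER = bytes.fromhex("aa55")    # hypothetical user-defined header value

def header_matches(frame: bytes) -> bool:
    """Compare the received frame's header with the registered header information."""
    return frame[:len(REGISTERED_HEADER)] == REGISTERED_HEADER

print(header_matches(bytes.fromhex("aa55beef")))  # True  -> frame is processed
print(header_matches(bytes.fromhex("bb44beef")))  # False -> frame is not processed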

US Pat. No. 10,116,773

PACKET PROCESSING METHOD AND RELATED DEVICE THAT ARE APPLIED TO NETWORK DEVICE

HUAWEI TECHNOLOGIES CO., ...

1. A packet processing method applied to a network device, wherein K classifiers and S network service processors are loaded into a memory of the network device, wherein the K classifiers comprise a classifier x and a classifier y, wherein K and S are integers greater than 1, and wherein the method comprises:acquiring, by the classifier x, P packet identifiers from a queue area a that corresponds to the classifier x and is in a network adapter receiving queue;
acquiring, by the classifier x and based on the P packet identifiers, P packets corresponding to the P packet identifiers;
determining, by the classifier x and based on the P packets, flow queue identifiers corresponding to the P packets;
distributing, by the classifier x, packet description information corresponding to the P packets to flow queues corresponding to the determined flow queue identifiers corresponding to the P packets, wherein packet description information corresponding to a packet i in the P packets is distributed to a flow queue corresponding to a determined flow queue identifier corresponding to the packet i, wherein the packet i is any one packet in the P packets, and wherein the packet description information corresponding to the packet i comprises a packet identifier of the packet i;
processing, by Si network service processors in the S network service processors and based on the packet description information that corresponds to the P packets and is distributed to the flow queues, the P packets;
sending the P processed packets;
acquiring, by the classifier y, Q packet identifiers from a queue area b that corresponds to the classifier y and is in the network adapter receiving queue;
acquiring, by the classifier y and based on the Q packet identifiers, Q packets corresponding to the Q packet identifiers;
determining, by the classifier y and based on the Q packets, flow queue identifiers corresponding to the Q packets;
distributing, by the classifier y after the classifier x distributes the packet description information corresponding to the P packets to the flow queues corresponding to the determined flow queue identifiers corresponding to the P packets, packet description information corresponding to the Q packets to flow queues corresponding to the determined flow queue identifiers corresponding to the Q packets, wherein packet description information corresponding to a packet m in the Q packets is distributed to a flow queue corresponding to a determined flow queue identifier corresponding to the packet m, wherein the packet m is any one packet in the Q packets, wherein the packet description information corresponding to the packet m comprises a packet identifier of the packet m, wherein Q and P are positive integers, and wherein a time at which the Q packets are enqueued to the queue area b in the network adapter receiving queue is later than a time at which the P packets are enqueued to the queue area a in the network adapter receiving queue;
processing, by Sj network service processors in the S network service processors and based on the packet description information that corresponds to the Q packets and is distributed to the flow queues, the Q packets; and
sending the Q processed packets, wherein an intersection set between the Si network service processors and the Sj network service processors is a null set or a non-null set.
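A compact Python sketch, for illustration, of the classify-and-distribute step: a classifier derives a flow queue identifier for each packet (here by hashing assumed source/destination fields, which is not necessarily the patent's rule) and places packet description information containing the packet identifier on the corresponding flow queue.

import zlib
from collections import defaultdict

def distribute(packets: list[dict], num_flow_queues: int) -> dict[int, list[dict]]:
    """Map each packet to a flow queue and enqueue its description information."""
    flow_queues = defaultdict(list)
    for pkt in packets:
        key = f'{pkt["src"]}-{pkt["dst"]}'.encode()
        flow_id = zlib.crc32(key) % num_flow_queues            # deterministic flow hash
        flow_queues[flow_id].append({"packet_id": pkt["id"]})  # description information
    return dict(flow_queues)

print(distribute([{"id": 1, "src": "10.0.0.1", "dst": "10.0.0.2"},
                  {"id": 2, "src": "10.0.0.1", "dst": "10.0.0.2"}], 4))
# packets of the same flow land on the same flow queue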

US Pat. No. 10,116,772

NETWORK SWITCHING WITH CO-RESIDENT DATA-PLANE AND NETWORK INTERFACE CONTROLLERS

Cavium, Inc., San Jose, ...

1. A network interface apparatus, comprising:a semiconductor chip comprising a packet input processor, a packet output processor, and a network interface controller; wherein
a network facing inbound interface of the network interface controller is communicatively coupled to a network facing interface of the packet output processor via a first hardware loopback entity;
a network facing outgoing interface of the network interface controller is communicatively coupled to a network facing interface of the packet input processor via a second hardware loopback entity; and
at least one medium access controller, communicatively coupled to network facing inbound and outgoing interfaces of the network interface controller, the network facing interface of the packet output processor, and the network facing interface of the packet input processor.

US Pat. No. 10,116,771

DATA TRANSMISSION VIA FRAME RECONFIGURATION

Sprint Spectrum L.P., Ov...

1. A method for transmitting data via frame reconfiguration, the method comprising:mapping, by a source node, a plurality of data bits to a corresponding plurality of frame configurations, each of the plurality of frame configurations comprising a sequence of uplink and downlink subframes;
generating, by the source node, a pattern of frame configurations based on a data string to be transmitted to a target node, the pattern comprising one or more frame configurations of the plurality of frame configurations corresponding to bits within the data string; and
broadcasting, from the source node, the pattern of frame configurations,
wherein the target node is configured to identify the pattern of frame configurations and decode the data string.
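A small Python sketch of the mapping idea, for illustration only: each data bit is mapped to a frame configuration index, the source node broadcasts the resulting pattern, and the target node inverts the mapping to recover the data string. The particular bit-to-configuration mapping below is an assumption.

BIT_TO_CONFIG = {"0": 1, "1": 2}                    # assumed mapping of bits to configurations
CONFIG_TO_BIT = {v: k for k, v in BIT_TO_CONFIG.items()}

def encode_pattern(data_string: str) -> list[int]:
    """Source node: turn a data string into a pattern of frame configurations."""
    return [BIT_TO_CONFIG[bit] for bit in data_string]

def decode_pattern(pattern: list[int]) -> str:
    """Target node: recover the data string from the observed configuration pattern."""
    return "".join(CONFIG_TO_BIT[config] for config in pattern)

assert decode_pattern(encode_pattern("1011")) == "1011"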

US Pat. No. 10,116,769

COMMERCE ORIENTED UNIFORM RESOURCE LOCATER (URL) SHORTENER

PAYPAL, INC., San Jose, ...

1. A system comprising:a non-transitory memory; and
one or more hardware processors coupled to the non-transitory memory and configured to read instructions from the non-transitory memory to cause the system to perform operations comprising:
identifying a graphical token indicator;
identifying a token associated with the graphical token indicator;
selecting a template associated with the graphical token indicator, comprising selecting a template with a token indicator type matching that of the identified graphical token indicator and a token type matching that of the identified token; and
generating a uniform resource locator (URL) in a computer-readable form based on the template, wherein the graphical token indicator indicates the start of the token.
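A minimal Python sketch (the indicator characters, token types, template syntax, and URLs are all hypothetical) of generating a URL from a template once a graphical token indicator and its token have been identified: the template is selected by matching both the indicator type and the token type.

TEMPLATES = {                                          # hypothetical template table
    ("$", "merchant"): "https://sho.example/pay/{token}",
    ("#", "item"):     "https://sho.example/item/{token}",
}

def generate_url(indicator: str, token_type: str, token: str) -> str:
    """Select the template matching the indicator and token types, then fill it."""
    template = TEMPLATES[(indicator, token_type)]
    return template.format(token=token)

print(generate_url("$", "merchant", "coffee-shop-42"))
# https://sho.example/pay/coffee-shop-42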

US Pat. No. 10,116,768

CONTROL SYSTEM, CONTROL METHOD, AND COMMUNICATION DEVICE

FUJITSU LIMITED, Kawasak...

1. A control system comprising:a server including a first processor and a first memory; and
a plurality of communication devices, each including a second processor,
wherein
the first memory is configured to store first mode information for each user,
the first mode information is associated with a mode of an application, the first mode information being selected from among a plurality of pieces of mode information of the mode for distinguishing a function executed by the same operation from another function for the application,
the first processor is configured to transmit a respective first mode information of a user to the plurality of communication devices operated by the user, and
the second processor is configured to:
receive the first mode information,
obtain second mode information set to the mode of the application installed to a communication device among the plurality of communication devices,
record an operation content related to mode information change performed on the communication device, and
determine whether the second mode information set to the mode of the application is switched to the first mode information, based on the operation content, the first mode information, and the second mode information.

US Pat. No. 10,116,767

SCALING CLOUD RENDEZVOUS POINTS IN A HIERARCHICAL AND DISTRIBUTED MANNER

Futurewei Technologies, ...

1. A service provider (SP) cloud rendezvous point (CRP-SP) in a fixed cloud rendezvous point (CRP) hierarchy, the CRP-SP comprising:a memory comprising a cloudcasting information base (CCIB);
a receiver configured to receive a Register request from a first site CRP (CRP Site) in an SP network, the Register request indicating a first portion of a virtual extensible network (VXN) is reachable by the SP network at the first CRP Site;
a processor coupled to the receiver and the memory, the processor configured to query the CCIB to determine that a second portion of the VXN is reachable by the SP network at a second CRP Site; and
a transmitter coupled to the processor and configured to transmit Report messages to both the first CRP Site and the second CRP Site, the Report messages indicating the VXN is reachable at both the first CRP Site and the second CRP Site.
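To close with an illustrative Python sketch (the data model is an assumption, not the patent's structures): a cloudcasting information base can be pictured as a map from a VXN to the CRP Sites at which it is reachable; a Register request adds a site, and the CRP-SP reports the full site set back to every registered site.

from collections import defaultdict

class CCIB:
    """Toy cloudcasting information base: VXN -> set of CRP Sites (assumed model)."""
    def __init__(self):
        self._sites_by_vxn = defaultdict(set)

    def register(self, vxn: str, crp_site: str) -> set[str]:
        """Record that the VXN is reachable at crp_site; return all known sites."""
        self._sites_by_vxn[vxn].add(crp_site)
        return set(self._sites_by_vxn[vxn])

ccib = CCIB()
ccib.register("vxn-100", "crp-site-1")
print(ccib.register("vxn-100", "crp-site-2"))
# {'crp-site-1', 'crp-site-2'} (order may vary) -> Report messages go to both sites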