US Pat. No. 10,922,945

BED OR CHAIR EXIT SENSING DEVICE, AND USE OF A BED OR CHAIR EXIT SENSING DEVICE

1. A bed or chair exit sensing device, comprising:
an accelerometer sensor;
a piezoelectric sensor; and
a data processing device,
wherein when the bed or chair exit sensing device is placed in, on, or in relation to a bed or chair, said data processing device is adapted to determine a bed or chair exit of a person with respect to the bed or chair on the basis of combined processing of a signal from said accelerometer sensor and a signal from said piezoelectric sensor,
wherein the bed or chair exit of a person with respect to the bed or chair is determined if a condition of the signal from the accelerometer sensor and a condition of the signal from the piezoelectric sensor are fulfilled,
wherein a first condition of said condition of the signal from the accelerometer sensor or said condition of the signal from the piezoelectric sensor is monitored, and if said first condition of a corresponding first sensor of the accelerometer sensor or the piezoelectric sensor is fulfilled, sampling values of the signal from the second sensor of the accelerometer sensor and the piezoelectric sensor is started in order to determine whether a corresponding second condition is fulfilled,
wherein the bed or chair exit sensing device is fully battery powered, and
wherein the accelerometer sensor, the piezoelectric sensor, and the data processing device are arranged inside a housing of the bed or chair exit sensing device.
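The two-stage gating recited in claim 1 — monitor one sensor's condition first, and begin sampling the second sensor only once the first condition is fulfilled — can be sketched as follows. The thresholds, window size, and function names are illustrative assumptions, not the patented implementation:

```python
# Illustrative two-stage exit detection. The accelerometer plays the
# "first sensor" role here; the roles could equally be swapped.
ACCEL_THRESHOLD = 0.8   # g, hypothetical trigger level
PIEZO_THRESHOLD = 0.5   # normalized, hypothetical
WINDOW = 5              # second-sensor samples to inspect once triggered

def detect_exit(accel_samples, piezo_samples):
    """Return True once both sensor conditions are fulfilled."""
    for i, a in enumerate(accel_samples):
        if abs(a) < ACCEL_THRESHOLD:
            continue  # first condition (accelerometer) not met yet
        # First condition met: start sampling the second sensor.
        window = piezo_samples[i:i + WINDOW]
        if any(abs(p) > PIEZO_THRESHOLD for p in window):
            return True  # second condition also fulfilled -> exit event
    return False
```

Starting the second sensor's sampling only on demand is what makes the claimed fully-battery-powered operation plausible.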

US Pat. No. 10,922,944

METHODS AND SYSTEMS FOR EARLY DETECTION OF CAREGIVER CONCERN ABOUT A CARE RECIPIENT, POSSIBLE CAREGIVER IMPAIRMENT, OR BOTH

Hill-Rom Services, Inc., ...

1. A method for detecting the status of a caregiver with respect to one or more patients or detecting possible caregiver impairment comprising:
monitoring an environmental aspect of the patient, the environmental aspect including:
A) caregiver physical activity including frequency with which the caregiver consults the patient's medical record and duration of time the caregiver spends consulting the patient's medical record and depth of caregiver inquiry into the patient's medical record;
the environmental aspect also including at least one of:
B) caregiver physiological state; and
C) patient surroundings;
assessing conformance/nonconformance of each monitored aspect relative to a specified norm for that aspect; and
in response to the step of assessing conformance/nonconformance indicating an intuitive concern of the caregiver or a possible impairment of the caregiver, issuing a signal to a destination, the signal indicating at least one of:
a) a caregiver concern about the patient, and
b) a caregiver impairment.

US Pat. No. 10,922,943

MULTI-SENSOR INPUT ANALYSIS FOR IMPROVED SAFETY

HONEYWELL INTERNATIONAL I...

1. An apparatus comprising:
a processor; and
a memory storing an ototoxicity application, wherein the ototoxicity application is operable, when executed by the processor, to cause the processor to:
determine, based on at least a signal received from a sensor disposed within an area, a noise level experienced by a user in said area during a first time, said noise level being correlated with an ototoxic effect for said user;
determine, based on said ototoxic effect for said user, a threshold ototoxic noise level for said user;
determine, during a second time, based on one or more other signals received from the sensor, one or more other noise levels experienced by the user; and
in an instance in which at least one of the one or more other noise levels exceeds the threshold ototoxic noise level, cause transmission of an alert or notification.
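The claimed flow — derive a per-user ototoxic threshold from an earlier exposure already correlated with an ototoxic effect, then alert when a later noise level exceeds it — might be sketched like this; the 3 dB margin and the function names are assumptions for illustration only:

```python
def ototoxic_threshold(baseline_level_db, margin_db=3.0):
    """Hypothetical rule: set the threshold a safety margin below the
    level already correlated with an ototoxic effect for this user."""
    return baseline_level_db - margin_db

def check_alerts(threshold_db, later_levels_db):
    """Return True if any later noise level exceeds the threshold,
    i.e. an alert or notification should be transmitted."""
    return any(level > threshold_db for level in later_levels_db)
```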

US Pat. No. 10,922,942

INTERACTIVE SMART SEAT SYSTEM

1. A method executing on a seating receptacle system, the method comprising:
receiving, by one or more processors of the seating receptacle system, an input indicative of an occupancy of a seating receptacle;
receiving, by the one or more processors, data from a vehicle system of a vehicle, the data associated with the occupancy of the seating receptacle;
determining, by the one or more processors, a first output type based on a level of previous interactions responsive to previous data using the first output type;
transmitting, by the one or more processors to the vehicle system, an output of the first output type based on the determination.

US Pat. No. 10,922,941

SYSTEM AND METHOD FOR DETECTING SMOKE USING A PHOTOELECTRIC SENSOR

4Morr Enterprises IP, LLC...

1. A smoke detector comprising:
a photoelectric sensor comprising:
a low-frequency light source,
a high-frequency light source, and
a light sensor;
a smoke detector memory comprising:
a smoke detector application,
a plurality of low-frequency smoke signatures, wherein each of said low-frequency smoke signatures relates to how a low-frequency light interacts with one of a plurality of particulates,
a plurality of high-frequency smoke signatures, wherein each of said high-frequency smoke signatures relates to how a high-frequency light interacts with one of said plurality of particulates, each of said particulates indicative or non-indicative of a fire;
a microprocessor that, according to instructions from said smoke detector application:
receives light data from said light sensor,
extracts low-frequency light data and high-frequency light data from said light data,
compares said low-frequency light data to said plurality of low-frequency smoke signatures to determine if said low-frequency light data matches any of said plurality of low-frequency smoke signatures,
compares said high-frequency light data to said plurality of high-frequency smoke signatures to determine if said high-frequency light data matches any of said plurality of high-frequency smoke signatures; and
initiates an alarm sequence if
said low-frequency light data matches a low-frequency smoke signature related to a fire-indicative particulate of said plurality of particulates, and
said high-frequency light data matches a high-frequency smoke signature related to said fire-indicative particulate;
wherein each of said low-frequency smoke signatures comprises stored low-frequency power-transfer-ratio (PTR) data, and each of said high-frequency smoke signatures comprises stored high-frequency PTR data.
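The dual-band signature matching and alarm gating can be sketched as follows. The power-transfer-ratio (PTR) values, tolerance, and particulate names are invented for the example; the key check the claim recites is that both bands must match the same fire-indicative particulate before the alarm sequence starts:

```python
TOL = 0.05  # hypothetical matching tolerance on the PTR

# Hypothetical signature tables: particulate -> (stored PTR, fire-indicative?)
LOW_FREQ_SIGS  = {"smoke": (0.30, True), "steam": (0.70, False)}
HIGH_FREQ_SIGS = {"smoke": (0.55, True), "steam": (0.20, False)}

def match(sigs, ptr):
    """Return the particulate whose stored PTR matches, else None."""
    for particulate, (stored, _fire) in sigs.items():
        if abs(ptr - stored) <= TOL:
            return particulate
    return None

def should_alarm(low_ptr, high_ptr):
    """Alarm only when BOTH bands match the SAME fire-indicative particulate."""
    lo = match(LOW_FREQ_SIGS, low_ptr)
    hi = match(HIGH_FREQ_SIGS, high_ptr)
    return lo is not None and lo == hi and LOW_FREQ_SIGS[lo][1]
```

Requiring agreement across two wavelengths is what lets steam or dust (matched, but non-fire-indicative) be rejected without an alarm.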

US Pat. No. 10,922,940

BATTERY-POWERED RADIO FREQUENCY MOTION DETECTOR

AMAZON TECHNOLOGIES, INC....

1. A radio frequency (RF) motion sensing circuit, comprising:
a transistor;
a constant voltage source coupled to a collector of the transistor;
a first antenna coupled to an emitter of the transistor;
an output resistor coupled to the emitter and coupled to a processor;
a microcontroller coupled to the output resistor; and
a pulsed voltage source coupled to a base of the transistor, wherein the pulsed voltage source is effective to generate a periodic voltage pulse, the periodic voltage pulse effective to cause a current to flow from the constant voltage source through the collector to the emitter, the current effective to cause the first antenna to transmit a first RF signal having a first frequency;
wherein the first antenna is effective to receive a second RF signal having a second frequency, the second RF signal representing the first RF signal reflected off one or more surfaces in an environment of the RF motion sensing circuit;
wherein the RF motion sensing circuit generates a difference component signal at a third frequency, wherein the third frequency is the difference between the first frequency and the second frequency;
wherein a voltage across the output resistor during the periodic voltage pulse corresponds to the difference component signal;
the microcontroller effective to:
determine a first voltage across the output resistor sampled during a first pulse of the pulsed voltage source;
determine a second voltage across the output resistor sampled during a second pulse of the pulsed voltage source;
determine a difference value between the first voltage and the second voltage;
determine that the difference value exceeds a threshold value indicating motion in the environment; and
generate a motion detection signal effective to cause a camera device to initiate video capture.
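The microcontroller's role reduces to a pulse-to-pulse difference test on the output-resistor voltage; a change larger than a threshold implies a shifting reflection, i.e. motion. The threshold below is an assumed value, not one from the patent:

```python
MOTION_THRESHOLD_V = 0.02  # hypothetical pulse-to-pulse difference (volts)

def motion_detected(pulse_voltages):
    """Compare output-resistor samples taken during consecutive pulses
    of the pulsed voltage source; a large change indicates motion."""
    for v1, v2 in zip(pulse_voltages, pulse_voltages[1:]):
        if abs(v2 - v1) > MOTION_THRESHOLD_V:
            return True  # would raise the signal that starts video capture
    return False
```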

US Pat. No. 10,922,939

INFORMATION MANAGEMENT SYSTEM FOR TAGGED GOODS

NEXITE LTD., Tel Aviv-Ja...

1. A system for providing access to information associated with electronically tagged goods, the system comprising:
at least one processor configured to:
store tag IDs of a plurality of tags;
receive a pairing between at least one particular tag ID and a product ID;
store information associated with the at least one particular tag ID and the product ID;
receive a pairing between the at least one particular tag ID and at least one authorized entity associated with the at least one particular tag ID, wherein the at least one authorized entity is associated with at least one of a current owner of a product corresponding to the product ID, a seller of the product, a manufacturer of the product, or a user of the product;
receive, from a requester, a query to identify at least one of the product ID, the information associated with the at least one particular tag ID, the information associated with the product ID, or the at least one authorized entity, the query including an encrypted tag ID of the particular tag;
decrypt the encrypted tag ID, to thereby look up the decrypted tag ID of the particular tag;
determine if the requester is the at least one authorized entity associated with the decrypted tag ID of the particular tag;
fulfill the query, if the requester is the at least one authorized entity; and
deny the query if the requester is not the at least one authorized entity.
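The decrypt-then-authorize query path can be sketched as below. The XOR "cipher" is only a stand-in for whatever real scheme such a system would use, and every class and field name here is hypothetical:

```python
KEY = 0x5A  # toy key; XOR stands in for a real decryption scheme

def decrypt(encrypted_tag_id):
    return encrypted_tag_id ^ KEY

class TagRegistry:
    def __init__(self):
        self.products = {}    # tag_id -> product_id
        self.authorized = {}  # tag_id -> set of authorized entities

    def pair(self, tag_id, product_id, entity):
        """Store the tag/product pairing and an authorized entity."""
        self.products[tag_id] = product_id
        self.authorized.setdefault(tag_id, set()).add(entity)

    def query(self, requester, encrypted_tag_id):
        """Decrypt, look up, then fulfill or deny based on authorization."""
        tag_id = decrypt(encrypted_tag_id)
        if requester in self.authorized.get(tag_id, set()):
            return self.products[tag_id]  # fulfill the query
        return None                       # deny the query
```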

US Pat. No. 10,922,938

SYSTEMS AND METHODS FOR PROVIDING AN IMMERSIVE EXPERIENCE OF A FACILITY CONTROL ROOM USING VIRTUAL REALITY

Honeywell International I...

1. A video surveillance system comprising:
a plurality of video surveillance cameras each for producing a corresponding video stream;
a server configured to receive and store the video streams from the plurality of video surveillance cameras;
a first control room having a video wall, wherein the video wall is operatively coupled to the server and is configured to concurrently display two or more of the video streams from two or more of the plurality of video surveillance cameras in a first arrangement on the video wall;
a remote virtual reality headset with a display; and
a virtual reality controller operatively coupled to the virtual reality headset and the server, wherein the virtual reality controller is configured to receive the same two or more video streams that are displayed on the video wall in the first control room from the server and to concurrently display the received two or more video streams on the display of the virtual reality headset in the same first arrangement as on the video wall.

US Pat. No. 10,922,937

METHODS AND APPARATUS TO LOCATE AND TRACK MOBILE DEVICE USERS FOR SECURITY APPLICATIONS

WIRELESS GUARDIAN, INC., ...

1. A security system to track one or more mobile devices, comprising:
a telemetry system to transmit and receive wireless signals to track a mobile device;
a digital system to wirelessly communicate with the tracked mobile device to obtain identification information of the mobile device;
a camera system to obtain an image of a person carrying the tracked mobile device, wherein the camera system provides a location of the person that appears in a field of view of one or more cameras associated with the camera system;
a server system in communication with the telemetry system, the digital system, and the camera system, the server system configured to:
determine, based on the transmitted and received wireless signals, a location of the mobile device at or near a venue;
process the identification information of the tracked mobile device to determine an attribute associated with a potential suspect;
obtain, from the camera system, the image of the person carrying the tracked mobile device;
analyze the obtained image to identify the person as a potential suspect;
perform a determination, at different instances, that the determined location of the mobile device is the same as or similar to the location of the person provided by the camera system; and
track, using the camera system and in response to the determination, the person that continues to appear in the field of view of the one or more cameras.

US Pat. No. 10,922,936

METHODS AND SYSTEMS FOR DETECTING PROHIBITED OBJECTS

CERNER INNOVATION, INC., ...

1. One or more non-transitory computer storage media having embodied thereon instructions that, when executed by one or more computer processors, cause the processors to perform a method comprising:
receiving image data from one or more 3D motion sensors located to provide the one or more 3D motion sensors with a view of an individual to be monitored in a monitored area;
based on information received from the one or more 3D motion sensors, detecting an object that is prohibited within the monitored area, wherein detecting the object that is prohibited comprises identifying the object using a plurality of reference points of the object;
determining that a position of the object is consistent with an imminent use by comparing a known position of an improper use of the object identified to the position of the object; and
initiating an alert protocol upon determining that the position of the object is consistent with the imminent use.

US Pat. No. 10,922,935

DETECTING A PREMISE CONDITION USING AUDIO ANALYTICS

Vivint, Inc., Provo, UT ...

1. A method for detecting a premise condition, comprising:
detecting a sound with a processor;
determining, with the processor, one or more attributes of the sound;
determining, with the processor, a degree of correlation between the one or more attributes and one or more sound signatures;
determining, with the processor, that the sound belongs to a first recognized class of sounds and a second recognized class of sounds based at least in part on the degree of correlation; and
causing, with the processor, a first predetermined response to occur based at least in part on the first recognized class to which the sound belongs and a second predetermined response to occur based at least in part on the second recognized class to which the sound belongs.
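The claim's distinctive point — one sound may belong to two recognized classes at once, each triggering its own predetermined response — can be illustrated with cosine similarity playing the role of the "degree of correlation". The signatures, responses, and cutoff are all assumptions:

```python
# Hypothetical signatures: class name -> reference feature vector.
SIGNATURES = {"glass_break": [0.9, 0.1], "alarm_tone": [0.8, 0.3]}
RESPONSES  = {"glass_break": "notify_security", "alarm_tone": "record_audio"}
MIN_CORR   = 0.9  # hypothetical correlation cutoff

def correlation(a, b):
    """Cosine similarity as a simple 'degree of correlation'."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(x * x for x in b) ** 0.5
    return dot / (na * nb)

def classify_and_respond(features):
    """Fire the predetermined response of EVERY class the sound matches."""
    return [RESPONSES[name]
            for name, sig in SIGNATURES.items()
            if correlation(features, sig) >= MIN_CORR]
```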

US Pat. No. 10,922,934

VIBRATION SYSTEM AND METHOD OF ADDING TACTILE IMPRESSION

AAC Technologies Pte. Ltd...

1. A vibration system, comprising:
an external device,
a vibrator detachably fixed to the external device,
a processor in signal transmission with the vibrator, and
a control device configured to control the processor,
wherein the processor is configured to receive a wireless signal containing vibration characteristic data sent by the control device, and to parse the wireless signal to obtain a drive signal, and the vibrator is configured to receive the drive signal to generate vibration;
wherein the vibrator is fixed to the external device by an adhesive layer; or
the vibrator is detachably fixed to the external device via a body; and the body includes a body portion attached to a rear cover of the external device and a catch portion extending from the body portion to catch the external device.

US Pat. No. 10,922,933

SYSTEMS AND METHODS FOR EFFICIENT SEATING IN AMUSEMENT PARK VENUES

Universal City Studios LL...

1. A system for providing seating guidance for an attraction ride vehicle of an amusement park attraction, comprising:
a computing device associated with a display;
one or more processors of the computing device; and
one or more memory devices of the computing device storing information about the attraction ride vehicle and storing instructions corresponding to a seating application, wherein the seating application, when executed by the one or more processors, causes the computing device to:
provide an interface to receive seating input data from an operator for a guest or a group of guests;
process the seating input data in combination with the information about the attraction ride vehicle based on seating logic to provide seat location information in the attraction ride vehicle for the guest or the group of guests;
output the seat location information via the display;
receive data indicative of unoccupied seats in the attraction ride vehicle; and
generate a metric for the operator based on the data indicative of unoccupied seats in the attraction ride vehicle.

US Pat. No. 10,922,932

ACOUSTIC USER INTERFACE

ROCHE DIABETES CARE, INC....

1. A method for guiding a user in interaction with an insulin pump comprising:
guiding the user by providing to the user an acoustic signal in response to a first user interaction and an acoustic signal in response to a second user interaction, each acoustic signal representing either a state which the insulin pump is in or a state which the insulin pump is in a process of assuming, each acoustic signal comprising at least one of:
(i) a first acoustic signal comprising at least five descending halftones, wherein a duration of each halftone is independently selected from a range of from 0.025 s to 5 s;
(ii) a second acoustic signal comprising at least five ascending halftones, wherein the duration of each halftone is independently selected from a range of from 0.025 s to 5 s;
(iii) a third acoustic signal comprising at least seven ascending halftones, wherein the duration of each halftone is independently selected from a range of from 0.025 s to 5 s;
(iv) a fourth acoustic signal comprising at least two alternating tritones, wherein the duration of each tritone is independently selected from a range of from 0.025 s to 5 s; and
(v) a fifth acoustic signal of four tones, wherein the duration of each tone is independently selected from a range of from 0.025 s to 0.5 s, followed by a single tone with a duration of at least twofold of any one of the preceding four tones,
wherein at least one tone of the acoustic signals of (i) to (v) has a frequency in a range of from 1500 Hz to 4000 Hz.

US Pat. No. 10,922,931

INFORMATION PROCESSING APPARATUS, RECEIPT PRINTER, AND INFORMATION PROCESSING METHOD

Seiko Epson Corporation, ...

1. An information processing apparatus comprising:
a processor, wherein the processor is configured to:
acquire print data from a receipt printer that prints a printed matter based on the print data,
determine a type of the printed matter based on text data included in the print data,
select a script corresponding to the type from a plurality of scripts,
apply the selected script to the text data, and
determine the type by a learning model that is machine-learned based on teacher data where the text data and the type of the printed matter are associated with each other.

US Pat. No. 10,922,930

SYSTEM FOR PROVIDING ON-DEMAND RESOURCE DELIVERY TO RESOURCE DISPENSERS

BANK OF AMERICA CORPORATI...

1. A system for providing on-demand resource delivery to resource dispensers, the system comprising:
a memory device; and
one or more processing devices operatively coupled to the memory device, wherein the one or more processing devices are configured to execute computer-readable program code to:
monitor, continuously, a plurality of sets of resource notes at a plurality of nodes across a physical geographic region, wherein each of the plurality of sets of resource notes are within physical containers each comprising an associated unique identifier tag, and wherein monitoring comprises:
scanning the associated unique identifier tag for each of the plurality of sets of resource notes received to determine, based on the associated unique identifier tag for each of the plurality of sets of resource notes, denomination data associated with each of the plurality of sets of resource notes in real-time as they are received;
determining when each of the plurality of sets of resource notes are dispensed from each of the plurality of nodes and automatically scanning the associated unique identifier tag for each of the plurality of sets of resource notes to determine the denomination data associated with each of the plurality of sets of resource notes in real-time as they are dispensed, wherein determining the denomination data associated with an individual set of resource notes of the plurality of sets of resource notes at an individual node of the plurality of nodes comprises:
receiving an indication that a first entity associated with the individual node has scanned a unique identifier tag associated with the individual set of resource notes;
in response to receiving the indication that the unique identifier tag associated with the individual set of resource notes has been scanned, transmitting a deposit alert over a communication channel to a computing device of the first entity, wherein the deposit alert activates a deposit application stored on the computing device of the first entity to display a deposit portal comprising input fields and a request for the first entity to provide entity input associated with contents of the container that includes the denomination data associated with the individual set of resource notes; and
receiving, from the computing device of the first entity, the entity input associated with the contents of the container and associating the denomination data associated with the individual set of resource notes with the unique identifier for the individual set of resource notes; and
storing monitored data for each of the plurality of nodes in a resource grid database, wherein the stored monitored data comprises nodal location data, time data associated with each of the plurality of sets of resource notes that are received and dispensed at each of the plurality of nodes, and the denomination data for each of the plurality of the received and the dispensed sets of resource notes;
determine, based on the stored monitored data, that a first node of the plurality of nodes requires an adjustment of an amount of resource notes present at the first node;
transmit, in response to determining that the first node requires the adjustment of the amount of resource notes present at the first node, an instruction to cause a delivery vehicle to provide the adjustment to the amount of resource notes present at the first node;
determine a trend of the amount of resource notes that are present at the first node over a period of time;
determine, based on the trend, a predicted amount of resource notes that are expected to be present at the first node at a future point in time; and
determine that the predicted amount of resource notes that are expected to be present at the first node at the future point in time is below or above a threshold amount of resource notes.
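The trend and prediction steps at the end of the claim amount to fitting a trend to past note amounts at a node and extrapolating to a future point in time. A least-squares sketch, with all thresholds hypothetical:

```python
def linear_trend(amounts):
    """Least-squares slope/intercept over evenly spaced observations."""
    n = len(amounts)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(amounts) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, amounts))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    return slope, mean_y - slope * mean_x

def predict(amounts, steps_ahead):
    """Extrapolate the fitted trend to a future point in time."""
    slope, intercept = linear_trend(amounts)
    return intercept + slope * (len(amounts) - 1 + steps_ahead)

def needs_adjustment(amounts, steps_ahead, low, high):
    """Flag the node when the predicted amount falls below or above
    the threshold band [low, high]."""
    p = predict(amounts, steps_ahead)
    return p < low or p > high
```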

US Pat. No. 10,922,929

RAPID PLAY POKER GAMING DEVICE

ACRES TECHNOLOGY, Las Ve...

1. At least one non-transitory computer readable medium that stores a plurality of instructions, which when executed by at least one processor cause the at least one processor to:
receive a player input via a game actuating button associated with a poker gaming device to activate a poker game on the poker gaming device;
randomly select a plurality of cards to be used in the poker game;
display on a video display associated with the poker gaming device a first portion of the plurality of cards to the player as a dealt poker hand;
analyze the plurality of randomly selected cards to determine if the plurality of randomly selected cards can result in a minimum winning poker hand;
inform the player of at least one of the possible winning poker hands and allow the player to draw cards from a second portion of the plurality of cards not used in the dealt poker hand to replace cards used in the dealt poker hand when a minimum winning poker hand is determined to be possible from the plurality of randomly selected cards; and
prevent the player from drawing additional cards from the second portion of the plurality of cards not used in the dealt poker hand when a minimum winning poker hand is determined to not be possible from the plurality of randomly selected cards.
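Whether the randomly selected cards "can result in a minimum winning poker hand" can be checked exhaustively over 5-card subsets. The sketch below simplifies the minimum hand to a pair of jacks or better and ignores straights, flushes, and the like — an assumption for brevity, not the patent's rule:

```python
from itertools import combinations

RANKS = "23456789TJQKA"  # ascending rank order

def jacks_or_better(hand):
    """True if a 5-card hand holds at least a pair of jacks
    (simplified 'minimum winning poker hand')."""
    counts = {}
    for card in hand:  # card encoded as rank + suit, e.g. "JH"
        counts[card[0]] = counts.get(card[0], 0) + 1
    return any(c >= 2 and RANKS.index(r) >= RANKS.index("J")
               for r, c in counts.items())

def winning_hand_possible(all_cards):
    """Can ANY 5 of the randomly selected cards form the minimum hand?
    Brute force is fine at this scale (e.g. C(10, 5) = 252 subsets)."""
    return any(jacks_or_better(h) for h in combinations(all_cards, 5))
```

When this returns False, the device would skip the draw phase entirely, which is the "rapid play" behavior the claim describes.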

US Pat. No. 10,922,928

LOTTERY DEVICE AND LOTTERY METHOD

SEGA SAMMY CREATION INC.,...

1. An electronic lottery device comprising an electronic control unit including an electronic data processor and an electronic data memory, an electronic data storage unit, a random data generating unit and at least one electronic display, wherein the electronic control unit executes a program stored in the storage unit to thereby implement:
an associating function for randomly associating, based on data from the random data generating unit, a plurality of different marks on a plurality of different positions on a virtual object in which a lottery result is to be indicated;
an operation control function for simulating a motion of the virtual object in a virtual space by physical operation in response to a user input to the electronic lottery device;
a determining function for determining a lottery result on the basis of the mark associated with a particular position on the virtual object as determined according to a state of the virtual object in the simulation result; and
a displaying function for displaying the simulated motion of the virtual object in the virtual space and the lottery result on the electronic display,
wherein the timing of the performance of the associating function is variable and includes at least a timing where the marks associated with the positions of the virtual object are displayed at the moment the user input is received.

US Pat. No. 10,922,927

HYBRID CASINO DICE GAME

ARUZE GAMING (HONG KONG) ...

1. A dice game system comprising:
a play field comprising:
at least one play field display device arranged horizontally so that images displayed by the at least one play field display device are viewable from directly above the at least one play field display device;
at least one of a rigid protective material, a flexible protective layer, or a play surface, positioned directly above the at least one play field display device so that images displayed by the at least one play field display device are displayed through the at least one of the rigid protective material, the flexible protective layer, or the play surface;
a bumper wall positioned adjacent to the play field, the bumper wall comprising a bumper surface comprised of padding material;
a plurality of player stations, each player station comprising:
at least one player station memory device;
a player station input device; and
at least one player station processor in communication with the at least one player station memory device and the player station input device;
a dealer station, the dealer station comprising:
at least one dealer station memory device;
a dealer station input device; and
at least one dealer station processor in communication with the at least one dealer station memory device and the dealer station input device;
at least one game controller memory device; and
at least one game controller processor, which is configured, with the play field, the plurality of player stations, the dealer station, and the at least one game controller memory device to:
cause the at least one play field display device to display a dice game wagering area;
receive a communication from at least one player station of the plurality of player stations indicating a wager on a next play of the dice game;
cause the at least one play field display device to display a representation of the received wager;
cause an indication that physical dice may be thrown by a player to be communicated to the dealer station;
receive a communication that indicates the results of thrown physical dice; and
determine the results of the received wager based on the results of the thrown physical dice and cause a credit meter associated with the at least one player station to increment when the determined results of the received wager is a winning determination.

US Pat. No. 10,922,926

METHOD, DEVICE, AND COMPUTER-READABLE MEDIUM FOR WAGERING ON A SKILLS-BASED DIGITAL GAMING COMPETITION WITH AN OUT-OF-GAME PEER WAGERING MODULE

OKLETSPLAY INCORPORATED, ...

1. One or more transactional servers, comprising:
processing circuitry configured to:
communicate with a computing device storing a peer-wagering module for wagering on a skills-based digital gaming competition;
send, to the peer-wagering module, potential game data that includes information on at least one third party game a player can play;
receive, from the peer-wagering module, selection information from a player that includes at least one selected game instance of at least one selected third party game from among the at least one third party game and at least one wager amount the player wishes to wager on the at least one selected game instance, wherein the at least one selected third party game is accessed by the computing device, and the peer-wagering module is external and distinct from the at least one selected game instance and the at least one selected third party game; and
generate game instance match ID data or receive game instance match ID data generated by the peer wagering module, wherein the game instance match ID data includes at least one of:
credential data associated with the player,
a wager amount of the player, and
at least one selected game instance; and
wherein the at least one selected game instance is activated for the computing device or another computing device for use by the player.

US Pat. No. 10,922,925

GAMING MACHINE WITH A FIXED WILD SYMBOL

ARISTOCRAT TECHNOLOGIES A...

1. A gaming machine, comprising:
a display unit; and
a game controller executing a program, which causes the game controller to at least:
for a base game:
cause the display unit to display a base game outcome comprising a first plurality of symbols at a plurality of symbol positions of the display unit; and
award a series of free games in response to the base game outcome comprising at least a first threshold number of scatter symbols;
for each free game in the series of free games:
cause the display unit to display a feature game outcome comprising a second plurality of symbols at the plurality of symbol positions of the display unit;
hold a bonus symbol of the feature game outcome in a superimposed representation over its respective symbol position for remaining free games in the series of free games; and
increase a quantity of remaining free games in the series of free games in response to at least a second threshold number of scatter symbols in the feature game outcome, including any scatter symbol underlying the bonus symbol held in the superimposed representation.

US Pat. No. 10,922,924

SELECTABLE INTERMEDIATE RESULT INTERLEAVED WAGERING SYSTEM

Gamblit Gaming, LLC, Mon...

1. An electronic gaming machine for selectable intermediate result interleaved wagering, comprising:
a processor;
and a memory storing processor-executable instructions that when executed by the processor cause the processor to:
determine application telemetry;
generate a wager request based on the application telemetry;
generate a wager outcome based on the wager request;
generate an intermediate result based on the wager outcome;
determine application resources associated with the intermediate result;
integrate the application resources associated with the intermediate result into an interactive application;
generate a visual display of a wager process; and
generate a visual display of the wager outcome.

US Pat. No. 10,922,923

SYSTEMS AND METHODS FOR DISPLAYING AN OVERSIZED SYMBOL ACROSS MULTIPLE REELS

Aristocrat Technologies A...

1. An electronic gaming machine comprising:
a display device;
a game controller; and
a memory device storing instructions, which when executed by the game controller, cause the game controller to at least:
cause the display device to display a primary matrix, the primary matrix including a trigger column and a plurality of combinable columns, the trigger column and the plurality of combinable columns corresponding to a first plurality of virtual reels having respective pluralities of symbols defined thereon, the trigger column being predesignated as a column within the primary matrix for evaluation to determine if a trigger condition has been met,
cause the first plurality of virtual reels to spin and stop,
evaluate the trigger column to determine whether a stack of at least three symbols appears in the trigger column when the first plurality of virtual reels stops, wherein the stack of at least three symbols is defined on a single reel strip corresponding to the trigger column, a stack of at least three wild symbols appearing in the trigger column being the trigger condition,
replace, in response to the appearance of the stack of at least three wild symbols in the trigger column, two or more reel strips corresponding to two or more columns of the plurality of combinable columns with an oversized reel strip corresponding to a combined column, the combined column defined by the two or more columns and excluding the trigger column,
cause the display device to display a secondary matrix, the secondary matrix including the trigger column and the combined column, the trigger column and the combined column corresponding to a second plurality of virtual reels having respective pluralities of symbols defined thereon, wherein the respective symbols appearing in the combined column are oversized to span a space otherwise occupied by the plurality of combinable columns in the primary matrix, and
while the stack of at least three symbols remains fixed in the trigger column, cause the remaining virtual reels of the second plurality of virtual reels to spin and stop, wherein at least a portion of at least one oversized symbol appears in the combined column.

US Pat. No. 10,922,922

GAMING SYSTEM WITH FEATURE VARIATION BASED ON PLAYER INPUT

Aristocrat Technologies A...

1. A game controller for a gaming system, the game controller configured to execute instructions stored in a memory, which, when executed by the game controller, cause the game controller to at least:
receive an input to initiate a play of a game;
in response to a trigger event, display a plurality of selectable games from one of a first table of feature games and a second table of feature games, the displayed plurality of selectable games is the first table of feature games when the input is a first input, the displayed plurality of selectable games is the second table of feature games when the input is a second input, each of the first table and second table includes a plurality of games, each game of the plurality of games includes a corresponding set of prize modifiers capable of being applied to a prize awarded during the respective game of the plurality of games when a trigger condition is satisfied during the respective game;
receive a selection indicative of a selected game selected from the displayed plurality of selectable games; and
conduct play of the selected game using the corresponding set of prize modifiers.

US Pat. No. 10,922,921

GAMING SYSTEM AND METHOD FOR ADDING PLAYER INFLUENCE TO GAME OUTCOMES

IGT, Las Vegas, NV (US)

1. A gaming system comprising:
a processor; and
a memory device which stores a plurality of instructions, which when executed by the processor, cause the processor to:
receive data associated with a request to generate random numbers,
obtain player controlled data, the player controlled data comprising any input made by the player in which the player does not know that the input will influence a game outcome of a play of a game,
modify a random number generator, wherein the modification is based on the obtained player controlled data, wherein the modification of the random number generator comprises a modification of a quantity of adjacent bits of the random number generator; and
determine, via the modified random number generator, a plurality of random numbers, wherein the plurality of random numbers are deterministic of the game outcome of the play of the game.
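The distinctive limitation here is modifying "a quantity of adjacent bits" of the random number generator based on player-controlled data. A hedged sketch, assuming the modification is applied to the generator's seed — the patent does not say this is how IGT implements it, and all names below are invented:

```python
import random

def modify_seed(seed, player_bits, start, width):
    # Replace `width` adjacent bits of the seed, starting at bit `start`,
    # with bits derived from player-controlled input (hypothetical scheme).
    mask = ((1 << width) - 1) << start
    return (seed & ~mask) | ((player_bits & ((1 << width) - 1)) << start)

base_seed = 0b1010_1010_1010_1010
modified = modify_seed(base_seed, player_bits=0b0110, start=4, width=4)
rng = random.Random(modified)                    # the "modified" generator
numbers = [rng.randrange(52) for _ in range(3)]  # deterministic given the seed
```

Because the seed is fully determined by the base value and the player bits, the resulting numbers are "deterministic of the game outcome" in the claim's sense.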

US Pat. No. 10,922,920

METHOD OF GAMING, A GAMING SYSTEM AND A GAME CONTROLLER

Aristocrat Technologies A...

1. An electronic gaming machine comprising:
a credit input operable to receive an input credit to establish a credit balance;
at least one display device providing a first plurality of display positions and a second plurality of display positions; and
a game controller comprising a processor, and a memory, the memory storing a symbol set and instructions, which, when executed, cause the game controller to at least:
generate a random outcome by a random number generator,
select a first plurality of symbols from the symbol set for display at the first plurality of display positions and a second plurality of symbols from the symbol set for display at the second plurality of display positions based on the random outcome,
display, after selecting the first plurality of symbols and the second plurality of symbols, the first plurality of symbols selected at the first plurality of display positions in a first graphical appearance, and the second plurality of symbols selected at the second plurality of display positions in a second graphical appearance that is different from the first graphical appearance,
evaluate the first plurality of symbols selected for the first plurality of display positions in the first graphical appearance for awards,
move a moveable symbol visually from a first display position of the first plurality of display positions to a second display position of the second plurality of display positions after displaying the first plurality of symbols selected and the second plurality of symbols selected, and
modify visually at least one symbol of the second plurality of display positions from the second graphical appearance to the first graphical appearance as the moveable symbol moves from the first display position to the second display position.

US Pat. No. 10,922,918

PLAYER-ENTRY ASSIGNMENT AND ORDERING

Rational Intellectual Hol...

1. A computer-implemented method, comprising:
assigning a first player-entry to a table so that said first player-entry can participate in a hand of a particular card game at said table, wherein there is a plurality of players each having one or more respective player-entries for participating in a respective hand of said card game,
when game play terminates for a player-entry that is actively participating in a hand of said card game at a first virtual table by a request to fold received from a first player out of turn from said hand, moving the first player's player-entry to a second virtual table selected by:
identifying other player(s) having player-entries at the first virtual table at a time in which game play terminates at the first virtual table for the first player-entry;
for a first player-entry of a first player, identifying an assignable table for said first player-entry from a plurality of new tables for said card game, wherein a new table is an assignable table for the first player-entry if the assignment of the first player-entry to said new table cannot itself provide the first player with further information about a hand in which any other player-entry of said first player is actively participating in addition to information about said hand that is available to said first player only by virtue of the participation of said already assigned first player-entry in said hand; and
assigning the first player-entry to the identified assignable table.
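The assignability test in this claim exists to stop a player from gaining information about one of their own active hands through a new seat assignment. Reduced to its simplest form — no table may hold two entries of the same player — it might look like the following (all names and the data layout are hypothetical):

```python
def is_assignable(candidate_table, player_id, entries_by_table):
    # A table is assignable for this player's new entry only if none of the
    # player's other entries is already seated there; otherwise the new seat
    # could reveal information about a hand the player is already in.
    return all(e["player"] != player_id
               for e in entries_by_table.get(candidate_table, []))

def pick_table(player_id, new_tables, entries_by_table):
    # Scan candidate new tables in order and take the first assignable one.
    for t in new_tables:
        if is_assignable(t, player_id, entries_by_table):
            return t
    return None

tables = {"T1": [{"player": "alice"}], "T2": [{"player": "bob"}]}
choice = pick_table("alice", ["T1", "T2"], tables)  # T1 holds alice already
```

The actual claim is broader — it also considers information leaked via the *other* players seated at the candidate table — but the shape of the check is the same.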

US Pat. No. 10,922,917

SYSTEM AND METHOD FOR ON-LINE GAME BASED ON CONSUMER WISH LIST

Transform SR Brands LLC, ...

1. A method of operating a system for assessing personal preferences and interests of end-users by engaging one or more end-users in a game in which the end-user may be given a chance to win a product item from a collection of product items selected by the end-user, the method comprising:
creating, in memory of a computer system configured to communicate with user devices of a plurality of end-users, a representation of each of one or more first games of chance, wherein each representation of a first game of chance comprises a respective starting point in time, a respective ending point in time, a respective maximum number of prizes to be awarded, and a respective set of criteria;
for each first game of chance of the one or more first games of chance, creating in memory of the computer system, a representation of a prize instance for each of the respective maximum number of prizes to be awarded, wherein the representation of each prize instance comprises a point in time assigned randomly within a time interval defined by the respective starting point in time and the respective ending point in time of the first game of chance, and an indicator representative of whether the prize instance is available to be awarded;
receiving an end-user request to participate in one first game of chance of the one or more first games of chance at a particular point in time, wherein the request comprises information identifying the end-user and information identifying a product item having associated characteristics;
determining whether the characteristics of the product item meet the criteria of any of the one or more first games of chance;
storing the information identifying the product item in the collection of product information of the end-user, if the characteristics of the product item do not meet the criteria of any of the one or more first games of chance;
selecting one first game of chance with the fewest criteria from the one or more first games of chance based upon the characteristics of the product item, if the characteristics of the product item meet the criteria of at least one of the one or more first games of chance; and
awarding the product item to the end-user, if the particular point in time is the same as or after the point in time assigned to at least one prize instance of the selected first game of chance and the indicator of the at least one prize instance indicates the at least one prize instance is available to be awarded,
wherein the collection of product information of the end-user comprises a collection of information for specific product items selected by the end-user while browsing a particular e-commerce web site, for persistent storage by the particular e-commerce web site after the end-user leaves the particular e-commerce web site.
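The prize mechanics here reduce to a classic "randomly timed instant win": each prize instance gets a random award time inside the game window, and a request wins if some still-available instance's time has already passed. A minimal sketch (the structure and names are illustrative assumptions, not the patented system):

```python
import random

def create_prize_instances(start, end, max_prizes, rng):
    # One instance per available prize, each with a random award time in
    # [start, end) and an availability flag, mirroring the claim.
    return [{"time": rng.uniform(start, end), "available": True}
            for _ in range(max_prizes)]

def try_award(now, instances):
    # Award if a still-available instance's assigned time has passed.
    for inst in instances:
        if inst["available"] and now >= inst["time"]:
            inst["available"] = False
            return True
    return False

rng = random.Random(1)
prizes = create_prize_instances(0.0, 100.0, max_prizes=3, rng=rng)
won_late = try_award(100.0, prizes)  # at the end time every instance is due
```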

US Pat. No. 10,922,916

ELECTRONIC GAMING MACHINE AND METHOD FOR ADDING ONE OR MORE ROWS OF SYMBOL POSITIONS TO AN ARRAY OF SYMBOL POSITIONS IN AN ELECTRONIC WAGERING GAME

Aristocrat Technologies A...

1. An electronic gaming machine comprising:
a display device;
a processor; and
a memory storing instructions which when executed by the processor, cause the processor to at least:
control the display device to display an initial array of symbol positions, the initial array of symbol positions defining first symbol positions including a plurality of rows of symbol positions;
populate each symbol position in the initial array of symbol positions with a first plurality of symbols;
evaluate the first plurality of symbols displayed in the initial array of symbol positions to determine a first outcome;
add, in response to a first designated symbol being populated at a symbol position of the first symbol positions, a new row of symbol positions to the initial array of symbol positions to create an updated array of symbol positions;
populate each symbol position of the new row of symbol positions with a second plurality of symbols, wherein the first designated symbol is populated at a symbol position of the new row of symbol positions adjacent to the first designated symbol at the symbol position of the first symbol positions; and
repopulate, other than symbol positions where the first designated symbol is populated, each symbol position of the first symbol positions and the new row of symbol positions with symbols, such that the first designated symbol is persistently overlaid upon the symbol position of the first symbol positions and the symbol position of the new row of symbol positions.

US Pat. No. 10,922,915

APPARATUS FOR CONTROLLING ACCESS TO AND USE OF PORTABLE ELECTRONIC DEVICES

Renovo Software, Inc., R...

1. A method for controlling access to a portable electronic device, the method comprising:
securing the portable electronic device to a dispenser system;
receiving a request from a user to access the portable electronic device;
querying a portable electronic device management system to retrieve rights and restrictions of the user;
enabling a program on the portable electronic device, wherein the program corresponds to the rights of the user;
locking a second program on the portable electronic device, wherein the second program corresponds to the restrictions of the user; and
in response to the enabling and locking, dispensing the portable electronic device from the dispenser system.
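The claimed method is a strict sequence: enable the permitted program, lock the restricted one, and only then dispense. A toy sketch of that ordering (the device model and field names are invented for illustration):

```python
def dispense(device, user_rights, user_restrictions):
    # Enable programs the user has rights to, lock restricted ones,
    # then release the device — the order follows the claimed method.
    for app in user_rights:
        device["enabled"].add(app)
    for app in user_restrictions:
        device["locked"].add(app)
    device["dispensed"] = True
    return device

dev = dispense({"enabled": set(), "locked": set(), "dispensed": False},
               user_rights={"email"}, user_restrictions={"camera"})
```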

US Pat. No. 10,922,914

PAPER SHEET PROCESSING DEVICE

UNIVERSAL ENTERTAINMENT C...

1. A paper sheet processing apparatus comprising:
a stand;
a main body comprising a conveyer that conveys a paper sheet, the main body mountable and demountable from the stand;
a housing that stores the paper sheet from the conveyer, the housing mountable and demountable from the stand;
a transmitter that is disposed on the main body and sends wirelessly information of the paper sheet; and
a controller,
wherein the housing comprises:
an antenna that receives wirelessly the information from the transmitter and is installed on an upper wall of the housing;
a storage that stores the information of the paper sheet received through the antenna;
a first plate on which the paper sheet is to be stacked;
a second plate; and
a pair of regulatory members disposed on both sides of the first plate,
wherein when detecting insertion of the paper sheet in a state that the second plate is brought between the pair of regulatory members, the controller moves the second plate to form an opening between the pair of regulatory members such that the paper sheet passes through the opening, and
wherein in the state that the second plate is brought between the pair of regulatory members, the second plate is brought between the pair of regulatory members such that the opening through which the paper sheet passes is occluded,
wherein the apparatus further comprises:
a magnet that is disposed on the first plate and generates a magnetic field when a number of paper sheets stacked in the housing reaches a threshold value; and
a magnetic sensor that is disposed on the main body and receives the magnetic field,
wherein the controller:
starts counting a number of paper sheets that are stacked in the housing after the magnetic sensor receives the magnetic field, and
stops storing the paper sheet into the housing when the counted number is ten or more, and
wherein the antenna includes a loop antenna having a loop surface such that a communication direction of the antenna is perpendicular to the loop surface,
wherein the transmitter has a first surface substantially parallel to the loop surface and broader than a second surface perpendicular to the loop surface,
wherein the housing is slidably mounted and demounted on the stand,
wherein an upper wall of the housing is parallel to an opposing surface side of the stand during the sliding of the housing into the stand such that the loop surface and the first surface of the transmitter are substantially parallel to each other,
wherein the first surface of the transmitter and the loop surface of the antenna are located substantially parallel to a direction of the movement of the housing when mounting and demounting,
wherein the loop surface is positioned such that the antenna receives the information from the transmitter even when the housing is not located at a predetermined position after the housing is mounted on the frame,
wherein the loop surface of the antenna and the first surface of the transmitter entirely overlap each other when the housing is located at the predetermined position, and
wherein the upper wall of the housing entirely overlaps the first surface of the transmitter when the housing is at the predetermined position.

US Pat. No. 10,922,913

METHOD AND APPARATUS FOR DETECTING A SECURITY THREAD IN A VALUE DOCUMENT

1. A method for detecting a security thread in a value document, in which
magnetic data for sites on the value document are employed which data represent a magnetic property of the value document at the site,
check sites on the value document are determined employing the sites, and
from the check sites a straight line is determined, along which or on which at least some of the check sites lie and which represents a location of the security thread.
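The core of this method — determining a straight line on which at least some of the check sites lie — can be done with an ordinary least-squares fit. A sketch under that assumption (the patent does not specify the fitting method):

```python
def fit_line(points):
    # Ordinary least-squares fit y = a*x + b through the check sites;
    # the fitted line stands in for the security thread's location.
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

# Check sites along a (noise-free) thread, purely for illustration.
sites = [(0, 1.0), (1, 1.5), (2, 2.0), (3, 2.5)]
slope, intercept = fit_line(sites)
```

With noisy check sites a robust fit (e.g. RANSAC) would be the natural refinement, since only "at least some" of the check sites need lie on the line.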

US Pat. No. 10,922,912

GRADING CONTROL SYSTEM AND CONTROL DEVICE FOR MERCHANDISE SECURITY

Hangzhou Langhong Technol...

1. A grading control system for merchandise security, comprising:
at least two controllers and a plurality of monitor devices, the at least two controllers and the plurality of monitor devices communicating via a specific system identification code;
the controllers comprise at least one main controller and at least one auxiliary controller;
the monitor device has an identifiable communication interface communicable with the controller, and is configured to be communicable with the controller via the identifiable communication interface, and to be initialized by a controller in communication therewith;
the monitor device initialized by the auxiliary controller can be controlled by the main controller and the auxiliary controller;
the controller is configured to be capable of locking, unlocking, placing an operational state, and/or placing an inoperable state on the monitor device that is initialized therewith;
wherein the auxiliary controller controls a certain type of the monitor device, and the main controller controls a plurality of types of monitor devices;
wherein the system identification code comprises a channel number and an address number;
the channel number is a communication code used for communication between devices; and
the address number comprises a communication code, a device code, and a privilege code, and the communication code is lower 2 Bytes of a serial number built into the controller.

US Pat. No. 10,922,911

COLLAPSIBLE AND DEPLOYABLE INTERACTIVE STRUCTURE AND SYSTEM OF USE

BANK OF AMERICA CORPORATI...

1. A system for use of a collapsible and deployable interactive structure, the system comprising:
an interactive structure that is collapsible and deployable, wherein the interactive structure comprises at least a door with a locking mechanism, an interior display, and a physical presence sensor;
a memory device; and
a processing device operatively coupled to the memory device, wherein the processing device is configured to execute computer-readable program code to:
initiate communication with a user;
authenticate the user based on the communication with the user;
unlock the locking mechanism of the door in response to authenticating the user;
receive an indication from the physical presence sensor that the user is inside the interactive structure, wherein the indication from the physical presence sensor further comprises an instruction to the interior display to enter into an active state;
cause the interior display to communicate with the user to determine a desired action associated with the user, wherein the desired action performed by the interior display comprises a predicted desired action, and wherein determining the desired action comprises:
determining a purpose of the user for engaging with the interactive structure; and
predicting a desired action for the user based on the purpose of the user for engaging with the interactive structure; and
perform the predicted desired action associated with the user and display information associated with the predicted desired action to the user.

US Pat. No. 10,922,910

KEY INFORMATION SHARING SYSTEM, DELIVERY DEVICE AND USER TERMINAL

TOYOTA JIDOSHA KABUSHIKI ...

1. A key information sharing system that allows key information as first information to be shared, the first information being associated with an object equipped with a control device, the control device performing a predetermined control to the object when the control device receives the first information from an external terminal, the key information sharing system comprising:
a server configured to deliver the first information; and
a first portable terminal possessed by a user, the first portable terminal configured to receive the first information delivered from the server, wherein
the server includes a processor configured to add second information to the first information that is delivered to the first portable terminal, the second information being information that allows the first information to be transferred between the first portable terminal and a second portable terminal possessed by a third-party without the server,
the first portable terminal includes a terminal-to-terminal communication interface circuit configured to transmit the first information to the second portable terminal in response to an input operation by the user, when the first portable terminal receives the first information to which the second information has been added, from the server, and
the processor of the server sets a restriction content for the predetermined control, based on a function restriction request transmitted from the first portable terminal to the server, and adds third information to the first information, the restriction content being contained in the third information.

US Pat. No. 10,922,908

SYSTEM AND METHOD FOR VEHICLE SENSOR DATA STORAGE AND ANALYSIS

THE BOEING COMPANY, Chic...

1. A method comprising:
obtaining, by a processor, first data associated with operation of a vehicle, wherein the first data comprises sensor data from one or more sensors onboard the vehicle, and wherein the first data indicates one or more parameter values of a first parameter measured by the one or more sensors and one or more timestamps associated with the one or more parameter values;
estimating, by the processor, a first amount of storage space that would be used to store a first portion of the first data in accordance with a first storage scheme;
estimating, by the processor, a second amount of storage space that would be used to store the first portion of the first data in accordance with a second storage scheme that is different than the first storage scheme; and
storing the first portion of the first data in a memory in accordance with the first storage scheme based on the first amount of storage space satisfying a first threshold, wherein the first threshold is based on the second amount of storage space.
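This method stores the data under the first scheme only when its estimated footprint satisfies a threshold derived from the second scheme's estimate. A sketch, arbitrarily taking scheme A as raw JSON and scheme B as DEFLATE-compressed JSON — both concrete choices are illustrative assumptions, not Boeing's schemes:

```python
import json
import zlib

def estimate_raw_size(samples):
    # Scheme A: store every (timestamp, value) pair as JSON text.
    return len(json.dumps(samples).encode())

def estimate_compressed_size(samples):
    # Scheme B: the same payload after DEFLATE compression.
    return len(zlib.compress(json.dumps(samples).encode()))

def choose_scheme(samples):
    raw = estimate_raw_size(samples)
    packed = estimate_compressed_size(samples)
    # Keep the raw form only if it is no larger than the alternative:
    # the threshold for scheme A is based on scheme B's estimate.
    return "raw" if raw <= packed else "compressed"

samples = [[t, 20.0] for t in range(200)]  # highly repetitive sensor trace
scheme = choose_scheme(samples)
```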

US Pat. No. 10,922,907

INTERACTIVE AUGMENTED REALITY FUNCTION

eBay Inc., San Jose, CA ...

1. An apparatus comprising:
a vehicle tracking portal executable by a processor on a mobile device of a user associated with a vehicle and configured to:
obtain, via a camera of the mobile device, a live video feed of a vehicle component;
obtain diagnostic information related to the vehicle component currently displayed in the live video feed;
cause diagnostic information related to the vehicle component to be overlaid on the live video feed of the vehicle component on the mobile device;
based at least upon the diagnostic information, provide step-by-step directions on how to replace or repair the vehicle component; and
cause the step-by-step directions to be displayed as an overlay on the live video feed of the vehicle component.

US Pat. No. 10,922,906

MONITORING AND DIAGNOSING VEHICLE SYSTEM PROBLEMS USING MACHINE LEARNING CLASSIFIERS

GM GLOBAL TECHNOLOGY OPER...

1. A system for monitoring operation of a vehicle, comprising:
a processing device including an interface configured to receive measurement data from a plurality of sensing devices, each sensing device of the plurality of sensing devices configured to measure a parameter of a vehicle system, the processing device including a plurality of machine learning classifiers, each classifier of the plurality of machine learning classifiers associated with a different vehicle subsystem of a plurality of vehicle subsystems, the processing device configured to perform:
receiving measurement data from each of the plurality of sensing devices, the measurement data having a plurality of subsets;
in response to detection of a malfunction in the vehicle, inputting a respective subset of the plurality of subsets to each classifier, wherein each classifier is configured to define a class associated with normal operation of a respective vehicle subsystem;
determining by each classifier whether the respective subset of the measurement data belongs to the class; and
based on one or more classifiers determining that at least a selected amount of the respective subset of the measurement data is outside of the class, outputting a fault indication, the fault indication identifying which of the plurality of vehicle subsystems has a contribution to the malfunction.
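Per-subsystem classifiers that each "define a class associated with normal operation" can be approximated by simple one-class threshold models. A sketch assuming a mean-plus-k-sigma normality band per subsystem — the patent leaves the classifier type open, and all names here are invented:

```python
def make_classifier(train_values, k=3.0):
    # "Normal" class = within k standard deviations of the training mean.
    n = len(train_values)
    mean = sum(train_values) / n
    var = sum((v - mean) ** 2 for v in train_values) / n
    std = var ** 0.5
    return lambda v: abs(v - mean) <= k * std

def diagnose(subsets, classifiers, fault_fraction=0.5):
    # Flag a subsystem when enough of its measurement subset falls
    # outside that subsystem's normal class.
    faults = []
    for name, values in subsets.items():
        inside = sum(classifiers[name](v) for v in values)
        if (len(values) - inside) / len(values) >= fault_fraction:
            faults.append(name)
    return faults

clfs = {"engine": make_classifier([90, 91, 92, 89, 90]),
        "brakes": make_classifier([10, 11, 9, 10, 10])}
result = diagnose({"engine": [90, 91, 90], "brakes": [30, 31, 32]}, clfs)
```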

US Pat. No. 10,922,905

TRACKED VEHICLE

CLAAS Industrietechnik Gm...

1. A tracked vehicle comprising:
a ground drive including at least two ground drive wheels,
ground engagement elements assigned to individual ground drive wheels and/or units comprising several ground drive wheels, and
an evaluation device configured for ascertaining an operating state of at least one of the ground engagement elements based at least on one state variable of the surroundings and vehicle-independent, device-specific sensor data,
wherein the evaluation device is configured for evaluating vehicle speed, GPS position or geoposition of the tracked vehicle, and acceleration data of the tracked vehicle as the vehicle-independent, device-specific sensor data,
wherein the evaluation device is configured for ascertaining the at least one state variable of the surroundings, and
wherein the at least one state variable comprises ambient temperature and weather data.

US Pat. No. 10,922,904

METHOD AND APPARATUS FOR REMOTELY COMMUNICATING VEHICLE INFORMATION TO THE CLOUD

Aeris Communications, Inc...

1. A method for communicating and storing vehicle information from a vehicle across a remote network to one or more remote devices utilizing at least one communication protocol of the vehicle, comprising the steps of:
receiving vehicle data from the vehicle through a device protocol system in communication arrangement with the vehicle, wherein the device protocol system includes: a protocol adapter including a processor, wherein the protocol adapter communicates with a vehicle communication system of the vehicle; a device controller for managing any one or more of: data requests, transmission frequency, and event triggers; and a device communications module for communicating the received vehicle data across the remote network to a service broker system;
transmitting at least one message having a device ID, an endpoint ID, and the received vehicle data, from the device protocol system across the remote network to the service broker system capable of mapping the device ID to the endpoint ID,
wherein the service broker system includes: a broker network module including a transmitter, a device profile module including a database, a data store for storing received vehicle data, an access control module including a processor for providing a data use profile rule set and an applications service module including a processor for processing requests from software applications to access the received vehicle data, and
wherein the service broker system resides on the remote network,
remote from the vehicle and the device protocol system; and
decoding and storing the received vehicle data of the at least one transmitted message on the remote network at the data store according to the transmitted at least one message.

US Pat. No. 10,922,903

METHODS AND SYSTEMS FOR PROVIDING REMOTE ASSISTANCE TO A STOPPED VEHICLE

Waymo LLC, Mountain View...

1. A method of providing remote assistance for an autonomous vehicle, the method comprising:
determining, at a computing system, that the autonomous vehicle has stopped based on sensor data received from the autonomous vehicle, wherein the computing system is positioned remotely from the autonomous vehicle;
determining, by the computing system using the sensor data, one or more review criteria have been met, wherein the one or more review criteria includes an indication that the autonomous vehicle has stopped for at least a threshold period of time awaiting the pickup of a passenger;
in response to the one or more review criteria being met, providing at least one image to an operator corresponding to a time prior to determining that one or more review criteria have been met;
receiving, at the computing system, an operator input; and
in response to the operator input, providing an instruction to the autonomous vehicle for execution by the autonomous vehicle via a network.

US Pat. No. 10,922,902

DISPLAY CONTROL DEVICE, DISPLAY CONTROL METHOD, AND RECORDING MEDIUM

SONY CORPORATION, Tokyo ...

1. A display control device comprising:
an image capturing device configured to capture an image of a real space;
a touch sensor configured to detect a user operation;
a display configured to display the image captured by the image capturing device; and
a display controller configured to place a virtual object within an augmented reality space corresponding to the real space in accordance with a recognition result of a real object in the real space, the virtual object displayed by being superimposed on the image,
wherein, when the user operation is an operation specifying the virtual object, the display controller moves the virtual object within the augmented reality space on a basis of an environment of a destination of the real space corresponding to a predetermined position of the augmented reality space where the virtual object is moved.

US Pat. No. 10,922,901

SYSTEMS, METHODS, AND COMPUTER-READABLE MEDIA FOR PLACING AN ASSET ON A THREE-DIMENSIONAL MODEL

Apple Inc., Cupertino, C...

1. A method, comprising:
identifying an initial position and an initial orientation of an asset on a three-dimensional model;
determining an orientational relationship between the asset and the three-dimensional model based on the initial orientation;
receiving an instruction to displace the asset from the initial position on the three-dimensional model;
identifying at least one new contact point corresponding to the received instruction;
identifying at least one new surface normal associated with the at least one new contact point; and
displaying the asset at one or more new positions corresponding to the at least one new contact point, wherein displaying the asset at the one or more new positions comprises:
positioning a pivot point of the asset at the at least one new contact point; and
orienting the asset to maintain the orientational relationship with respect to the at least one new surface normal of the three-dimensional model,
wherein the method is performed by an electronic device.
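Maintaining the "orientational relationship" against a new surface normal is, in the simplest case, re-aligning the asset's up vector with the normal at the new contact point. A minimal vector sketch under that simplifying assumption (the claim covers more general relationships than strict alignment):

```python
import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def reorient(asset_up, old_normal, new_normal):
    # Hypothetical reduction of the claim: the orientational relationship
    # is "up vector parallel to the surface normal", so displacing the
    # asset just snaps its up vector to the normal at the new contact point.
    assert normalize(asset_up) == normalize(old_normal)
    return normalize(new_normal)

up = reorient(asset_up=(0.0, 1.0, 0.0),
              old_normal=(0.0, 2.0, 0.0),
              new_normal=(1.0, 1.0, 0.0))
```

A fuller implementation would carry an arbitrary fixed rotation between the asset frame and the normal, applying the same rotation relative to each new normal rather than pure alignment.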

US Pat. No. 10,922,900

SYSTEMS AND METHODS FOR COMPETITIVE SCENE COMPLETION IN AN APPLICATION

1. A method, comprising:
at a client device comprising a display, one or more processors and memory:
in an application running on the client device associated with a first user:
responsive to selection of a first scene completion challenge, displaying an initial scene of the first scene completion challenge and displaying within the initial scene a first plurality of markers, wherein each respective marker in the first plurality of markers has corresponding predefined designated coordinates that are not modifiable by the first user, wherein each respective marker corresponds to a furnishing unit type within a plurality of furnishing unit types, and wherein each respective marker is not modifiable by the user;
in response to each sequential user selection of a corresponding marker in the first plurality of markers within the initial scene of the first scene completion challenge, performing a procedure that comprises:
displaying a first plurality of virtual furnishing units that match the furnishing unit type of the corresponding marker, wherein each furnishing unit in the first plurality of virtual furnishing units is displayed as a corresponding three-dimensional graphic outside the initial scene, wherein the first plurality of virtual furnishing units comprises renditions of furnishing units and includes (i) one or more first virtual furnishing units purchased by the first user and (ii) one or more second virtual furnishing units not purchased by the first user, and wherein each marker in the plurality of markers does not display a virtual furnishing unit in the first plurality of virtual furnishing units;
receiving a user selection, outside the initial scene, of the corresponding three-dimensional graphic of a selected virtual furnishing unit in the first plurality of virtual furnishing units; and
responsive to the user selection, replacing the corresponding marker within the initial scene with the three-dimensional graphic of the selected virtual furnishing unit at the corresponding predefined designated coordinates;
wherein the performing the first procedure results in a first augmented scene that comprises the initial scene with a plurality of three-dimensional graphics of selected virtual furnishing units, including displaying each respective three-dimensional graphic at the corresponding predefined designated set of coordinates belonging to a corresponding marker in the first plurality of markers within the initial scene, and wherein the first augmented scene contains no marker in the first plurality of markers;
storing a user profile for the first user, wherein the user profile comprises indications of: the first scene completion challenge, the first plurality of markers, and a plurality of selected virtual furnishing units based on the performing the first procedure, each of the selected virtual furnishing units corresponding to a respective marker in the first plurality of markers;
in response to and based on a determination that a predefined completion criterion is satisfied, the predefined completion criterion comprising populating the initial scene with a predetermined number of virtual furnishing units specified by the first scene completion challenge, enabling the first user to submit the first augmented scene or the indications of the plurality of selected furnishing units to a remote server; and
responsive to submitting the first augmented scene or the indications of the plurality of selected furnishing units to the remote server, providing the first user a first reward.

US Pat. No. 10,922,899

METHOD OF INTERACTIVE QUANTIFICATION OF DIGITIZED 3D OBJECTS USING AN EYE TRACKING CAMERA

1. The method of interactive quantification of digitized 3D objects using an eye tracking camera, characterized in that it comprises the following steps:a) coordinates of observed screen space are determined using a camera that senses a position of pupils of an operator gazing on a screen surface;

b) dimensions of a volume of interest (“VOI”) are defined, wherein the VOI is a block or a cylinder;
c) a particle and a position of the VOI is selected by the operator on a reference level by means of pressing of a mouse button or by gaze or by touching the screen surface, where the VOI base lies on the reference level, while from a side and from a top it bounds the particle;
d) the VOI is visualized, wherein two-dimensional images of the VOI are simultaneously displayed side by side on the screen surface;
e) observed screen space is corrected by VOI visualization, where an observed one of the two-dimensional images of the individual levels of the VOI is visually distinguished from other of the two-dimensional images of the individual levels of the VOI;
f) a last of the two-dimensional images of the individual levels is selected by gaze of the operator, and the gaze is focused on the last of the two-dimensional images of the individual levels of the VOI on which the particle is still visible, and identification of the last of the two-dimensional images of the individual levels is confirmed by gaze fixation for a certain period of time or by voice command or by release of the mouse button pressed in phase c) or by eyewink; and
g) the particle is analyzed by an algorithm to verify a property of the particle in 3D space, and the particle is marked in the two-dimensional images with a color mark, while a mark position on levels between marked ones of the two-dimensional images of the VOI is determined by interpolation or by finding a representative point by analyzing the two-dimensional images.
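The interpolation at the end of step g) can be illustrated as plain linear interpolation of the mark position between two marked slices. The helper below is hypothetical (integer level indices and in-slice coordinates are assumed), not the patent's algorithm.

```python
def interpolate_marks(level_a, pos_a, level_b, pos_b):
    """Return mark positions for the levels strictly between two marked
    VOI slices, linearly interpolated from the marks on those slices."""
    marks = {}
    for level in range(level_a + 1, level_b):
        t = (level - level_a) / (level_b - level_a)
        marks[level] = tuple(a + t * (b - a) for a, b in zip(pos_a, pos_b))
    return marks
```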

US Pat. No. 10,922,898

RESOLVING VIRTUAL APPAREL SIMULATION ERRORS

Soul Vision Creations Pri...

1. A method of apparel simulation, the method comprising:determining a body construct used for generating a shape of a virtual representation of a user;
determining that points on a virtual apparel are within the body construct;
determining, for each of the points, a respective normal vector, wherein each respective normal vector intersects each respective point and is oriented towards the body construct;
extending each of the points to corresponding points on the body construct based on each respective normal vector;
generating primitives of the virtual apparel based at least in part on the extended points to the corresponding points on the body construct;
determining primitives of the virtual apparel that intersect with primitives of the body construct;
dividing the determined primitives of the virtual apparel into sub-primitives, wherein each of the sub-primitives is (1) smaller in size than a primitive of the determined primitives that is divided and is (2) within the primitive of the determined primitives;
determining that one or more vertices of the sub-primitives are within the body construct;
based on the determination that one or more vertices of the sub-primitives are within the body construct, extending the one or more vertices of the sub-primitives to be external to the body construct;
repeatedly dividing the sub-primitives and extending the further divided sub-primitives to be external to the body construct until determined that the further divided sub-primitives do not intersect with the primitives of the body construct; and
generating graphical information of the virtual apparel based on the extension of the one or more vertices of the further divided sub-primitives to be external to the body construct.
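The central "extend along the normal until outside the body construct" operation can be shown with a toy body construct, a unit sphere, where the surface normal at an interior point is simply radial. This is a deliberate simplification: a real body construct is a mesh, and the claim additionally subdivides intersecting primitives, which is omitted here.

```python
import math

BODY_CENTER = (0.0, 0.0, 0.0)   # toy body construct: a unit sphere at the origin
BODY_RADIUS = 1.0

def inside_body(p):
    return math.dist(p, BODY_CENTER) < BODY_RADIUS

def extend_point(p, eps=1e-3):
    """Push an interior point out along the (radial) normal so it sits
    just outside the body construct's surface."""
    d = math.dist(p, BODY_CENTER)
    scale = (BODY_RADIUS + eps) / d
    return tuple(c * scale for c in p)

def resolve_penetrations(points):
    """Apparel points inside the body are extended; exterior points are kept."""
    return [extend_point(p) if inside_body(p) else p for p in points]
```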

US Pat. No. 10,922,897

MEDICAL INFORMATION PROCESSING APPARATUS, X-RAY DIAGNOSTIC SYSTEM, AND MEDICAL INFORMATION PROCESSING METHOD

CANON MEDICAL SYSTEMS COR...

1. A medical information processing apparatus comprising:processing circuitry configured to:
display a three-dimensional medical image on a display,
receive a rotation operation corresponding to rotating a direction of the three-dimensional medical image on the display,
in response to reception of the rotation operation, create a figure indicating, based on the direction of the three-dimensional medical image on the display before the rotation operation is executed, whether a movable member of an X-ray diagnostic apparatus is able to reach a position corresponding to the direction of the three-dimensional medical image after the rotation operation is executed, and
superimpose a display of the figure on the three-dimensional medical image on the display.

US Pat. No. 10,922,896

INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND PROGRAM FOR CORRECTING DISPLAY INFORMATION DRAWN IN A PLURALITY OF BUFFERS

SONY CORPORATION, Tokyo ...

1. An information processing apparatus comprising:an acquisition unit that acquires information related to a result of recognizing a real object in a real space;
a drawing unit that draws, in each buffer of a plurality of buffers, display information directly or indirectly associated with the buffer from among a plurality of pieces of display information; and
a display control unit that corrects a display of the display information drawn in each buffer of the plurality of buffers on a basis of the result of recognizing the real object, and causes a predetermined output unit to present each piece of the display information whose display has been corrected according to a positional relationship with the real object,
wherein the drawing unit draws each piece of the display information in a corresponding buffer of the plurality of buffers, the corresponding buffer being determined according to a position of the piece of the display information along a depth direction with respect to a viewpoint of a user,
wherein the drawing unit draws each piece of display information in the corresponding buffer determined based on a result of comparing the position of the piece of the display information in the depth direction with at least one threshold distance, and
wherein the acquisition unit, the drawing unit, and the display control unit are each implemented via at least one processor.

US Pat. No. 10,922,895

PROJECTION OF CONTENT LIBRARIES IN THREE-DIMENSIONAL ENVIRONMENT

Microsoft Technology Lice...

1. A method of projecting a content library of objects in a computer-based three-dimensional (3D) environment when authoring content using a computing device having a display and processor, the method comprising:with the processor of the computing device,
providing, on the display of a computing device, a template of a 3D environment having a background;
receiving a user input selecting a content library containing multiple models individually representing a two-dimensional (2D) or 3D content item to be inserted as an object into the template of the 3D environment; and
in response to receiving the user input selecting the content library,
automatically determining a location to place the individual objects along at least a portion of a circle that is planar with respect to depth and longitudinal dimensions, at an elevation along a height dimension in the 3D environment, and is coplanar to a line of sight of a viewer of the individual objects in the 3D environment, the at least a portion of the circle having a center at the elevation along the height dimension and spaced apart from the viewer at a preset distance along the line of sight of the viewer of the 3D environment in the depth dimension; and
rendering and placing a graphical representation of the individual 2D or 3D content items as the objects at the determined locations along the at least a portion of the circle in the 3D environment such that one of the objects closest to the viewer would appear larger than others in the 3D environment due to a depth perception of the viewer.
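One plausible reading of the placement geometry is sketched below: the circle's center sits a preset distance ahead of the viewer at eye elevation, and the objects are spaced around the circle in the plane spanned by the depth and longitudinal dimensions, so the object nearest the viewer naturally renders largest. All parameter names and defaults are illustrative assumptions.

```python
import math

def circle_positions(n, distance=3.0, elevation=1.5, radius=1.0):
    """Positions for n objects on a circle whose center is `distance` ahead
    of a viewer at the origin (x = longitudinal, y = height, z = depth)."""
    center = (0.0, elevation, distance)
    positions = []
    for i in range(n):
        a = 2.0 * math.pi * i / n
        # the circle is planar in the x-z (longitudinal-depth) plane
        positions.append((center[0] + radius * math.sin(a),
                          center[1],
                          center[2] - radius * math.cos(a)))
    return positions
```

With these defaults the object at angle 0 sits at depth `distance - radius`, closest to the viewer, matching the depth-perception effect the claim describes.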

US Pat. No. 10,922,894

METHODOLOGY AND SYSTEM FOR MAPPING A VIRTUAL HUMAN BODY

Biodigital, Inc., New Yo...

1. A method of operating a computing system to generate computer-readable data representing a three-dimensional visualization of at least a portion of anatomy of a human body, the method comprising:receiving, at least one server computing device from at least one client computing device and via at least one Application Programming Interface (API) call made by the at least one client computing device and via at least one network, a specification of an adjustment to be made to a base three-dimensional visualization of the at least the portion of the anatomy of the human body, wherein the specification of the adjustment comprises an identification of the adjustment to make and an indication of at least one anatomical feature of the human body to which the adjustment relates;
wherein receiving the specification of an adjustment comprises receiving data in a format of a plurality of formats, wherein the data corresponds to the adjustment to make; determining a method to process the data based on the format of the data; processing the data; and identifying the adjustment to make;
mapping, with at least one processor of the at least one server computing device, the indication of the at least one anatomical feature of the human body to one or more objects of a hierarchy of objects, wherein each object of the hierarchy of objects corresponds to one or more anatomical features of the human body and to one or more elements of the base three-dimensional visualization, wherein the one or more objects are related to one or more related objects via one or more connections;
generating, with at least one processor of the at least one server computing device, an adjusted three-dimensional visualization of the at least the part of the anatomy of the human body by adjusting the one or more elements of the base three-dimensional visualization based at least in part on the adjustment, wherein the one or more elements of the base three-dimensional visualization that are adjusted correspond to the one or more objects that were mapped to the at least one anatomical feature indicated by the specification of the adjustment, and wherein the adjusted three-dimensional visualization includes the adjustment at the one or more elements that correspond to the one or more objects, wherein generating the adjusted three-dimensional visualization comprises distributing the information on how to render the feature to the one or more related objects; and
communicating, as at least one response to the at least one API call made by the at least one client computing device and via the at least one network, the adjusted three-dimensional visualization of the at least the part of the anatomy of the human body from the at least one server computing device to the at least one client computing device.

US Pat. No. 10,922,893

AUGMENTED REALITY SYSTEM

PTC Inc., Boston, MA (US...

1. One or more non-transitory machine-readable storage devices storing instructions that are executable by one or more processing devices to perform operations comprising:obtaining information about an instance of a device;
recognizing the instance of the device based on the information;
selecting a digital twin for the instance of the device, the digital twin being unique to the instance of the device;
generating augmented reality content based on the digital twin and a captured actual graphic of the instance of the device;
wherein the augmented reality content comprises computer graphics representing one or more virtual controls for the instance of the device, wherein generating the augmented reality content comprises associating the one or more virtual controls with one or more corresponding actual controls on the instance of the device shown in the captured actual graphic, and wherein the one or more actual controls are remotely controllable;
receiving data indicating that one or more of the virtual controls have been selected on a screen of a computing device; and
in response to receipt of the data, remotely controlling one or more of the actual controls that correspond to the one or more of the virtual controls that have been selected.

US Pat. No. 10,922,892

MANIPULATION OF VIRTUAL OBJECT POSITION WITHIN A PLANE OF AN EXTENDED REALITY ENVIRONMENT

SPLUNK INC., San Francis...

1. A computer-implemented method, comprising:receiving, via a client device, a selection of a virtual object located at a first location within a plane parallel to a display screen of the client device in an extended reality (XR) environment;
determining a displacement based on user input detected via an input device of the client device; and
moving the virtual object, within the plane parallel to the display screen of the client device, from the first location within the plane to a second location within the plane based on the displacement.

US Pat. No. 10,922,891

METHOD FOR GENERATING AN AUGMENTED REPRESENTATION OF A REAL ENVIRONMENT, CORRESPONDING DEVICE, COMPUTER PROGRAM PRODUCT, AND COMPUTER-READABLE CARRIER MEDIUM

INTERDIGITAL CE PATENT HO...

1. A method for generating an augmented representation of a real environment comprising at least one real object, the method comprising:obtaining a virtual boundary as a function of data associated with the at least one real object of the real environment and as a function of a difference between a scale of the real environment and a scale of a default virtual scene comprising virtual content, the virtual boundary dividing the real environment into an activity space and a mixed space;
generating the augmented representation comprising:
obtaining a first part of the augmented representation corresponding to a representation of at least one part of the activity space being free of virtual content;
obtaining a second part of the augmented representation corresponding to an augmented representation of at least one part of the mixed space, in which the virtual content is combined with a representation of at least one part of the mixed space.

US Pat. No. 10,922,890

MULTI-USER VIRTUAL AND AUGMENTED REALITY TRACKING SYSTEMS

WorldViz, Inc., Santa Ba...

1. A cluster system, comprising:a first computing device configured as a master computing device;
a first plurality of motion tracking cameras, wherein the first plurality of motion tracking cameras is configured to reside in a physical space coincident with a first user and a second user, the first plurality of motion tracking cameras configured to detect infrared light providing position data associated with the first user and infrared light providing position data associated with the second user;
a first camera associated with the second user;
non-transitory media storing instructions readable by the cluster system, that when executed by the cluster system, cause the cluster system to:
access configuration information comprising information indicating what types of operations are locally privileged and what types of operations are non-privileged;
based at least in part on the accessed configuration information comprising the information indicating what types of operations are locally privileged and what types of operations are non-privileged, cause at least one computer operation to be performed locally and at least one computer operation to be performed non-locally;
cause, by the master computing device, an image corresponding to the first user to be rendered at a first virtual position in a display device associated with the second user, wherein the first virtual position is determined at least in part on position data corresponding to a position of the first user in the physical space and on a determined viewpoint of the second user;
cause, by the master computing device, an image corresponding to the second user to be rendered at a second virtual position in a display device associated with the first user, wherein the second virtual position is based at least in part on position data corresponding to a position of the second user in the physical space and on a determined viewpoint of the first user;
receive at the master computing device from the first camera associated with the second user, a facial expression image associated with the second user; and
at least partly in response to receiving the facial expression image, cause a corresponding indication to be rendered in the display device associated with the first user.

US Pat. No. 10,922,889

DIRECTING USER ATTENTION

Google LLC, Mountain Vie...

1. A method comprising:receiving an image;
identifying content to display over the image;
identifying a location within the image to display the content;
identifying a point of interest located on the content; and
triggering display of the content overlaid on the image by:
identifying a portion of the content based on the point of interest located on the content;
inserting the portion of the content including the point of interest in the image using first shading parameters; and
inserting the content other than the portion in the image using second shading parameters, wherein the first shading parameters correspond to a higher level of lighting than the second shading parameters.

US Pat. No. 10,922,888

SENSOR FUSION AUGMENTED REALITY EYEWEAR DEVICE

1. An augmented reality eyewear device to operate augmented reality applications, comprising:a frame supporting an optical display configured to be worn by a user, wherein said frame is associated with:
a processor,
a sensor assembly coupled to the processor comprising at least two inertial measurement unit (IMU) sensors configured to transmit raw IMU data of at least one IMU sensor and Android-connected IMU data of at least one IMU sensor, wherein the at least two IMU sensors are configured to rotate to match an axis of at least two wide angle cameras, wherein the sensor assembly further comprises a light sensor coupled to the processor configured to sense an environmental condition, and wherein the processor is configured to provide a display characteristic based on the environmental condition, and the sensor assembly further comprises a thermal sensor, a flashlight sensor, a 3-axis accelerometer, a 3-axis compass, a 3-axis gyroscope, a magnetometer sensor, and a light sensor,
a camera assembly coupled to the processor comprises at least two wide angle cameras synchronized with one another configured to transmit camera feed data from the camera assembly to the processor, wherein the camera feed data from at least two wide angle cameras are combined prior to processing via an I2C electrical connection, wherein a placement and angle of the camera assembly is customizable for simultaneous localization and mapping of the environment, and
a user interface control assembly coupled to the processor,
wherein the processor is configured to dually synchronize the raw IMU data and the Android-connected IMU data with the camera feed data, providing a seamless display of three-dimensional (3D) content of the augmented reality applications, and
further comprises visual odometry tracking, environment meshing, dominant plane detection and dynamic occlusion.

US Pat. No. 10,922,887

3D OBJECT RENDERING USING DETECTED FEATURES

Magic Leap, Inc., Planta...

1. An augmented reality display system configured to align 3D content with a real object, the system comprising:a frame configured to mount on the wearer;
an augmented reality display attached to the frame and configured to direct images to an eye of the wearer;
an eyepiece disposed on the frame, at least a portion of said eyepiece being transparent and disposed at a location in front of the wearer's eye when the wearer wears said display system such that said transparent portion transmits light from the environment in front of the wearer to the wearer's eye to provide a view of the environment in front of the wearer;
a light source configured to illuminate at least a portion of the skin of the wearer or of a person other than the wearer by emitting invisible light;
a light sensor configured to form an image of said at least a portion of the skin illuminated by said light source using said invisible light;
a depth sensor configured to detect a location of the skin; and
processing circuitry configured to:
detect a feature in the image formed using a reflected portion of the invisible light, the feature corresponding to a preexisting feature of said skin;
select and store said feature to be used as a marker for rendering augmented reality content onto a view of the skin through the eyepiece while the eyepiece is disposed between the wearer's eye and the skin; and
determine information regarding the location of said at least a portion of the skin, the orientation of said at least a portion of the skin, or both based on one or more characteristics of the feature in the image formed using a reflected portion of the invisible light;
wherein the processing circuitry is configured to monitor the location or the orientation of the skin using the depth sensor periodically at a first frequency, and to monitor the location of the feature using the light sensor at a second frequency less frequent than the first frequency.

US Pat. No. 10,922,886

AUGMENTED REALITY DISPLAY

Apple Inc., Cupertino, C...

1. A system, comprising:a display device; and
a controller comprising:
one or more processors; and
memory storing instructions that, when executed on or across the one or more processors, cause the one or more processors to:
obtain pre-generated 3D data for at least part of a real-world scene, wherein the pre-generated 3D data includes pre-generated 3D meshes for respective regions of the scene, wherein the pre-generated 3D meshes include occluded portions of the scene that are occluded from view of one or more sensors by objects or terrain in the scene, and wherein the pre-generated 3D meshes include distant portions of the scene that are out of range from the one or more sensors;
determine one or more of the pre-generated 3D meshes that include portions of a local region that is within range of the one or more sensors;
use point cloud data obtained from the one or more sensors to generate a local 3D mesh for portions of the local region that are not included in the pre-generated 3D meshes;
generate a 3D model of the scene including the occluded portions of the scene and the distant portions of the scene using the local 3D mesh and the pre-generated 3D meshes;
render virtual content for the scene at least in part according to the 3D model, wherein the virtual content for the scene includes the occluded portions of the scene with corresponding indications that the occluded portions of the scene in the rendered virtual content are occluded by objects or terrain in the scene, and wherein the virtual content for the scene includes the distant portions of the scene with corresponding indications that the distant portions of the scene in the rendered virtual content are out of range from the one or more sensors; and
provide the rendered virtual content to the display device.

US Pat. No. 10,922,885

INTERFACE DEPLOYING METHOD AND APPARATUS IN 3D IMMERSIVE ENVIRONMENT

Beijing Pico Technology C...

1. An interface deploying method based on virtual reality in a 3D immersive environment, wherein the method comprises:arranging an interface element displaying layer and a real scene displaying layer in sequence from near to far along a direction of a user's sight line;
using a camera device to collect real scenes of a real environment where the user is located, and displaying them to the user via the real scene displaying layer; and
setting interface elements to a translucent state, and displaying the interface elements via the interface element displaying layer; and
dividing a circular area in the interface element displaying layer directly facing the user's sight line as the primary operation area by using lines and graphs, dividing an area in the interface element displaying layer outside the primary operation area as the secondary operation area by using lines and graphs, using the secondary operation area to display application tags currently not selected by the user, and using the primary operation area to display content corresponding to the application tag currently selected by the user;
wherein when the user selects a certain application tag in the secondary operation area, the content corresponding to the application tag is displayed in the primary operation area in a highlighted manner;
wherein the method further comprises: providing the user with an interface self-defining interface by which the user, according to his own needs, adjusts the number and positions of application tags displayed in the secondary operation area;
wherein the method further comprises: using the secondary operation area to display user operation information and system prompt information; and
while displaying the application tag, the secondary operation area displays the user operation information and the system prompt information to the user in real time so that the user knows an updated dynamic state of the system in time.

US Pat. No. 10,922,884

SHAPE-REFINEMENT OF TRIANGULAR THREE-DIMENSIONAL MESH USING A MODIFIED SHAPE FROM SHADING (SFS) SCHEME

SONY CORPORATION, Tokyo ...

1. An electronic apparatus, comprising:circuitry configured to:
generate a flat two-dimensional (2D) mesh of an object portion based on an orthographic projection of an initial three-dimensional (3D) triangular mesh on an image plane that comprises a plurality of square grid vertices;
estimate a final grid depth value for each square grid vertex of the plurality of square grid vertices of the flat 2D mesh based on a modified shape from shading (SFS) scheme,
wherein the modified SFS scheme corresponds to an objective relationship among a reference grid image intensity value, an initial grid depth value, and a grid albedo value for each square grid vertex of the plurality of square grid vertices; and
estimate a final 3D triangular mesh as a shape-refined 3D triangular mesh based on the initial 3D triangular mesh and the estimated final grid depth value for each square grid vertex of the plurality of square grid vertices.
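The claim does not disclose its exact formulation, but modified-SFS objectives of this family typically couple a Lambertian image term to a depth-regularization term. Purely as an illustration of the shape such an "objective relationship" among the reference intensity, initial depth, and albedo can take (not the patent's actual equation):

```latex
E(z) \;=\; \sum_{v \in \text{grid}}
  \Bigl( I_{\mathrm{ref}}(v) \;-\; \rho(v)\,\max\!\bigl(0,\ \mathbf{n}_v(z)\cdot\mathbf{l}\bigr) \Bigr)^{2}
  \;+\; \lambda \sum_{v \in \text{grid}} \bigl( z_v - z^{0}_v \bigr)^{2}
```

where $I_{\mathrm{ref}}(v)$ is the reference grid image intensity, $\rho(v)$ the grid albedo, $z^{0}_v$ the initial grid depth at square grid vertex $v$, $\mathbf{n}_v(z)$ the surface normal induced by the depths, and $\mathbf{l}$ an assumed light direction; the final grid depth values would be the minimizer of such an objective.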

US Pat. No. 10,922,883

SYSTEM AND METHOD FOR READING ARRAYS OF DATA BY REBUILDING AN INDEX BUFFER WHILE PRESERVING ORDER

Parallels International G...

1. A method for reading input data into a geometry shader by rebuilding an index buffer, the method comprising:constructing T-vectors for one-element ranges of the index buffer by defining each T-vector as a 4-component vector;
calculating T-vectors for ranges [0; i] for all vertices of the index buffer by prefix scanning using a modified prefix scan algorithm, the modification being for performing the prefix scanning using a non-commutative prefix scanning algorithm, where i represents a number of a current vertex;
for each vertex and for each primitive featuring the vertex:
determining whether a respective primitive featuring the vertex is complete; and
responsive to determining that the respective primitive featuring the vertex is complete, calculating an offset in an output index buffer using a component of the T-vector used to indicate, for the vertex, a number of complete primitives inside the range and a component that indicates a number of vertices since a last primitive restart, and writing an index value in the output index buffer; and
reading input data into the geometry shader in accordance with the index values written in the output index buffer.
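A sequential stand-in for the scan (the claim parallelizes this with a non-commutative prefix scan) can track, per index-buffer position, the two T-vector components named above: complete primitives so far and vertices since the last primitive restart. Triangle-strip topology and the restart sentinel value are assumptions for illustration.

```python
RESTART = 0xFFFFFFFF   # assumed primitive-restart sentinel value

def scan_index_buffer(indices):
    """For each position i, return (complete triangles in the range [0; i],
    vertices since the last primitive restart), as a sequential prefix scan."""
    result, triangles, run = [], 0, 0
    for idx in indices:
        if idx == RESTART:
            run = 0
        else:
            run += 1
            if run >= 3:        # each vertex from the 3rd in a strip completes a triangle
                triangles += 1
        result.append((triangles, run))
    return result

def output_offset(triangles_before):
    """Offset in the output index buffer: each complete triangle emits 3 indices,
    so order is preserved when complete primitives are written out."""
    return 3 * triangles_before
```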

US Pat. No. 10,922,882

TERRAIN GENERATION SYSTEM

Electronic Arts Inc., R...

1. A method for generating game terrain data of a game application within a graphical user interface, wherein the method includes:generating instructions to display a graphical user interface on a user computing system, the graphical user interface comprising a drawing interface for a user to generate graphical inputs, wherein each type of graphical input is associated with a terrain characteristic;
receiving, from the user system, a two-dimensional terrain drawing through the drawing interface for generation of a first terrain area, the two-dimensional terrain drawing including, at least, a first graphical input and a second graphical input, wherein the first graphical input corresponds to a first terrain characteristic and the second graphical input corresponds to a second terrain characteristic;
receiving, from the user system, a selection of a first style of terrain for the first terrain area;
inputting the two-dimensional terrain drawing into a neural network, wherein the neural network is trained to translate the two-dimensional terrain drawing into height field data for the first style of terrain, wherein the height field data comprises first height field data associated with the first terrain characteristic based on the first graphical input and second height field data associated with the second terrain characteristic based on the second graphical input;
receiving an output of the neural network that includes a first height field for the first terrain area generated based at least in part on the first height field data and the second height field data, wherein the first height field for the first terrain area corresponds to a relationship between a first height associated with the first terrain characteristic and a second height associated with the second terrain characteristic; and
generating a three dimensional game terrain model based on the first height field and the first style of terrain.
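The claimed flow above can be sketched as a toy Python pipeline. This is an illustrative sketch only, not Electronic Arts' implementation: the `translate` stand-in replaces the trained neural network, and the characteristic/style values are invented for demonstration.

```python
def translate(drawing, style):
    """Stand-in for the trained network: turn each graphical input's
    terrain characteristic into per-cell height field data."""
    base = {"mountain": 1.0, "river": -0.5}        # toy characteristic heights
    scale = {"alpine": 2.0, "desert": 0.5}[style]  # the selected terrain style
    return [{cell: base[stroke["characteristic"]] * scale
             for cell in stroke["cells"]}
            for stroke in drawing]

def combine(fields):
    """Merge the per-characteristic height field data into one height field."""
    height = {}
    for field in fields:
        for cell, h in field.items():
            height[cell] = height.get(cell, 0.0) + h
    return height

drawing = [
    {"characteristic": "mountain", "cells": [(0, 0), (0, 1)]},  # first input
    {"characteristic": "river",    "cells": [(0, 1), (1, 1)]},  # second input
]
height_field = combine(translate(drawing, "alpine"))
print(height_field)
```

Where the two graphical inputs overlap, the resulting height reflects the relationship between the two characteristics' heights, as the claim requires.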

US Pat. No. 10,922,881

THREE DIMENSIONAL/360 DEGREE (3D/360°) REAL-TIME FULL INFORMATION SMART MANAGEMENT INTEGRATED MAPPING SYSTEM (SMIMS) AND PROCESS OF GENERATING THE SAME

STAR GLOBAL EXPERT SOLUTI...

1. A method of generating and managing a three-dimensional-360 degrees (3D/360°) map of a geographical area, comprising:generating said 3D/360° map of said geographical area using at least one imaging device and at least one flying means, wherein said 3D/360° map comprises a plurality of objects and network-based devices different from said plurality of objects, wherein said plurality of objects and network-based devices are stationary and permanent elements of said geographical area captured in said 3D/360° map;
using an application programming interface (API) to access remote client databases via a network and to obtain real-time properties associated with said plurality of objects and said network-based devices owned by said clients, wherein using said API is based on determining a data format of said plurality of objects and said network-based devices;
enabling control of said network-based devices via said network;
embedding said real-time properties into said plurality of objects and said network-based devices located within said 3D/360° map;
providing an interactive graphical user interface (GUI) to display said real-time properties and to control said network-based devices; and
displaying said real-time properties of any of said plurality of objects or said network-based devices when a user uses a pointing device to select a particular object or a particular network-based device.

US Pat. No. 10,922,880

LADAR AND POSITIONAL AWARENESS SYSTEM AND METHODS USING A LINE AT THE INTERSECTION OF MULTICOLOR PLANES OF LASER LIGHT WITH DETECTOR

The United States of Amer...

3. A system comprising:a first laser line projector to project a first laser beam of a first color;
a second laser line projector to project a second laser beam of a second color;
a laser source adjuster to adjust a laser plane of the first laser beam and a laser plane of the second laser beam to create a beam intersection line plane of a third color on an object at a predetermined intersection range from the first laser line projector and the second laser line projector;
an image or color capture device to create a three-dimensional (3D) representation of objects in a field of view of the image or color capture device; and
a motor to rotate the first laser line projector and the second laser line projector to create a rotating plane of light of the third color on the object, wherein the motor comprises a shaft, and the first laser line projector and the second laser line projector are mounted on a rail that is perpendicular to the shaft.

US Pat. No. 10,922,879

METHOD AND SYSTEM FOR GENERATING AN IMAGE

Sony Interactive Entertai...

1. A method of generating an image, the method comprising:receiving a video stream, the video stream comprising a two-dimensional video of a three-dimensional scene captured by a video camera;
determining a mapping between locations in the two-dimensional video of the scene and locations in a three-dimensional representation of the scene, the mapping being determined based on a known parameter of the video camera and a known size of a feature in the three-dimensional scene;
generating a three-dimensional graphical representation of the scene based on the determined mapping;
determining a virtual camera angle from which the three-dimensional graphical representation of the scene is to be viewed;
rendering an image corresponding to the graphical representation of the scene viewed from the determined virtual camera angle;
outputting the rendered image for display; and
detecting at least one object in the two-dimensional video of the scene,
wherein generating the three-dimensional graphical representation comprises generating a graphical representation of the at least one detected object, and
the video stream comprises a video of a sporting event and wherein detecting the at least one object comprises detecting at least one player in the scene; and wherein generating the three-dimensional graphical representation comprises generating a graphical representation of the at least one player.
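The mapping step of the claim can be illustrated with a minimal pinhole-style sketch. This assumes a flat ground plane and uses the focal length as the "known camera parameter" and a feature of known real-world size to fix the scale; the function names and constants are illustrative, not from the patent.

```python
def ground_mapping(focal_px, feature_px, feature_m):
    """Return a mapper from pixel coordinates to ground-plane 3-D points,
    using the known feature size to fix the metres-per-pixel scale."""
    scale = feature_m / feature_px      # metres per pixel on the ground
    depth = focal_px * scale            # camera distance implied by the scale
    def to_3d(u, v):
        return (u * scale, 0.0, depth + v * scale)  # (x, height, z)
    return to_3d

# a pitch feature known to be 125 m long spans 500 px in the video frame
to_3d = ground_mapping(focal_px=1000.0, feature_px=500.0, feature_m=125.0)
x, y, z = to_3d(50.0, 10.0)
print(x, y, z)
```

Detected players would be placed in the 3-D representation by running each player's image location through such a mapping before rendering from the virtual camera angle.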

US Pat. No. 10,922,878

LIGHTING FOR INSERTED CONTENT

GOOGLE LLC, Mountain Vie...

14. A system comprising:a camera;
at least one processor; and
memory storing instructions that, when executed by the at least one processor, cause the system to:
capture an image;
determine a location within the image to insert content based on identifying a representation of a surface within the image, the content associated with a data structure that includes one or more values for lighting parameters;
identify a first region of the image and a second region of the image based on the determined location to insert the content to determine at least one upper lighting parameter and at least one lower lighting parameter, the second region being different than the first region;
extract image properties from the identified first region by applying a filter to the identified first region and determine the at least one upper lighting parameter based on the extracted image properties from the identified first region;
extract image properties from the identified second region by applying a filter to the identified second region and determining the at least one lower lighting parameter based on the extracted image properties from the identified second region;
scale the determined at least one upper lighting parameter and the at least one lower lighting parameter by applying the one or more scaling values from the data structure associated with the content to the particular lighting characteristic;
render the content using the determined at least one upper lighting parameter to shade an upper surface region of the content and the determined at least one lower lighting parameter to shade a lower surface region of the content, wherein the determined at least one upper lighting parameter is different from the determined at least one lower lighting parameter;
insert the rendered content into the image to generate an augmented image based on the determined location, the identified surface, and dimensions of the identified first region and the identified second region; and
cause the augmented image to be displayed.
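The two-region lighting extraction can be sketched as follows. The grid, the box-filter choice, and the scaling values are invented for illustration; the claim only requires distinct upper/lower regions, filtered image properties, and scaling by values from the content's data structure.

```python
image = [[0.9, 0.8, 0.9],   # bright upper rows (e.g. sky)
         [0.7, 0.6, 0.7],
         [0.2, 0.3, 0.2]]   # dark lower row (e.g. ground)

def mean_filter(rows):
    """Box filter reduced to its essence: mean brightness of a region."""
    vals = [p for row in rows for p in row]
    return sum(vals) / len(vals)

insert_row = 2                            # content sits on the bottom row
upper = mean_filter(image[:insert_row])   # first region: above the content
lower = mean_filter(image[insert_row:])   # second region: below the content

content = {"scale_upper": 1.2, "scale_lower": 0.5}   # from the data structure
upper_param = upper * content["scale_upper"]
lower_param = lower * content["scale_lower"]
print(upper_param, lower_param)
```

The content would then be rendered with `upper_param` shading its upper surfaces and `lower_param` its lower surfaces, the two deliberately differing as the claim requires.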

US Pat. No. 10,922,877

HIGHER-ORDER FUNCTION NETWORKS FOR LEARNING COMPOSABLE THREE-DIMENSIONAL (3D) OBJECT AND OPERATING METHOD THEREOF

SAMSUNG ELECTRONICS CO., ...

1. An apparatus for representing a three-dimensional (3D) object, the apparatus comprising:a memory storing instructions; and
a processor configured to execute the instructions to:
transmit a two-dimensional (2D) image to an external device;
based on the 2D image being transmitted, receive, from the external device, mapping function parameters that are obtained using a first neural network;
set a mapping function of a second neural network, based on the received mapping function parameters; and
based on 3D samples, obtain the 3D object corresponding to the 2D image, using the second neural network of which the mapping function is set.
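The higher-order structure of the claim, a first network emitting the parameters of a second network's mapping function, can be sketched with toy stand-ins. Both "networks" here are trivial functions of my own invention; only the data flow mirrors the claim.

```python
def first_network(image_2d):
    """Stand-in for the remote encoder: derive mapping function parameters
    from the 2-D image (here, from its mean brightness)."""
    brightness = sum(image_2d) / len(image_2d)
    return {"scale": 1.0 + brightness, "offset": 0.5}

def make_mapping(params):
    """Second network whose mapping function is *set* by the received
    parameters rather than learned locally."""
    def mapping(sample_3d):
        return tuple(params["scale"] * c + params["offset"] for c in sample_3d)
    return mapping

params = first_network([0.0, 1.0])   # received from the external device
mapping = make_mapping(params)       # set the second network's mapping
surface = [mapping(s) for s in [(0.0, 0.0, 0.0), (1.0, 1.0, 1.0)]]
print(surface)
```

Applying the configured mapping to 3-D samples yields the object representation, which is the "composable" aspect: the same second network re-parameterised per image produces different objects.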

US Pat. No. 10,922,876

SACCADIC REDIRECTION FOR VIRTUAL REALITY LOCOMOTION

NVIDIA Corporation, Sant...

1. A computer-implemented method, comprising:detecting a temporary visual suppression event when a user's eyes move relative to the user's head while viewing a display device;
modifying an orientation of a virtual scene relative to the user to direct the user to physically move along a planned path through a virtual environment corresponding to the virtual scene, wherein the orientation is modified by a greater amount as a duration of the temporary visual suppression event increases; and
displaying the virtual scene on the display device according to the modified orientation.
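The redirection rule in the claim, a larger scene reorientation for a longer suppression event, reduces to a small gain computation. The gain constant and the perceptual cap below are illustrative assumptions, not values from the patent.

```python
def redirect(scene_yaw_deg, error_deg, saccade_ms,
             gain_per_ms=0.05, cap_deg=12.0):
    """Rotate the scene toward the planned path; the allowed change grows
    with the suppression duration and is clamped to stay unnoticed."""
    budget = min(gain_per_ms * saccade_ms, cap_deg)   # longer saccade -> more
    step = max(-budget, min(budget, error_deg))       # do not overshoot path
    return scene_yaw_deg + step

# a 100 ms saccade allows 5 degrees of reorientation toward a 30 degree error
yaw = redirect(scene_yaw_deg=0.0, error_deg=30.0, saccade_ms=100.0)
print(yaw)
```

Applied once per detected saccade, the scene orientation converges on the planned physical path without the user perceiving the rotation.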

US Pat. No. 10,922,875

ULTRASOUND SYSTEM AND METHOD OF DISPLAYING THREE-DIMENSIONAL (3D) IMAGE

SAMSUNG MEDISON CO., LTD....

1. An ultrasound system comprising:a display; and
a controller configured to generate, from ultrasound volume data acquired from an object, a three-dimensional (3D) ultrasound image,
wherein the controller is further configured to:
acquire first position information and first orientation information of the display,
control the display to display the generated 3D ultrasound image in a first orientation based on the first orientation information,
acquire second position information and second orientation information of an auxiliary display,
identify a second orientation with respect to the generated 3D ultrasound image based on a first difference between the first orientation information and the second orientation information, and
control the auxiliary display to display the generated 3D ultrasound image in the second orientation with a changed size based on a second difference between the first position information and the second position information.

US Pat. No. 10,922,874

MEDICAL IMAGING APPARATUS AND METHOD OF DISPLAYING MEDICAL IMAGE

SAMSUNG MEDISON CO., LTD....

1. A medical imaging apparatus comprising:a user interface configured to receive an input for setting a region of interest (ROI) in a first medical image and an input for setting first volume rendering properties for the ROI and second volume rendering properties for a remaining region of the first medical image, wherein the remaining region is identified as a region other than the ROI set by a user in the first medical image;
a display; and
one or more processors configured to generate a second medical image by performing volume rendering on the ROI and the remaining region other than the ROI based on the first and second volume rendering properties, respectively, and control the display to display the second medical image,
wherein the one or more processors are further configured to:
display the ROI in the first medical image and at least one item related to the first volume rendering properties via the display, and receive the input for setting the first volume rendering properties via the user interface while displaying the first medical image and the at least one item related to the first volume rendering properties, and
display the remaining region of the first medical image and at least one item related to the second volume rendering properties via the display, and receive the input for setting the second volume rendering properties via the user interface while displaying the second medical image and the at least one item related to the second volume rendering properties.

US Pat. No. 10,922,873

RENDERING A 3-D SCENE USING SECONDARY RAY TRACING

Imagination Technologies ...

1. A computer-implemented method of rendering an image of a 3-D scene using a ray tracing system, the method comprising:subsequent to an identification of an intersection between a first ray and a primitive located in the 3-D scene, emitting a secondary ray having an origin which lies on an implicit curved surface associated with the primitive, wherein said intersection between the first ray and the primitive is at an intersection point, and wherein the origin of the secondary ray is offset from the intersection point; and
processing the secondary ray for use in rendering the image of the 3-D scene.
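The offset-origin idea can be illustrated by lifting the hit point onto a sphere cap over the flat primitive. The sphere-cap choice and radius are my assumptions for illustration; the claim requires only that the origin lie on an implicit curved surface associated with the primitive and be offset from the intersection point.

```python
import math

def secondary_origin(hit, normal, dist_from_center, sphere_radius=1.0):
    """Lift the hit point along the normal by the sphere-cap height
    (sagitta) at its distance from the primitive centre, so the secondary
    ray starts off the flat surface and avoids self-intersection."""
    sagitta = sphere_radius - math.sqrt(
        sphere_radius ** 2 - dist_from_center ** 2)
    return tuple(h + sagitta * n for h, n in zip(hit, normal))

origin = secondary_origin(hit=(0.0, 0.0, 0.0), normal=(0.0, 1.0, 0.0),
                          dist_from_center=0.6)
print(origin)
```

The secondary ray is then traced from `origin` instead of the exact intersection point when shading the image.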

US Pat. No. 10,922,872

NOISE REDUCTION ON G-BUFFERS FOR MONTE CARLO FILTERING

Disney Enterprises, Inc.,...

1. A computer-implemented method of selectively removing rendering noise from a geometric buffer prior to filtering, the computer-implemented method comprising:generating the geometric buffer for rendering an image of a three-dimensional scene from a viewpoint, the geometric buffer comprising at least one of a texture buffer, a depth buffer, and a normal buffer and including the rendering noise;
determining, for each of a plurality of pixels in the image being rendered, a respective world position sample value based on the three-dimensional scene and a position and orientation of the viewpoint;
performing, by operation of one or more computer processors, a pre-filtering operation on the geometric buffer in order to selectively remove the rendering noise from the geometric buffer using a respective filtering weight function for each of the plurality of pixels, wherein the respective filtering weight function is based on an optimal bandwidth value, variances of world position sample values of the three-dimensional scene, and the world position sample value for the respective pixel; and
subsequent to removal of the rendering noise, performing a filtering operation on the geometric buffer to produce a filtered geometric buffer, wherein the image is rendered based on the filtered geometric buffer and output.
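A per-pixel weight function of the kind the claim describes can be sketched with a Gaussian falloff over world-position distance. The Gaussian form, bandwidth, and variance handling here are illustrative simplifications, not Disney's optimal-bandwidth derivation.

```python
import math

def weight(wp_i, wp_j, bandwidth, variance):
    """Filter weight: falls off with world-position distance, normalised
    by the bandwidth and the world-position sample variance."""
    d2 = sum((a - b) ** 2 for a, b in zip(wp_i, wp_j))
    return math.exp(-d2 / (2.0 * bandwidth ** 2 * max(variance, 1e-12)))

def prefilter(values, world_pos, bandwidth=1.0, variance=1.0):
    """Weighted average of one noisy G-buffer channel per pixel."""
    out = []
    for i in range(len(values)):
        ws = [weight(world_pos[i], world_pos[j], bandwidth, variance)
              for j in range(len(values))]
        out.append(sum(w * v for w, v in zip(ws, values)) / sum(ws))
    return out

# two pixels on the same surface plus a distant outlier: the neighbours are
# averaged with each other, while the outlier's value survives untouched
noisy = [0.0, 1.0, 10.0]
pos = [(0.0, 0.0, 0.0), (0.1, 0.0, 0.0), (50.0, 0.0, 0.0)]
filtered = prefilter(noisy, pos)
print([round(v, 3) for v in filtered])
```

Running this on each G-buffer channel before the main Monte Carlo filtering step removes rendering noise without blurring across surface boundaries.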

US Pat. No. 10,922,871

CASTING A RAY PROJECTION FROM A PERSPECTIVE VIEW

BAMTECH, LLC, New York, ...

1. A method, comprising:identifying a first object in a first frame captured by a first camera using a semantic segmentation operation, the semantic segmentation operation including a branched training process that convolutes image data associated with the first object in the first frame along a plurality of branched paths having respective convolution parameters, the convoluted image data from each of the branched paths being combined into a combination convoluted data set;
determining a first mask for the first object based on the first frame captured by the first camera and the combination convoluted data set;
determining a second mask for the first object based on a second frame captured by a second camera;
generating a 3D mask by associating the first mask and the second mask;
determining a location of the 3D mask; and
generating a ray projection of the 3D mask from a perspective of a second object.

US Pat. No. 10,922,870

3D DIGITAL PAINTING

1. A method of digital continuous and simultaneous three-dimensional objects navigating, said method comprising:providing a digital electronic display having a physical surface and a geometrical surface and configured for presenting two images: one for a right eye and the other for a left eye of a user;
providing means for creating a continuous 3D virtual canvas comprising the geometrical surface of the digital electronic display and a virtual volume that includes the geometrical surface of the digital electronic display in said 3D virtual canvas by digitally changing a value and a sign of horizontal disparity between two images for the right eye and the left eye and their scaling on the digital electronic display corresponding to instant virtual distance between the user's eyes and an instant (3D) image of the object within the virtual 3D canvas;
wherein a resolution Δ of continuity of changing of the virtual distance Z between the user and the virtual images within the 3D virtual canvas is defined by a size p of a pixel on the digital electronic display in the horizontal direction and by a distance d between the pupils of the user's eyes according to the expression Δ ≥ 2pZ/d;
providing at least one input control device comprising: a system of sensors that provide input information about free 3D motion of at least one part of the user's body into the at least one input control device for digital objects navigating within 3D virtual canvas;
providing at least one kind of a coupling between at least part of the at least one input control device and the at least one part of the user's body;
moving the at least one part of the user's body while the system of sensors within the at least one input control device is providing information for recording change of vectors of mechanical motion parameters of the at least one part of the user's body, said system of sensors provide simultaneous appearance of similar images of the objects for the right and the left eye for any instant position within 3D virtual canvas;
wherein a simultaneousness of appearance of said similar scaled images of the objects for the right and the left eye is limited by a smallest time interval equal to an inverted frequency of refreshment of frames on the digital electronic display and wherein a motion of the objects in the process of navigating in all dimensions is provided simultaneously and continuously in all directions of a 3D virtual space by free moving the at least one part of the user's body.

US Pat. No. 10,922,869

SCATTER GATHER ENGINE

INTEL CORPORATION, Santa...

1. A general-purpose graphics processing device comprising:a plurality of graphics processing cores to execute graphics processing instructions;
a memory communicatively coupled to at least one of the plurality of graphics processing cores; and
a processor to:
create a scatter gather list in the memory;
collect a plurality of operating statistics for the plurality of graphics processing cores using the scatter gather list;
create, in the memory, a descriptor list comprising register addresses corresponding to one or more registers of the plurality of graphics processing cores;
allocate a space in the memory for outputs from the one or more registers;
program a scatter gather source attributes register to point to the descriptor list;
program a scatter gather description configuration register with a poll period, a number of iterations, and an interrupt enable/disable;
initiate a series of register read requests to obtain contents of the one or more registers for the register addresses in the descriptor list; and
write the contents of the one or more registers for the register addresses in the descriptor list to the space allocated in the memory.
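The descriptor-driven read loop in the claim can be modelled in software. Registers are modelled as a plain dict and the poll period is elided from the toy loop; addresses and values are invented for illustration.

```python
registers = {0x100: 7, 0x104: 42, 0x108: 3}        # address -> live value

descriptor_list = [0x100, 0x104, 0x108]            # register addresses
config = {"poll_period_ms": 10, "iterations": 2}   # sg configuration fields

output = []                                        # space allocated in memory
for _ in range(config["iterations"]):
    # initiate a read request per descriptor entry and write the contents
    # of each register into the allocated output area
    output.append([registers[addr] for addr in descriptor_list])

print(output)
```

Each iteration corresponds to one polling pass; in hardware, completion of the configured iterations could raise the enabled interrupt.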

US Pat. No. 10,922,868

SPLIT FRAME RENDERING

Advanced Micro Devices, I...

1. A method for sharing graphics processing work among multiple accelerated processing devices (“APD”) of a set of APDs, the method comprising:transmitting a set of draw calls to each APD of the set of APDs;
splitting the set of draw calls into a set of primitive groups;
for each primitive group of the set of primitive groups, designating an input assembler to receive that primitive group, wherein for each primitive group, the designated input assembler is the same for each APD;
at each APD, for a first set of primitive groups designated to be received by input assemblers of that APD, transmitting the first set of primitive groups to the designated input assemblers; and
at each APD, for a second set of primitive groups designated to be received by input assemblers outside of that APD, discarding the second set of primitive groups.
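The keep-or-discard rule in the claim follows from a deterministic group-to-assembler mapping shared by all APDs. The round-robin mapping below is an illustrative choice; the claim only requires that each group's designated assembler be the same on every APD.

```python
def designate(group_id, num_assemblers):
    """Same deterministic mapping on every APD (round robin here)."""
    return group_id % num_assemblers

def groups_kept(apd_assemblers, num_groups, num_assemblers):
    """An APD transmits groups designated to its own input assemblers
    and discards groups designated to assemblers outside the APD."""
    kept, discarded = [], []
    for g in range(num_groups):
        (kept if designate(g, num_assemblers) in apd_assemblers
         else discarded).append(g)
    return kept, discarded

# two APDs, each owning two of four input assemblers, eight primitive groups
kept0, _ = groups_kept({0, 1}, num_groups=8, num_assemblers=4)
kept1, _ = groups_kept({2, 3}, num_groups=8, num_assemblers=4)
print(kept0, kept1)
```

Because every APD computes the same designation, the full draw-call stream can be broadcast while each APD renders a disjoint share of the primitives.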

US Pat. No. 10,922,867

SYSTEM AND METHOD FOR RENDERING OF AN ANIMATED AVATAR

1. A method for rendering of an animated avatar on one or more computing devices using animated delay clips between responses of the animated avatar, the method comprising:generating an avatar delay graph (ADG) by associating each of the animated delay clips with a directed edge in the ADG, associating a playing length of the animated delay clip with the respective edge, each edge connected to at least one other edge via a node, each particular node associated with a point at which the animated delay clip associated with the edge terminating at the particular node can be stitched together with other animated delay clips associated with the edges emanating at the particular node;
selecting a node of the ADG labelled as an initial node to be a current node;
determining whether one of the responses of the animated avatar is being processed, and while there is no response being processed:
rendering the one or more animated delay clips using the ADG, the rendering comprising:
stochastically selecting one of the edges emanating from the current node;
updating the current node to be the node at which the selected edge is terminated; and
rendering the animated delay clip associated with the selected edge; and
communicating the rendered one or more animation delay clips to be displayed.
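The ADG traversal reduces to a stochastic walk over a directed graph whose edges carry delay clips and playing lengths. The graph contents and clip names below are invented for illustration.

```python
import random

adg = {  # node -> outgoing edges: (delay clip, playing length s, next node)
    "idle": [("blink", 0.5, "idle"), ("look_left", 1.2, "left")],
    "left": [("look_back", 1.0, "idle")],
}

def play_delay_clips(start, steps, rng):
    """Walk the ADG while no response is being processed: stochastically
    pick an outgoing edge, render its clip, advance to its end node."""
    node, played, total = start, [], 0.0
    for _ in range(steps):                        # stands in for the wait loop
        clip, length, node = rng.choice(adg[node])  # stochastic edge pick
        played.append(clip)
        total += length                           # accumulated playing length
    return played, node, total

clips, node, total = play_delay_clips("idle", steps=4, rng=random.Random(0))
print(clips, node, total)
```

Because every edge terminates at a stitch-point node, consecutive clips join seamlessly, and the walk can stop at any node once a response becomes ready.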

US Pat. No. 10,922,866

MULTI-DIMENSIONAL PUPPET WITH PHOTOREALISTIC MOVEMENT

Artificial Intelligence F...

1. A computer system, comprising:a computation device;
a memory configured to store program instructions, wherein, when executed by the computation device, the program instructions cause the computer system to perform one or more operations comprising:
providing, based at least in part on predetermined parameters, configuration information, and a group of behavioral agents, a dynamic virtual representation comprising a multi-dimensional puppet having one or more attributes of an individual, wherein the dynamic virtual representation is configured to automatically mimic one or more attributes of the individual in a context,
wherein the providing of the dynamic virtual representation comprising the multi-dimensional puppet involves rendering of the multi-dimensional puppet,
wherein the multi-dimensional puppet comprises stereopsis information, and has a photorealistic movement corresponding to movement behaviors of the individual, and
wherein the multi-dimensional puppet comprises: a 3D rig having a shape corresponding to at least a shape of a head and neck of the individual; a neutral layer corresponding to a look and color of at least the face and the neck of the individual; a core region overlay layer with 2D bitmaps for portions of the face and the neck of the individual; and a specular overlay layer that reproduces specular highlights of the individual;
receiving an input corresponding to a user mood and corresponding to user spatial manipulation of or interaction with the multi-dimensional puppet, wherein the input does not explicitly indicate the user mood;
determining, using at least a behavioral agent in the group of behavioral agents, the user mood based at least in part on the input; and
providing, based at least in part on the predetermined parameters, the configuration information, the group of behavioral agents, the determined user mood, and the input, the dynamic virtual representation comprising a revised multi-dimensional puppet having the one or more attributes.

US Pat. No. 10,922,865

INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND PROGRAM

SONY CORPORATION, Tokyo ...

1. A mobile apparatus comprising:an imaging unit configured to capture a through image;
a display unit configured to display the through image; and
a processor configured to:
acquire mesh data of at least one first face image included in the through image on a basis of feature points of the first face image;
acquire texture data of at least one second face image different from the first face image;
determine whether the through image includes a plurality of the first face images; and
control, on a basis of the determination that the through image includes the plurality of the first face images, the display unit to display the texture data of the second face image over each of the plurality of the first face images to correspond to the mesh data of the plurality of the first face images.

US Pat. No. 10,922,864

IMAGE PROCESSING DEVICE, IMAGE PROCESSING METHOD AND PROGRAM, FOR OBJECT DETECTION IN AN IMAGE

Canon Kabushiki Kaisha, ...

1. An image processing device comprising:an obtaining unit configured to obtain image data; and
a control unit configured to cause a display unit to display an image, based on the image data, on which one or more first marks each indicating an object detected in the image are superimposed and to display a numeral corresponding to a number of the one or more first marks superimposed on the image,
wherein, if information about a user operation for adding one or more second marks to be superimposed on the image displayed by the display unit is obtained, the control unit causes the display unit to display an updated numeral corresponding to a total number of the number of the one or more first marks superimposed on the image and a number of the one or more second marks added according to the user operation and superimposed on the image.

US Pat. No. 10,922,863

SYSTEMS AND METHODS FOR EFFICIENTLY GENERATING AND MODIFYING AN OUTLINE OF ELECTRONIC TEXT

ADOBE INC., San Jose, CA...

1. A method comprising, by a processor:identifying one or more instances of each of a plurality of unique graphical glyph objects displayed via a graphical user interface, wherein the instances of a particular unique graphical glyph object include a first instance of the particular unique graphical glyph object and a second instance of the particular unique graphical glyph object;
obtaining data indicating a glyph identifier associated with the particular unique graphical glyph object;
retrieving, from a cache, an outline associated with the glyph identifier, the outline corresponding to the particular unique graphical glyph object and having fixed dimensions and an origin position relative to a reference point, the outline comprising at least a first value defining first shape information of a first region of the particular unique graphical glyph object and a second value defining second shape information of a second region of the particular unique graphical glyph object;
determining a transformation matrix, the transformation matrix used to adjust the outline, wherein for each respective instance of the particular unique graphical glyph object, the transformation matrix transforms the outline such that the fixed dimensions are scaled to match dimensions of the respective instance of the particular unique graphical glyph object and the origin position is changed to match a respective position of the respective instance of the particular unique graphical glyph object on the graphical user interface;
generating base art data associated with the particular unique graphical glyph object comprising the outline and the transformation matrix;
converting, using the base art data associated with the particular unique graphical glyph object, the first instance of the particular unique graphical glyph object into the outline;
receiving first user input indicating a selection and modification of the outline, the modification of the outline comprising at least changing the first value defining the first shape information of the first region of the particular unique graphical glyph object independently of the second value defining the second shape information of the second region; and
in response to determining that the modification of the outline is to be applied to the second instance of the particular unique graphical glyph object, modifying the outline in the base art data and updating, based on the base art data, a display of the second instance of the particular unique graphical glyph object to correspond to the modified outline.
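The cached-outline scheme can be sketched with a single shared outline and per-instance affine transforms. The triangle outline and scale/translation values are illustrative; the point is that editing the cached outline once updates every instance.

```python
cache = {"glyph_A": [(0, 0), (10, 0), (5, 20)]}   # outline with fixed dims

def transform(outline, sx, sy, tx, ty):
    """Apply one instance's transformation matrix (scale + translation of
    the origin) to the cached outline."""
    return [(x * sx + tx, y * sy + ty) for x, y in outline]

instances = [dict(sx=1.0, sy=1.0, tx=0.0, ty=0.0),     # first instance
             dict(sx=2.0, sy=2.0, tx=100.0, ty=0.0)]   # second instance

# user edit: move the apex of the cached outline; both instances follow
cache["glyph_A"][2] = (5, 25)
rendered = [transform(cache["glyph_A"], **m) for m in instances]
print(rendered)
```

Storing only one outline per unique glyph plus a cheap matrix per instance is what makes outline generation and modification efficient at scale.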

US Pat. No. 10,922,862

PRESENTATION OF CONTENT ON HEADSET DISPLAY BASED ON ONE OR MORE CONDITION(S)

Lenovo (Singapore) Pte. L...

1. A headset, comprising:a housing;
at least one processor coupled to the housing;
a camera coupled to the housing and accessible to the at least one processor;
an at least partially transparent first display coupled to the housing and accessible to the at least one processor; and
storage coupled to the housing and accessible to the at least one processor, the storage comprising instructions executable by the at least one processor to:
receive data from a second device, the second device controlling a second display, the second device being different from the headset, the data from the second device indicating first text;
identify at least one condition, the at least one condition comprising the first text not matching, to within a threshold confidence level, second text presented on the second display as recognized by the headset using the camera and optical character recognition (OCR);
based on the first text not matching the second text to within the threshold level of confidence, transmit a request for first content; and
present, based on receipt of the first content, the first content on the first display, the first content corresponding to second content presented on the second display.

US Pat. No. 10,922,861

METHOD AND APPARATUS FOR CORRECTING DYNAMIC MODELS OBTAINED BY TRACKING METHODS

Koninklijke Philips N.V.,...

1. A method for identifying and correcting errors in dynamic models of moving structures captured in images, the dynamic models having been obtained by tracking methods, wherein the method comprises the following steps:a) providing a time series of images recorded successively in time, the moving structure being imaged at least in part in the images;
b) providing a dynamic model of the moving structure, the dynamic model having been obtained by a tracking method applied to the images of the time series and wherein the dynamic model is registered with the images;
c) following step b, determining a position time section of the images, wherein the position time section comprises a one-dimensional section of the images that intersects the moving structure in at least one of the images;
d) providing a position-time representation of the position time section in the images of the time series by plotting, over time, image values of the position time section of each image of the time series, combined with a representation of the dynamic model as at least one computer graphical object;
e) comparing the computer graphical object with surrounding image content of the position-time representation; and
f) providing an option for correcting the dynamic model by editing the at least one computer graphical object and for transferring any changes made to the computer graphical object to the dynamic model.

US Pat. No. 10,922,860

LINE DRAWING GENERATION

Adobe Inc., San Jose, CA...

1. A computer-implemented method performed by one or more processing devices, the method comprising:inputting a digital image into a first neural network;
generating, using the first neural network, a first digital line drawing based on contents of the digital image;
inputting the first digital line drawing, rather than the digital image, into a second neural network;
generating, using the second neural network, a second digital line drawing based on the first digital line drawing, wherein the second digital line drawing is a two-tone image and is different from the first digital line drawing; and
outputting the second digital line drawing as a line drawing corresponding to the digital image.
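The two-stage structure, where the second network sees only the first drawing and never the photograph, can be mimicked with trivial stand-ins. Gradient magnitude and thresholding below replace the two trained networks purely for illustration.

```python
def first_stage(image):
    """Stand-in for the first network: rough edge response per row."""
    return [[abs(row[i + 1] - row[i]) for i in range(len(row) - 1)]
            for row in image]

def second_stage(drawing, thresh=0.5):
    """Stand-in for the second network: consumes only the first drawing
    and emits a clean two-tone (0/1) line drawing."""
    return [[1 if v > thresh else 0 for v in row] for row in drawing]

image = [[0.0, 0.0, 1.0, 1.0],
         [0.0, 0.1, 0.9, 1.0]]
rough = first_stage(image)        # first digital line drawing
clean = second_stage(rough)       # second, two-tone digital line drawing
print(clean)
```

Separating the stages lets the clean-up network specialise in drawing quality independently of image content, which is the apparent motivation for feeding it the drawing rather than the image.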

US Pat. No. 10,922,859

VECTOR ART OBJECT DEFORMATION TECHNIQUES

Adobe Inc., San Jose, CA...

1. In a digital medium vector art rendering environment, a method implemented by a computing device, the method comprising:receiving, by the computing device, a vector graph of a digital image, the vector graph including an anchor node and a leaf node, the anchor node defining an anchor point and the leaf node defining a vector art object connected to the anchor point, the anchor point configured to control curvature of the vector art object;
displaying, by the computing device, the digital image in a user interface;
receiving, by the computing device, a user input specifying movement of the anchor point in the user interface from a first location to a second location;
determining, by the computing device, a displacement value of the anchor point based on the user input;
identifying, by the computing device responsive to the receiving of the user input, the vector art object as being associated with the anchor point based on the vector graph;
determining, by the computing device responsive to the identifying, a deformation value of the vector art object based on the displacement value;
deforming, by the computing device, the vector art object based on the determined deformation value; and
outputting, by the computing device, the digital image in the user interface with the vector art object at a location based on the deformation and the anchor point at the second location.
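The displacement-to-deformation chain in the claim can be sketched with a linear falloff. The graph layout and falloff function are illustrative assumptions; the claim specifies only that the deformation value derives from the anchor's displacement value.

```python
graph = {"anchor1": {"point": (0.0, 0.0),
                     "object": [(0.0, 0.0), (5.0, 0.0), (10.0, 0.0)]}}

def move_anchor(name, new_pos, falloff=10.0):
    """Displace an anchor, look up its vector art object in the graph,
    and deform the object's points with distance-weighted displacement."""
    node = graph[name]
    ax, ay = node["point"]
    dx, dy = new_pos[0] - ax, new_pos[1] - ay        # displacement value
    deformed = []
    for px, py in node["object"]:
        dist = abs(px - ax) + abs(py - ay)
        w = max(0.0, 1.0 - dist / falloff)           # deformation weight
        deformed.append((px + dx * w, py + dy * w))
    node["point"], node["object"] = new_pos, deformed
    return deformed

deformed = move_anchor("anchor1", (0.0, 4.0))
print(deformed)
```

Points near the anchor follow it fully while distant points barely move, giving the curvature-control behaviour the claim attributes to the anchor point.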

US Pat. No. 10,922,858

DISPLAY APPARATUS, DISPLAY METHOD, AND RECORDING MEDIUM

YOKOGAWA ELECTRIC CORPORA...

1. A display apparatus comprising:an operation input device that receives an operation signal, and designates a display range of an observed value with coordinates specified by the operation signal;
an analysis condition setting device that determines, as an analysis range or an analysis parameter of the observed value, a predetermined proportion, that ranges from 5% to 20%, of a number of samples included in the display range, wherein the larger the display range, the larger the predetermined proportion;
a computation device that analyzes a waveform or a trend of the observed value based on the analysis range or the analysis parameter; and
a display screen generation device that causes a display device to display a waveform or a straight line as a computation result of the computation device.
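A minimal sketch of the claimed 5%-to-20% proportion rule, assuming the proportion grows linearly with the display range up to a hypothetical `max_samples` cap:

```python
def analysis_sample_count(display_samples, min_p=0.05, max_p=0.20,
                          max_samples=10_000):
    """Analysis range as a proportion of the displayed samples: the larger
    the display range, the larger the proportion, clamped to 5%-20%.
    The linear scaling and max_samples cap are assumptions."""
    scale = min(display_samples / max_samples, 1.0)
    proportion = min_p + (max_p - min_p) * scale
    return int(display_samples * proportion)

analysis_sample_count(10_000)   # full display range uses the 20% proportion
```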

US Pat. No. 10,922,857

ELECTRONIC DEVICE AND OPERATION METHOD FOR PERFORMING A DRAWING FUNCTION

Samsung Electronics Co., ...

1. An electronic device comprising:a display; and
at least one processor,
wherein the at least one processor is configured to:
receive input information for generating at least part of a first object in a first area of the display,
recognize the first object based on the received input information,
identify an image associated with the first object,
divide the image associated with the object into a plurality of cells,
obtain a plurality of color values with respect to each of the plurality of cells,
obtain a plurality of colors, each of which is most frequently included in a corresponding cell of the plurality of cells, based on the plurality of color values,
identify a frequency for each of the plurality of colors,
store information on at least one color, among the plurality of colors, that has an identified frequency greater than or equal to a threshold frequency,
control the display to display the at least one color, wherein the at least one color is displayed in a second area, different from the first area, of the display,
receive a user input for displaying a silhouette of the image while the at least one color is displayed, and
control the display to display, in the first area of the display, the silhouette of the image for guiding a drawing to be inputted.
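The cell-division and color-frequency steps of this claim can be sketched as below; the grid-based cell split, the function names, and the threshold rule are assumptions for illustration.

```python
from collections import Counter

def dominant_cell_colors(image, rows, cols):
    """Divide a 2-D grid of color values into rows x cols cells and take
    each cell's most frequently occurring color."""
    h, w = len(image), len(image[0])
    colors = []
    for r in range(rows):
        for c in range(cols):
            cell = [image[y][x]
                    for y in range(r * h // rows, (r + 1) * h // rows)
                    for x in range(c * w // cols, (c + 1) * w // cols)]
            colors.append(Counter(cell).most_common(1)[0][0])
    return colors

def palette(cell_colors, threshold):
    """Keep only colors whose frequency across cells meets the threshold."""
    return {col for col, n in Counter(cell_colors).items() if n >= threshold}
```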

US Pat. No. 10,922,856

SYSTEMS AND METHODS FOR CORRECTING PROJECTION IMAGES IN COMPUTED TOMOGRAPHY IMAGE RECONSTRUCTION

SHANGHAI UNITED IMAGING H...

1. A system, comprising:a storage device storing a set of instructions; and
at least one processor in communication with the storage device, wherein when executing the set of instructions, the at least one processor is configured to cause the system to:
obtain a plurality of projection images of a subject, the plurality of projection images being generated according to scan data acquired by a CT scanner at a plurality of gantry angles, each of the plurality of projection images corresponding to one of the plurality of gantry angles; and
correct a first projection image of the plurality of projection images according to a process for generating a corrected projection image, the first projection image corresponding to a first gantry angle of the plurality of gantry angles, the process including:
performing, based on the first projection image corresponding to the first gantry angle and a second projection image of the plurality of projection images, the second projection image corresponding to a second gantry angle of the plurality of gantry angles, a first correction on the first projection image to generate a preliminary corrected first projection image; and
performing, based on at least part of the preliminary corrected first projection image, a second correction on the preliminary corrected first projection image to generate a corrected first projection image corresponding to the first gantry angle.

US Pat. No. 10,922,855

SYSTEMS AND METHODS FOR DETERMINING AT LEAST ONE ARTIFACT CALIBRATION COEFFICIENT

SHANGHAI UNITED IMAGING H...

1. A method implemented on a computing device having at least one storage device storing a set of instructions for determining at least one artifact calibration coefficient, and at least one processor in communication with the at least one storage device, comprising:obtaining, by the at least one processor, preliminary projection values of a first object generated based on radiation rays that are emitted from a radiation emitter and passed through the first object, the radiation rays being detected by at least one radiation detector;
generating, by the at least one processor, a preliminary image of the first object based on the preliminary projection values of the first object;
generating, by the at least one processor, calibrated projection values of the first object based on the preliminary image;
determining, by the at least one processor, a relationship between the preliminary projection values and the calibrated projection values;
for each of the at least one radiation detector,
determining, by the at least one processor, a location of the radiation detector; and
determining, by the at least one processor, an artifact calibration coefficient corresponding to the radiation detector based on the relationship between the preliminary projection values and the calibrated projection values and the location of the radiation detector;
wherein the generating the calibrated projection values of the first object based on the preliminary image includes:
obtaining a cross-section equation of the first object based on the preliminary image of the first object;
obtaining a series of scanning equations, wherein each of the series of scanning equations is associated with one of the radiation rays; and
determining the calibrated projection values of the first object based on the cross-section equation and the series of scanning equations of the first object.

US Pat. No. 10,922,854

CT IMAGING

Neusoft Medical Systems C...

1. A method of Computed Tomography (CT) imaging in a CT system that includes a CT console, a CT scanner, and an image reconstruction computer, the method comprising:detecting, by the CT console, that a parameter adjustment instruction is received from a user during a scanning process in which a preview image is generated by the image reconstruction computer based on scan parameters and displayed to the user, wherein the preview image is generated based on at least a portion of raw data obtained by the CT scanner when the scanning process has not been completed;
sending, by the CT console, adjusted scan parameters to at least one of the CT scanner or the image reconstruction computer, wherein the adjusted scan parameters are determined according to the preview image; and
controlling, by the CT console, the CT scanner and the image reconstruction computer to generate a new preview image based on the adjusted scan parameters when the scanning process is re-executed and has not been completed.

US Pat. No. 10,922,853

REFORMATTING WHILE TAKING THE ANATOMY OF AN OBJECT TO BE EXAMINED INTO CONSIDERATION

SIEMENS HEALTHCARE GMBH, ...

1. A method of imaging a three-dimensional object to be examined, the method comprising:creating, via a processor, a conformal three-dimensional parameterized surface, based on an anatomical structure of the three-dimensional object to be examined;
imaging, via the processor, image points associated with the created conformal three-dimensional parameterized surface onto a two-dimensional parameterized surface; and
mapping, via the processor, a two-dimensional representation that conforms to the conformal three-dimensional parameterized surface of the three-dimensional object to be examined.

US Pat. No. 10,922,852

OIL PAINTING STROKE SIMULATION USING NEURAL NETWORK

Adobe Inc., San Jose, CA...

1. A computer-implemented method to simulate a painting brush stroke, the method comprising:inferring, by a trained neural network having (i) a first input configured to receive a bristle trajectory map that represents a new painting brush stroke and (ii) a second input configured to receive a first height map of existing paint on a canvas, a second height map of existing paint on the canvas after the new painting brush stroke is applied to the canvas, wherein the first height map of existing paint on the canvas is indicative of, for at least a pixel location, a thickness of the existing paint for the pixel location that extends above the canvas surface in the z-direction before the new painting brush stroke is applied to the canvas; and
generating, by a render module, a rendering of the new painting brush stroke based on the second height map of existing paint on the canvas after the new painting brush stroke is applied to the canvas and a color map.

US Pat. No. 10,922,851

VIRTUAL REALITY ENVIRONMENT COLOR AND CONTOUR PROCESSING SYSTEM

The Boeing Company, Chic...

1. A method for generating a virtual reality environment, the method comprising:identifying data requirements for generating the virtual reality environment, wherein the virtual reality environment includes an object that is displayed on a display system using a group of computer aided design models defined by a group of standards, wherein the object comprises portions defined by continuous geometry;
generating discrete points for the object from the group of computer aided design models based on the data requirements for generating the virtual reality environment for a selected point in time;
using the discrete points to display the object on the display system;
selecting a minimum size, wherein the minimum size is a smallest visible portion of the object displayed on a surface of the object;
receiving a selection, from options displayed on the display system, comprising a group of color adjustments for a portion of the discrete points for the selected point in time while the object is displayed on the display system;
increasing a number of frames per second displayed in a training environment for the group of computer aided design models via modifying, in real time while the object is displayed on the display system, the discrete points using the group of color adjustments identified, thereby forming modified discrete points; and
using, while the object is displayed on the display system, the modified discrete points to change the display of the object on the display system.

US Pat. No. 10,922,850

AUGMENTED REALITY SYSTEM FOR PERSONA SIMULATION

1. A computer-implemented augmented reality essence generation platform comprising:a processor;
a capture device that captures real-world interaction data at a geographical location, the real-world interaction data including a real-world experience between a first user in an active state and a second user in an active state at a first time period, the real-world interaction data including one or more data types;
an interaction and location synchronization engine that synchronizes the real-world interaction data between the active first user and the active second user at the geographical location during the first time period;
an essence generation engine that generates, via the processor, a virtual persona model of the active second user based upon the collected data from the synchronization engine at a second time period subsequent to the first time period in which the second user is now in an inactive state and generates augmented reality interaction data between the active first user and the inactive second user with location data corresponding to the geographical location that was captured from the capture device;
a neural network engine that generates, via the processor, a neural network that simulates, during the second time period, a virtual persona of the second inactive user based on the virtual persona model during a virtual interaction between the active first user and a virtual representation of the inactive second user; and
an augmented reality engine that generates, via the processor, an augmented reality experience by overlaying virtual imagery corresponding to the simulated virtual persona of the second inactive user over real-world imagery viewed through an augmented reality computing device of the first active user at the geographical location during the second time period.

US Pat. No. 10,922,849

GRID RETAINING IRREGULAR NETWORK IN 3D

The Government of the Uni...

1. A method for compressing data, the method comprising:forming, using a compressor device, a mesh based on the data, wherein the mesh comprises a plurality of triangles;
tessellating, using the compressor device, the plurality of triangles to form a right-triangulated irregular network (RTIN) structure being capable of being fully reduced to two triangles, the RTIN structure including a plurality of points and a plurality of edges;
compressing, using the compressor device, the data using the RTIN structure;
producing, using the compressor device, a unique index number for every point within the RTIN structure;
determining, using the compressor device, whether a difference between each point in the RTIN structure and a corresponding point in the data exceeds an error threshold; and
for each point in the RTIN structure, if the difference between the point and the corresponding point in the data exceeds the error threshold, recording the produced index number of the point in the RTIN structure and the value of that point in the data in a keep list.
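The final keep-list step can be sketched as follows; the dict-based point representation and names are assumptions, not the patent's data layout.

```python
def build_keep_list(rtin_values, data_values, index_of, error_threshold):
    """Record (index number, source value) for every RTIN point whose
    approximation deviates from the source data beyond the threshold."""
    keep = []
    for point, approx in rtin_values.items():
        if abs(approx - data_values[point]) > error_threshold:
            keep.append((index_of[point], data_values[point]))
    return keep

rtin = {(0, 0): 1.0, (1, 0): 2.5}    # values approximated by the RTIN mesh
data = {(0, 0): 1.05, (1, 0): 3.2}   # original data values
idx = {(0, 0): 0, (1, 0): 1}         # unique index number per point
```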

US Pat. No. 10,922,848

PIXEL STORAGE FOR GRAPHICAL FRAME BUFFERS

AVAGO TECHNOLOGIES INTERN...

1. A device comprising:at least one processor configured to:
obtain a plurality of data units containing a plurality of pixels stored in memory, each of the plurality of data units including a first pixel of the plurality of pixels packed in succession with at least a portion of a second pixel of the plurality of pixels, the plurality of pixels being represented by a number of bits;
obtain a group of pixels from the plurality of pixels; and
store the group of pixels using a targeted number of bits,
wherein the at least one processor is configured to:
select a pixel format for the group of pixels based on a storage format;
pack the group of pixels per the selected pixel format;
determine that pixels of the group of pixels require padding to be added to conform to the pixel format;
add padding to the pixels of the group of pixels per the pixel format; and
store the pixels of the group of pixels.
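The pack-and-pad behavior of this claim can be sketched as below; the word size, zero padding, and divisibility assumption are illustrative choices, not the patented format.

```python
def pack_pixels(pixels, bits_per_pixel, word_bits=32):
    """Pack fixed-width pixel values into word-aligned storage, zero-padding
    the final word so the group conforms to the pixel format.
    Assumes bits_per_pixel evenly divides word_bits."""
    words, acc, filled = [], 0, 0
    for p in pixels:
        acc = (acc << bits_per_pixel) | p
        filled += bits_per_pixel
        if filled == word_bits:
            words.append(acc)
            acc, filled = 0, 0
    if filled:                          # padding added to conform
        words.append(acc << (word_bits - filled))
    return words
```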

US Pat. No. 10,922,847

ENCODING APPARATUS, DECODING APPARATUS, AND NON-TRANSITORY COMPUTER READABLE MEDIUM STORING PROGRAM

FUJI XEROX CO., LTD., To...

1. An encoding apparatus comprising:a processor, configured to:
perform an encoding process comprising encoding voxel data representing a solid to be modeled and determining codes, from among a plurality of predictions which predict a value of a voxel of interest based on values of one or more reference voxels around the voxel of interest, based on a prediction which makes a correct prediction about the value of the voxel of interest;
acquire a modeling direction, wherein the modeling direction is a direction in which the solid to be modeled grows; and
control the encoding process based on the modeling direction.

US Pat. No. 10,922,846

METHOD, DEVICE AND SYSTEM FOR IDENTIFYING LIGHT SPOT

GUANGDONG VIRTUAL REALITY...

1. A method for identifying a light spot, comprising:receiving a first image corresponding to a light spot image, wherein the first image is an image of the light spot image displayed in a first color space;
converting the first image into a second image, wherein the second image is an image of the light spot image displayed in a second color space; and
identifying the light spot with a target color in the second image according to a preset color identifying condition of the second color space, wherein the color identifying condition comprises a plurality of sets of threshold intervals of color parameters, each set of the threshold intervals of the color parameters corresponds to a given color, and each set of the threshold intervals of the color parameters comprises a plurality of threshold intervals of the color parameter, the color parameter being defined by the second color space, and a threshold of each threshold interval is determined by the given color.
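The threshold-interval test can be sketched with RGB as the first color space and HSV as the second; the specific intervals and the `CONDITIONS` table are illustrative assumptions.

```python
import colorsys

# Hypothetical color identifying condition: one set of (H, S, V) threshold
# intervals per target color; interval values here are illustrative only.
CONDITIONS = {
    "red": [(0.0, 0.05), (0.5, 1.0), (0.5, 1.0)],
}

def identify_spot(rgb, target):
    """Convert a first-color-space (RGB) value into the second color space
    (HSV) and test it against the target color's threshold intervals."""
    h, s, v = colorsys.rgb_to_hsv(*(c / 255 for c in rgb))
    return all(lo <= x <= hi
               for x, (lo, hi) in zip((h, s, v), CONDITIONS[target]))
```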

US Pat. No. 10,922,845

APPARATUS AND METHOD FOR EFFICIENTLY TRAINING FEATURE DETECTORS

HERE GLOBAL B.V., Eindho...

1. An apparatus comprising at least one processor and at least one non-transitory memory including computer program code instructions, the computer program code instructions configured to, when executed, cause the apparatus to:cause at least one feature from one or more images that have been labelled to be projected onto a map;
refine a representation of a path of a vehicle that carries a camera that captured the one or more images based upon registration of the at least one feature that has been projected onto the map;
based upon the representation of the path of the vehicle following refinement, project one or more other features that have not been labelled from the map into the one or more images to automatically generate a label of the one or more other features; and
train a feature detector to identify at least one of the one or more other features based upon the label automatically generated by the projection of the one or more other features from the map into the one or more images.

US Pat. No. 10,922,844

IMAGE POSITIONING METHOD AND SYSTEM THEREOF

Industrial Technology Res...

1. An image positioning method, comprising:obtaining, by a processor, world coordinates of two reference points and image coordinates of two projection points in a first image corresponding to the two reference points, wherein the first image is obtained through a camera;
calculating, by a processor, a plurality of coordinate transformation parameters relative to transformation between any image coordinates of two dimensions in the first image and any world coordinates of three dimensions corresponding to the camera according only to the world coordinates of the two reference points, the image coordinates of the two projection points, and world coordinates of the camera, wherein the coordinate transformation parameters comprise a focal length, a first rotation angle, a second rotation angle and a third rotation angle;
obtaining, by a processor, a second image through the camera, wherein the second image comprises an object image corresponding to an object; and
positioning, by a processor, world coordinates of the object according to the coordinate transformation parameters,
wherein the step of calculating the plurality of coordinate transformation parameters relative to the transformation between any image coordinates and any world coordinates corresponding to the camera according only to the world coordinates of the two reference points, the image coordinates of the two projection points, and the world coordinates of the camera further comprises:
determining, by a processor, a plurality of reference distances between a lens of the camera and the two reference points according to the world coordinates of the two reference points and the world coordinates of the camera, the reference distances comprising the reference distance between the lens of the camera and a first reference point, the reference distance between the lens of the camera and a second reference point, and the reference distance between the first reference point and the second reference point;
determining, by a processor, a plurality of projection distances between a first image central point and the two projection points according to image coordinates of the first image central point and the image coordinates of the two projection points, the projection distances comprising the projection distance between the first image central point and a first projection point, the projection distance between the first image central point and a second projection point, and the projection distance between the first projection point and the second projection point; and
calculating, by the processor, the focal length in the coordinate transformation parameters according to the reference distances and the projection distances.
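For intuition on the distance and focal-length steps, here is a heavily simplified pinhole relation for two reference points at equal depth; the claim's general two-reference-point solution is more involved, and this fronto-parallel special case is an assumption.

```python
import math

def distance(a, b):
    """Euclidean distance between two coordinate tuples."""
    return math.dist(a, b)

def focal_length_frontoparallel(world_sep, image_sep, depth):
    """Pinhole model x = f * X / Z: for two reference points at the same
    depth Z, the focal length is f = image_sep * Z / world_sep."""
    return image_sep * depth / world_sep

ref_dist = distance((3.0, 0.0, 4.0), (0.0, 0.0, 4.0))   # world separation
f = focal_length_frontoparallel(ref_dist, 150.0, 10.0)  # focal length in px
```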

US Pat. No. 10,922,843

CALIBRATION METHOD AND CALIBRATION DEVICE OF VEHICLE-MOUNTED CAMERA, VEHICLE AND STORAGE MEDIUM

BOE TECHNOLOGY GROUP CO.,...

1. A calibration method for automatically calibrating a vehicle-mounted camera, comprising:obtaining an original image comprising a plurality of first lane lines and captured by the vehicle-mounted camera;
determining in the original image a region of interest (ROI) comprising the plurality of first lane lines;
adjusting a pitch angle of the vehicle-mounted camera by detecting a plurality of second lane lines in a first inverse perspective mapping (IPM) image corresponding to the ROI, the plurality of second lane lines corresponding to the plurality of first lane lines; and
adjusting a yaw angle of the vehicle-mounted camera by detecting an IPM binary image of a second IPM image corresponding to the ROI,
wherein the adjusting the pitch angle of the vehicle-mounted camera by detecting the plurality of second lane lines in the first IPM image comprises:
adjusting the pitch angle of the vehicle-mounted camera by detecting a relationship between the plurality of second lane lines in the first IPM image, to obtain a final first IPM image wherein a plurality of second lane lines in the final first IPM image are parallel to each other and correspond to the plurality of first lane lines;
the adjusting the pitch angle of the vehicle-mounted camera by detecting the relationship between the plurality of second lane lines in the first IPM image to obtain the final first IPM image comprises:
executing a first set operation, wherein the first set operation comprises: determining an intermediate first IPM image based on the current pitch angle and an initial yaw angle of the vehicle-mounted camera, and detecting whether a plurality of second lane lines in the intermediate first IPM image are parallel to each other, wherein the plurality of second lane lines in the intermediate first IPM image correspond to the plurality of first lane lines;
adjusting the current pitch angle and re-executing the first set operation, in a case where the plurality of second lane lines in the intermediate first IPM image are not parallel to each other, until the plurality of second lane lines in the intermediate first IPM image are parallel to each other and the final first IPM image is obtained,
wherein the intermediate first IPM image comprises a DLD feature, and the detecting whether the plurality of second lane lines in the intermediate first IPM image are parallel to each other comprises:
determining the plurality of second lane lines in the intermediate first IPM image, and further the determining the plurality of second lane lines in the intermediate first IPM image comprises:
obtaining a dark-light-dark (DLD) feature image by extracting the DLD feature from the intermediate first IPM image;
obtaining a DLD binary image by performing a binarization process on the DLD feature image; and
determining the plurality of second lane lines in the intermediate first IPM image by performing straight-line detection on the DLD binary image.

US Pat. No. 10,922,842

SYSTEM AND METHOD FOR MACHINE VISION FEATURE DETECTION AND IMAGING OVERLAY

JX Imaging Arts, LLC, Co...

1. A method for generating and displaying an overlay geometric model graphic of a physical object in an image comprising:acquiring a first image of the physical object, wherein at least a portion of the physical object is spherical;
selecting a first region of interest in the first image;
performing a circular random sample consensus operation on the first region of interest to detect a reference circle;
cropping the first region of interest to the image portion inside the reference circle;
performing a first semiellipse random sample consensus operation on the cropped first region of interest to detect a first primary semiellipse;
computing a first set of orientation parameters of the object from geometric parameters of the first primary semiellipse, wherein the first primary semiellipse indicates at least a portion of a first primary great circle discernible on the physical object;
generating an overlay geometric model graphic based upon the first set of orientation parameters; and
displaying the overlay geometric model graphic combined with the first image.
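A minimal circular random sample consensus, as named in the claim, might look like the following sketch; the iteration count, inlier tolerance, and exact three-point circumcircle fit are assumptions.

```python
import math
import random

def circle_from_3(p1, p2, p3):
    """Circumcircle (center, radius) of three points; None if collinear."""
    ax, ay = p1; bx, by = p2; cx, cy = p3
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    if abs(d) < 1e-12:
        return None
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    return (ux, uy), math.hypot(ax - ux, ay - uy)

def ransac_circle(points, iters=200, tol=1.0, seed=0):
    """Repeatedly fit circles to random 3-point samples and keep the fit
    with the most inliers (points within tol of the circle)."""
    rng = random.Random(seed)
    best, best_inliers = None, 0
    for _ in range(iters):
        fit = circle_from_3(*rng.sample(points, 3))
        if fit is None:
            continue
        (cx, cy), r = fit
        inliers = sum(abs(math.hypot(x - cx, y - cy) - r) <= tol
                      for x, y in points)
        if inliers > best_inliers:
            best, best_inliers = fit, inliers
    return best
```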

US Pat. No. 10,922,841

IMAGE PROCESSING DEVICE, IMAGE PROCESSING METHOD, AND IMAGE PROCESSING PROGRAM

SHARP KABUSHIKI KAISHA, ...

1. An image processing device comprising one or more processors configured to cause the device to function as:a corresponding point searching circuitry configured to search a corresponding image for a corresponding point corresponding to a measurement point on a reference image obtained by capturing an image of a subject, the corresponding image being obtained by capturing an image of the subject from a point of view different from a point of view for the reference image;
a corresponding point adjusting circuitry that causes a display device to display a measurement point peripheral image that is an image of an area in a periphery of the measurement point extracted from the reference image and a corresponding point peripheral image that is an image of an area in a periphery of the corresponding point extracted from the corresponding image, the measurement point peripheral image and the corresponding point peripheral image being displayed side by side in a direction orthogonal to an epipolar line of the reference image and the corresponding image, and is configured to adjust a position of the corresponding point based on an instruction input to an input device; and
a calculating circuitry configured to calculate three-dimensional coordinates of the measurement point on the subject based on a position of the measurement point on the reference image and the position of the corresponding point on the corresponding image.

US Pat. No. 10,922,840

METHOD AND APPARATUS FOR LOCALIZATION OF POSITION DATA

HERE Global B.V., Eindho...

15. A computer program product for position localization, the computer program product comprising at least one non-transitory computer-readable storage medium having computer-executable program code instructions stored therein, the computer-executable program code instructions comprising program code instructions for:receiving observed feature representation data, wherein the observed feature representation data represents an observed feature representation captured by a sensor at a first time, wherein the observed feature representation comprises an environment feature affected by a first feature decay;
transforming the observed feature representation data into standardized feature representation data utilizing a trained localization neural network, wherein the standardized feature representation data represents a standardized feature representation comprising the environment feature affected by a second feature decay, wherein the environment feature affected by the second feature decay approximates the environment feature affected by a third feature decay associated with a map feature representation captured at a second time;
comparing, utilizing a comparison function, the standardized feature representation data and map feature representation data, wherein the map feature representation data represents the map feature representation captured at the second time; and
identifying localized position data based on the comparison of the standardized feature representation data and the map feature representation data, wherein the localized position data represents a localized position.

US Pat. No. 10,922,839

LOCATION OBTAINING SYSTEM, LOCATION OBTAINING DEVICE, LOCATION OBTAINING METHOD, AND NON-TRANSITORY COMPUTER-READABLE RECORDING MEDIUM

CASIO COMPUTER CO., LTD.,...

1. A location obtaining system comprising:first light-emitting devices configured to emit light, each first light-emitting device of the first light-emitting devices disposed in a prescribed space;
a specifier configured to specify locations within the prescribed space of the first light-emitting devices;
a plurality of imaging devices configured to capture images of the prescribed space from mutually different directions; and
an obtainer configured to obtain an installation location or a shooting direction of each imaging device of the plurality of imaging devices based on (i) positions on the images of lights of the first light-emitting devices included in common in the images captured by the plurality of imaging devices, and (ii) locations within the prescribed space of the first light-emitting devices specified by the specifier,
wherein:
the first light-emitting devices emit light modulated by information by which a location of the local device within the prescribed space is uniquely specifiable,
the location obtaining system further comprises a decoder configured to decode the lights of the first light-emitting devices included in common in the images captured by the plurality of imaging devices into a location of the local device for each of the first light-emitting devices, and
the specifier specifies the location of the local device of each of the first light-emitting devices decoded by the decoder as a location within the prescribed space.

US Pat. No. 10,922,838

IMAGE DISPLAY SYSTEM, TERMINAL, METHOD, AND PROGRAM FOR DISPLAYING IMAGE ASSOCIATED WITH POSITION AND ORIENTATION

NEC CORPORATION, Tokyo (...

1. An image display system comprising:at least one memory configured to store instructions; and
at least one processor configured to execute the instructions to:
acquire information including a position and an orientation of a mobile terminal; and
compare the position and the orientation of the mobile terminal directly with a position and an orientation associated with an image stored in a storage device in the past to acquire the image based on a comparison result of comparing the position and the orientation of the mobile terminal with the position and the orientation associated with the image;
calculate a similarity degree indicating resemblance between the position and the orientation of the mobile terminal and the position and the orientation associated with the image;
acquire the image based on the similarity degree; and
calculate the similarity degree so that, between the mobile terminal and the image, a shift of the orientation has more influence on the resemblance than a shift of the position and a smaller shift of the orientation results in higher resemblance.
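The similarity-degree requirement can be sketched as a weighted reciprocal score; the weights and the reciprocal form are assumptions, chosen only so that an orientation shift outweighs a position shift as the claim requires.

```python
import math

def similarity_degree(pos_a, ori_a, pos_b, ori_b, w_pos=1.0, w_ori=5.0):
    """Resemblance score in (0, 1]: an orientation shift is weighted more
    heavily than a position shift, and a smaller shift of the orientation
    results in higher resemblance."""
    d_pos = math.dist(pos_a, pos_b)
    d_ori = abs(ori_a - ori_b) % 360.0
    d_ori = min(d_ori, 360.0 - d_ori)      # wrap-around angle difference
    return 1.0 / (1.0 + w_pos * d_pos + w_ori * d_ori)
```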

US Pat. No. 10,922,837

CLUTTERED BACKGROUND REMOVAL FROM IMAGERY FOR OBJECT DETECTION

THE BOEING COMPANY, Chic...

1. A method comprising:receiving a digital image captured by a visual sensor;
identifying a plurality of target pixels and a plurality of background pixels in the digital image, based on a plurality of pixel velocity values relating to pixel velocity for a first plurality of pixels in the digital image, a plurality of standard deviation values relating to pixel intensity for the first plurality of pixels in the digital image, and a plurality of thresholds relating to the standard deviation values;
generating a binary image related to the digital image, based on the identified plurality of target pixels and the identified plurality of background pixels;
identifying at least one of a location or an orientation of a target in the digital image based on the binary image; and
transmitting a command to a navigation system for a vehicle, to assist in navigating the vehicle toward the target, based on the identified at least one of the location or the orientation of the target.
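The binary-image step can be sketched as a per-pixel test on the velocity and standard-deviation inputs named in the claim; the AND rule and single-threshold form are illustrative assumptions.

```python
def binary_image(velocities, std_devs, vel_threshold, std_threshold):
    """Mark a pixel as target (1) when both its velocity and its intensity
    standard deviation exceed their thresholds, else background (0)."""
    return [[1 if v > vel_threshold and s > std_threshold else 0
             for v, s in zip(v_row, s_row)]
            for v_row, s_row in zip(velocities, std_devs)]
```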

US Pat. No. 10,922,836

METHOD AND SYSTEM FOR DETERMINING A 3D POSITION OF AN OBJECT IN SPACE

CARL ZEISS INDUSTRIELLE M...

1. A method for determining a 3D position of an object in space, the method comprising the steps of:arranging a first specimen of a first artificial marker on the object, wherein the first artificial marker defines a first nominal marker pattern with first nominal characteristics, and wherein the first specimen embodies the first nominal marker pattern with first individual characteristics,
providing at least one camera and a coordinate system relative to the at least one camera,
obtaining a data set that is representative for the first specimen, wherein the data set comprises first measured data values representing the first individual characteristics as individually measured on the first specimen prior to the arranging,
capturing at least one image of the first specimen arranged on the object using the at least one camera, wherein the at least one image comprises an image representation of the first nominal marker pattern as embodied by the first specimen,
detecting and analyzing the image representation using the data set, thereby producing a number of first position values representing the 3D position of the first specimen relative to the coordinate system, and
determining the 3D position of the object on the basis of the first position values.

US Pat. No. 10,922,835

VEHICLE EXTERIOR ENVIRONMENT RECOGNITION APPARATUS AND VEHICLE EXTERIOR ENVIRONMENT RECOGNITION METHOD

SUBARU CORPORATION, Toky...

1. A vehicle exterior environment recognition apparatus, comprising:an object identifier configured to identify an object in a detected region ahead of an own vehicle; and
a barrier setting unit configured to set a barrier located at a closest end of the object, with a relative distance from the object to the own vehicle in a traveling direction of the own vehicle being shortest at the closest end, the barrier being unavoidable by the own vehicle with use of a traveling mode of the own vehicle,
wherein the barrier is a plane that includes the closest end and is perpendicular to the traveling direction of the own vehicle, the barrier including a right end and a left end, the right end including an intersection point with a first additional line that couples a front left end of the own vehicle to a right end of the object, and the left end including an intersection point with a second additional line that couples a front right end of the own vehicle to a left end of the object.

US Pat. No. 10,922,834

METHOD AND APPARATUS FOR DETERMINING VOLUME OF OBJECT

Hangzhou Hikrobot Technol...

1. A method for determining volume of an object, comprising:obtaining a target depth image containing a target object which is captured by a depth image capturing device;
performing segmentation based on depth data in the target depth image to obtain a target image region corresponding to the target object;
determining a target circumscribed rectangle that corresponds to the target image region and meets a predetermined condition, which comprises: determining a target circumscribed rectangle that corresponds to the target image region and has a minimum area, or, determining a target circumscribed rectangle that corresponds to the target image region and has a minimum difference between its area and a predetermined area threshold;
extracting image coordinates of each vertex of the target circumscribed rectangle in a binarized frame difference image, wherein the frame difference image is an image that is obtained based on the difference between the depth data of the target depth image and depth data of a predetermined background depth image;
projecting the extracted image coordinates of each vertex into the target depth image to generate a reference point located in the target depth image;
calculating three-dimensional coordinates for each reference point in a camera world coordinate system according to a principle of perspective projection in camera imaging; and
with the three-dimensional coordinates of the reference points, calculating Euclidean distance between every two of the reference points, determining a length and a width of the target object as two distances other than the longest distance in the calculated Euclidean distances, subtracting a depth value of a region corresponding to the reference points from a depth value of the predetermined background depth image to obtain a height of the target object, and determining a volume of the target object as the product of the length, width and height of the target object.
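The final volume computation of this claim is concrete enough to sketch. The following Python fragment assumes the four rectangle vertices have already been converted to camera-world coordinates; it takes length and width as the two vertex-to-vertex distances other than the longest (the diagonal), and height as the depth difference to the background, exactly as the claim recites.

```python
import itertools
import math

def box_volume(vertices, object_depth, background_depth):
    """Volume of a box from the 3-D coordinates of the four vertices of
    its circumscribed rectangle plus two depth readings."""
    # all six pairwise Euclidean distances, as distinct rounded values
    dists = sorted({round(math.dist(p, q), 6)
                    for p, q in itertools.combinations(vertices, 2)})
    *sides, _diagonal = dists      # drop the longest distance (the diagonal)
    width, length = sides[0], sides[-1]
    # height: background depth minus the depth of the box top
    height = background_depth - object_depth
    return length * width * height

# box top measured at depth 1.0 m, floor at 1.5 m, footprint 3 m x 2 m
corners = [(0, 0, 1), (3, 0, 1), (3, 2, 1), (0, 2, 1)]
volume = box_volume(corners, object_depth=1.0, background_depth=1.5)
```

For a square footprint the two side lengths coincide, which the unpacking above handles naturally.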

US Pat. No. 10,922,833

IMAGE PROCESSING

Apical Ltd., Cambridge (...

1. A method of processing image data representative of an image using a multi-stage system comprising a first neural network for identifying a first image characteristic and a second neural network for identifying a second image characteristic different from the first image characteristic, the method comprising:processing the image data using the first neural network, the processing the image data using the first neural network comprising:
i) processing the image data using a first at least one layer of the first neural network to generate feature data representative of at least one feature of the image, wherein the feature data represents a feature map; and
ii) processing the feature data using a second at least one layer of the first neural network to generate first image characteristic data indicative of whether the image includes the first image characteristic;
transferring the feature data from the first neural network to the second neural network; and
processing the feature data using the second neural network, without processing the image data using the second neural network, to generate second image characteristic data indicative of whether the image includes the second image characteristic.
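A minimal numerical sketch of the feature-sharing idea in this claim, with tiny made-up stand-ins for the two networks (a shared backbone for the "first at least one layer" and two heads); none of these weights or shapes come from the patent:

```python
import numpy as np

rng = np.random.default_rng(0)

W_backbone = rng.standard_normal((8, 4))   # first layers of network 1
W_head1 = rng.standard_normal(4)           # first image characteristic
W_head2 = rng.standard_normal(4)           # second image characteristic

def backbone(image_data):
    # produce the shared feature map (a ReLU layer, for illustration)
    return np.maximum(image_data @ W_backbone, 0.0)

def head(features, w):
    # map the feature map to a characteristic score in (0, 1)
    return 1.0 / (1.0 + np.exp(-(features @ w)))

image_data = rng.standard_normal(8)
features = backbone(image_data)      # computed once from the image data
score1 = head(features, W_head1)     # first network's output
score2 = head(features, W_head2)     # second network reuses `features`
                                     # without touching the image data
```

The key property mirrored here is that the second head never sees `image_data`, only the transferred feature map.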

US Pat. No. 10,922,832

REMOVAL OF PROJECTION NOISE AND POINT-BASED RENDERING

INTEL CORPORATION, Santa...

1. A method, comprising:dividing a first image projection into a plurality of regions, the plurality of regions comprising a plurality of points;
determining an accuracy rating for the plurality of regions;
applying one of a first rendering technique to a first region in the plurality of regions when the accuracy rating for the first region in the plurality of regions fails to meet an accuracy threshold or a second rendering technique to the first region in the plurality of regions when the accuracy rating for the first region in the plurality of regions meets an accuracy threshold;
determining a uniformity of depth parameter for the plurality of points in the plurality of regions, wherein determining a uniformity of depth parameter for a region in the plurality of regions comprises:
determining an average depth parameter for the region; and
determining a standard deviation depth parameter for the region.
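The per-region statistics and the threshold-based choice of rendering technique in this claim can be sketched as follows; the grid split and the 0.8 threshold are illustrative assumptions, not Intel's values.

```python
import numpy as np

def region_stats(depth, rows, cols):
    """Split a depth map into a rows x cols grid of regions and return,
    per region, the average depth and the standard deviation of depth
    (the claim's uniformity-of-depth parameters)."""
    h, w = depth.shape
    stats = []
    for r in np.array_split(np.arange(h), rows):
        row_stats = []
        for c in np.array_split(np.arange(w), cols):
            block = depth[np.ix_(r, c)]
            row_stats.append((block.mean(), block.std()))
        stats.append(row_stats)
    return stats

def pick_renderer(accuracy, threshold=0.8):
    # second technique when the region meets the threshold, else the first
    return "second" if accuracy >= threshold else "first"

depth = np.arange(16, dtype=float).reshape(4, 4)
stats = region_stats(depth, 2, 2)   # four 2x2 regions
```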

US Pat. No. 10,922,831

SYSTEMS AND METHODS FOR HANDLING MULTIPLE SIMULTANEOUS LOCALIZATION AND MAPPING (SLAM) SOURCES AND ALGORITHMS IN VIRTUAL, AUGMENTED, AND MIXED REALITY (XR) APPLICATIONS

Dell Products, L.P., Rou...

1. An Information Handling System (IHS), comprising:a processor; and
a memory coupled to the processor, the memory having program instructions stored thereon that, upon execution by the processor, cause the IHS to:
apply a first Simultaneous Localization and Mapping (SLAM) algorithm to first SLAM data captured via a first camera source mounted on a Head-Mounted Device (HMD) coupled to the IHS to produce a first Signal-to-Noise (SNR) metric;
apply a second SLAM algorithm to the first SLAM data to produce a second SNR metric;
select: (i) the first SLAM algorithm in response to the first SNR metric being greater than the second SNR metric, or (ii) the second SLAM algorithm in response to the second SNR metric being greater than the first SNR metric; and
produce a map of a space where the HMD is located, at least in part, by applying the selected SLAM algorithm to subsequently captured SLAM data.
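The selection logic of this claim reduces to picking the SLAM algorithm with the larger SNR metric on the same captured data. A hedged sketch, with toy algorithm names and metrics standing in for the real candidates:

```python
def select_slam(algorithms, slam_data):
    """Run each candidate SLAM algorithm on the same captured SLAM data
    and keep the one whose signal-to-noise metric is highest.

    `algorithms` maps a name to a callable returning an SNR metric; the
    names and constant metrics below are illustrative only."""
    snr = {name: algo(slam_data) for name, algo in algorithms.items()}
    return max(snr, key=snr.get)

chosen = select_slam(
    {"orb_like": lambda d: 12.5, "direct_like": lambda d: 17.0},
    slam_data=None,
)
```

The chosen algorithm would then be applied to subsequently captured SLAM data to build the map.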

US Pat. No. 10,922,830

SYSTEM AND METHOD FOR DETECTING A PRESENCE OR ABSENCE OF OBJECTS IN A TRAILER

Zebra Technologies Corpor...

1. A method for detecting a presence or absence of objects in a trailer, comprising:capturing a three-dimensional image, the three-dimensional image comprising three-dimensional point data having a plurality of points, and the three-dimensional image defining: (1) a portion of a wall of the trailer, (2) a portion of a floor of the trailer, and (3) a top portion of the trailer;
analyzing the plurality of points to determine a first sub-plurality of points associated with the portion of the wall of the trailer, to determine a second sub-plurality of points associated with the portion of the floor of the trailer, and to determine a third sub-plurality of points associated with the top portion of the trailer;
removing the first sub-plurality of points from the plurality of points;
removing the second sub-plurality of points from the plurality of points;
removing the third sub-plurality of points from the plurality of points to obtain a modified plurality of points, wherein the modified plurality of points represents a modified three-dimensional image;
segmenting the modified three-dimensional image into a plurality of bins;
analyzing one or more of the plurality of bins to determine one or more points-bin values; and
providing at least one of: (1) a first communication representative of the presence of objects in the trailer when at least one of the one or more points-bin values exceeds a threshold value, or (2) a second communication representative of the absence of objects in the trailer when none of the one or more points-bin values exceeds the threshold value,
wherein
determining the first sub-plurality of points comprises:
determining initial values of a set of parameters; and
based on the initial values of the set of parameters, performing a plurality of iterations of an iterative algorithm to identify the first sub-plurality of points corresponding to a first wall and a second wall.
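Leaving aside the iterative wall-fitting step, the remove-then-bin pipeline of this claim can be sketched with a plain point list. The plane positions, bin size, and threshold below are illustrative assumptions.

```python
def trailer_status(points, wall_y, floor_z, top_z, bin_size=1.0, threshold=5):
    """Drop wall/floor/top points from a 3-D point cloud, bin the rest
    along the trailer's length (x), and report LOADED if any bin holds
    more points than `threshold`."""
    kept = [p for p in points
            if abs(p[1]) < wall_y and floor_z < p[2] < top_z]
    bins = {}
    for x, _, _ in kept:
        key = int(x // bin_size)
        bins[key] = bins.get(key, 0) + 1
    loaded = any(count > threshold for count in bins.values())
    return "LOADED" if loaded else "EMPTY"

box_returns = [(0.5, 0.0, 1.0)] * 10    # a cluster of object returns
wall_returns = [(0.5, 1.2, 1.0)] * 20   # returns lying on the walls
status = trailer_status(box_returns + wall_returns,
                        wall_y=1.2, floor_z=0.1, top_z=2.5)
```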

US Pat. No. 10,922,829

ZERO ORDER LIGHT REMOVAL IN ACTIVE SENSING SYSTEMS

QUALCOMM Incorporated, S...

1. A method of image processing, the method comprising:transmitting, with an optical transmitter, towards an object, a coded pattern of light;
receiving, with an optical receiver, a reflection of the coded pattern from the object to generate an image;
determining, with processing circuitry, an estimated position of zero order light in the image;
determining, with the processing circuitry, a spatial region of the coded pattern that corresponds to a position of the zero order light in the coded pattern;
mapping, with the processing circuitry, the spatial region to the estimated position of the zero order light in the image to generate a corrected image; and
generating, with the processing circuitry, a depth map for the coded pattern based on the corrected image.

US Pat. No. 10,922,828

META PROJECTOR AND ELECTRONIC APPARATUS INCLUDING THE SAME

SAMSUNG ELECTRONICS CO., ...

1. A depth recognition apparatus comprising:a meta projector;
a first sensor disposed in a first position with respect to the meta projector, and configured to receive light from an object;
a second sensor, disposed in a second position with respect to the meta projector, different from the first position, and configured to receive light from the object; and
a processor configured to analyze the light received by at least one of the first and second sensors and thereby calculate a depth position of the object,
wherein the meta projector comprises:
an edge emitting device disposed on a substrate, the edge emitting device comprising an upper surface extending parallel to the substrate and a side surface inclined relative to the upper surface, the edge emitting device configured to emit light through the side surface;
a meta-structure layer spaced apart from the upper surface of the edge emitting device, the meta-structure layer comprising:
a support layer comprising a first surface facing the edge emitting device and a second surface opposite to the first surface;
a first plurality of nanostructures disposed on the first surface; and
a second plurality of nanostructures disposed on the second surface, each of the first plurality of nanostructures and each of the second plurality of nanostructures having a dimension that is smaller than a wavelength of the light emitted from the edge emitting device; and
a path changing member configured to change a path of the light emitted from the edge emitting device to direct the path toward the meta-structure layer,
wherein a shape distribution of the first plurality of nanostructures and the second plurality of nanostructures is configured to form structured light using the light emitted from the edge emitting device, and a pattern of the structured light is mathematically coded to uniquely designate angular position coordinates using bright and dark points.

US Pat. No. 10,922,827

DISTANCE ESTIMATION OF VEHICLE HEADLIGHTS

Aptiv Technologies Limite...

1. A method for determining the distance to a vehicle for use in an AHC-System, comprising:capturing a raw image by using a camera of an AHC-System;
determining by the AHC-System that the raw image includes a headlight of a vehicle; and
extracting by the AHC-System an image segment of the raw image including the headlight of a vehicle;
refining the image segment by applying a classifier to generate a refined image including the headlight of a vehicle;
building a feature vector based on the refined image, the feature vector not containing the refined image or an image excerpt of the refined image; and
estimating the distance to the vehicle based on the feature vector.
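A sketch of the pixel-free feature vector and distance estimate in this claim. The claim only requires that no raw image content enters the vector; the specific features and the pinhole-style estimator below are hypothetical, with an assumed focal length and lamp size.

```python
def headlight_features(segment_w, segment_h, mean_intensity):
    """Build a feature vector from a refined headlight image segment:
    apparent size and brightness, but no pixels (illustrative choice)."""
    return [segment_w, segment_h, mean_intensity, segment_w * segment_h]

def estimate_distance(features, focal_px=1200.0, lamp_height_m=0.25):
    # pinhole-style heuristic: distance shrinks as apparent height grows
    return focal_px * lamp_height_m / features[1]

d = estimate_distance(headlight_features(20, 10, 180.0))
```

A trained regressor over such a vector would be the more realistic estimator; the closed-form heuristic just keeps the example self-contained.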

US Pat. No. 10,922,826

DIGITAL TWIN MONITORING SYSTEMS AND METHODS

Ford Global Technologies,...

1. A method of operating a digital twin, comprising:receiving, by a first computer, one or more images of a surveilled area;
detecting, by the first computer and in the one or more images, a movement of a first object through a first portion of the surveilled area during a first period of time;
defining in the digital twin, by the first computer and based on the detection of the movement of the first object through the first portion of the surveilled area, the first portion of the surveilled area as a zone of primary interest;
detecting, by the first computer and in the one or more images, a second object that is stationary in a second portion of the surveilled area during a second period of time;
defining in the digital twin, by the first computer and based on the detection of the second object that is stationary in the second portion of the surveilled area, the second portion of the surveilled area as a zone of secondary interest, wherein the zone of primary interest and the zone of secondary interest comprise portions of the surveilled area for subsequent image capture;
receiving images of the zone of primary interest at a first rate;
receiving images of the zone of secondary interest at a second rate, wherein the second rate is lower than the first rate; and
detecting, by the first computer and based on the digital twin, a pattern of movement of at least a first object through at least the zone of primary interest.

US Pat. No. 10,922,825

IMAGE DATA PROCESSING METHOD AND ELECTRONIC DEVICE

LENOVO (BEIJING) CO., LTD...

1. An image data processing method, comprising:receiving, by a first electronic device, first image data of an environment collected by a second electronic device;
determining, by the first electronic device, one or more motion parameters, including a moving speed and a moving direction relative to the environment, of the second electronic device based on the first image data;
determining a latency between a moment at which the first image data is transmitted by the second electronic device and a moment at which the first image data is received by the first electronic device;
compensating the first image data based on the one or more motion parameters of the second electronic device and the latency as determined, to generate second image data by the first electronic device; and
displaying the second image data through the first electronic device.
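The compensation step of this claim amounts to offsetting the frame by the distance the sender moved during the transmission latency. A crude sketch, where a circular array shift stands in for real motion compensation and the speed/latency values are invented:

```python
import numpy as np

def compensate(image, speed_px_per_s, direction, latency_s):
    """Shift the received frame to offset the sender's motion during the
    latency. `direction` is a unit (dx, dy) in image coordinates."""
    dx = int(round(speed_px_per_s * latency_s * direction[0]))
    dy = int(round(speed_px_per_s * latency_s * direction[1]))
    return np.roll(image, shift=(dy, dx), axis=(0, 1))

frame = np.zeros((4, 4))
frame[0, 0] = 1.0
# device moving right at 10 px/s, 100 ms latency -> 1 px compensation
shifted = compensate(frame, speed_px_per_s=10.0,
                     direction=(1, 0), latency_s=0.1)
```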

US Pat. No. 10,922,824

OBJECT TRACKING USING CONTOUR FILTERS AND SCALERS

Volkswagen AG Audi AG Por...

1. An image data processing system for processing image data from at least one image sensor located on a transportation vehicle, the image processing system comprising:an affine contour filter that extracts sub-pixel contour roots that are dimensionless points consistent across a plurality of frames of image data and represent boundaries of image data that represent an object within the image, wherein the contours undergo small affine changes including at least one of translation, rotation and scale in image data included in image data collected over a period of time; and
means for performing lateral contour tracking to track movement of the object within a field of view of the at least one sensor by aligning contours associated with the object in space-time, wherein contours of each incoming image included in the plurality of frames included in image data are aligned to a map frame to map the contours using tethers to track the object,
wherein each tether provides a connection between roots of similar polarity on two different frames and enables interpolation of locations of roots on a sub-pixel basis to associate roots across successive frames in the plurality of frames of image data.

US Pat. No. 10,922,823

MOTION ANALYSIS DEVICE, MOTION ANALYSIS METHOD, AND PROGRAM RECORDING MEDIUM

NEC CORPORATION, Tokyo (...

5. An image processing method comprising:by a processor,
selecting a representation image from among captured images in which a target object to be detected is captured, the representation image being a criterion for processing;
calculating a displacement amount between the representation image and a processing target image by comparing and analyzing the representation image and the processing target image after the representation image is selected, the processing target image being a captured image which differs from the captured image selected as the representation image; and
selecting, from among the captured images determined as reference images, the reference image to be used in processing of analyzing motion of the target object based on information associated with each reference image,
wherein the information is information on a displacement amount between each reference image and the representation image, and the processor selects, as a reference image to be used in processing of analyzing motion of the target object, a reference image that has an associated displacement amount within a preset range of the displacement amount between the representation image and the processing target image, or
wherein the information is information on a class based on displacement amounts between each reference image and the representation image that is associated with each reference image, the processor determines a class of the displacement amount between the representation image and the processing target image, and the processor selects, as the reference image to be used in processing of analyzing motion of the target object, a reference image belonging to the determined class, or
wherein the processor calculates the displacement amount in a predetermined movement direction of the target object, the information is information on a displacement amount in the predetermined movement direction between each reference image and the representation image that is associated with each reference image, and the processor selects the reference image to be used in processing of analyzing motion of the target object based on the calculated displacement amount in the predetermined movement direction.

US Pat. No. 10,922,822

IMAGE ANALYSIS METHOD FOR MOTION DETERMINATION, ELECTRONIC SYSTEM, AND NON-TRANSITORY COMPUTER-READABLE RECORDING MEDIUM

Wistron Corporation, New...

1. An image analysis method applicable to an electronic system comprising an image capture device, the image analysis method comprising:obtaining a plurality of images captured by the image capture device;
performing a motion detection on the images to determine whether the images comprise a motion; and
determining whether a target enters a preset scenery or leaves the preset scenery in response to the determination of the motion detected on the plurality of images by the motion detection,
wherein the step of determining whether the target enters the preset scenery or leaves the preset scenery comprises:
obtaining a follow-up image captured by the image capture device and performing an image analysis on the follow-up image to determine whether the target enters the preset scenery or leaves the preset scenery,
wherein when the image analysis is performed on the follow-up image to determine whether the target enters the preset scenery or leaves the preset scenery, a capture time of the follow-up image is not earlier than a capture time of the images determined to comprise the motion.

US Pat. No. 10,922,821

VIRTUAL GENERATION OF LABELED MOTION SENSOR DATA

INTERNATIONAL BUSINESS MA...

16. A computer program product facilitating a virtual generation of motion sensor data, the computer program product comprising a non-transitory computer readable medium having program instructions embodied therewith, the program instructions executable by a processor to cause the processor to:obtain, by the processor, virtual motion sensor data corresponding to virtual location data associated with a tracked feature of a computer animated character in a virtual environment; and
based on the virtual motion sensor data, employ, by the processor, machine learning to train a predictive model to dynamically control an amount of random variation to apply to the virtual motion sensor data to identify one or more movement activities of an entity within a defined range of acceptable variation from the virtual motion sensor data.

US Pat. No. 10,922,820

DATA-DRIVEN DELTA-GENERALIZED LABELED MULTI-BERNOULLI TRACKER

17. A method of identifying birth targets for tracking a plurality of objects, comprising:receiving measurement data identifying a plurality of targets corresponding to the plurality of objects in a first time step;
generating a multi-target likelihood function using a persistent target density for the first time step, a birth target density, and a clutter density;
using the multi-target likelihood function to identify the birth targets for the first time step in the plurality of targets in the received measurement data, wherein the birth targets are targets that are not associated with already identified persistent tracks;
generating a joint posterior density from the persistent target density for the first time step, the birth target density, and the multi-target likelihood function;
performing a time-update on the joint posterior density to generate a predicted density for a second time step; and
changing variables to make the predicted density for the second time step into a persistent target density for the second time step.

US Pat. No. 10,922,819

METHOD AND APPARATUS FOR DETECTING DEVIATION FROM A MOTION PATTERN IN A VIDEO

Canon Kabushiki Kaisha, ...

1. A method for detecting deviation from a motion pattern in a video, comprising:generating a current motion grid comprising a plurality of elements by storing in each element of the current motion grid an indication of whether there is a change between corresponding elements of at least two images of a video sequence;
generating a current motion pattern grid by searching for a segment consisting of a plurality of elements in which a change has been indicated in the current motion grid and which are neighbouring to one another and, storing in each element of the segment a value corresponding to a size of the segment;
comparing a value of an element of the current motion pattern grid with a value of the corresponding element of a motion pattern model, wherein the motion pattern model is generated by obtaining a series of motion pattern grids and storing in each element of the motion pattern model a value based on accumulated information from the series of motion pattern grids;
determining whether there is deviation from the motion pattern model in accordance with the result of the comparison; and
triggering an event when it is determined that there is deviation from the motion pattern model.
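The motion pattern grid of this claim (each changed element storing the size of its connected segment) and the deviation test can be sketched directly; the 4-connectivity and the tolerance used for comparison are illustrative choices.

```python
from collections import deque

def motion_pattern_grid(motion):
    """From a binary motion grid, label each 4-connected segment of
    changed elements and store the segment's size in all its elements."""
    h, w = len(motion), len(motion[0])
    out = [[0] * w for _ in range(h)]
    seen = [[False] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            if motion[i][j] and not seen[i][j]:
                cells, q = [], deque([(i, j)])   # flood-fill one segment
                seen[i][j] = True
                while q:
                    y, x = q.popleft()
                    cells.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and motion[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            q.append((ny, nx))
                for y, x in cells:
                    out[y][x] = len(cells)
    return out

def deviates(pattern, model, tolerance=2):
    # deviation when any element's segment size strays from the model
    return any(abs(p - m) > tolerance
               for prow, mrow in zip(pattern, model)
               for p, m in zip(prow, mrow))

motion = [[1, 1, 0],
          [0, 1, 0],
          [0, 0, 1]]
grid = motion_pattern_grid(motion)
```

In the claim, `model` would be accumulated over a series of such grids, and a `deviates(...)` result of True would trigger the event.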

US Pat. No. 10,922,818

METHOD AND COMPUTER SYSTEM FOR OBJECT TRACKING

Wistron Corporation, New...

1. A method for object tracking, applicable to a computer system, comprising:obtaining an image sequence comprising a plurality of images, wherein the image sequence includes a target object;
receiving a labelling operation corresponding to the target object in first two images and last two images in the image sequence to respectively generate four ground truth labels of the target object;
performing a forward tracking of the target object on the image sequence in time series according to the ground truth labels of the first two images to obtain a forward tracking result;
performing a backward tracking of the target object on the image sequence in time series according to the ground truth labels of the last two images to obtain a backward tracking result; and
comparing the forward tracking result and the backward tracking result to accordingly generate a final tracking result of the target object.
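The final comparison step of this claim can be sketched as a per-frame fusion of the two tracks; averaging agreeing frames and flagging divergent ones is one plausible reading, and the `max_gap` tolerance is an invented parameter.

```python
def fuse_tracks(forward, backward, max_gap=5.0):
    """Merge a forward track and a backward track (per-frame (x, y)
    centers) into a final result: average where the two agree, None
    where they diverge by more than `max_gap` pixels on either axis."""
    final = []
    for (fx, fy), (bx, by) in zip(forward, backward):
        if abs(fx - bx) <= max_gap and abs(fy - by) <= max_gap:
            final.append(((fx + bx) / 2, (fy + by) / 2))
        else:
            final.append(None)   # tracker disagreement on this frame
    return final

forward  = [(0, 0), (10, 0), (20, 0)]
backward = [(0, 0), (11, 1), (40, 0)]   # drifts badly on the last frame
result = fuse_tracks(forward, backward)
```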

US Pat. No. 10,922,817

PERCEPTION DEVICE FOR OBSTACLE DETECTION AND TRACKING AND A PERCEPTION METHOD FOR OBSTACLE DETECTION AND TRACKING

Intel Corporation, Santa...

1. A perception device, comprising:at least one image sensor configured to detect a plurality of images;
an information estimator configured to estimate from each image of a plurality of images a depth estimate, a velocity estimate, an object classification estimate, and an odometry estimate, wherein estimating the odometry estimate comprises implementing a neural network trained to derive the odometry estimate from the plurality of images by: determining a ground truth odometry value, determining an error of the neural network's estimated odometry values with respect to the ground truth odometry value, and determining neural network parameters to reduce the error between the ground truth odometry value and the neural network's estimated odometry values;
a particle generator configured to generate a plurality of particles, wherein each particle of the plurality of particles comprises a position value determined from the depth estimate, a velocity value determined from the velocity estimate, and a classification value determined from the object classification estimate; and
an occupancy hypothesis determiner configured to determine an occupancy hypothesis of a predetermined region, wherein the occupancy hypothesis is determined based on each particle of the plurality of particles and comprises a dynamic occupancy grid comprising a plurality of grid cells, each grid cell of the plurality of grid cells representing an area in the predetermined region, and wherein at least one of the plurality of grid cells is associated with a single occupancy hypothesis based on one or more particles of the plurality of particles in the at least one grid cell.

US Pat. No. 10,922,816

MEDICAL IMAGE SEGMENTATION FROM RAW DATA USING A DEEP ATTENTION NEURAL NETWORK

Siemens Healthcare GmbH, ...

1. A method for segmentation from raw data of a magnetic resonance imager, the method comprising:acquiring, by the magnetic resonance imager, k-space data representing response from a patient;
segmenting an object of the patient represented in the k-space data by a machine-learned neural network, the machine-learned neural network having a segmentation network recurrently applied to output an output segmentation of the object in response to input of the k-space data to the machine-learned neural network; and
generating an image based on the segmentation.

US Pat. No. 10,922,815

IMAGE PROCESSING APPARATUS, IMAGE PROCESSING PROGRAM, AND IMAGE PROCESSING METHOD

Sony Corporation, Tokyo ...

1. An information processing apparatus comprising:at least one processor configured to receive an image of an analysis target in a plurality of images of the analysis target; and
at least one storage medium configured to store processor-executable instructions that, when executed by the at least one processor, perform a method comprising:
setting at least one axial direction in the image of the analysis target;
calculating an analytical range vector in an analytical range set in the image of the analysis target, wherein calculating the analytical range vector comprises selecting a motion vector from among a plurality of motion vectors and setting the motion vector as the analytical range vector; and
determining motion information for the analysis target by projecting the analytical range vector onto the at least one axial direction to identify, at least one of a motion amount of the analysis target and a motion direction of the analysis target,
wherein the analysis target includes at least one cell body of a neuron.
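The projection step of this claim is ordinary vector projection: the analytical range vector is projected onto the chosen axial direction, yielding a signed motion amount whose sign gives the motion direction along that axis. A minimal sketch:

```python
import math

def project_motion(vector, axis):
    """Project a 2-D motion vector onto an axial direction; the
    magnitude is the motion amount, the sign the motion direction."""
    norm = math.hypot(axis[0], axis[1])
    ux, uy = axis[0] / norm, axis[1] / norm
    amount = vector[0] * ux + vector[1] * uy
    return amount, "positive" if amount >= 0 else "negative"

# motion vector (3, 4) projected onto the horizontal axis
amount, direction = project_motion(vector=(3.0, 4.0), axis=(1.0, 0.0))
```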

US Pat. No. 10,922,814

IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND IMAGE PROCESSING PROGRAM

FUJIFILM Corporation, To...

1. An image processing apparatus comprising:a processor configured to:
acquire a radiographic image of a breast;
derive a mammary gland content rate for each pixel of a breast region in the radiographic image; and
detect a mammary gland concentrated region in which mammary glands are concentrated on the basis of a result of specifying whether a specific pixel which is each pixel of the breast region is a pixel included in the mammary gland concentrated region of the breast region on the basis of the mammary gland content rate of the specific pixel and a mammary gland content rate of a pixel around the specific pixel,
wherein the processor is further configured to derive a representative value of a mammary gland content rate of a local region which includes the specific pixel and has a predetermined size smaller than that of the breast region and specifies the specific pixel, of which a representative value of the mammary gland content rate is equal to or greater than a predetermined threshold value, in the local region as a pixel included in the mammary gland concentrated region, and
wherein the size of the local region is determined according to a size of an object of interest to be observed and the size of the local region is larger than the size of the object of interest.
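The local-region test of this claim can be sketched with a sliding-window representative value; taking the mean as the representative value, and the window size and threshold below, are illustrative assumptions, not FUJIFILM's parameters.

```python
import numpy as np

def concentrated_mask(content_rate, window=3, threshold=0.5):
    """Mark a pixel as part of the mammary-gland concentrated region
    when the representative value (here: the mean) of the gland content
    rate over a small local window around it reaches `threshold`."""
    h, w = content_rate.shape
    r = window // 2
    padded = np.pad(content_rate, r, mode="edge")
    mask = np.zeros((h, w), dtype=bool)
    for i in range(h):
        for j in range(w):
            local = padded[i:i + window, j:j + window]
            mask[i, j] = local.mean() >= threshold
    return mask

rates = np.zeros((5, 5))
rates[1:4, 1:4] = 0.9        # a dense 3x3 patch of high content rate
mask = concentrated_mask(rates)
```

Per the claim's last limitation, `window` would be chosen larger than the object of interest being observed.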

US Pat. No. 10,922,813

METHOD FOR DETERMINING AT LEAST ONE OBJECT FEATURE OF AN OBJECT

SIEMENS HEALTHCARE GMBH, ...

1. A method for determining at least one object feature of an object, at least partially depicted by an object image, the method comprising:determining a respective preliminary feature for each respective object feature, of the at least one object feature, and
determining at least one acquisition feature from the object image, the respective preliminary feature depending on the respective object feature and on an imaging device used to acquire at least one of the object image and at least one imaging parameter usable for at least one of acquiring and reconstructing at least one of the object image and an additional feature of the object, the at least one acquisition feature being dependent upon at least one of the imaging device, the at least one imaging parameter and the additional feature of the object; and
determining, via a correction algorithm, the respective object feature from the respective preliminary feature and the at least one acquisition feature,
wherein the determining of the at least one acquisition feature includes determining the at least one acquisition feature by a respective determination algorithm, at least one of selected and parametrized from a group of candidate algorithms, and wherein the at least one of selection and parametrization depends upon multiple reference images.

US Pat. No. 10,922,812

IMAGE PROCESSING APPARATUS, X-RAY DIAGNOSTIC APPARATUS, AND IMAGE PROCESSING METHOD

CANON MEDICAL SYSTEMS COR...

1. An image processing apparatus comprising:processing circuitry configured to
acquire three-dimensional medical image data;
set coordinates of at least two or more elements to evaluate an evaluation target represented by the three-dimensional medical image data, on images of different positioning slices of the three-dimensional medical image data;
calculate a measured value for evaluating the evaluation target based on the coordinates; and
control a display to display the measured value;
select at least two images from the images of the slices based on the coordinates and generate a two-dimensional synthetic image in which the selected images are synthesized,
generate an auxiliary figure representing the measured value and generate a superimposed image in which the auxiliary figure is superimposed on the synthetic image, and
control the display to display the superimposed image.

US Pat. No. 10,922,811

OBJECT DIFFERENTIATION AND IDENTIFICATION

FORD GLOBAL TECHNOLOGIES,...

1. A system, comprising a computer that includes a processor and a memory, the memory storing instructions executable by the processor such that the computer is programmed to:detect a first and a second object in received image data;
determine a mesh of cells on each of first and second object surfaces;
upon identifying a cell of the mesh on the first object mismatched to a corresponding cell on the second object, refine the mismatched cell by dividing the mismatched cell into a plurality of smaller cells, wherein identifying the mismatch is based on at least one of a mismatch in a color, texture, shape, and dimensions;
stop refining the cell upon determining that a refinement of the refined cell of the first object results in a refined cell that is matched to a corresponding refined cell of the second object; and
output location data of mismatched cells of the first and second objects, wherein a mismatched cell has at least one of a color mismatch, texture mismatch, and shape mismatch.
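The claimed refinement loop (subdivide a mismatched cell until its sub-cells match or cannot shrink further) is essentially a quadtree descent. A minimal sketch, where a mean-intensity comparison stands in for the patent's color/texture/shape/dimension tests and all names are illustrative:

```python
import numpy as np

def refine_mismatches(img_a, img_b, cell, min_size=1):
    """Recursively subdivide a mismatched cell into smaller cells, as in the
    claimed refinement loop. A mean-intensity comparison stands in for the
    patent's color/texture/shape/dimension tests."""
    y, x, h, w = cell
    a = img_a[y:y + h, x:x + w]
    b = img_b[y:y + h, x:x + w]
    if abs(a.mean() - b.mean()) < 1e-6:          # cells match: stop refining
        return []
    if h <= min_size or w <= min_size:           # smallest mismatched cell
        return [cell]
    h2, w2 = h // 2, w // 2
    leaves = []
    for dy, sh in ((0, h2), (h2, h - h2)):
        for dx, sw in ((0, w2), (w2, w - w2)):
            leaves += refine_mismatches(img_a, img_b, (y + dy, x + dx, sh, sw), min_size)
    return leaves

img_a = np.zeros((4, 4))
img_b = np.zeros((4, 4))
img_b[0, 0] = 1.0                                # single differing pixel
mismatched = refine_mismatches(img_a, img_b, (0, 0, 4, 4))
```

In this toy case the descent isolates the single differing pixel as the only surviving mismatched cell, `(0, 0, 1, 1)`.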

US Pat. No. 10,922,810

AUTOMATED VISUAL INSPECTION FOR VISIBLE PARTICULATE MATTER IN EMPTY FLEXIBLE CONTAINERS

BAXTER INTERNATIONAL INC....

1. An automated visual inspection system for detecting the presence of particulate matter, comprising:a backdrop including a first side with a first portion and a second portion, the first portion of the first side having a first color and the second portion of the first side having a second color;
at least one empty, flexible container;
a light source configured to transmit light through the at least one empty, flexible container thereby impinging on the backdrop;
a detector configured to receive the light and generate image data; and
an image processor configured to:
analyze the image data,
determine whether the at least one empty, flexible container is defective, and
generate a rejection signal if the at least one empty, flexible container is defective.

US Pat. No. 10,922,809

METHOD FOR DETECTING VOIDS AND AN INSPECTION SYSTEM

APPLIED MATERIALS, INC., ...

1. A method for detecting buried voids in metal lines of a semiconductor device substrate during fabrication, the method comprising:selecting locations within the semiconductor device substrate;
with an electron beam imaging system, collecting electrons scattered from the semiconductor device substrate in response to impinging a high energy primary particle beam onto areas of the semiconductor device substrate that include the selected locations within the semiconductor device substrate;
generating a backscattered electron image, from the collected electrons, of the areas of the semiconductor device substrate;
segmenting the backscattered electron image into portions corresponding to the metal lines of the semiconductor device substrate and portions corresponding to areas outside the metal lines of the semiconductor device substrate, wherein each of the portions corresponding to the metal lines corresponds to one of the metal lines; and
identifying a gray level signature for each portion and using the gray level signature to identify the buried voids within each corresponding portion, wherein fluctuations in gray levels within the gray level signature for each portion are correlated with the buried voids within each corresponding portion.
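One simple reading of the final step (fluctuations in a per-line gray-level signature correlated with buried voids) is flagging positions whose level drops well below the line's typical value. A hedged stand-in, with the relative-drop criterion an assumption rather than the patented correlation:

```python
import numpy as np

def void_flags(line_profile, drop=0.15):
    """Flag positions in one metal-line gray-level signature that fall more
    than `drop` (relative) below the line's median level, a simple stand-in
    for the claim's fluctuation-to-void correlation."""
    p = np.asarray(line_profile, dtype=np.float64)
    return p < np.median(p) * (1.0 - drop)

flags = void_flags([100, 98, 101, 60, 99, 100])  # dip at index 3
```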

US Pat. No. 10,922,807

WAFER MANUFACTURING SYSTEM, DEVICE AND METHOD

STMICROELECTRONICS S.R.L....

1. A device, comprising:image generation circuitry, which, in operation, generates a binned representation of a wafer defect map (WDM); and
convolutional-neural-network (CNN) circuitry, which, in operation, generates and outputs an indication of a root cause of a defect associated with the WDM based on the binned representation of the WDM and a data-driven model associating WDMs with classes of a defined set of classes of wafer defects, wherein, the CNN circuitry, in operation,
generates a feature vector associated with the WDM based on the binned representation of the WDM and the data-driven model; and
searches a root cause database based on the generated feature vector to generate the indication of the root cause of the defect associated with the WDM, wherein the root cause database stores feature vectors and associated labels and tags of WDMs of a set of training WDMs, a label identifies a class of the defined set of classes and a tag identifies a root cause of a defect.

US Pat. No. 10,922,806

SOUND-BASED FLOW CHECK SYSTEM FOR WASHER SYSTEM

GM GLOBAL TECHNOLOGY OPER...

1. A flow check system for a washer system, the flow check system comprising:an array of sound sensors configured to collect sound information originating with the washer system; and
a controller configured to:
collect the sound information from the array of sound sensors;
collect an image of the washer system;
identify a plurality of actual locations having at least a predetermined amount of flow from the washer system, based on the sound information, wherein the predetermined amount of flow is a predetermined amount of air flow;
compare the plurality of actual locations having at least the predetermined amount of flow to a plurality of predetermined expected flow locations; and
determine a location of a flow anomaly in the washer system based on the sound information and the image of the washer system, wherein the controller is configured to determine the location of the flow anomaly in the washer system as being an actual location having at least the predetermined amount of flow that is outside of the plurality of predetermined expected flow locations,
wherein the array of sound sensors is located a distance between 1 and 2 meters from the plurality of predetermined expected flow locations.

US Pat. No. 10,922,805

MICRONEEDLE ARRAY IMAGING DEVICE, MICRONEEDLE ARRAY IMAGING METHOD, MICRONEEDLE ARRAY INSPECTION DEVICE, AND MICRONEEDLE ARRAY INSPECTION METHOD

FUJIFILM Corporation, To...

1. A microneedle array imaging device comprising:a light source which irradiates a surface on a side opposite to a surface on which a plurality of microneedles whose inclination angle of a side surface with respect to a bottom surface is θ° are arranged on a sheet to form a microneedle array, with parallel light as illumination light; and
a camera which images the microneedle array from a side of the surface on which the microneedles are arranged,
wherein the light source irradiates the surface with the illumination light under conditions in which an incident angle of light onto the bottom surface of the microneedle is 90-θ° or greater and an incident angle of light onto the side surface of the microneedle is less than a critical angle, wherein the critical angle is a smallest angle at which a total reflection occurs on the side surface of the microneedle and is between a direction of the incident angle of light and a normal line drawn on the side surface of the microneedle.

US Pat. No. 10,922,804

METHOD AND APPARATUS FOR EVALUATING IMAGE DEFINITION, COMPUTER DEVICE AND STORAGE MEDIUM

Baidu Online Network Tech...

1. A method for evaluating image definition, characterized in that the method comprises:obtaining images of different definitions respectively, and obtaining a comprehensive image definition score of each image;
training according to the obtained images and the corresponding comprehensive image definition scores to obtain an evaluation model;
obtaining an image to be processed;
inputting the image to be processed to the pre-trained evaluation model;
obtaining a comprehensive image definition score outputted by the evaluation model, the comprehensive image definition score being obtained by the evaluation model by obtaining N image definition scores based on N different scales respectively, and then integrating the N image definition scores, N being a positive integer greater than one,
wherein the obtaining of the comprehensive image definition score of each image comprises:
for each image, respectively obtaining a manually annotated definition class to which the image belongs, the number of definition classes being greater than one, and taking a preset score corresponding to the definition class to which the image belongs as the comprehensive image definition score of the image.

US Pat. No. 10,922,803

DETERMINING CLEAN OR DIRTY CAPTURED IMAGES

FICOSA ADAS, S.L.U., Bar...

1. A method of determining whether an image captured by an image capturing device is clean or dirty, the method comprising:receiving the image captured by the image capturing device;
splitting the received image into a plurality of image portions according to a predefined-splitting-criteria;
performing, for each of at least some of the plurality of image portions, a filter to produce a feature vector for each of the at least some of the plurality of image portions;
providing each of at least some of the produced feature vectors to a first machine learning module that has been trained to produce a clean/dirty indicator depending on a corresponding feature vector, the clean/dirty indicator including a probabilistic value of cleanness or dirtiness of a corresponding image portion;
determining whether the image is clean or dirty depending on clean/dirty indicators produced by the first machine learning module;
performing a context-based filtering on corresponding image as a whole, wherein the filtering includes determining one or more dirty blocks including neighboring dirty portions, wherein a dirty portion is a portion with a clean/dirty indicator denoting dirtiness;
detecting one or more edges of the one or more dirty blocks;
determining a sharpness in the one or more edges; and
adjusting clean/dirty indicators of the neighboring dirty portions in the one or more dirty blocks depending on at least one of the sharpness or the clean/dirty indicators of clean portions that are adjacent to the edges, wherein a clean portion is a portion with a clean/dirty indicator denoting cleanness.

US Pat. No. 10,922,802

FUSION OF THERMAL AND REFLECTIVE IMAGERY

1. A system for fusing a direct image and a reflective image, the system comprising:a first image sensor for sensing the direct image from a scene, the direct image comprising direct image pixels;
a second image sensor for sensing the reflective image from the scene, the reflective image comprising reflective image pixels;
a digital processor for spatially registering the direct image pixels and the reflective image pixels;
the digital processor for fusing an intensity value of each direct image pixel and an intensity value of a registered reflective image pixel to generate a fused pixel value, a plurality of fused pixel values forming a fused image;
the digital processor for processing the fused pixel values for displaying the fused image on a monitor; and
wherein fusing the intensity values comprises multiplying the intensity value of each direct image pixel and the intensity value of a registered reflective image pixel.
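The fusion rule in the final limitation is explicit: multiply the intensity of each direct-image pixel by the intensity of its registered reflective-image pixel. A minimal sketch, assuming pre-registered floating-point arrays and an illustrative rescale for display:

```python
import numpy as np

def fuse_multiplicative(direct: np.ndarray, reflective: np.ndarray) -> np.ndarray:
    """Fuse two spatially registered images by per-pixel multiplication of
    intensities, then rescale into [0, 1] for display (rescale is assumed)."""
    fused = direct.astype(np.float64) * reflective.astype(np.float64)
    peak = fused.max()
    return fused / peak if peak > 0 else fused

direct = np.array([[0.2, 0.8], [1.0, 0.5]])
reflective = np.array([[0.5, 0.5], [1.0, 1.0]])
fused = fuse_multiplicative(direct, reflective)
```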

US Pat. No. 10,922,801

CHANNEL-BASED BINARIZATION OF COLOR

Lockheed Martin Corporati...

1. A method comprising:acquiring a color image;
generating a first grayscale image from a first color channel in the color image and a second grayscale image from a second color channel in the color image;
generating a first color channel binary image by binarizing the first grayscale image and generating a second color channel binary image by binarizing the second grayscale image; and
generating one binary image by combining the first color channel binary image with a first predetermined weight and the second color channel binary image with a second predetermined weight, and using a pixel combination threshold to combine the binary images into the one binary image,
wherein the combining, for each pixel in the one binary image, comprises:
determining a combined pixel value by combining corresponding pixel values of the weighted first color channel binary image and of the weighted second color channel binary image, the combined pixel value corresponding to a value for a single grayscale intensity channel, and
comparing the combined pixel value to the pixel combination threshold such that a value of white is assigned to the pixel when the combined pixel value is less than the pixel combination threshold and a value of black is assigned to the pixel when the combined pixel value is equal to or greater than the pixel combination threshold, and
wherein the binarizing of each color channel separately results in an intensity difference between any given set of corresponding pixels of the first color channel binary image and the second color channel binary image.
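The combining rule in this claim can be sketched directly: binarize each channel, weight, sum, and assign white when the combined value falls below the combination threshold. A minimal version with illustrative weights and thresholds:

```python
import numpy as np

def binarize(channel: np.ndarray, thresh: float) -> np.ndarray:
    """Binarize one grayscale channel image: 1 at or above thresh, else 0."""
    return (channel >= thresh).astype(np.float64)

def combine(b1, b2, w1, w2, combo_thresh):
    """Weighted combination of the two channel binary images; per the claim,
    a pixel becomes white when the combined value is below the combination
    threshold, black otherwise (255 = white, 0 = black here)."""
    combined = w1 * b1 + w2 * b2
    return np.where(combined < combo_thresh, 255, 0).astype(np.uint8)

red   = np.array([[ 10, 200], [ 90, 150]])
green = np.array([[ 30, 100], [200, 250]])
out = combine(binarize(red, 128), binarize(green, 128), 0.5, 0.5, 0.5)
```

A pixel dark in both channels combines to 0 (< 0.5, so white); a pixel bright in either channel combines to at least 0.5 and is assigned black.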

US Pat. No. 10,922,800

IMAGE PROCESSING CIRCUIT, DISPLAY DEVICE HAVING THE SAME, AND METHOD OF DRIVING THE DISPLAY DEVICE

SAMSUNG DISPLAY CO., LTD....

1. An image processing circuit comprising:a memory;
a gamma converter which converts a first image data signal of a frame to output a current image data signal of a current frame corresponding to one of a first gamma type and a second gamma type based on a spatial distribution pattern;
a compression circuit which separates the current image data signal of the current frame into a first gamma signal corresponding to the first gamma type and a second gamma signal corresponding to the second gamma type, and compresses the first gamma signal and the second gamma signal to a first compression gamma signal and a second compression gamma signal, respectively, wherein the first compression gamma signal and the second compression gamma signal are stored in the memory;
a decompression circuit which decompresses the first compression gamma signal and the second compression gamma signal stored in the memory, and combines a first decompression gamma signal and a second decompression gamma signal to output a previous image data signal of a previous frame; and
a gamma correction circuit which performs a gamma adjustment based on the current image data signal of the current frame and the previous image data signal of the previous frame to output a second image data signal of the current frame.

US Pat. No. 10,922,799

IMAGE PROCESSING METHOD THAT PERFORMS GAMMA CORRECTION TO UPDATE NEURAL NETWORK PARAMETER, IMAGE PROCESSING APPARATUS, AND STORAGE MEDIUM

CANON KABUSHIKI KAISHA, ...

1. An image processing method comprising the steps of:acquiring a training image and a correct image;
inputting the training image into a multilayer neural network to generate an output image;
performing a gamma correction for each of the correct image and the output image and calculating an error between the correct image after the gamma correction and the output image after the gamma correction; and
updating a network parameter of the neural network using the error.
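The loss computation in the third step can be sketched without the network itself: gamma-correct both the output image and the correct image, then take the error between them. A numpy stand-in (the gamma value and squared-error metric are illustrative assumptions):

```python
import numpy as np

GAMMA = 2.2  # illustrative display gamma; the claim does not fix a value

def gamma_correct(img):
    return np.clip(img, 0.0, 1.0) ** (1.0 / GAMMA)

def gamma_loss(output, target):
    """Error computed after gamma-correcting both images, as in the claimed
    training step (mean squared error is an assumed choice of metric)."""
    return float(np.mean((gamma_correct(output) - gamma_correct(target)) ** 2))

a = np.full((2, 2), 0.25)
b = np.full((2, 2), 0.5)
l_same = gamma_loss(a, a)
l_diff = gamma_loss(a, b)
```

Because gamma correction expands dark tones before the error is taken, the resulting gradient weights shadow regions differently than a plain linear-domain loss would.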

US Pat. No. 10,922,798

IMAGE PROCESSING APPARATUS, METHOD FOR PROCESSING IMAGE AND COMPUTER-READABLE RECORDING MEDIUM

Samsung Electronics Co., ...

1. An image processing apparatus, comprising:a communicator configured to receive a low-quality image of lower quality than an original image and the original image; and
a processor configured to:
generate a first image of higher quality than the low-quality image obtained by performing image processing on the received low-quality image by using a parameter for image processing,
generate a plurality of reduced images to reduce the first image at a plurality of different ratios,
extract respective visual features from the original image and the plurality of reduced images, and
select a second image which is a reduced image having a visual feature most similar to a visual feature extracted from the original image among the plurality of reduced images,
wherein the processor is further configured to adjust the parameter to allow a difference between the visual feature of the first image and the visual feature of the second image to be within a first predetermined range.

US Pat. No. 10,922,797

DISPLAY SPECULAR REFLECTION MITIGATION

Dell Products L.P.

1. A computer-implemented method, comprising:providing, for display on a display device, a graphical user interface (GUI) having a plurality of pixels representing an image;
identifying external incoming light that is incident on the display device;
measuring, by each individual sensor of a plurality of individual sensors of a sensor array that is incorporated with the pixels of the display device, a particular brightness of the incoming light for a grouping of pixels of the plurality of pixels of the display device that is associated with the individual sensor, each grouping of pixels including two or more pixels of the plurality of pixels, wherein each sensor is associated with a differing grouping of pixels of the plurality of pixels of the display device;
calculating, by a display adjustment computing module, a normal distribution of the brightness of the incoming light across the display device based on the brightness of the incident light corresponding to each pixel grouping of the plurality of pixels of the display device that is measured by the corresponding individual sensor for the pixel grouping;
determining, by the display adjustment computing module, that a brightness of the incoming light that is incident on a particular grouping of pixels of the display device is greater than the normal distribution of the brightness of the incoming light across the display device; and
in response to determining that the brightness of the incoming light that is incident on a particular grouping of pixels of the display device is greater than the normal distribution of the brightness of the incoming light across the display device, adjusting, by the display adjustment computing module, the particular brightness of one or more pixels of the image.
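The detection step above reduces to comparing each pixel grouping's measured brightness against the brightness distribution across the display. A minimal sketch, where "greater than the normal distribution" is interpreted as exceeding mean plus a multiple of the standard deviation (an assumption):

```python
import numpy as np

def bright_groupings(sensor_brightness, k=1.0):
    """Flag pixel groupings whose measured incident brightness exceeds the
    distribution across the display; mean + k*std stands in for the claim's
    'greater than the normal distribution' test."""
    b = np.asarray(sensor_brightness, dtype=np.float64)
    return b > b.mean() + k * b.std()

flags = bright_groupings([10, 12, 11, 90])  # one grouping hit by glare
```

Pixels inside flagged groupings would then have their brightness adjusted to mitigate the specular reflection.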

US Pat. No. 10,922,796

METHOD OF PRESENTING WIDE DYNAMIC RANGE IMAGES AND A SYSTEM EMPLOYING SAME

TONETECH INC., Calgary (...

1. A method of converting an input wide dynamic range (WDR) image into an output low dynamic range (LDR) image, the pixels of the input WDR image having a first dynamic range R_WDR and the pixels of the output LDR image having a second dynamic range R_LDR smaller than the first dynamic range, the method comprising:representing each intensity value x within the first range R_WDR as x = m × r^s, with m being a mantissa of x, r being a radix, s being an exponent of x, and × representing multiplication;
partitioning the first range R_WDR into a plurality of input intervals X_i, with i being an integer and i ≥ 0, based at least on values of the exponents within the first range R_WDR, the input intervals X_i being non-overlapped and spanning the first dynamic range R_WDR;
obtaining a transfer function ƒ(x) over the first dynamic range R_WDR, the transfer function ƒ(x) comprising a plurality of sub-functions ƒ_i(x), each sub-function ƒ_i(x) being determined over one of the input intervals X_i of the first dynamic range R_WDR;
determining the intensity y(p) of each pixel p of the output LDR image by using the transfer function ƒ(x) and at least the intensity value x(p) of the corresponding pixel p of the input WDR image; and
outputting the output LDR image.
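The exponent-based partition in this claim maps naturally onto `frexp` for radix r = 2. A sketch under stated assumptions: each exponent interval gets an equal slice of the LDR range, linear in the mantissa within the slice; this is one simple choice of sub-functions ƒ_i(x), not the patented curve:

```python
import numpy as np

def wdr_to_ldr(wdr, out_max=255):
    """Piecewise transfer function over exponent intervals of x = m * 2**s.
    Each exponent value gets an equal slice of [0, out_max], linear in the
    mantissa inside the slice (an illustrative sub-function choice)."""
    m, s = np.frexp(np.maximum(wdr, 1e-12))   # m in [0.5, 1), x = m * 2**s
    s_min, s_max = s.min(), s.max()
    n = max(s_max - s_min + 1, 1)             # number of input intervals X_i
    slice_w = out_max / n
    frac = (m - 0.5) * 2.0                    # mantissa rescaled to [0, 1)
    y = (s - s_min + frac) * slice_w
    return np.clip(y, 0, out_max).astype(np.uint8)

wdr = np.array([1.0, 2.0, 1024.0, 65536.0])
ldr = wdr_to_ldr(wdr)
```

The mapping is monotone and logarithmic-like: each doubling of intensity advances the output by one fixed slice.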

US Pat. No. 10,922,795

METHOD AND DEVICE FOR MEASURING DISTORTION PARAMETER OF VISUAL REALITY DEVICE, AND MEASURING SYSTEM

BEIJING BOE OPTOELECTRONI...

1. A method for measuring a distortion parameter of a visual reality device comprising:obtaining an anti-distortion grid image according to a first distortion coefficient;
obtaining a grid image of the anti-distortion grid image at a preset viewpoint after the anti-distortion grid image passes through a to-be-measured optical component of the visual reality device;
determining a distortion type of the grid image after passing through the to-be-measured optical component;
adjusting the first distortion coefficient according to the distortion type of the grid image, thereby obtaining an adjusted first distortion coefficient and then reducing distortion of the grid image;
repeating the above steps until the distortion of the grid image is less than or equal to a distortion threshold;
wherein the adjusted first distortion coefficient when the distortion of the grid image is less than or equal to the distortion threshold, is taken as a distortion coefficient of the to-be-measured optical component.
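The repeat-until-below-threshold structure of this claim is a simple feedback loop. A hedged sketch, where `distortion_of` stands in for rendering the anti-distortion grid through the measured optics and scoring the residual distortion (the sign convention and step rule are assumptions):

```python
def calibrate(distortion_of, coeff, step=0.01, tol=1e-3, max_iter=1000):
    """Repeat the claimed adjust-and-remeasure loop until the observed grid
    distortion falls within the threshold; the final coefficient is taken
    as the distortion coefficient of the optical component."""
    for _ in range(max_iter):
        d = distortion_of(coeff)
        if abs(d) <= tol:
            break                        # distortion within threshold: done
        coeff += step if d > 0 else -step   # sign encodes distortion type
    return coeff

# Toy optics whose residual distortion shrinks linearly with the coefficient.
k = calibrate(lambda c: 0.3 - c, 0.0)
```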

US Pat. No. 10,922,794

IMAGE CORRECTION METHOD AND DEVICE

BOE TECHNOLOGY GROUP CO.,...

1. An image correction method, comprising:detecting an input image to obtain boundary straight lines, wherein detecting the input image to obtain boundary straight lines comprises:
detecting the input image to obtain a horizontal straight line set and a vertical straight line set;
merging approximately parallel straight line segments in the horizontal straight line set and the vertical straight line set respectively; and
selecting two straight line segments from the horizontal straight line set and the vertical straight line set respectively as the boundary straight lines, wherein selecting two straight line segments from the horizontal straight line set and the vertical straight line set respectively as the boundary straight lines, comprises:
determining scores of merged straight line segments in the horizontal straight line set and the vertical straight line set;
selecting from the horizontal straight line set a horizontal straight line segment having a highest score, and selecting from the horizontal straight line set another horizontal straight line segment whose distance from the horizontal straight line segment having the highest score is within a range; and
selecting from the vertical straight line set a vertical straight line segment having a highest score, and selecting from the vertical straight line set another vertical straight line segment whose distance from the vertical straight line segment having the highest score is within a range, wherein the selected two horizontal straight line segments and the selected two vertical straight line segments are determined as the boundary straight lines;
determining vertices based on the obtained boundary straight lines;
determining an estimated height-to-width ratio based on the determined vertices; and
performing a perspective transformation on the input image based on the estimated height-to-width ratio.

US Pat. No. 10,922,793

GUIDED HALLUCINATION FOR MISSING IMAGE CONTENT USING A NEURAL NETWORK

NVIDIA Corporation, Sant...

1. A computer-implemented method, comprising:receiving, by a neural network model, a first image missing a portion of image data;
receiving by the neural network model, a first semantic map corresponding to the first image and missing a portion of semantic data;
processing the first image and the first semantic map, by the neural network model, to produce a second image including hallucinated image data representing a complete version of the first image and a second semantic map including hallucinated semantic data corresponding to the second image, wherein the second semantic map is a high resolution semantic map and the first semantic map is a low resolution semantic map.

US Pat. No. 10,922,792

IMAGE ADJUSTMENT METHOD AND ASSOCIATED IMAGE PROCESSING CIRCUIT

Realtek Semiconductor Cor...

1. An image adjustment method for adjusting an image, the image comprising at least one frame that comprises a plurality of pixels, the image adjustment method comprising:sequentially processing the pixels in the at least one frame, wherein a pixel under processing is a current pixel, the current pixel and a plurality of adjacent pixels form a current block, and each current block is processed with following operations:
reading a grayscale value of each pixel in the current block;
determining a region grayscale value and a region variance of the current block according to the grayscale values of the pixels;
generating a variance adjustment parameter via a variance adjustment function, wherein the region grayscale value is a variable of the variance adjustment function;
generating an adjusted region variance according to the variance adjustment parameter and the region variance; and
comparing the adjusted region variance with a variance threshold to determine whether to perform a noise suppression operation on the current pixel.
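The per-block decision above can be sketched in a few lines. The variance-adjustment function (brighter region gives a larger adjusted variance) and the comparison direction (suppress only flat, low-variance regions) are illustrative assumptions:

```python
import numpy as np

def suppress_noise_here(block, var_thresh):
    """Claimed decision for the current pixel (block centre): adjust the
    region variance via a function of the region gray value, then compare
    with a threshold to decide whether to apply noise suppression."""
    region_gray = block.mean()
    region_var = block.var()
    adjusted_var = region_var * (1.0 + region_gray / 255.0)  # assumed function
    return adjusted_var < var_thresh     # flat, low-variance regions only

flat = np.full((3, 3), 100.0)
edge = np.array([[0, 0, 255]] * 3, dtype=np.float64)
```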

US Pat. No. 10,922,791

IMAGE PROCESSING APPARATUS AND METHOD

REALTEK SEMICONDUCTOR COR...

1. An image processing method, comprising:receiving a currently-input image frame and a previously-output image frame, wherein the currently-input image frame comprises multiple first pixels, and the previously-output image frame comprises multiple second pixels;
comparing the first pixels and the second pixels corresponding to a coordinate system, and obtaining multiple corresponding differences;
obtaining multiple dynamic parameter values based on the differences and a dynamic parameter table;
obtaining multiple boundary retention values based on the dynamic parameter values and a boundary operator; and
obtaining multiple currently-output pixels based on the first pixels, the second pixels, and the boundary retention values.
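The five steps of this claim form a motion-adaptive temporal filter. A minimal sketch, where the dynamic parameter table values are illustrative and a gradient magnitude stands in for the claim's boundary operator:

```python
import numpy as np

# Assumed dynamic-parameter table: larger frame difference -> larger weight
# on the currently-input frame (values are illustrative, not the patent's).
DIFF_BINS   = np.array([0.0, 8.0, 32.0, 128.0, 256.0])
PARAM_TABLE = np.array([0.2, 0.4, 0.7, 1.0])

def temporal_blend(curr, prev):
    """Per-pixel differences index the parameter table; an edge strength
    derived from np.gradient raises the retention of the current frame
    along boundaries before blending with the previously-output frame."""
    c = curr.astype(np.float64)
    p = prev.astype(np.float64)
    diff = np.abs(c - p)
    k = np.clip(np.digitize(diff, DIFF_BINS) - 1, 0, len(PARAM_TABLE) - 1)
    w = PARAM_TABLE[k]                       # dynamic parameter values
    gy, gx = np.gradient(c)
    edge = np.clip(np.hypot(gx, gy) / 255.0, 0.0, 1.0)
    w = w + edge * (1.0 - w)                 # boundary retention values
    return w * c + (1.0 - w) * p             # currently-output pixels

static = temporal_blend(np.full((4, 4), 100.0), np.full((4, 4), 100.0))
moved  = temporal_blend(np.full((4, 4), 255.0), np.full((4, 4), 0.0))
```

A static scene passes through unchanged, while a large frame-to-frame change drives the weight to 1 and outputs the current frame, avoiding ghosting.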

US Pat. No. 10,922,790

APPARATUS AND METHOD FOR EFFICIENT DISTRIBUTED DENOISING OF A GRAPHICS FRAME

Intel Corporation, Santa...

12. A method comprising:dispatching, by a dispatcher node, ray tracing graphics work to a plurality of nodes;
performing ray tracing operations on a first node to render a first region of an image frame;
requesting data associated with a region outside of the first region from one or more other nodes of the plurality of nodes;
denoising the first region using a combination of data associated with the first region and the data associated with the region outside of the first region; and
combining, by the dispatcher node, regions of the image after denoising at the plurality of nodes to generate a denoised image for the image frame.

US Pat. No. 10,922,789

SUPER-RESOLUTION LATTICE LIGHT FIELD MICROSCOPIC IMAGING SYSTEM AND METHOD

TSINGHUA UNIVERSITY, Bei...

1. A super-resolution lattice light field microscopic imaging system, comprising:a microscope, comprising an objective and a tube lens, and configured to magnify a sample and to image the sample onto a first image plane of the microscope;
a first relay lens, configured to match a numerical aperture of the objective with that of a microlens array and to magnify or minify the first image plane;
a 2D scanning galvo, disposed in a frequency domain plane of the first relay lens, and configured to rotate an angle of a light path in the frequency domain plane;
an illuminating system, configured to provide uniform illumination on the microlens array to generate SIM pattern illumination;
the microlens array, configured to modulate a light beam with a preset angle to a target spatial position at a back focal plane of the microlens array to obtain a modulated image;
an image sensor, disposed at a second image plane of an imaging camera lens and coupled with the microlens array through the imaging camera lens, and configured to record the modulated image; and
a reconstruction module, configured to acquire the modulated image from the image sensor and reconstruct a 3D structure of the sample based on the modulated image,
wherein the illuminating system comprises:
a laser source, disposed outside the microscope, and configured to provide stable and uniform illumination;
a laser filter, configured to eliminate interference from a stray light; and
a dichroic mirror, disposed between the microlens array and the image sensor, configured to distinguish an illumination beam from an imaging beam and direct the illumination beam onto the microlens array to generate the SIM pattern illumination.

US Pat. No. 10,922,788

METHOD FOR PERFORMING CONTINUAL LEARNING ON CLASSIFIER IN CLIENT CAPABLE OF CLASSIFYING IMAGES BY USING CONTINUAL LEARNING SERVER AND CONTINUAL LEARNING SERVER USING THE SAME

Stradvision, Inc., Gyeon...

1. A method for performing continual learning on a classifier, in a client, capable of classifying images by using a continual learning server, comprising steps of:(a) a continual learning server, if a first classifier in a client has outputted first classification information corresponding to each of acquired images, and first hard images determined as unclassifiable by the first classifier according to the first classification information corresponding to the acquired images have been transmitted from the client, performing or supporting another device to (i) perform a process of inputting each of the first hard images into an Adversarial Autoencoder (AAE), to thereby (i-1) allow an encoder in the Adversarial Autoencoder to encode each of the first hard images and thus to output each of latent vectors, (i-2) allow a decoder in the Adversarial Autoencoder to decode each of the latent vectors and thus to output each of reconstructed images corresponding to each of the first hard images, (i-3) allow a discriminator in the Adversarial Autoencoder to output attribute information on whether each of the reconstructed images is fake or real, and (i-4) allow a second classifier in the Adversarial Autoencoder to output second classification information on each of the latent vectors, and then (ii) perform a process of determining whether each of the reconstructed images is unclassifiable by the Adversarial Autoencoder based on the attribute information generated from the discriminator and on the second classification information generated from the second classifier and thus selecting second hard images from the reconstructed images, to thereby (ii-1) store first reconstructed images which are determined as the second hard images as a first training data set, and (ii-2) generate each of augmented images through the decoder by adjusting each of the latent vectors corresponding to each of second reconstructed images which are determined as not the second hard images, and thus store the augmented images as a second training data set;
(b) the continual learning server, performing or supporting another device to perform a process of continual learning on a third classifier in the continual learning server corresponding to the first classifier in the client by using the first training data set and the second training data set; and
(c) the continual learning server, performing or supporting another device to perform a process of transmitting one or more updated parameters of the third classifier in the continual learning server to the client, to thereby allow the client to update the first classifier by using the updated parameters.

US Pat. No. 10,922,787

IMAGING APPARATUS AND METHOD FOR CONTROLLING IMAGING APPARATUS

CANON KABUSHIKI KAISHA, ...

1. An imaging apparatus comprising:a first imaging element configured to image a first imaging range;
a second imaging element configured to image a second imaging range; and
a synthesizing unit configured to synthesize an image corresponding to a third imaging range wider than the first imaging range or the second imaging range based on pixel data groups output by the first imaging element and the second imaging element, wherein
the first imaging element and the second imaging element output pixel data corresponding to a position at which the first imaging range and the second imaging range overlap with each other, to the synthesizing unit prior to other pixel data,
the first imaging element has pixels in a matrix form and reads out pixels on a line-by-line basis in a first direction, and
the second imaging element has pixels in a matrix form, and reads out pixels on the line-by-line basis in a second direction which is opposite to the first direction.
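The overlap-first readout order can be simulated in a few lines. This sketch assumes a specific geometry (the overlap sits at sensor A's last line and sensor B's first line), so A reads its lines last-to-first while B reads first-to-last; the names are illustrative, not from the claim.

```python
def interleaved_readout(sensor_a, sensor_b):
    """Yield (source, line) pairs in readout order. Sensor A reads its lines
    last-to-first and sensor B first-to-last (opposite directions), so the
    lines at the overlapping boundary are emitted before any others and the
    synthesizing unit can start stitching immediately."""
    order_a = list(reversed(range(len(sensor_a))))  # starts at the overlap side
    order_b = list(range(len(sensor_b)))            # also starts at the overlap side
    for i, j in zip(order_a, order_b):
        yield ('A', sensor_a[i])
        yield ('B', sensor_b[j])
```

With three lines per sensor, the first two emissions are A's last line and B's first line, i.e. the overlapping region, ahead of all other pixel data.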

US Pat. No. 10,922,786

IMAGE DIAGNOSIS SUPPORT APPARATUS, METHOD, AND PROGRAM

FUJIFILM Corporation, To...

1. An image diagnosis support apparatus, comprising:a processor configured to:
generate an interpolation image from an original image acquired by imaging a subject;
calculate an index value indicating a feature of a pixel position in a region of interest of the original image based on pixel values of corresponding pixel positions, which are a plurality of pixel positions of the interpolation image, corresponding to the pixel position included in the region of interest of the original image; and
reflect the index value at the pixel position in the region of interest of the original image,
wherein the original image is a three-dimensional image including a plurality of slice images, and
the processor is further configured to:
generate a plurality of interpolation slice images for interpolation between slices of the plurality of slice images as the interpolation images,
set a corresponding interpolation slice image corresponding to a target slice image, which is a target of calculation of the index value, in the plurality of interpolation slice images, and calculate an index value indicating a feature of a pixel position in the region of interest of the target slice image based on a pixel value of a corresponding pixel position of the corresponding interpolation slice image corresponding to a pixel position in the region of interest of the target slice image, and
count a number of pixels of interest that is a number of pixel positions having pixel values indicating the region of interest, among the corresponding pixel positions of the corresponding interpolation slice images, and calculate, as the index value, a value obtained by dividing the number of pixels of interest by a number of corresponding pixel positions of the corresponding interpolation slice images.
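The final counting step reduces to a simple fraction: pixels of interest among the corresponding interpolation-slice positions, divided by the number of such positions. A minimal sketch (function name and `roi_value` convention are assumptions):

```python
import numpy as np

def index_value(corresponding_pixels, roi_value=1):
    """Fraction of the corresponding interpolation-slice pixel positions whose
    value marks the region of interest: the claim's 'number of pixels of
    interest' divided by the number of corresponding pixel positions."""
    corresponding_pixels = np.asarray(corresponding_pixels)
    return np.count_nonzero(corresponding_pixels == roi_value) / corresponding_pixels.size
```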

US Pat. No. 10,922,785

PROCESSOR AND METHOD FOR SCALING IMAGE

Beijing Baidu Netcom Scie...

1. A processor for scaling an image, comprising an off-chip memory, a communication circuit, a control circuit, and an array processor, wherein:the off-chip memory is configured for storing a to-be-scaled original image, the original image is an N-channel image, and N is an integer greater than 1;
the communication circuit is configured for receiving an image scaling instruction, the image scaling instruction includes a width scaling factor and a height scaling factor;
the control circuit is configured for executing the image scaling instruction, and sending a calculation control signal for calculating pixel data of each target pixel in a scaled target image to the array processor; and
the array processor is configured for extracting pixel data of a pixel in the original image corresponding to a target pixel under the control of the calculation control signal, and calculating in parallel channel values of N channels in the target pixel using N processing elements in the array processor based on the width scaling factor, the height scaling factor, and channel values of N channels in the extracted pixel data,
wherein the processor further comprises:
a parameter transfer circuit, configured for acquiring values of parameters x0, w0, w1, y0, h0, and h1 using coordinates (x,y) of the target pixel in the target image, the width scaling factor scale_w, and the height scaling factor scale_h, and transferring the values to the array processor, wherein x0=⌊x/scale_w⌋, y0=⌊y/scale_h⌋, h0=y/scale_h−y0, h1=y0−y/scale_h+1, w0=x/scale_w−x0, and w1=x0−x/scale_w+1; and
the array processor is further configured for:
defining four adjacent pixels corresponding to coordinates (x0,y0), (x0+1,y0), (x0,y0+1), and (x0+1,y0+1) in the original image as the pixels corresponding to the target pixel, and extracting the pixel data; and
calculating in parallel the channel values Y(x,y) of the N channels in the target pixel using the N processing elements in the array processor based on an equation Y(x,y)=X(x0,y0)×w0×h0+X(x0+1,y0)×w1×h0+X(x0,y0+1)×w0×h1+X(x0+1,y0+1)×w1×h1, wherein X(x0,y0), X(x0+1,y0), X(x0,y0+1), and X(x0+1,y0+1) are channel values of current channels in the four adjacent pixels respectively.
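The per-pixel computation can be transcribed directly into Python (a sequential sketch of what the claim's N processing elements do in parallel; NumPy broadcasting stands in for the per-channel parallelism). The weights follow the equation exactly as written in the claim, with w0 and h0 multiplying the floor sample; since w0+w1 = h0+h1 = 1, the four weights still sum to one.

```python
import math
import numpy as np

def scale_pixel(src, x, y, scale_w, scale_h):
    """Compute target pixel (x, y) from source image src of shape (H, W, N),
    using the claim's parameters and weighting. All N channels are computed
    at once via broadcasting."""
    x0 = math.floor(x / scale_w)
    y0 = math.floor(y / scale_h)
    h0 = y / scale_h - y0
    h1 = y0 - y / scale_h + 1          # equals 1 - h0
    w0 = x / scale_w - x0
    w1 = x0 - x / scale_w + 1          # equals 1 - w0
    X = src.astype(float)              # X[row, col] = X(col, row) in claim notation
    return (X[y0, x0] * w0 * h0 + X[y0, x0 + 1] * w1 * h0
            + X[y0 + 1, x0] * w0 * h1 + X[y0 + 1, x0 + 1] * w1 * h1)
```

On a constant-valued image the result equals that constant for any scaling factors, which is a quick sanity check that the weights are normalized.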

US Pat. No. 10,922,784

IMAGE PROCESSING APPARATUS AND IMAGE PROCESSING METHOD THAT SET A SWITCH SPEED TO SWITCH A SERIES OF IMAGES FROM ONE TO ANOTHER IN A SEQUENTIAL DISPLAY WITH THE FASTER THE SPEED, THE LARGER A REGION OUTPUT FROM THE IMAGES

Canon Kabushiki Kaisha, ...

1. An image processing apparatus comprising:at least one processor operatively coupled to a memory, serving as:
(a) a display device configured to display a series of images, including a selected image out of the series of images;
(b) a speed setting unit configured to set a switch speed that is a speed at which to switch the series of images from one image to a next image, when the display device displays the series of images, in order, and, based on a user's input, to change the switch speed to a selected switch speed from one of a plurality of switch speeds; and
a control unit configured to control operation of the display device such that, when the display device displays the selected image out of the series of images, to cut out a partial region of the selected image as a cutout region so that the display device displays the image of the cutout region, and, in accordance with the speed setting unit changing the switch speed to the selected switch speed during the displaying of the series of images in order on the display device, causes the display device to display an image of the cutout region, that is different from a region of the series of images displayed on the display device before the change of the switch speed, by changing a size of the cutout region, the control unit being configured to control operation to change the size of the cutout region to be a larger scale in accordance with the speed setting unit changing the selected switch speed to a faster switch speed.
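The speed-to-region relationship in the claim (faster switch speed, larger cutout region) can be illustrated with a toy mapping. The set of speeds and the linear interpolation between a 50% and a 100% crop are assumptions for the sketch, not taken from the claim.

```python
def cutout_size(full_size, speed, speeds=(1, 2, 4)):
    """Map a selected switch speed to a cutout-region size: the faster the
    slideshow switches images, the larger the region cut out of each image.
    speeds and the linear mapping are illustrative assumptions."""
    w, h = full_size
    lo, hi = min(speeds), max(speeds)
    frac = 0.5 + 0.5 * (speed - lo) / (hi - lo)   # 50% crop at slowest, full frame at fastest
    return int(w * frac), int(h * frac)
```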

US Pat. No. 10,922,783

CUBE-BASED PROJECTION METHOD THAT APPLIES DIFFERENT MAPPING FUNCTIONS TO DIFFERENT SQUARE PROJECTION FACES, DIFFERENT AXES, AND/OR DIFFERENT LOCATIONS OF AXIS

MEDIATEK INC., Hsin-Chu ...

1. A cube-based projection method comprising:generating, by a conversion circuit, pixels of different square projection faces associated with a cube-based projection of a 360-degree image content of a sphere, comprising:
generating pixels of a first square projection face by utilizing a first mapping function set; and
generating pixels of a second square projection face by utilizing a second mapping function set, wherein the different square projection faces comprise the first square projection face and the second square projection face;
wherein the second mapping function set comprises:
where f_v(u, v) is a function of u in a first direction and v in a second direction.

US Pat. No. 10,922,782

MAPPING VERTICES FROM AN EARTH MODEL TO A 2D ARRAY

LANDMARK GRAPHICS CORPORA...

1. A method for mapping vertices from an earth model to a 2D array, comprising:a) aligning the earth model and the 2D array, wherein the earth model comprises a plurality of horizontal curves, a plurality of vertical curves and vertices at each intersection of a vertical curve and a horizontal curve;
b) processing each vertex on a respective vertical curve that is nearest an intersection of a horizontal reference line in the 2D array and the respective vertical curve by marking a point representing the vertex on the reference line, which represents a first row of the 2D array, and in a respective column;
c) processing each next unprocessed vertex on the respective vertical curve that each respective vertex was processed in one of step (b) and at least one of steps (c) and (e) by marking a respective point representing each next vertex on another respective horizontal line in the 2D array, which represents a next respective row of the 2D array, and in the respective column that a nearest processed vertex on the respective vertical curve is marked;
d) forming at least one current curve;
e) processing each unprocessed vertex between each respective current curve and the reference line by marking a respective point representing each unprocessed vertex on the another respective horizontal line in the 2D array on which the vertices processed in one of steps (b) and step (c) on a same side of the reference line are marked, at a unique position;
f) reducing an amount of computer memory required to store the 2D array by optimizing a spacing between the vertices marked in each another respective horizontal line from steps (c) and (e) with an empty column; and
repeating steps (c) through (f) using a computer processor until there are no more unprocessed vertices.

US Pat. No. 10,922,781

SYSTEM FOR PROCESSING IMAGES FROM MULTIPLE IMAGE SENSORS

NXP USA, INC., Austin, T...

1. A system for processing a plurality of input images, the system comprising:an access serializer that receives a plurality of access requests associated with processing of a plurality of input image lines, and a plurality of configuration parameters associated with the plurality of input image lines, respectively, serializes the plurality of access requests, and outputs the serialized plurality of access requests and the plurality of configuration parameters, respectively, wherein each input image of the plurality of input images includes a corresponding set of input image lines of the plurality of input image lines;
a plurality of trigger controllers that are connected to the access serializer for receiving the serialized plurality of access requests and the plurality of configuration parameters, and decode the serialized plurality of access requests to generate a plurality of trigger signals and a plurality of trigger identifiers (IDs), respectively;
a first-in-first-out (FIFO) memory connected to the plurality of trigger controllers for receiving the plurality of trigger IDs and outputting the plurality of trigger IDs based on an order of reception of the plurality of trigger IDs; and
an image signal processing (ISP) pipeline circuit that receives the plurality of trigger IDs, the plurality of configuration parameters, and the plurality of input image lines, and processes the plurality of input image lines to generate a plurality of processed image lines, respectively, wherein the plurality of input images are processed by processing the plurality of input image lines in an order of reception of the plurality of trigger IDs by the ISP pipeline circuit, respectively, and wherein a first input image line of the plurality of input image lines is processed based on a first trigger ID of the plurality of trigger IDs and a first set of configuration parameters of the plurality of configuration parameters, to generate a first processed image line of the plurality of processed image lines.
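A software analogue of this pipeline can be sketched with a plain FIFO: trigger IDs are queued in arrival order, and each image line is then processed with the configuration parameters attached to its trigger ID. The `gain` parameter and all names are illustrative stand-ins, not from the claim.

```python
from collections import deque

def process_lines(requests):
    """Serialize access requests, queue trigger IDs in a FIFO, then process
    image lines strictly in trigger-ID arrival order (a toy analogue of the
    claimed hardware pipeline). Each request is (trigger_id, config, line)."""
    fifo = deque()
    params, lines = {}, {}
    for trigger_id, config, line in requests:     # serialized arrival order
        fifo.append(trigger_id)
        params[trigger_id] = config
        lines[trigger_id] = line
    processed = []
    while fifo:
        tid = fifo.popleft()                      # FIFO preserves arrival order
        gain = params[tid].get('gain', 1)         # per-line configuration parameter
        processed.append([p * gain for p in lines[tid]])
    return processed
```

Because the queue is first-in-first-out, lines from interleaved sensors come out processed in exactly the order their triggers arrived, which is the ordering guarantee the claim describes.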

US Pat. No. 10,922,780

METHOD TO DISTRIBUTE THE DRAWING CALCULATION OF ARCHITECTURAL DATA ELEMENTS BETWEEN MULTIPLE THREADS

GRAPHISOFT SE, Budapest ...

1. A method for calculating a series of frames of video data, comprising:grouping a plurality of architectural data elements into a plurality of threads using one or more algorithms that are operating on a processor;
calculating a frame part of animation data for each of the threads using the one or more algorithms that are operating on the processor;
determining a calculation time for each of the threads using the one or more algorithms that are operating on the processor;
modifying the grouping of the plurality of architectural data elements as a function of the calculation time for each of the threads to achieve the same calculation time using the one or more algorithms that are operating on the processor;
monitoring a calculation time required for each architectural element; and
identifying one or more architectural elements that require a greater calculation time than a calculation time of one or more other architectural elements.
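One simple way to pursue the claim's goal of equalized per-thread calculation time is greedy longest-first assignment: after measuring each element's calculation time, regroup elements by repeatedly giving the most expensive remaining element to the least-loaded thread. This is a standard load-balancing heuristic offered as an illustration, not the patented algorithm.

```python
def rebalance(element_times, num_threads):
    """Regroup architectural elements across threads so per-thread calculation
    times even out: greedy longest-processing-time-first assignment to the
    least-loaded thread. element_times maps element name -> measured time."""
    threads = [[] for _ in range(num_threads)]
    loads = [0.0] * num_threads
    for name, t in sorted(element_times.items(), key=lambda kv: -kv[1]):
        i = loads.index(min(loads))   # least-loaded thread so far
        threads[i].append(name)
        loads[i] += t
    return threads, loads
```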

US Pat. No. 10,922,779

TECHNIQUES FOR MULTI-MODE GRAPHICS PROCESSING UNIT PROFILING

INTEL CORPORATION, Santa...

1. An apparatus, comprising:at least one memory comprising instructions; and
a processor coupled to the at least one memory, the processor to execute the instructions to:
determine a plurality of profiling modes for profiling an operating process of a graphics processing unit (GPU) application,
access original binary code for the GPU application, and
instrument the original binary code with profiling instructions inserted at different code locations in the original binary code as specified by an instrumentation schema to generate a multi-mode instrumented binary code from the original binary code, the multi-mode instrumented binary code comprising a plurality of instrumentation modes, each of the plurality of instrumentation modes to generate profiling data corresponding to at least one of the plurality of profiling modes.

US Pat. No. 10,922,778

SYSTEMS AND METHODS FOR DETERMINING AN ESTIMATED TIME OF ARRIVAL

BEIJING DIDI INFINITY TEC...

1. An electronic system for obtaining an estimated time of arrival (ETA) for a service request, comprising:at least one non-transitory computer-readable storage medium storing a set of instructions;
and at least one processor configured to communicate with the at least one non-transitory computer-readable storage medium, wherein when executing the set of instructions, the at least one processor is directed to execute the set of stored instructions to:
obtain a plurality of example historical service orders, each example historical service order of the plurality of example historical service orders including common features each associated with a section of a plurality of sections of a historical measurement, wherein at least two of the plurality of the example historical service orders are of variable trip distances, variable time interval of starting times, variable route information, and variable trip starting locations;
cluster the plurality of example historical service orders into a plurality of subsets of example historical service orders based on the common features, wherein the common feature of each example historical service order in a same subset of example historical service orders is associated with a same section of the historical measurement;
for at least one of the plurality of subsets of example historical service orders, use machine learning to train a first model of estimated time of arrival using the common feature of the subset of example historical service orders;
repeat the clustering and training steps for additional subsets of example historical service orders, each using its respective common time feature, to obtain a plurality of first ETA models encoded by structured data;
store the structured data in the at least one storage medium encoding the plurality of first ETA models;
receive a service request comprising service request features including a trip distance, a trip starting time, route information, and a trip starting location from a requester;
use a decision tree model to parse through the plurality of first ETA models based on the trip distance, the trip starting time, and the trip starting location of the service request to select one of the plurality of first ETA models as a target ETA model;
use the target ETA model to determine a target ETA for the service request;
and send the target ETA to a requester of the service request.
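The cluster-then-select flow can be illustrated with a deliberately tiny stand-in: historical orders are bucketed by one shared feature (a six-hour window of the starting time, assumed here in place of the claim's general "common features"), each bucket gets a trivial ETA model (the mean trip duration), and the "decision tree" collapses to a single split on the same feature.

```python
def train_eta_models(orders):
    """Cluster historical orders by starting-hour bucket (an illustrative
    stand-in for the claim's common features) and fit one trivial first ETA
    model per cluster: the mean trip duration of that cluster."""
    clusters = {}
    for o in orders:
        clusters.setdefault(o['hour'] // 6, []).append(o['duration'])
    return {bucket: sum(d) / len(d) for bucket, d in clusters.items()}

def predict_eta(models, request_hour):
    """Select the target ETA model for a request; the claim's decision tree
    is reduced here to a single split on the hour bucket."""
    return models[request_hour // 6]
```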

US Pat. No. 10,922,777

CONNECTED LOGISTICS PLATFORM

SAP SE, Walldorf (DE)

1. A method comprising:defining, using at least one data processor, a plurality of roles with differing levels of access, the plurality of roles comprising a first role for a first user and a second role for a second user;
receiving, using the at least one data processor, an input representing a geofence indicative of at least a geometrical area on a map, the geometrical area comprising one or more geofence edges;
monitoring, using the at least one data processor, telematics data indicative of at least a location of a vehicle and of a direction the vehicle is heading;
determining, using the at least one data processor, that the vehicle is within the geofence based on the location and the one or more geofence edges;
sending, using the at least one data processor, a first message to the first user, via an on-board unit in the vehicle, when the location of the vehicle is determined to be within the geofence and when the direction the vehicle is heading is relevant to contents of the first message, the first message contents comprising a notification of an incident in the direction the vehicle is heading and further comprising a subset of the telematics data selected based on the first role for the first user, the sending of the first message includes
receiving another telematics data from at least another vehicle concerning the incident, and
determining, using the received another telematics data and the geofence, that the at least another vehicle is impacted by the incident and generating a trigger for sending the first message to the first user;
wherein the first message includes at least one context and location-aware service offering generated based on (a) the context of at least one of the telematics data and the another telematics data, and (b) at least one subscription associated with at least one role in the plurality of roles, the at least one subscription identifying one or more context and location-aware service offerings being offered by one or more service providers and defining one or more restrictions for accessing the one or more service offerings within the geofence, the one or more restrictions restricting access by the first user to the one or more service offerings in accordance with at least one subscription associated with the second user;
sending, using the at least one data processor, a second message to the first user, via the on-board unit in the vehicle, when the location of the vehicle is determined to be within the geofence, the second message comprising data indicative of tour details including one or more of scheduled stops, freight to be unloaded, freight to be loaded, scheduled time of arrival at the scheduled stops, and scheduled time of departure at the scheduled stops;
providing, using the at least one data processor, the second user with access to at least a portion of the telematics data while the vehicle is located within the geofence and when the second user is associated with a next stop of the vehicle, the at least the portion of the telematics data selected based on the second role of the second user, the at least the portion of the telematics data being different from the subset of the telematics data;
providing to the second user, using the at least one data processor, data indicative of at least goods being picked up or delivered by the vehicle, wherein sensitive information is removed, based on a user profile of the second user, from the data indicative of at least the goods being picked up or delivered by the vehicle;
determining, using the at least one data processor, that the vehicle is outside of the geofence based on the location and the one or more geofence edges; and
terminating, using the at least one data processor, the second user's access to the at least a portion of the telematics data when the vehicle is determined to be outside of the geofence.
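The repeated "within the geofence based on the location and the one or more geofence edges" test is, in essence, a point-in-polygon check. A minimal sketch using standard ray casting (a common technique chosen for illustration; the claim does not prescribe an algorithm):

```python
def inside_geofence(point, edges):
    """Ray-casting point-in-polygon test: cast a ray from the vehicle location
    toward +x and count how many geofence edges it crosses; an odd count means
    the location is inside. edges is a list of ((x1, y1), (x2, y2)) segments."""
    x, y = point
    inside = False
    for (x1, y1), (x2, y2) in edges:
        if (y1 > y) != (y2 > y):                            # edge straddles the ray
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)  # x of the crossing
            if x < x_cross:
                inside = not inside
    return inside
```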

US Pat. No. 10,922,776

PLATFORM FOR REAL-TIME VIEWS ON CONSOLIDATED DATA

Accenture Global Solution...

1. A computer-implemented method for providing real-time views on consolidated data, the method being executed by one or more processors and comprising:receiving, by the one or more processors, a first set of event data from a first data source, the first set of data comprising data representative of an occurrence of a real-world event;
providing, by the one or more processors, an event ticket, the event ticket comprising at least a portion of data of the first set of event data;
identifying a set of computing devices based on a user-initiated registration of a respective computing device, each computing device in the set of computing devices being associated with a citizen user;
defining a sub-set of computing devices from the set of computing devices based on proximities of respective computing devices to the real-world event, the proximities each being determined based on location data representative of a location of the real-world event and location data representative of a respective location of a respective computing device in the set of computing devices;
setting a criticality of the event based on one or more sentiments determined from social media posts relevant to the event and adjusting the criticality based on a population of users in the sub-set of users, a cognitive learning process being used to learn from event data as the event data is received to more accurately determine the one or more sentiments, a priority of the event being higher than a priority of another event based on the criticality, such that a first number of responders sent to respond to the event is greater than a second number of responders sent to respond to the another event;
transmitting, by the one or more processors, presentation data to computing devices in the sub-set of computing devices, the presentation data being based on the event ticket and being processable by the computing devices to display a real-time view comprising one or more graphical representations representative of the real-world event;
receiving, by the one or more processors, a second set of event data from a second data source, the second set of event data comprising data associated with the real-world event;
revising, by the one or more processors, the event ticket to include at least a portion of data of the second set of event data to provide a revised event ticket; and
transmitting, by the one or more processors, revised presentation data to the computing devices, the revised presentation data being based on the revised event ticket and being processable by the computing devices to display a revised real-time view.

US Pat. No. 10,922,775

SYSTEMS AND METHODS FOR AND DISPLAYING PATIENT DATA

AirStrip IP Holdings, LLC...

1. A method for providing a user of a mobile device access to patient information and patient physiological data, the method comprising:receiving, by one or more processors, user input indicating a user command to display a task screen;
in response to the user input, processing, by the one or more processors, user-specific data to determine one or more patient icons, each patient icon representing a time-sensitive, patient-associated task;
displaying the task screen on the mobile device, the task screen displaying one or more patient icon groups, each patient icon group comprising a patient icon of the one or more patient icons;
receiving, by the one or more processors, a user selection of a patient icon from the one or more patient icon groups; and
in response to the user selection of the patient icon, displaying a plurality of windows on the mobile device, the plurality of windows comprising:
a first window displaying first health information associated with a period of time, the first health information being a part of the patient physiological data and comprising electrocardiogram (ECG) data, wherein the first window comprises an indicator that indicates a sub-period of time with respect to the period of time, and
a second window associated with the first window and displaying second health information associated with the sub-period of time that is a portion of the period of time associated with the first window.

US Pat. No. 10,922,774

COMPREHENSIVE MEDICATION ADVISOR

Cerner Innovation, Inc., ...

1. One or more non-transitory computer storage media storing computer-executable instructions that, when executed by one or more computing devices, cause the one or more computing devices to perform a method, the method comprising:generating a cross-venue antibiogram that represents susceptibility of pathogens for a plurality of patients having different medical conditions, wherein the cross-venue antibiogram is generated from patient information for the plurality of patients obtained from acute and non-acute data sources that are associated with two or more independent electronic medical record (EMR) systems, wherein the acute and non-acute data sources are disparate data sources using distinct nomenclatures;
providing the disparate acute and non-acute data sources with real-time access to the cross-venue antibiogram by storing the cross-venue antibiogram on one or more cloud servers accessible by computing devices of the disparate acute and non-acute data sources, wherein the computing devices associated with the disparate acute and non-acute data sources are remote from the one or more cloud servers;
updating the cross-venue antibiogram in real time with additional patient information received from the disparate acute and non-acute data sources using the distinct nomenclatures;
receiving a selection of a specific disease state for a patient;
receiving patient-specific information for the patient including at least one of medical conditions, laboratory results, or allergy information;
creating a patient-specific antibiogram from the updated cross-venue antibiogram, wherein the patient-specific antibiogram is created using the susceptibility of pathogens represented in the updated cross-venue antibiogram for those of the plurality of patients that share the specific disease state of the patient;
providing a medication option for the patient based on the patient-specific antibiogram, the medication option including at least one of dosing, generic alternatives, cost, availability, or susceptibility information; and
presenting a patient-specific preview of the patient-specific antibiogram on a user interface of a clinician device, the patient-specific preview consolidating the patient information for the plurality of patients obtained from the disparate acute and non-acute data sources that are associated with the two or more independent electronic medical record (EMR) systems, wherein the patient-specific preview is presented within a single view on the user interface.
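The derivation of a patient-specific antibiogram from the cross-venue one amounts to filtering by shared disease state and re-tallying susceptibility rates per pathogen-antibiotic pair. A minimal sketch, with record field names assumed for illustration:

```python
def patient_specific_antibiogram(records, disease_state):
    """Keep only cross-venue records for patients sharing the given disease
    state, then tally the susceptibility rate per (pathogen, antibiotic) pair.
    Each record is a dict with 'pathogen', 'antibiotic', 'disease', and a 0/1
    'susceptible' flag (field names are assumptions, not from the claim)."""
    tallies = {}
    for r in records:
        if r['disease'] != disease_state:
            continue
        key = (r['pathogen'], r['antibiotic'])
        hits, total = tallies.get(key, (0, 0))
        tallies[key] = (hits + r['susceptible'], total + 1)
    return {k: hits / total for k, (hits, total) in tallies.items()}
```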

US Pat. No. 10,922,773

COMPUTER PLATFORMS DESIGNED FOR IMPROVED ELECTRONIC EXECUTION OF ELECTRONIC TRANSACTIONS AND METHODS OF USE THEREOF

Broadridge Fixed Income L...

1. A method comprising:receiving, by at least one processor, a session request from an initiating user;
wherein the session request comprises an electronic communication session over a cloud computing network for a transfer of a quantity of a position in at least one financial instrument from the initiating user to at least one session invitee;
generating, by the at least one processor, a list of potential intermediate entities based at least in part on a respective dealer liquidity score associated with each potential intermediate entity of the potential intermediate entities;
receiving, by the at least one processor, a selection from the initiating user identifying a selected intermediate entity of the potential intermediate entities to mediate the electronic communication session;
enabling, by the at least one processor, the initiating user and the selected intermediate entity to negotiate attributes of the electronic communication session;
generating, by the at least one processor, based on the attributes of the electronic communication session, a stack software object controlling a plurality of participation levels in the electronic communication session for each selected invitee of a set of selected invitees;
wherein the plurality of participation levels comprises:
i) a locked stack participation level,
ii) an unlocked stack participation level, and
iii) an open stack participation level;
receiving, by the at least one processor, an invitee selection from the selected intermediate entity indicating the set of selected invitees selected from a plurality of potential invitees;
establishing, by the at least one processor, the electronic communication session, associated with an intermediary computing device of the selected intermediate entity;
wherein the electronic communication session comprises the stack software object;
preventing, by the at least one processor, a respective invitee computing device associated with each respective selected invitee from accessing activities in the electronic communication session unless the respective selected invitee satisfies at least one first predetermined parameter based on the locked stack participation level of the stack software object;
enabling, by the at least one processor, an initiating computing device associated with the initiating user to access the electronic communication session at a reserve level while preventing each respective invitee computing device associated with each respective selected invitee from accessing the activities in the electronic communication session unless the respective selected invitee satisfies at least one second predetermined parameter based on the unlocked stack participation level of the stack software object; and
enabling, by the at least one processor, the initiating computing device associated with the initiating user and each respective invitee computing device associated with each respective selected invitee to access the activities in the electronic communication session based on the open stack participation level of the stack software object.

US Pat. No. 10,922,772

COPYRIGHT AUTHORIZATION MANAGEMENT METHOD AND SYSTEM

Huawei Technologies Co., ...

1. A copyright authorization management method, comprising:obtaining owner-of-copyright information from a block chain device, wherein obtaining owner-of-copyright information comprises: receiving a copyright registration request sent by a copyright application client, wherein the copyright registration request comprises information about a cited work;
checking the copyright registration request against one or more predetermined rules associated with the cited work on the block chain device; and
approving the copyright registration request based on the one or more predetermined rules retrieved from the block chain device;
sending a contract determining notification to a corresponding owner-of-copyright client based on the owner-of-copyright information, wherein the contract determining notification carries copyright application-related information of a to-be-authorized work;
receiving transaction information returned by the owner-of-copyright client, wherein the transaction information comprises contract information determined by an owner of copyright based on the copyright application-related information;
sending a cloud signature notification to a cloud signature notification receiving module and directing the owner-of-copyright client to a signature cloud system login module configured to log in to a corresponding contract signature cloud system based on the cloud signature notification;
obtaining a valid contract transaction based on the transaction information, wherein the valid contract transaction comprises signatures satisfying a preset-quantity rule; and
implementing persistence of the valid contract transaction in the block chain device, wherein implementing persistence of the valid contract transaction in the block chain device comprises associating each of the cited work and the to-be-authorized work with one another, and said valid contract transaction configuring the block chain device to receive a query and configured to provide a linkage between the cited work and the to-be-authorized work in response to the query.

US Pat. No. 10,922,771

SYSTEM AND METHOD FOR DETECTING, PROFILING AND BENCHMARKING INTELLECTUAL PROPERTY PROFESSIONAL PRACTICES AND THE LIABILITY RISKS ASSOCIATED THEREWITH

1. A computer-implemented method for detecting, profiling and benchmarking liability trigger events indicative of performance quality and subject matter conflict of interest of intellectual property (IP) professional practices and susceptible to affect a liability risk profile and a liability insurance risk profile of a target entity engaged in said IP professional practices, including IP professionals employed or representing said target entity, the method comprising:
a. using one or more processors to execute a first set of computer-executable statements and instructions for communicating electronically with a National/Regional Intellectual Property Office (IPO) computer system in at least one IP jurisdiction administered by the United States Patent and Trademark Office (USPTO), identifying and extracting Asset Data from said IPO computer system, processing and clustering said extracted Asset Data, generating an IP jurisdiction identifier to associate each document of said Asset Data with the jurisdiction of the corresponding IP document, and storing said IP jurisdiction identifier with each document of said Asset Data to a data storage device;
b. using said one or more processors to execute a second set of computer-executable statements and instructions for indexing and consolidating said Asset Data and selective internal results of calculations and comparisons performed on said Asset Data, and storing the indexed and/or consolidated Asset Data and said selective internal results of calculations and comparisons performed on said Asset Data to said data storage device;
c. using said one or more processors to execute a third set of computer-executable statements and instructions for filtering and profiling the processed Asset Data by checking for codes associated with one or more of said liability trigger events indicative of said performance quality and said subject matter conflict of interest of IP professional practices and susceptible to affect said liability risk profile and said liability insurance risk profile of said target entity, including IP professionals employed or representing said target entity, to produce Liability Alert Data, and storing said Liability Alert Data to said data storage device;
d. using said one or more processors to execute a fourth set of computer-executable statements and instructions for applying one or more predetermined factors associated with said one or more liability trigger events to said Liability Alert Data to produce Weighted Liability Alert Data, and storing said Weighted Liability Alert Data to said data storage device;
e. using said one or more processors to execute a fifth set of computer-executable statements and instructions to determine, based on receiving said processed Asset Data from said data storage device, one or more factors associated with said target entity, including at least the number of professional employees, the number of IP transactions conducted, and the dollar amount of filing fees paid in a pre-determined period of time, and to output the resulting information to said data storage device, and/or to a user interface, and/or to a display device; and
f. using said one or more processors to execute a sixth set of computer-executable statements and instructions to determine said liability risk profile and said liability insurance risk profile of said target entity, including IP professionals employed or representing said target entity, by applying one or more predictive models trained on said Asset Data to said Weighted Liability Alert Data, and outputting said liability risk profile and said liability insurance risk profile to said data storage device, and/or to a user interface, and/or to a display device.
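Steps (c) and (d) — filtering Asset Data for liability trigger codes and applying predetermined factors to produce Weighted Liability Alert Data — admit a minimal sketch; the trigger codes and factor values below are invented for illustration:

```python
# Illustrative sketch of the filter-then-weight pipeline: records whose code
# matches a known liability trigger event become Liability Alert Data, and a
# predetermined per-trigger factor is applied to produce weighted alerts.

TRIGGER_WEIGHTS = {        # assumed predetermined factors
    "MISSED_DEADLINE": 3.0,
    "ABANDONMENT": 5.0,
    "CONFLICT_OF_INTEREST": 4.0,
}


def filter_alerts(asset_data):
    """Keep only records whose code is a known liability trigger event."""
    return [rec for rec in asset_data if rec["code"] in TRIGGER_WEIGHTS]


def weight_alerts(alerts):
    """Attach the predetermined factor to each alert record."""
    return [
        {**rec, "weighted_score": TRIGGER_WEIGHTS[rec["code"]]}
        for rec in alerts
    ]


asset_data = [
    {"doc": "US123", "code": "MISSED_DEADLINE"},
    {"doc": "US456", "code": "ROUTINE_FILING"},  # not a trigger event
    {"doc": "US789", "code": "ABANDONMENT"},
]
weighted = weight_alerts(filter_alerts(asset_data))
print([(r["doc"], r["weighted_score"]) for r in weighted])
# → [('US123', 3.0), ('US789', 5.0)]
```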

US Pat. No. 10,922,770

SYSTEMS AND METHODS FOR DATABASE MANAGEMENT OF TRANSACTION INFORMATION AND PAYMENT DATA

ZOCCAM TECHNOLOGIES, INC....

1. A system for storing data related to a real estate transaction and facilitating the real estate transaction, the system comprising:
a device associated with a buyer or an agent of the buyer related to the real estate transaction configured to:
capture images of a check, execute image processing on the images of the check to verify the captured images conform with one or more image quality requirements, and in response to successful verification, extract at least a part of transaction data associated with a payment related to a property of the real estate transaction from the images of the check;
a database; and
a third party application server coupled with the database and configured to store information associated with a plurality of real estate transactions, the information including a transaction identifier, an escrow agent identifier and account information related to an escrow account maintained with a financial institution, the third party application server further configured to:
receive the transaction data associated with the payment from an account of the buyer related to the real estate transaction, the transaction data including the images of the check, an identification of an escrow agent, and a transaction identifier related to the real estate transaction from the device associated with the buyer or the agent of the buyer, the real estate transaction being between the buyer and a seller;
generate an electronic check from the images of the check, forward the electronic check to the financial institution to cause a deposit of the electronic check into the escrow account based on a comparison of the identified escrow agent with the escrow agent identifier associated with the real estate transaction, wherein the escrow account is retrieved when the escrow agent identifier is associated with the identified escrow agent,
generate a data file, store the data file on the database, and generate a notification of the data file, the data file being representative of the transaction data,
automatically, in response to forwarding the electronic check to the financial institution, transmit the notification of the data file to the escrow agent based on the escrow agent identifier, and
generate and transmit another notification to the buyer or the agent of the buyer in response to depositing the electronic check into the escrow account.
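The deposit gate in this claim — forwarding the electronic check only when the identified escrow agent matches the escrow agent identifier stored for the transaction — can be sketched roughly as follows; the identifiers and the lookup table are invented:

```python
# Hedged sketch of the server-side comparison before deposit: the escrow
# account is retrieved (and the deposit caused) only when the escrow agent
# identified in the incoming transaction data matches the escrow agent
# identifier stored for that real estate transaction.

TRANSACTIONS = {
    "tx-100": {"escrow_agent_id": "agent-7", "escrow_account": "acct-555"},
}


def route_deposit(transaction_id: str, identified_agent: str):
    """Return the escrow account for deposit, or None when the agent
    comparison fails (no deposit is caused)."""
    record = TRANSACTIONS.get(transaction_id)
    if record is None:
        return None  # unknown transaction identifier
    if record["escrow_agent_id"] != identified_agent:
        return None  # identified agent does not match the stored identifier
    return record["escrow_account"]


print(route_deposit("tx-100", "agent-7"))  # → acct-555
print(route_deposit("tx-100", "agent-9"))  # → None
```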

US Pat. No. 10,922,769

SYSTEMS AND METHODS FOR DATABASE MANAGEMENT OF TRANSACTION INFORMATION INCLUDING DATA REPRESENTATIVE OF DOCUMENTS RELATED THERETO

ZOCCAM TECHNOLOGIES, INC....

1. A system for storing and accessing data for a real estate transaction, and for facilitating exchange of the data related thereto, the system comprising:
a database; and
an application server coupled with the database and configured to store (i) information associated with a plurality of real estate transactions between transaction initiators and recipients and (ii) a plurality of third party identifiers of third parties registered with the application server using corresponding identifiers, wherein the information for each real estate transaction includes a transaction identifier, a transaction initiator identifier, and a recipient identifier, the application server further configured to:
associate third party identifiers of one or more third parties with the real estate transaction based in part on a transaction identifier provided by a device associated with the one or more third parties,
receive data representative of one or more documents related to the real estate transaction from the device associated with the one or more third parties, the data representative of the one or more documents comprising the transaction identifier and one or more images of the one or more documents related to the real estate transaction, the one or more third parties being distinct from a transaction initiator and a recipient,
associate and store the data representative of the one or more documents related to the real estate transaction based in part on the third party identifiers and the transaction identifier,
in response to receiving the data representative of the one or more documents, generate and send one or more notifications to at least one of the transaction initiator, the recipient, and the one or more third parties associated with the real estate transaction, wherein the one or more notifications are indicative of an action to be performed by the transaction initiator to complete the real estate transaction, wherein the action to be performed by the transaction initiator comprises issuing a check drawn on an account of the transaction initiator, and wherein the device associated with the one or more third parties is configured to:
capture images of the check,
execute image processing on the images of the check to verify the captured images conform with one or more image quality requirements, and
in response to successful verification, extract at least a part of transaction data for a payment in connection to the real estate transaction from the images of the check, and transmit the transaction data to the application server,
based on receiving the data representative of the one or more documents, forward the transaction data to a financial institution of the one or more third parties to cause a deposit of the payment into an account of the one or more third parties based on the third party identifiers of the one or more third parties that provided the data representative of the one or more documents, the transaction data including the images of the check,
generate a data file, store the data file on the database, and generate a notification of the data file, wherein the data file is representative of the transaction data,
automatically, in response to forwarding the transaction data to the financial institution, transmit the notification of the data file to the one or more third parties based on the third party identifiers, and
generate and transmit another notification to the transaction initiator or an agent of the transaction initiator in response to depositing the payment into the account of the one or more third parties.
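The device-side pipeline that recurs across these claims — verify captured check images against image quality requirements, and extract transaction data only on successful verification — might look schematically like this; the quality thresholds and the MICR-style extraction are assumptions, not the patented method:

```python
# Minimal sketch, under assumed quality thresholds, of the claimed device
# behavior: every captured check image must pass verification before any
# transaction data is extracted from the images.

MIN_WIDTH, MIN_HEIGHT = 1200, 500  # assumed image quality requirements


def verify_image(image: dict) -> bool:
    """Stand-in for the one or more image quality requirements."""
    return image["width"] >= MIN_WIDTH and image["height"] >= MIN_HEIGHT


def extract_transaction_data(images):
    """Return transaction data only if every image passes verification."""
    if not all(verify_image(img) for img in images):
        return None  # failed verification: nothing is extracted or sent
    # Stand-in for OCR/MICR extraction of amount and routing number.
    return {"amount": images[0]["micr"]["amount"],
            "routing": images[0]["micr"]["routing"]}


front = {"width": 1600, "height": 700,
         "micr": {"amount": "2500.00", "routing": "011000015"}}
back = {"width": 1600, "height": 700, "micr": {}}
print(extract_transaction_data([front, back]))
blurry = {"width": 640, "height": 480, "micr": {}}
print(extract_transaction_data([front, blurry]))  # → None
```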

US Pat. No. 10,922,768

SYSTEMS AND METHODS FOR DATABASE MANAGEMENT OF TRANSACTION INFORMATION AND A PLURALITY OF PAYMENT SOURCES

ZOCCAM TECHNOLOGIES, INC....

1. A system for storing data related to a real estate transaction and for facilitating the real estate transaction, the system comprising:
a user device of a transaction initiator related to the real estate transaction configured to:
capture images of a check,
execute image processing on the images of the check to verify the captured images conform with one or more image quality requirements, and
in response to successful verification, extract at least a part of transaction data for payment related to the real estate transaction from the images of the check;
a database; and
a third party application server coupled with the database and configured to store information associated with a plurality of real estate transactions between transaction initiators and recipients, the information including a transaction identifier, a recipient identifier, and account information related to a recipient account for the recipient maintained with a financial institution, the third party application server further configured to:
receive the transaction data from the user device of the transaction initiator and associated with the payment related to the real estate transaction, the transaction data comprising the transaction identifier, the recipient identifier, and the images of the check,
generate an electronic check based on the transaction data,
forward the electronic check and the recipient account information directly to the financial institution to cause a deposit of the electronic check into the recipient account,
generate a data file, store the data file on the database, and generate a notification of the data file, the data file being representative of the transaction data,
automatically, in response to forwarding the electronic check and the recipient account information to the financial institution, transmit the notification of the data file to a recipient based on the recipient identifier, and
generate and transmit another notification to the transaction initiator in response to depositing the electronic check into the recipient account.
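The notification sequencing in this claim — a data-file notification transmitted to the recipient automatically on forwarding the electronic check, and a further notification to the transaction initiator on deposit — can be sketched as follows, with all parties and the delivery channel mocked:

```python
# Illustrative event sequencing: forwarding the electronic check triggers an
# automatic notification to the recipient; completion of the deposit triggers
# a second notification to the transaction initiator.

sent = []  # (party, message) log standing in for real delivery


def notify(party: str, message: str):
    sent.append((party, message))


def forward_electronic_check(tx: dict):
    # ...forward check images and account information to the financial
    # institution (omitted)...
    notify(tx["recipient_id"], f"data file ready for {tx['tx_id']}")


def on_deposit_complete(tx: dict):
    notify(tx["initiator_id"], f"deposit complete for {tx['tx_id']}")


tx = {"tx_id": "tx-200", "recipient_id": "title-co", "initiator_id": "buyer"}
forward_electronic_check(tx)
on_deposit_complete(tx)
print(sent)
# → [('title-co', 'data file ready for tx-200'), ('buyer', 'deposit complete for tx-200')]
```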