US Pat. No. 11,030,850

MANAGING BETS THAT SELECT EVENTS AND PARTICIPANTS

CANTOR INDEX, LLC, New Y...

1. A non-transitory machine-readable medium having instructions stored thereon which are configured to, when executed by at least one processor of at least one computer in electronic communication with at least one other computer via an electronic communications network, direct the at least one processor to:receive at least one electronic message comprising a plurality of group bets from a plurality of bettors, each group bet being received from one of the plurality of bettors via a computing device in networked communication with the at least one processor, each group bet comprising:
a plurality of events selected from among a group of events offered by a sponsor of the group bets, each event having a plurality of participants;
a respective participant selected for each of the plurality of events selected for the group bet; and
a bet amount;
combine the amounts of the group bets of the plurality to form a betting pool, in which the amounts of the group bets are combined by combining amounts of different bets having different selected combinations of events and different selected participants from each other that are pooled together in the betting pool, in which the combining the amounts to form the betting pool comprises causing data representing the combined amounts to be stored in a database in electronic communication with the at least one processor;
compute an amount of a payout for one or more winning group bets of the plurality based at least in part on the combined amounts of the bets in the betting pool; and
determine whether to cause dispensing of the amount of the payout at an interface associated with a self-service machine.
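As an editor's illustration (not part of the patent text), the pooling and payout steps recited above can be sketched in Python; the parimutuel-style split and all names are assumptions, not from the claim:

```python
# Hypothetical sketch: group bets with differing event/participant picks are
# pooled together, and winning bets share the pool pro rata by stake.

def pool_bets(bets):
    """Combine the amounts of all group bets into a single betting pool."""
    return sum(b["amount"] for b in bets)

def payout(bets, winning_picks):
    """Split the pool among bets whose picks match the winning outcomes."""
    pool = pool_bets(bets)
    winners = [b for b in bets if b["picks"] == winning_picks]
    winning_total = sum(b["amount"] for b in winners)
    if winning_total == 0:
        return {}
    return {b["bettor"]: pool * b["amount"] / winning_total for b in winners}

bets = [
    {"bettor": "A", "picks": {"race1": "horse2", "race2": "horse5"}, "amount": 10},
    {"bettor": "B", "picks": {"race1": "horse2", "race2": "horse5"}, "amount": 30},
    {"bettor": "C", "picks": {"race1": "horse1", "race3": "horse4"}, "amount": 60},
]
payouts = payout(bets, {"race1": "horse2", "race2": "horse5"})
```

With a 100-unit pool and 40 units on the winning combination, bettors A and B split the whole pool in proportion to their stakes.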

US Pat. No. 11,030,849

GAMES AND GAMING MACHINES HAVING WHEEL FEATURES

Aries Technology, LLC, L...

1. A wheel event for a wagering game presented on a gaming device having a wager accepting device configured to accept a physical item associated with a monetary value, at least one input device, and at least one display, the wheel event comprising the steps of:receiving a physical item associated with a monetary value at the wager accepting device from a player at the gaming device to increase a credit base at the gaming device;
receiving a wager from the player at the gaming device via the at least one input device for a base wagering game;
determining an amount of the wager;
presenting the base wagering game on the gaming device by displaying base wagering game information;
designating at least one outcome of the base wagering game as a wheel event triggering outcome; and
when an outcome of the base wagering game is one of the wheel event triggering outcomes, displaying via the at least one display at least one of a plurality of different wheels, each of the plurality of different wheels having a plurality of segments having awards associated therewith, the awards associated with at least one of said plurality of wheels being of a type different from the awards of at least another one of the plurality of wheels, the award types being monetary or non-monetary, and presenting a wheel segment selection event in which one of said segments of said wheel is selected and awarding the award associated with said selected segment.
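For illustration only (wheel contents, trigger outcomes, and names are assumptions, not from the claim), the trigger-then-select flow reads as:

```python
import random

# Hypothetical sketch: a base-game outcome matching a designated trigger
# launches one of several wheels whose segments carry monetary or
# non-monetary awards; a random segment determines the award.

WHEELS = [
    {"type": "monetary", "segments": [5, 10, 25, 50, 100]},
    {"type": "non-monetary", "segments": ["free spin", "bonus game", "multiplier"]},
]
TRIGGER_OUTCOMES = {"three sevens", "three wheels"}

def play_wheel_event(base_outcome, rng=random):
    if base_outcome not in TRIGGER_OUTCOMES:
        return None                       # no wheel event for this outcome
    wheel = rng.choice(WHEELS)            # one of the plurality of wheels
    award = rng.choice(wheel["segments"]) # wheel segment selection event
    return {"wheel_type": wheel["type"], "award": award}

outcome = play_wheel_event("three sevens")
no_event = play_wheel_event("mixed symbols")
```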

US Pat. No. 11,030,848

GAMING MACHINE, CONTROL METHOD FOR MACHINE, AND PROGRAM FOR GAMING MACHINE

Konami Gaming, Inc., Las...

1. A gaming machine for providing a game, comprising:an operation unit including a plurality of input buttons configured to receive an operation of a player;
a display unit including at least one display device, the at least one display device configured to display computer-generated graphics; and
a control unit operably coupled to the operation unit and the display unit, the control unit including a processor for generating the game on the at least one display device, the processor programmed to display a first display area and a second display area on the at least one display device, the first display area including a first plurality of cells arranged in a first grid, the second display area including a second plurality of cells arranged in a second grid, the processor of the control unit, in response to the operation of the player, being further programmed to initiate the game and to responsively:
randomly select a plurality of symbols associated with the first display area, each symbol in the plurality of symbols being associated with one of the plurality of cells in the first grid, the plurality of symbols forming an interim outcome;
detect an occurrence of a predetermined symbol in the first display area, the occurrence of the predetermined symbol being associated with one of the first plurality of cells of the first display area;
copy the occurrence of the predetermined symbol from the first display area to a corresponding cell in the second display area, wherein a complimentary award is provided to the player if the corresponding cell in the second display area is already occupied with an occurrence of the predetermined symbol, wherein the complimentary award includes moving a redundant occurrence of the predetermined symbol to an unoccupied cell in the second grid; and
provide a secondary bonus game using the second grid area including randomly selecting a symbol for each unoccupied cell in the second display grid, wherein the randomly selected symbol(s) and the copied occurrence(s) of the predetermined symbol form a secondary bonus game outcome.
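A minimal sketch of the two-grid copy mechanic (symbol, grid size, and function names are illustrative assumptions, not the patent's implementation):

```python
# Hypothetical sketch: a predetermined symbol detected in the first grid is
# copied to the matching cell of the second grid; if that cell is already
# occupied, the redundant copy is moved to an unoccupied cell instead.

PREDETERMINED = "7"

def copy_symbol(second_grid, cell):
    """Copy the predetermined symbol into the second grid at `cell`."""
    if second_grid[cell] == PREDETERMINED:               # already occupied
        empties = [c for c, s in second_grid.items() if s is None]
        if empties:
            second_grid[empties[0]] = PREDETERMINED      # redundant copy relocated
    else:
        second_grid[cell] = PREDETERMINED

second = {(r, c): None for r in range(3) for c in range(3)}
copy_symbol(second, (1, 1))   # first copy fills cell (1, 1)
copy_symbol(second, (1, 1))   # redundant copy relocated to an empty cell
```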

US Pat. No. 11,030,847

ELECTRONIC GAMING MACHINE HAVING A REEL ASSEMBLY WITH A SUPPLEMENTAL IMAGE DISPLAY

IGT, Las Vegas, NV (US)

1. A gaming system comprising:a housing;
a reel assembly supported by the housing, the reel assembly comprising a rotatable reel comprising a first light arm, the first light arm comprising a plurality of selectively illuminable first lights, wherein the first light arm connects two spaced apart rims of the reel;
a processor; and
a memory device that stores a plurality of instructions, which when executed by the processor, cause the processor to:
cause the reel to rotate such that the first light arm rotates in a first orbit associated with the reel, and
selectively cause the first lights to illuminate while the reel and the first light arm rotate such that the first lights cause a player perceivable image to be displayed in association with the reel.
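Rotating light arms of this kind typically render images by persistence of vision. As a rough illustration (parameters assumed; the claim does not specify a method), each angular "column" of the image maps to a time window within one revolution:

```python
# Hypothetical sketch: with the light arm spinning at a known rate, divide one
# revolution into angular columns and compute when each column's lights fire.

def column_schedule(rpm, columns):
    """Return (start_time_s, column_index) pairs for one revolution."""
    period = 60.0 / rpm          # seconds per revolution
    dt = period / columns        # time window per angular column
    return [(i * dt, i) for i in range(columns)]

sched = column_schedule(rpm=600, columns=100)   # 0.1 s/rev, 1 ms per column
```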

US Pat. No. 11,030,846

ELECTRONIC GAMING MACHINES WITH PRESSURE SENSITIVE INPUTS FOR DETECTING OBJECTS

IGT, Las Vegas, NV (US)

1. A gaming device comprising:an input device comprising a plurality of input locations and a pressure sensor to detect, for each input location, an amount of pressure applied to the input device at the input location by a player of the gaming device;
a processor circuit; and
a memory coupled to the processor circuit, the memory comprising machine-readable instructions that, when executed by the processor circuit, cause the processor circuit to:
receive, from the input device, a plurality of first pressure parameter values corresponding to a first amount of pressure being applied to a first portion of the plurality of input locations;
determine a pressure pattern that corresponds to locations of the first portion of the plurality of input locations; and
based on the plurality of first pressure parameter values and the pressure pattern, determine an identification of an object that is on the input device,
wherein a sum of the first pressure parameter values corresponds to an object weight value, and
wherein instructions to determine the identification of the object further cause the processor circuit to determine the identification based on the object weight value and the pressure pattern, and
wherein the object weight value comprises a first object weight value that corresponds to the first pressure parameter values received at a first time,
wherein the processor circuit is further caused to:
receive, from the pressure sensor and at a second time that is after the first time, a plurality of second pressure parameter values corresponding to a second amount of pressure being applied to the first portion of the plurality of input locations, wherein a sum of the second pressure parameter values corresponds to a second object weight value; and
determine an object weight difference between the first object weight value and the second object weight value.
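An illustrative sketch of the identification and weight-tracking logic (the object catalog, cell coordinates, and thresholds are assumptions, not from the claim):

```python
# Hypothetical sketch: identify an object from the pattern of pressed input
# locations plus the summed pressure (~weight), then compute the weight
# difference between readings taken at two times.

KNOWN_OBJECTS = [
    {"name": "drink cup",
     "pattern": frozenset({(0, 0), (0, 1), (1, 0), (1, 1)}),
     "weight_range": (200, 600)},
    {"name": "chip stack",
     "pattern": frozenset({(2, 2)}),
     "weight_range": (10, 120)},
]

def identify(pressures):
    """pressures: {(row, col): value}. Return a matching object name or None."""
    pattern = frozenset(pressures)
    weight = sum(pressures.values())   # sum of pressure values ~ object weight
    for obj in KNOWN_OBJECTS:
        lo, hi = obj["weight_range"]
        if pattern == obj["pattern"] and lo <= weight <= hi:
            return obj["name"]
    return None

first = {(0, 0): 100, (0, 1): 100, (1, 0): 100, (1, 1): 100}  # first time
second = {(0, 0): 75, (0, 1): 75, (1, 0): 75, (1, 1): 75}     # second, later time
weight_diff = sum(first.values()) - sum(second.values())
```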

US Pat. No. 11,030,845

SYSTEM AND METHOD FOR WAGERING BASED ON THE MOVEMENT OF FINANCIAL MARKETS

CANTOR INDEX LLC, New Yo...

1. A method comprising:receiving, via an electronic communications network, by at least one computer processor, an electronic message comprising data indicative of a bet comprising a first component and a second component, the first component indicating a first wager on a value of a first financial market indicator after a first period of time expires and a second component indicating a second wager on a value of a second financial market indicator after a second period of time expires;
rendering, by the at least one computer processor, a color-coded graphical representation of the first financial market indicator and the second financial market indicator on a display device;
generating, by the at least one computer processor, an association in a memory between the first financial market indicator and a first financial instrument number and the second financial market indicator and a second financial instrument number;
monitoring, by the at least one computer processor, real-time updates of the first financial market indicator and the second financial market indicator;
rendering, by the at least one processor, the real-time updates using the color-coded graphical representations on the display device;
broadcasting, by the at least one processor, indicia indicating a value of at least one of the first financial market indicator and the second financial market indicator;
receiving, by the at least one computer processor, an electronic instruction comprising a request to transfer the bet; and
responsive to the request, transferring, by the at least one processor, the data indicative of the bet from a first electronically stored account to a second electronically stored account.

US Pat. No. 11,030,844

CASINO OPERATIONS MANAGEMENT SYSTEM WITH MULTI-TRANSACTION LOG SEARCH

IT Casino Solutions, LLC,...

1. A casino operations management system, comprising:a multi-transaction log module configured to store multiple transactions for an individual player and to merge transactions for each said individual player, the multi-transaction log module being further configured to identify unknown players based on at least one image received of each unknown player;
wherein the multi-transaction log module is further configured to track total transactions for known and unknown players and determine when transactions for each said individual player exceed a reportable threshold total for a predetermined period;
wherein the multi-transaction log module includes a tax reporting module, said tax reporting module being configured to generate a report, for transmission to a taxing authority, when said transactions for each said individual player exceed a reportable threshold total for a predetermined period; and
a search module configured to retrieve information related to each said known or unknown player based on input of search data related to an unknown player, wherein the search module is configured to retrieve the information related to each said known or unknown player based on a physical description of the known or unknown player.

US Pat. No. 11,030,843

IMPLEMENTING A TRANSPORT SERVICE USING UNIQUE IDENTIFIERS

Uber Technologies, Inc., ...

1. A computing system implementing a transport service for a given region, the computing system comprising:a network communication interface to communicate, over one or more networks, with client devices of riders and drivers of the transport service;
one or more processors; and
one or more memory resources storing instructions that, when executed by the one or more processors, cause the computing system to:
receive, over the one or more networks, a request for the transport service from a client device of a rider;
in response to receiving the request, generate a unique identifier for the request to facilitate a direct pairing between the rider and a driver;
transmit, over the one or more networks, the unique identifier to the client device of the rider;
based on the rider performing the direct pairing with the driver, receive, over the one or more networks, data corresponding to the unique identifier from a driver application executing on a client device of the driver; and
based on receiving the data corresponding to the unique identifier, transmit, over the one or more networks, match data to the client device of the driver to cause the driver application executing on the client device of the driver to switch from an available sub-state to an on-trip sub-state to classify the driver as unavailable due to the driver currently providing the transport service for the rider.
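The pairing flow can be sketched as follows (the server class and its methods are illustrative assumptions; the sub-state names come from the claim):

```python
import uuid

# Hypothetical sketch: a ride request yields a unique identifier; when the
# driver app reports that identifier back, the rider and driver are matched
# and the driver switches from the "available" to the "on-trip" sub-state.

class TransportService:
    def __init__(self):
        self.pending = {}        # unique identifier -> rider
        self.driver_state = {}   # driver -> sub-state

    def request_ride(self, rider):
        uid = uuid.uuid4().hex   # unique identifier transmitted to the rider
        self.pending[uid] = rider
        return uid

    def driver_reports(self, driver, uid):
        """Driver app sends the identifier it read during the direct pairing."""
        rider = self.pending.pop(uid, None)
        if rider is None:
            return None
        self.driver_state[driver] = "on-trip"   # driver now unavailable
        return {"rider": rider, "driver": driver}

svc = TransportService()
svc.driver_state["d1"] = "available"
uid = svc.request_ride("r1")
match = svc.driver_reports("d1", uid)
```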

US Pat. No. 11,030,842

BANKNOTE HANDLING APPARATUS

Glory Ltd., Himeji (JP)

1. A banknote handling apparatus comprising:a housing;
a depositing portion arranged at the housing for depositing a banknote from outside into the housing therethrough;
a banknote receiving unit configured to receive a banknote from outside of the housing therein and feed out the received banknote into the housing through the depositing portion, the banknote receiving unit being fixed to the depositing portion, rotatable about a predetermined axis, and movable upward or downward;
a connecting unit configured to detachably attach, to the housing, one of the banknote receiving unit and a cassette which is configured to store a banknote therein and feed out the banknote therefrom, and is capable of being detached from the apparatus and attached to another apparatus for transporting the banknote stored therein, the connecting unit being arranged at the depositing portion and through which the banknote fed from the banknote receiving unit or the cassette is transported;
a transport unit configured to transport the banknote fed out from the banknote receiving unit or the cassette attached to the connecting unit into the banknote handling apparatus through the connecting unit; and
a moving mechanism configured to rotate the banknote receiving unit about the predetermined axis and to move the banknote receiving unit upward or downward for attaching the banknote receiving unit to the connecting unit or detaching the attached banknote receiving unit from the connecting unit.

US Pat. No. 11,030,841

DECENTRALIZED TALENT DISCOVERY VIA BLOCKCHAIN

ESCAPEX LIMITED, Kwun To...

1. A system of conducting a decentralized talent discovery event via a blockchain network, the system comprising:a distributed ledger to be distributed across a plurality of nodes in the blockchain network; and
a node comprising:
a local copy of a distributed ledger that is copied across a plurality of nodes in the blockchain network; and
a processor programmed to:
receive votes from voters, wherein each vote corresponds to a respective contestant and requires a value of a token used by the blockchain network to be charged to a respective voter in exchange for the respective voter to place a vote;
record each of the votes on the distributed ledger;
cause each value of the token to be transferred from a wallet of the respective voter and added to a token pool based on the received vote, wherein at least a portion of the token pool is awarded to a winner of the decentralized talent discovery event;
determine a winning contestant based on the received votes; and
allocate at least a first portion of the token pool to the winning contestant and at least a second portion of the token pool to a subset of the voters that voted for the winning contestant.
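The vote tally and two-portion allocation can be sketched as follows (the split fractions are assumptions; the claim does not specify them):

```python
from collections import Counter

# Hypothetical sketch: each vote charges tokens into a pool; the winning
# contestant receives one portion and the voters who backed the winner
# share another portion.

def settle(votes, winner_share=0.5, voter_share=0.3):
    tally = Counter(v["contestant"] for v in votes)
    pool = sum(v["tokens"] for v in votes)           # token pool
    winner = tally.most_common(1)[0][0]              # by received votes
    backers = [v["voter"] for v in votes if v["contestant"] == winner]
    per_backer = pool * voter_share / len(backers)
    payouts = {winner: pool * winner_share}
    for voter in backers:
        payouts[voter] = payouts.get(voter, 0) + per_backer
    return winner, payouts

votes = [
    {"voter": "v1", "contestant": "alice", "tokens": 2},
    {"voter": "v2", "contestant": "alice", "tokens": 2},
    {"voter": "v3", "contestant": "bob", "tokens": 2},
]
winner, payouts = settle(votes)
```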

US Pat. No. 11,030,840

WEARABLE DEVICE WITH USER AUTHENTICATION INTERFACE

PAYPAL, INC., San Jose, ...

1. A system, comprising:a non-transitory memory comprising instructions; and
one or more hardware processors coupled to the non-transitory memory and configured to read the instructions to cause the system to perform operations comprising:
detecting, from a user device, a request for access to a first user account of a user;
detecting, via the user device, a first signal received from a first wearable device of the user when the first wearable device determines that it is being worn by the user;
detecting, via the user device, a second signal received from a second wearable device of the user when the second wearable device determines that it is being worn by the user;
authenticating the user based on the first signal and the second signal;
providing the user device with access to the first user account;
monitoring the first signal and the second signal; and
logging the user device out of the first user account after a period of time in response to detecting the first signal or the second signal has ended based on the monitoring.
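A minimal sketch of the two-signal session logic (class name, grace period, and device labels are assumptions):

```python
# Hypothetical sketch: access stays open only while both wearables keep
# reporting that they are worn; once either signal goes stale past a grace
# period, the session is considered logged out.

class WearableAuth:
    def __init__(self, grace_s=30):
        self.grace_s = grace_s
        self.last_seen = {}   # device -> timestamp of last "worn" signal

    def signal(self, device, now):
        self.last_seen[device] = now

    def authenticated(self, now):
        """Both wearable signals must be fresh for the session to stay open."""
        for device in ("wearable1", "wearable2"):
            t = self.last_seen.get(device)
            if t is None or now - t > self.grace_s:
                return False
        return True

auth = WearableAuth(grace_s=30)
auth.signal("wearable1", now=0)
auth.signal("wearable2", now=0)
ok_at_10 = auth.authenticated(now=10)   # both signals fresh: stay logged in
ok_at_60 = auth.authenticated(now=60)   # signals stale: session ends
```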

US Pat. No. 11,030,839

VISITOR ACCESS CONTROL SYSTEM WITH RADIO IDENTIFICATION AND FACIAL RECOGNITION

Inventio AG

1. A method for operating an access control system for controlling access to an access-restricted zone in a building or a site and registering a visitor, the access control system comprising a transmitting and receiving device, a memory device, a processor and an image processing device, said method comprising:receiving, by the access control system, invitation data generated and transmitted by an electronic host system, the invitation data comprising an invitation's identification number and an appointed time when a host expects a visitor in the access-restricted zone;
creating, in the memory device, a visitor profile assigned to the invitation, and storing the invitation data in the visitor profile, the memory device having a database for storing user profiles of access-authorized users and visitors;
receiving by the access control system, image data of the visitor, the invitation's identification number, and a device-specific identifier of an electronic device of the visitor;
storing, in the memory device, the image data and the identifier, the image data and the identifier being assigned to the visitor profile by means of the invitation's identification number;
processing, by the access control system, the image data to generate a reference template; and
saving, in the visitor profile, the reference template.

US Pat. No. 11,030,838

ONBOARD SYSTEM FOR A VEHICLE AND PROCESS FOR SENDING A COMMAND TO A PARK AREA ACCESS SYSTEM

1. An onboard system for a vehicle comprising:an emitter circuit for sending a command selected by a driver of the vehicle to a park area access system;
an image sensor for capturing a first sequence of images of at least part of a body of the driver of the vehicle; and
a control module selectively operating in a training mode and a normal usage mode, the control module:
processing said first sequence of images so as to identify a behavioral feature and then controlling the emitter circuit to send the command to the park area access system when the identified behavioral feature corresponds to a predetermined behavioral feature;
in the training mode, acquiring a second sequence of images to create a corresponding data representation of said predetermined behavioral feature, wherein the corresponding data representation is stored in a memory and is mapped to the command selected by the driver; and
in the normal usage mode, associating the identified behavioral feature with said command among a plurality of commands,
wherein in the training mode, the associating of the identified behavioral feature with said command is deactivated, and
wherein the control module performs the acquiring of the second sequence of images to create the corresponding data representation of said predetermined behavioral feature and the associating of the identified behavioral feature with said command to produce a driving ability level representative of a distraction level or a drowsiness level of the driver.

US Pat. No. 11,030,837

PROVIDING ACCESS TO A LOCK BY SERVICE CONSUMER DEVICE

ASSA ABLOY AB, Stockholm...

1. A method for providing access to a lock for provision of a service, the lock being associated with a service consumer, the method being performed in a service consumer device and comprising the steps of:receiving a request for access to the lock, the request being based on the service consumer ordering a service requiring access to a physical space which is secured by the lock, the request comprising a first public key associated with a co-ordinator and a second public key associated with a service provider agent, being a person used by a service provider to conduct the service;
presenting a first consumer query to the service consumer, asking whether to grant access to the lock for the service provider agent to provide the service;
receiving a first positive consumer response indicating that the service consumer allows the service provider agent to access the physical space secured by the lock;
presenting a second consumer query to the service consumer, asking whether to grant access to the lock for the service provider agent to provide the service, wherein the step of presenting a second consumer query is only performed at a configured time prior to when access to the lock for the service provider agent is needed;
receiving a second positive response, indicating that the service consumer allows the service provider agent to access the physical space secured by the lock; and
delegating access to the lock to the co-ordinator, which comprises encrypting at least part of a delegation using the first public key, encrypting at least part of the delegation using the second public key, and electronically signing the delegation, enabling further delegation of access to the lock to the service provider agent, wherein the step of delegating access is only performed when the second positive response has been received.
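The structure of the delegation step can be sketched as below. This is a toy illustration only: the "encryption" and "signing" functions are stand-ins (a real system would use actual public-key cryptography), and all names are assumptions:

```python
# Hypothetical sketch: parts of the delegation are encrypted to the
# co-ordinator's and the agent's public keys, and the whole delegation is
# signed by the consumer, enabling further delegation to the agent.

def encrypt_for(public_key, data):
    return {"for": public_key, "ciphertext": data[::-1]}   # placeholder only

def sign(private_key, payload):
    return f"sig({private_key},{hash(str(payload))})"      # placeholder only

def delegate_access(lock_id, coordinator_pub, agent_pub, consumer_priv):
    delegation = {
        "lock": lock_id,
        "part_for_coordinator": encrypt_for(coordinator_pub, "grant-coordinator"),
        "part_for_agent": encrypt_for(agent_pub, "grant-agent"),
    }
    delegation["signature"] = sign(consumer_priv, delegation["lock"])
    return delegation

d = delegate_access("front-door", "PK_coord", "PK_agent", "SK_consumer")
```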

US Pat. No. 11,030,836

DOOR LOCK SYSTEM AND HANDLE OF DOOR FOR VEHICLE

AISIN SEIKI KABUSHIKI KAI...

1. A door lock system comprising:a first circuit performing a first communication to request a response for determining availability of unlocking a door;
a second circuit performing a second communication for determining availability of unlocking the door and transmitting a reception result of the second communication to an outside of the door lock system; and
a control circuit controlling operations of the first circuit and the second circuit,
the control circuit stopping one of the first communication and the second communication from being performed in a case where the other one of the first communication and the second communication is performed, wherein
the first circuit includes a first antenna circuit for wireless communication,
the control circuit interrupts the first antenna circuit in a case where the second communication is performed,
the first antenna circuit includes a first antenna including a coil, a capacitor, and a switch connected in series with the capacitor,
the coil, the capacitor, and the switch are connected in parallel with the control circuit, the first antenna circuit configured to function as a reception circuit when the switch is short-circuited,
the control circuit short-circuits the switch to constitute an LC circuit including the coil and the capacitor in a case where the second communication is performed.

US Pat. No. 11,030,835

FRICTIONLESS ACCESS CONTROL SYSTEM PROVIDING ULTRASONIC USER LOCATION

Sensormatic Electronics, ...

1. An access control system for monitoring an access point, comprising:a positioning unit for receiving acoustic signals from user devices of users, for generating position information indicating positions and directions of movement of the user devices relative to the access point based on the acoustic signals, and for generating instructions based on the position information, wherein the positioning unit includes one or more microphones installed above the access point for detecting the acoustic signals from the user devices; and
an access point controller for controlling access through the access point in response to the instructions from the positioning unit,
wherein the positioning unit comprises an audio processing and location module for determining an angle of arrival of the acoustic signals at the one or more microphones based on the acoustic signals, and the positioning unit determines that the user devices are within an inner zone of the access point based on whether the angle of arrival is sufficiently small.
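One common way to compute an angle of arrival from microphones a known distance apart is the far-field time-difference approximation; the sketch below (geometry and thresholds assumed, not from the claim) shows how a small angle maps to the "inner zone":

```python
import math

# Hypothetical sketch: the time-difference of arrival between two microphones
# spaced d apart gives the arrival angle; near-zero angle means the device is
# almost directly below the microphones (inner zone).

SPEED_OF_SOUND = 343.0   # m/s at ~20 C

def angle_of_arrival(tdoa_s, mic_spacing_m):
    """Angle (radians) from broadside, via the far-field approximation."""
    x = max(-1.0, min(1.0, SPEED_OF_SOUND * tdoa_s / mic_spacing_m))
    return math.asin(x)

def in_inner_zone(tdoa_s, mic_spacing_m, max_angle_rad=0.2):
    return abs(angle_of_arrival(tdoa_s, mic_spacing_m)) <= max_angle_rad

directly_below = in_inner_zone(0.0, 0.2)    # zero delay: angle 0, inner zone
off_to_side = in_inner_zone(0.0004, 0.2)    # large delay: large angle, outside
```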

US Pat. No. 11,030,834

DATA RECORDER SYSTEM AND UNIT FOR A VEHICLE

WABTEC HOLDING CORP., Wi...

1. A data recorder unit comprising:at least one data recorder enclosure configured to house at least one internal component;
at least one data processing unit located in the at least one data recorder enclosure and configured to receive data input from at least one data source associated with a vehicle; and
at least one storage device in communication with the at least one data processing unit and configured to store the data input received by the at least one data processing unit,
wherein the at least one storage device is at least one local storage device located within one or more of a crash-proof or fire-proof enclosure positioned within the at least one data recorder enclosure, and
wherein the at least one storage device comprises a network-attached storage device, the network-attached storage device comprising plural separate logical partitions, wherein a portion of the data input is stored within one logical partition and another portion of the data input is stored within another logical partition.

US Pat. No. 11,030,833

SYSTEM AND METHOD OF MONITORING A FUNCTIONAL STATUS OF A VEHICLE'S ELECTRICAL POWERING SYSTEM

1. A method of monitoring a functional status of a vehicle's electrical powering system, the method comprises the steps of:(A) providing a multimeter module, a microprocessor, and a data storage module, wherein the multimeter module, the microprocessor, and the data storage module are comprised by either a retrofit device or a computerized battery, and wherein the retrofit device is electrically spliced into the vehicle's electrical powering system, and wherein the computerized battery is comprised by the vehicle's electrical powering system;
(B) providing a plurality of measurable characteristics of the vehicle's electrical powering system, wherein each measurable characteristic is associated to a manufacturer specification stored on the data storage module;
(C) providing a plurality of voltage triggers stored on the data storage module, wherein each voltage trigger is associated to a corresponding response;
(D) periodically probing the vehicle's electrical powering system for a series of voltage readings with the multimeter module and storing the series of voltage readings on the data storage module;
(E) comparing the series of voltage readings to each voltage trigger with the microprocessor in order to identify at least one matching trigger from the plurality of voltage triggers;
(F) executing the corresponding response for the matching trigger with the microprocessor, if the matching trigger is identified during step (E);
(G) deriving a baseline for each measurable characteristic through statistical summarization and setting the baseline for each measurable characteristic to the manufacturer specification with the microprocessor;
(H) executing a plurality of iterations for step (D) in order to compile a time-dependent dataset of voltage readings, wherein the time-dependent dataset of voltage readings includes the series of voltage readings from each iteration;
(I) updating the baseline for each measurable characteristic according to the time-dependent dataset of voltage readings;
providing a minor voltage-change threshold for the vehicle's electrical powering system as one of the plurality of voltage triggers;
identifying the minor voltage-change threshold for the vehicle's electrical powering system as the matching trigger during step (E), if a trend in the series of voltage readings is less than or equal to the minor voltage-change threshold; and
logging an accessory-activation event entry for the vehicle's electrical powering system on the data storage module as the corresponding response for the matching trigger during step (F), wherein the accessory-activation event entry includes the series of voltage readings.
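Steps (D) through (F) and the minor voltage-change trigger can be sketched as follows (threshold value and trend definition are illustrative assumptions):

```python
# Hypothetical sketch: periodic voltage readings are checked against a
# "minor voltage-change" trigger; a small dip (e.g. headlights switched on)
# is logged as an accessory-activation event with its readings.

MINOR_DROP_V = 0.5   # minor voltage-change threshold (assumed value)

def check_minor_change(readings):
    """True if the trend across the readings is a dip within the threshold."""
    trend = readings[-1] - readings[0]
    return -MINOR_DROP_V <= trend < 0

event_log = []

def monitor(readings):
    if check_minor_change(readings):   # matching trigger identified
        event_log.append({"event": "accessory-activation",
                          "readings": readings})

monitor([12.6, 12.5, 12.4])    # small dip: logged as accessory activation
monitor([12.6, 11.2, 10.9])    # large drop: not a minor-change event
```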

US Pat. No. 11,030,832

APPARATUS AND METHOD FOR GENERATING TEST CASE FOR VEHICLE

Hyundai Motor Company, S...

1. An apparatus for generating a test case for a vehicle, the apparatus comprising:a communication device configured to receive vehicle data from an electronic device; and
a controller configured to convert the vehicle data to a state diagram, to pattern the state diagram, and to generate the test case based on the patterned state diagram.

US Pat. No. 11,030,831

FUEL EFFICIENCY ESTIMATION SYSTEM, FUEL EFFICIENCY ESTIMATION METHOD, AND COMPUTER READABLE MEDIUM

MITSUBISHI ELECTRIC CORPO...

1. A fuel efficiency estimation system for calculating a fuel efficiency of a motor vehicle traveling a traveling route, comprising:a receiver/transmitter for communicating with a motor vehicle device provided in the motor vehicle; and
processing circuitry configured to
calculate a velocity profile indicating a change in velocity of a motor vehicle traveling a traveling route,
calculate, based on traveling history information received from the motor vehicle device and collected from the motor vehicle traveling the traveling route for each of a plurality of pieces of disturbance information indicating a plurality of disturbance events occurring on the traveling route, an attenuation factor, which is a ratio of attenuation of the velocity of the motor vehicle traveling the traveling route, for each of the plurality of pieces of disturbance information and to calculate an average value of a plurality of said attenuation factors each acquired for each of the plurality of pieces of disturbance information as a velocity disturbance correction coefficient,
calculate fuel efficiency of the motor vehicle traveling the traveling route using the velocity profile and the velocity disturbance correction coefficient, and
transmit the calculated fuel efficiency to the motor vehicle device for display along with the traveling route.
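The velocity-disturbance correction can be sketched as below (the averaging is stated in the claim; applying the coefficient as a per-sample scale factor is an illustrative assumption):

```python
# Hypothetical sketch: one attenuation factor per disturbance event (signals,
# congestion, etc.); their average is the velocity disturbance correction
# coefficient, used to scale the velocity profile before estimating fuel use.

def velocity_disturbance_coefficient(attenuation_factors):
    return sum(attenuation_factors) / len(attenuation_factors)

def corrected_profile(velocity_profile, coefficient):
    return [v * coefficient for v in velocity_profile]

factors = [0.9, 0.8, 0.85]                     # one per disturbance event
coeff = velocity_disturbance_coefficient(factors)
profile = corrected_profile([40.0, 60.0, 50.0], coeff)
```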

US Pat. No. 11,030,830

CUSTOMIZED OPERATING POINT

Lytx, Inc., San Diego, C...

22. A method for indicating an operating point, comprising:receiving user information;
receiving user reviewing feedback;
receiving reviewing metadata;
determining, using a processor, a recommendation for an operating point based at least in part on the user information, the user reviewing feedback, and the reviewing metadata;
providing the recommendation for an operating point; and
providing an indication of the adjustment to the operating point to a vehicle event recorder.

US Pat. No. 11,030,829

HYPER-REDUNDANT SENSOR NODES

Siemens Energy, Inc., Or...

1. A hyper-redundant monitoring system for a gas turbine, comprising:a processor;
a sensor node operably connected to the processor and comprising a plurality of sensors disposed in an arrangement such that a single parameter is measured by each of the plurality of sensors and each sensor is configured to transmit measurements of the single parameter to the processor;
a power source that delivers power to the processor; and
a controller in operable communication with the processor,
wherein the processor collects the measurements of the single parameter by each of the plurality of sensors, analyzes the measurements of the single parameter to determine analyzed data, and transmits the analyzed data to the controller,
wherein the sensor node is configured to operate within the gas turbine,
wherein the controller is configured to change operating parameters of the gas turbine based on the analyzed data, wherein the processor communicates with the controller by way of wireless communication, wherein
the sensor node communicates with the processor by way of wireless communication, and wherein
the plurality of sensors number in a range of four to eight sensors.
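The claim recites only that measurements of the single parameter are "analyzed" by the processor; one plausible analysis for four to eight redundant sensors is median-based outlier rejection followed by averaging. The scheme and the deviation threshold below are assumptions for illustration:

```python
import statistics

def fuse_redundant_readings(readings, max_deviation):
    # Discard readings that deviate from the median by more than
    # max_deviation, then average the survivors.
    assert 4 <= len(readings) <= 8, "claim recites four to eight sensors"
    med = statistics.median(readings)
    good = [r for r in readings if abs(r - med) <= max_deviation]
    return sum(good) / len(good)

# One sensor (990.0) has drifted; it is rejected before averaging.
fused = fuse_redundant_readings([850.1, 850.3, 849.9, 990.0], max_deviation=5.0)
```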

US Pat. No. 11,030,828

SYSTEM AND METHOD TO AUTO CREATE AIRCRAFT MAINTENANCE RECORDS BY AIRCRAFT DATA

Honeywell International I...

1. A processor-implemented method for automatically creating aircraft maintenance records and work logs during aircraft maintenance operations, the method comprising:automatically retrieving, using a processor, fault data, testing data, maintenance data, and status data regarding line replaceable units (LRUs) on an aircraft via a central maintenance computer (CMC) on the aircraft;
automatically collecting, from a remote terminal on the aircraft, data regarding maintenance operations performed using the remote terminal, wherein the data regarding maintenance operations performed includes a record of actions performed using the remote terminal to troubleshoot avionics faults;
automatically recording, by the processor in a maintenance database, (1) the fault data, testing data, maintenance data, and status data automatically retrieved from the CMC of the aircraft, and (2) data regarding maintenance operations performed using the remote terminal; and
automatically populating, by the processor using data in the maintenance database, a plurality of fields in a maintenance work log with automatically collected data regarding one or more actions performed using the remote terminal to troubleshoot avionics faults.

US Pat. No. 11,030,827

METHOD AND APPARATUS FOR DYNAMIC DISTRIBUTED SERVICES REDISTRIBUTION

Ford Global Technologies,...

1. A system comprising:a processor configured to:
detect an application initiation request;
determine whether current vehicle connectivity availability is sufficient to support remote execution of the application;
responsive to determining that current vehicle connectivity is insufficient to support remote execution, launch a local version of the application;
maintain a list of locally executing applications, in an ordered preference for transfer to remote execution; and
responsive to the monitored current vehicle connectivity becoming sufficient to execute at least one of the locally executing applications remotely, request remote execution of at least the highest transfer-priority application on the ordered list that also can be executed remotely based on the monitored current vehicle connectivity.
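The final limitation — scan the preference-ordered list and transfer the highest-priority application that current connectivity can support — reduces to a first-match search. A sketch, where representing each application as a (name, required bandwidth) pair is an assumption, not part of the claim:

```python
def pick_app_for_remote_transfer(ordered_apps, connectivity_kbps):
    # ordered_apps is already in transfer-preference order; return the
    # first (highest-priority) app the connectivity can support remotely.
    for name, required_kbps in ordered_apps:
        if connectivity_kbps >= required_kbps:
            return name
    return None  # no locally executing app can be transferred yet

apps = [("navigation", 500), ("voice-assistant", 128), ("weather", 64)]
choice = pick_app_for_remote_transfer(apps, connectivity_kbps=200)
# choice is "voice-assistant"
```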

US Pat. No. 11,030,826

HANGER GENERATION IN COMPUTER-AIDED DESIGN PROGRAMS

Applied Software Technolo...

1. A system for hanger placement in a computer-aided design (“CAD”) application, including:a database that stores hanger parameters;
a processor that executes instructions to perform stages comprising:
receiving a first selection of elements on a graphical user interface (“GUI”);
determining a first set of pipe elements from the first selection based on the elements being parallel;
receiving a second selection, on the GUI, indicating a run direction;
determining a service type for the first set of pipe elements that is one of pipe, conduit, or ducts;
placing an initial hanger on the first set of pipe elements, including resizing a bearer width to span the first set of pipe elements and changing the hanger elevation to attach to a lowest bottom of the first set of pipe elements;
placing a second hanger at a spacing along a first path of the first set of pipe elements;
determining that a first pipe element that was part of the first set is no longer part of the first set based on the first pipe element diverging from its original distance from a next closest pipe element in the first set;
determining a branch of the first set of pipe elements; and
repeating the initial hanger placement on the branch.

US Pat. No. 11,030,825

COMPUTER AIDED SYSTEMS AND METHODS FOR CREATING CUSTOM PRODUCTS

Best Apps, LLC, Miami Be...

1. A computer-aided design (CAD) computer system comprising:a computing device;
a network interface;
a non-transitory data media configured to store instructions that when executed by the computing device, cause the computing device to perform operations comprising:
provide, for display on a terminal of a first user, a design customization user interface enabling the first user to define a first template for use in product customization;
enable the first user to define the first template using the design customization user interface by:
defining one or more slots configured to receive content items;
indicating for at least a first slot of the first template whether an end user is permitted or not permitted to add any end user provided content to customize the first slot;
defining prohibited persons whose images, when uploaded by an end user, may not be used by the end user to customize the first slot, where images of other persons are permitted to be used by the end user to customize the first slot;
receive a definition of the first template, the definition of the first template comprising an indication that an end user is permitted to add end user provided content to the first slot;
receive an identification of one or more prohibited persons whose images may not be used by the end user to customize the first slot;
add the first template to an online catalog comprising a plurality of articles of clothing accessible by a plurality of end users, wherein the first template is configured to be used by end users in customizing at least a first product selectable among the plurality of articles of clothing via the online catalog;
enable a depiction of the first product to be displayed by an end user device via a customization user interface in association with a visual indication that the first slot of the first product is customizable by an end user;
enable the end user to provide a first item of content comprising a first image to populate the first slot;
perform an analysis of the first item of content to detect if a face is present;
at least partly in response to detecting the presence of a face in the first item of content, generate a first facial fingerprint;
perform a comparison of the first facial fingerprint to facial fingerprints of prohibited persons with respect to the first slot of the first template;
at least partly in response to determining, based on the comparison of the first facial fingerprint to facial fingerprints of prohibited persons, that the first image includes an image of a prohibited person with respect to the first slot, inhibit the printing or embroidering of the first image on the first product at a location corresponding to the first slot; and
at least partly in response to determining, based on the comparison of the first facial fingerprint to facial fingerprints of prohibited persons, that the first image does not include an image of a prohibited person with respect to the first slot, enable the printing or embroidering of the first image on the first product at a location corresponding to the first slot.
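The comparison of a generated facial fingerprint against the prohibited-person fingerprints gates whether printing or embroidering is enabled. The claim does not specify the fingerprint representation; the sketch below assumes vector embeddings compared by cosine similarity against a threshold, which are illustrative choices only:

```python
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def printing_allowed(face_fingerprint, prohibited_fingerprints, threshold=0.9):
    # Inhibit printing if the fingerprint matches any prohibited person's
    # fingerprint for the slot; enable it otherwise.
    return all(cosine_similarity(face_fingerprint, p) < threshold
               for p in prohibited_fingerprints)

allowed = printing_allowed([1.0, 0.0], [[0.0, 1.0]])   # orthogonal: no match
blocked = printing_allowed([1.0, 0.0], [[1.0, 0.01]])  # near-identical: match
```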

US Pat. No. 11,030,824

AUTOMATIC COLOR HARMONIZATION

COLORO CO., LTD, Shangha...

1. A computing device including a processor, memory, and a display, wherein the display is configured to represent a graphical user interface, and wherein the processor is configured to execute program instructions stored in the memory to perform operations comprising:obtaining a three-dimensional color model containing hue, lightness, and chroma dimensions, wherein the color model represents each of at least three thousand distinct colors as unique points within the hue, lightness, and chroma dimensions;
displaying, by way of the user interface and in accordance with the color model, a rotatable three-dimensional representation of the unique points;
receiving, by way of the user interface, a selection of a first point of the unique points and a selection of a second point of the unique points; and
in response to receiving the selection of the first point and the selection of the second point, displaying, by way of the user interface and in accordance with the color model, a rotatable three-dimensional representation of the first point, the second point, a line connecting the first point and the second point, and a subset of the unique points that are within a particular radius of the line.
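The displayed subset — unique color points within a particular radius of the line connecting the two selected points — is a point-to-segment distance test in the three-dimensional color space. A self-contained sketch (function names are illustrative):

```python
import math

def distance_to_segment(p, a, b):
    # Euclidean distance from 3-D point p to the segment from a to b.
    ab = tuple(bc - ac for ac, bc in zip(a, b))
    ap = tuple(pc - ac for ac, pc in zip(a, p))
    ab_len_sq = sum(c * c for c in ab)
    t = 0.0
    if ab_len_sq > 0:
        t = max(0.0, min(1.0, sum(u * v for u, v in zip(ap, ab)) / ab_len_sq))
    closest = tuple(ac + t * abc for ac, abc in zip(a, ab))
    return math.dist(p, closest)

def points_near_line(points, first, second, radius):
    # Subset of color-model points within `radius` of the connecting line.
    return [p for p in points if distance_to_segment(p, first, second) <= radius]

subset = points_near_line([(0, 0.1, 0), (5, 5, 5)], (0, 0, 0), (1, 0, 0), radius=0.5)
# subset is [(0, 0.1, 0)]
```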

US Pat. No. 11,030,823

ADJUSTMENT OF ARCHITECTURAL ELEMENTS RELATIVE TO FACADES

Hover Inc., San Francisc...

1. A method of correcting planar relationships in a multi-dimensional building model comprises:identifying a first planar architectural element that is within and coplanar to a plane of a first façade of the multi-dimensional building model, wherein the multi-dimensional building model includes a plurality of façades;
extracting a plurality of edges of the identified first planar architectural element;
determining a scale based on the identified first planar architectural element;
determining a translation positional error along a normal of the plane of the first façade based on the coplanar position of the first planar architectural element with the plane of the first façade;
moving the plane of the first façade along the normal relative to the first planar architectural element based on the determined translation positional error and the determined scale;
correlating and rectifying one or more façades of the plurality of façades to the moved plane of the first façade; and
reconstructing the multi-dimensional building model with the correlated and rectified one or more façades and moved plane of the first façade.

US Pat. No. 11,030,822

CONTENT INDICATORS IN A 3D ENVIRONMENT AUTHORING APPLICATION

Microsoft Technology Lice...

1. A method of displaying a content indicator of an object, the method comprising:displaying a two-dimensional (2D) graphical user interface (GUI) of an authoring application;
displaying within the 2D GUI a 3D environment;
receiving an indication to load an object into the 3D environment;
based on receipt of the indication, displaying a content indicator indicating a loading status of the object; and
scaling the content indicator, wherein the scaling includes:
determining a forward direction of the camera;
determining a difference in position between the forward direction of the camera and the content indicator;
determining a scalar value based on the difference in position; and
applying the scalar value to the content indicator.
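The scaling steps — determine the camera's forward direction, measure the positional difference to the content indicator, derive a scalar, apply it — can be sketched as a depth-proportional scale that keeps the indicator a steady on-screen size. The proportional rule and parameter names are assumptions; the claim only requires a scalar derived from the positional difference:

```python
def indicator_scale(camera_pos, camera_forward, indicator_pos, base_distance=1.0):
    # Project the camera-to-indicator offset onto the (unit-length) forward
    # direction and scale proportionally to that depth.
    offset = [i - c for i, c in zip(indicator_pos, camera_pos)]
    depth = sum(o * f for o, f in zip(offset, camera_forward))
    return max(depth, 0.0) / base_distance

scale = indicator_scale((0, 0, 0), (0, 0, 1), (0, 0, 4))
# scale is 4.0
```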

US Pat. No. 11,030,821

IMAGE DISPLAY CONTROL APPARATUS AND IMAGE DISPLAY CONTROL PROGRAM

Alpha Code Inc., Tokyo (...

1. An image display control apparatus, comprising:an image reproducing unit that reproduces a virtual space image and causes the virtual space image to be displayed on a head mounted display;
a target object detecting unit that detects a target object existing within a predetermined distance from the head mounted display from a moving image of a real world captured by a camera installed in the head mounted display;
an image superimposition unit that causes an image of a predetermined range including the target object to be displayed superimposed on the virtual space image while the target object is being detected at a position within the predetermined distance by the target object detecting unit; and
a target object determining unit that determines whether or not a human hand and an object gripped by the hand are included in the target object detected by the target object detecting unit,
wherein the image superimposition unit does not perform image superimposition on the virtual space image in a case in which the target object determining unit determines that the human hand is not included in the target object and in a case in which the target object determining unit determines that the human hand is included in the target object, but the gripped object is not included in the target object, and causes an image of a predetermined range including the human hand and the gripped object to be displayed superimposed on the virtual space image in a case in which the target object determining unit determines that the human hand and the gripped object are included in the target object.

US Pat. No. 11,030,820

SYSTEMS AND METHODS FOR SURFACE DETECTION

Facebook Technologies, LL...

1. A method comprising, by a computing system:tracking first positions of a controller in a three-dimensional space;
determining a plurality of planes based on the first positions;
determining that the plurality of planes are within a threshold deviation of each other;
generating a virtual plane based on the plurality of planes;
tracking second positions of the controller in the three-dimensional space;
identifying one or more of the second positions that are within a threshold distance of the virtual plane;
generating a drawing in the virtual plane based on the one or more of the second positions; and
rendering a scene depicting the drawing.
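The plane steps — check that the candidate planes agree within a threshold deviation, merge them into a virtual plane, then test later controller positions against it — can be sketched as below. Representing each plane as a (unit normal, offset) pair and the specific deviation tests are assumed details:

```python
import math

def average_plane(planes, max_normal_dev, max_offset_dev):
    # Merge candidate planes (unit normal, offset) into one virtual plane,
    # provided they agree within the given deviation thresholds.
    normals = [p[0] for p in planes]
    offsets = [p[1] for p in planes]
    ref_n, ref_d = normals[0], offsets[0]
    for n, d in planes:
        if math.dist(n, ref_n) > max_normal_dev or abs(d - ref_d) > max_offset_dev:
            return None  # planes disagree; no stable surface detected
    avg_n = tuple(sum(c) / len(planes) for c in zip(*normals))
    norm = math.sqrt(sum(c * c for c in avg_n))
    return tuple(c / norm for c in avg_n), sum(offsets) / len(offsets)

def on_plane(point, plane, threshold):
    # True if a tracked controller position is within `threshold` of the
    # virtual plane, i.e. it should contribute to the drawing.
    normal, d = plane
    return abs(sum(p * n for p, n in zip(point, normal)) - d) <= threshold

plane = average_plane([((0, 0, 1), 2.0), ((0, 0, 1), 2.1)], 0.1, 0.2)
hit = on_plane((5, 5, 2.05), plane, threshold=0.1)
```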

US Pat. No. 11,030,819

PRODUCT BUILD ASSISTANCE AND VERIFICATION

International Business Ma...

1. A method for image product build assistance and verification, the method comprising:receiving, by a first computing device, a product build order for a product;
matching, by the first computing device, the product build order to one or more recognition algorithms and one or more pieces of product artwork;
generating, by the first computing device, one or more build steps for product assembly;
displaying, by the first computing device, a build step to a first user via a user interface on a mixed reality device;
analyzing, by the first computing device, progress of the build step via the mixed reality device;
generating, by the first computing device, a product build status update based on the progress of the build step, the product build status update including progression analytics wherein the product build status update also includes an identification of the first user and the build step assigned to the first user; and
displaying, by the first computing device, the status update to a second user on a second computing device.

US Pat. No. 11,030,818

SYSTEMS AND METHODS FOR PRESENTING VIRTUAL-REALITY INFORMATION IN A VEHICULAR ENVIRONMENT

1. A system for presenting virtual-reality information in a vehicular environment, the system comprising:a virtual-reality display apparatus;
one or more processors; and
a memory communicably coupled to the one or more processors and storing:
a communication module including instructions that when executed by the one or more processors cause the one or more processors to receive, at a first vehicle, a set of presentation attributes for a second vehicle that is in an external environment of the first vehicle, the set of presentation attributes for the second vehicle corresponding to a virtual vehicle that is different from the second vehicle and within a same vehicle category as the second vehicle, wherein the same vehicle category is one of automobiles, watercrafts, and aerial vehicles and the virtual vehicle differs from the second vehicle in at least one of a model year, a make, a model, one or more colors, a custom logo, custom detailing, one or more advertising messages, and one or more sounds; and
a scene virtualization module including instructions that when executed by the one or more processors cause the one or more processors to present to an occupant of the first vehicle, via the virtual-reality display apparatus in a virtual-reality space, the second vehicle in accordance with the received set of presentation attributes for the second vehicle while the second vehicle is visible from the first vehicle in the external environment of the first vehicle.

US Pat. No. 11,030,817

DISPLAY SYSTEM AND METHOD OF USING ENVIRONMENT MAP TO GENERATE EXTENDED-REALITY IMAGES

Varjo Technologies Oy, H...

1. A display system comprising:at least one display or projector;
at least one camera;
means for tracking a position and orientation of a user's head; and
at least one processor configured to:
control the at least one camera to capture a plurality of images of a real-world environment using a default exposure setting of the at least one camera, whilst processing head-tracking data obtained from said means to determine corresponding positions and orientations of the user's head with respect to which the plurality of images are captured;
process the plurality of images, based on the corresponding positions and orientations of the user's head, to create an environment map of the real-world environment;
generate at least one extended-reality image from at least one of the plurality of images using the environment map;
render, via the at least one display or projector, the at least one extended-reality image;
adjust an exposure of the at least one camera to capture at least one underexposed image of the real-world environment, whilst processing corresponding head-tracking data obtained from said means to determine a corresponding position and orientation of the user's head with respect to which the at least one underexposed image is captured;
process the at least one of the plurality of images, based on a translational and rotational difference between a position and orientation of the user's head with respect to which the at least one of the plurality of images is captured and the position and orientation with respect to which the at least one underexposed image is captured, to generate at least one derived image;
generate at least one next extended-reality image from the at least one derived image using the environment map;
render, via the at least one display or projector, the at least one next extended-reality image; and
identify oversaturated pixels in the environment map and modify intensities of the oversaturated pixels in the environment map, based on the at least one underexposed image and the position and orientation with respect to which the at least one underexposed image is captured,
wherein the at least one processor is configured to detect whether or not there are oversaturated pixels in any of the plurality of images, and wherein the at least one underexposed image is captured when it is detected that there are oversaturated pixels in the at least one of the plurality of images.
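The oversaturation repair — replace saturated environment-map intensities with values recovered from the aligned underexposed capture — can be sketched per pixel. The claim leaves the modification rule open; the exposure-ratio rescaling below, and treating the images as flat intensity lists already registered via the head pose, are illustrative assumptions:

```python
def repair_oversaturated(env_map, underexposed, exposure_ratio, saturation_level=255):
    # Where the environment map is saturated, substitute the underexposed
    # intensity rescaled by the exposure ratio (default / underexposed
    # exposure); elsewhere keep the original value.
    return [u * exposure_ratio if e >= saturation_level else e
            for e, u in zip(env_map, underexposed)]

fixed = repair_oversaturated([120, 255, 255], [30, 80, 100], exposure_ratio=4)
# fixed is [120, 320, 400]
```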

US Pat. No. 11,030,816

ELECTRONIC APPARATUS, CONTROL METHOD THEREOF, COMPUTER PROGRAM, AND COMPUTER-READABLE RECORDING MEDIUM

THINKWARE CORPORATION, S...

1. A control method of an electronic apparatus, the control method comprising:determining whether or not a vehicle is in a stopped state;
determining signal type information using image data of a signal region portion of a signal lamp;
detecting a crosswalk from image data photographed by a camera during a period in which the vehicle is in the stopped state and in which the signal type information is a stop signal;
generating a first object for allowing a driver to recognize that the crosswalk is positioned in front of the vehicle when the crosswalk is detected while the vehicle is maintained in the stopped state in a state in which the signal type information is the stop signal;
generating a second object for warning the driver that the crosswalk is positioned in front of the vehicle when the vehicle starts in the state in which the signal type information is the stop signal;
determining a mapping position of the generated first object and the second object on a virtual three-dimensional (3D) space for a photographed image of the camera; and
displaying the first object and the second object through augmented reality by mapping the first object and the second object to the virtual three-dimensional space based on the determined mapping position.

US Pat. No. 11,030,815

METHOD AND SYSTEM FOR RENDERING VIRTUAL REALITY CONTENT

Wipro Limited, Bangalore...

1. A method of rendering Virtual Reality (VR) content, the method comprising:identifying, by a rendering device, a user interaction with at least one object within a VR environment;
training, by the rendering device, a deep learning feature extraction model to identify predetermined and undetermined interactions in the VR environment, wherein the deep learning feature extraction model is trained based on a plurality of scene images and associated applied templates that are provided to the deep learning feature extraction model, wherein each of the applied templates identifies at least one spurious object and at least one object of interest in an associated scene image from the plurality of scene images;
classifying, by the rendering device, the user interaction as one of a predetermined interaction and an undetermined interaction based on the deep learning feature extraction model; and
rendering, by the rendering device, a VR content in response to the user interaction being classified as one of the predetermined interaction and the undetermined interaction.

US Pat. No. 11,030,814

DATA STERILIZATION FOR POST-CAPTURE EDITING OF ARTIFICIAL REALITY EFFECTS

Facebook, Inc., Menlo Pa...

1. A method comprising, by a first computing system:capturing, during a video capturing process, a video data stream of a scene using a camera sensor;
capturing one or more contextual data streams associated with the video data stream, wherein the one or more contextual data streams comprise a first sensor data stream and a first computed data stream;
rendering, during the video capturing process, a first artificial reality effect based on the one or more contextual data streams for display with the video data stream;
generating a serialized data stream by serializing a plurality of data chunks, wherein the plurality of data chunks contains data from the video data stream and the one or more contextual data streams, and wherein each data chunk is associated with a timestamp;
storing the serialized data stream into a storage;
extracting, during a post-capture editing process at a later time after the video capturing process, the video data stream and one or more of the contextual data streams from the serialized data stream stored in the storage by deserializing the plurality of data chunks in the serialized data stream based on the associated timestamps;
generating a second computed data stream based on the first sensor data stream in the extracted one or more of the contextual data streams;
comparing the second computed data stream to the first computed data stream extracted from the serialized data stream to select a computed data stream from the first computed data stream and the second computed data stream based on one or more pre-determined criteria; and
rendering the first artificial reality effect or another artificial reality effect for display with the extracted video data stream during the post-capture editing process based at least in part on the selected computed data stream.

US Pat. No. 11,030,813

VIDEO CLIP OBJECT TRACKING

Snap Inc., Santa Monica,...

1. A method comprising:capturing, using a camera-enabled device, video content of a real-world scene and movement information collected by the camera-enabled device during capture of the video content;
storing the captured video content and movement information, the movement information that is stored comprising a plurality of inertial measurement unit (IMU) frames associated with respective timestamps;
processing the stored captured video content to identify a real-world object in the scene;
after the video content is captured, in response to receiving a request to augment the stored captured video content with a virtual object:
retrieving the plurality of IMU frames associated with the stored captured video content; and
matching the plurality of IMU frames with the stored captured video content by correlating the timestamps of one or more of the plurality of IMU frames with a timestamp of a frame of the stored captured video content;
generating an interactive augmented reality display that:
adds the virtual object to the stored captured video content to create augmented video content comprising the real-world scene and the virtual object; and
adjusts, during playback of the augmented video content, an on-screen position of the virtual object within the augmented video content based at least in part on matching the plurality of IMU frames with the stored captured video content;
temporarily increasing a size of the virtual object in response to receiving a first type of user input that presses and holds the virtual object to indicate a first state change enabling a user to manipulate the virtual object in two-dimensional space; and
modifying an appearance of the virtual object to indicate a second state change in response to receiving a second type of user input, the second state change enabling the user to manipulate the virtual object in three-dimensional space relative to real-world objects.
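The IMU-matching step — correlate each stored video frame's timestamp with the nearest IMU frame timestamp — is a nearest-neighbor search over sorted timestamps. A sketch (the nearest-timestamp policy is an assumed concrete choice; the claim says only that timestamps are correlated):

```python
import bisect

def match_imu_frames(imu_timestamps, video_timestamps):
    # For each video frame timestamp, find the IMU frame whose timestamp
    # is nearest. imu_timestamps must be sorted ascending.
    matches = []
    for vt in video_timestamps:
        i = bisect.bisect_left(imu_timestamps, vt)
        candidates = imu_timestamps[max(0, i - 1):i + 1]
        matches.append(min(candidates, key=lambda t: abs(t - vt)))
    return matches

# IMU sampled at 0, 33, 66, 100 ms; video frames at 5, 60, 99 ms.
matched = match_imu_frames([0, 33, 66, 100], [5, 60, 99])
# matched is [0, 66, 100]
```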

US Pat. No. 11,030,812

AUGMENTED REALITY SYSTEM USING ENHANCED MODELS

The Boeing Company, Chic...

1. An augmented reality system that comprises:a group of unmanned vehicles configured to: move relative to a physical object, generate images of the physical object, generate scan data descriptive of points in space for a region of the physical object, and communicate with a computer system configured to simultaneously:
receive the images of the physical object from the group of unmanned vehicles moving relative to the physical object;
receive the scan data descriptive of points in space for the region of the physical object from a number of unmanned vehicles in the group of unmanned vehicles moving relative to the physical object;
create, based upon the images and the scan data descriptive of points in space for the region, an enhanced model of the physical object that comprises a greater level of detail and granularity of data about the region than a level of detail and granularity of the region in a model of the physical object generated prior to the enhanced model, such that the region of the physical object in the enhanced model comprises a greater amount of detail than other regions of the physical object in the enhanced model; and
identify nonconformances in the region and update, based upon a scan of a changed structure on the physical object, the model of the physical object generated prior to the enhanced model;
classify the nonconformances of the changed structure; and
a portable computing device configured to:
localize, based upon the enhanced model, to the physical object; and
display information, identified based upon the enhanced model and a correlated location on the model of the physical object generated prior to the enhanced model, on a live view of the physical object seen through the portable computing device that identifies an operation to be performed on a nonconformance in the nonconformances.

US Pat. No. 11,030,811

AUGMENTED REALITY ENABLED LAYOUT SYSTEM AND METHOD

Orbit Technology Corporat...

1. A system for augmented reality layout, comprising:a) an augmented reality layout server; and
b) an augmented reality layout device, which comprises a camera; and
c) a model synchronizer, which is configured to align the design model with the video stream, such that the model synchronizer allows the user to capture an initial alignment vector that overlays an initial alignment position in the video stream during initial positioning of objects in the augmented reality view, such that the design model is stored with the initial alignment vector;
wherein the augmented reality layout device is configured to create a design model and populate the design model with objects retrieved from the augmented reality layout server;
such that the augmented reality layout device is configured to allow a user to position the objects precisely in a two-dimensional top view of the design model; and
such that the augmented reality layout device is configured to show the design model in an augmented reality view, wherein the design model is superimposed on a video stream showing an environment that the design model is designed for,
wherein the video stream is received from the camera of the augmented reality layout device;
wherein the model synchronizer is further configured to realign the design model when the design model is reloaded;
such that the model synchronizer allows the user to capture a current alignment vector that overlays a current alignment position in the video stream,
such that the model synchronizer executes a linear transformation calculation to calculate a transposition vector and a transformation matrix, such that the model synchronizer executes a linear perspective transposition and rotational transformation from a location and direction of an initial three-dimensional view to a current three-dimensional view of the design model, such that the current three-dimensional view is superimposed on the video stream.
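The model synchronizer's linear transformation — a transposition vector plus a rotational transformation taking the stored initial alignment to the current alignment — can be sketched in the planar (x, y) case. The 2-D simplification and the (position, unit direction) representation of an alignment vector are assumptions for brevity:

```python
import math

def realignment_transform(initial_pos, initial_dir, current_pos, current_dir):
    # Rotation angle between the initial and current alignment directions.
    angle = (math.atan2(current_dir[1], current_dir[0])
             - math.atan2(initial_dir[1], initial_dir[0]))
    rotation = ((math.cos(angle), -math.sin(angle)),
                (math.sin(angle), math.cos(angle)))
    # Transposition vector taking the initial position to the current one.
    transposition = (current_pos[0] - initial_pos[0],
                     current_pos[1] - initial_pos[1])
    return transposition, rotation

# Initial alignment at the origin facing +x; current at (2, 3) facing +y.
t, r = realignment_transform((0, 0), (1, 0), (2, 3), (0, 1))
```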

US Pat. No. 11,030,810

SHARED MIXED-REALITY ENVIRONMENTS RESPONSIVE TO MOTION-CAPTURE DATA

LUCASFILM ENTERTAINMENT C...

1. A method comprising:receiving a first motion or position of a first performer in a first real-world environment;
identifying the first motion or position as a first predefined motion or position;
altering a virtual representation of the first performer in a 3-D virtual environment based on the first motion or position of the first performer in the first real-world environment;
altering a virtual asset in the 3-D virtual environment in response to identifying the first motion or position as the first predefined motion or position;
receiving a second real-time motion or position of a second performer in a second real-world environment;
identifying the second real-time motion or position as a second predefined motion or position;
altering the virtual asset in the 3-D virtual environment in response to identifying the second real-time motion or position as the second predefined motion or position;
rendering a first 2-D video stream of the virtual representation of the first performer and the virtual asset in the 3-D virtual environment; and
compositing the first 2-D video stream with a live view of the second performer in the second real-world environment on an augmented reality device such that the virtual representation of the first performer exchanges possession of the virtual asset with the second performer such that they all appear to be live in the second real-world environment when the second real-world environment is viewed through the augmented reality device.

US Pat. No. 11,030,809

AUGMENTED REALITY GLASSES

BOE TECHNOLOGY GROUP CO.,...

1. Augmented reality glasses, comprising:a headgear assembly configured to secure the augmented reality glasses to a head of a user;
an optomechanical assembly comprising a display;
a frame assembly configured to carry the optomechanical assembly;
a damping rotary structure configured to rotatably connect the headgear assembly and the frame assembly, such that when the augmented reality glasses are worn on the head of the user, the user views a picture displayed by the display of the optomechanical assembly; and
a rotation transmission member configured to synchronize a rotation of the frame assembly with a rotation of the display, such that when the augmented reality glasses are worn on the head of the user, the picture displayed by the display enters eyes of the user substantially vertically,
wherein the rotation transmission member is inside the frame assembly and comprises:
a first transmission component adjacent the damping rotary structure and engaged with the frame assembly;
a second transmission component adjacent and engaged with the optomechanical assembly; and
a third transmission component between the first transmission component and the second transmission component.

US Pat. No. 11,030,808

GENERATING TIME-DELAYED AUGMENTED REALITY CONTENT

PTC Inc., Needham, MA (U...

1. A method performed by a computing system, comprising:
obtaining an image of an object captured by an image capturing device in a vicinity of the object during relative motion between the object and the image capturing device;
determining a location of the image capturing device relative to the object during image capture based on one or more attributes of the object in the image;
storing, in computer memory, the image of the object and the location of the image capturing device during image capture;
mapping a three-dimensional (3D) graphical model representing the object to the object in the image based, at least in part, on the location of the image capturing device;
incorporating an update into the 3D graphical model after the 3D model has been mapped to the object in the image, the update representing a current state of at least part of the object that is different from the state of the object when the image was captured;
after the image capturing device has moved away from the vicinity of the object and at a time after the update to the 3D graphical model has been incorporated, retrieving the image from the computer memory at a remote location;
receiving, at the remote location and at a time after the image capturing device has moved away from the vicinity of the object, first data indicating selection by a user of the object in the image that was retrieved; and
in response to receiving the first data, generating second data for use in rendering content on a display device at the remote location, the second data comprising information about the object selected and being based on the image stored, the location of the image capturing device stored, and the 3D graphical model containing the update;
wherein the second data represents the image that was captured augmented with the current state of the at least part of the object obtained from the 3D graphical model containing the update.

US Pat. No. 11,030,807

IMAGE TO ITEM MAPPING

Unmade Ltd.

1. A method of displaying a digital image on a representation of an object, comprising:
defining reference points on a digital image template of a reference item, the digital image template representing sections of the pattern of the reference item and the reference item being an article of clothing;
manufacturing, by knitting, the article of clothing as the reference item including the reference points being knitted into the reference item based on the defined reference points on the digital image template;
fitting the manufactured article of clothing to an object;
capturing an image of the article of clothing fitted to the object;
mapping positions on a digital image template of a non-reference item to location positions of the reference points on the captured image, the digital image template of the non-reference item including corresponding sections of a pattern of the non-reference item, to generate a manipulated non-reference digital image; and
displaying the manipulated non-reference digital image on the reference item fitted to the object.

US Pat. No. 11,030,806

COMBINED VIRTUAL AND PHYSICAL ENVIRONMENT

VR Exit LLC, Fort Lauder...

1. An immersive simulation system for providing a combined virtual and physical environment for a plurality of users, the system comprising:
a pod in which a first user and a second user are each experiencing a respective virtual environment, the pod comprising:
a plurality of panels defining one or more interaction zones for the first user and the second user,
a portal sized to permit the first user and the second user to enter the pod,
a tracking system including a plurality of transmitters within the pod, and one or more sensory stimulating devices, wherein activation of the one or more sensory stimulating devices is synchronized with one or more events in a simulation running on one or more computing devices;
a first computing device positioned on the first user, wherein the first computing device receives information from a plurality of receivers placed on the first user, wherein at least one receiver of the plurality of receivers receives signals from at least one transmitter of the plurality of transmitters, and wherein the first computing device positioned on the first user updates a virtual environment for the first user based on information derived from the received signals;
a headset positioned on the first user, wherein the headset is in communication with the first computing device, the headset providing visual output to the first user based on the updated virtual environment for the first user;
a second computing device positioned on the second user and that receives information from a second plurality of receivers placed on the second user, the computing device on the second user configured to update a virtual environment for the second user based on the received information; and
a server in communication, over a network, with the first computing device and the second computing device, wherein the server:
receives virtual environment update data from the first computing device corresponding to the updated virtual environment for the first user, updates the virtual environment for the second user based on the virtual environment update data,
transmits data regarding the updated virtual environment for the second user to the second computing device positioned on the second user,
determines, at a first point in time, that the first computing device is outside of an area surrounding a physical character in the pod,
responsive to determining that the first computing device is outside of the area, transmits, to the first computing device, first data representing a virtual representation of the physical character in the virtual environment for the first user, wherein generation of the first data is controlled by the server,
determines, at a second point in time, that the first computing device has moved from outside of the area surrounding the physical character to inside of the area surrounding the physical character, and
responsive to determining that the first computing device has moved from outside of the area surrounding the physical character to inside of the area surrounding the physical character, (i) transmits, to a third computing device positioned on the physical character, virtual environment data for the first user such that the third computing device enables the physical character to interact with the first user based on the virtual environment for the first user, and (ii) causes the first computing device to present, in the virtual environment for the first user, a transformed depiction of the physical character.

US Pat. No. 11,030,805

DISPLAYING DATA LINEAGE USING THREE DIMENSIONAL VIRTUAL REALITY MODEL

INTERNATIONAL BUSINESS MA...

1. A method comprising:
receiving data lineage comprising a plurality of levels and receiving a configuration;
building, by a processor, a three dimensional (3D) virtual reality (VR) model comprising a plurality of floors based on data lineage content generated based on the data lineage and corresponding to the plurality of levels and the configuration, the 3D VR model depicting, on at least a first of the plurality of floors, a plurality of rooms of a virtual building representing data elements and hallways of the virtual building representing data flows between the data elements;
displaying, on a display device, a view of the 3D VR model, wherein the 3D VR model is configured for a user to navigate the plurality of the rooms and hallways of the virtual building to determine lineage of data;
responsive to the user turning in a first direction on the first of the plurality of floors, presenting to the user a first history of the data lineage content; and
responsive to the user turning in a second direction on the first of the plurality of floors, presenting to the user a second history of the data lineage content.

US Pat. No. 11,030,804

SYSTEM AND METHOD OF VIRTUAL PLANT FIELD MODELLING

BLUE RIVER TECHNOLOGY INC...

1. A method comprising:
accessing a plurality of skeleton segments representing a plant located in a field captured in an image, each skeleton segment of the plurality of skeleton segments representing a portion of the plant and comprising one or more nodes;
identifying, from the plurality of skeleton segments, a set of candidate skeleton segments, each candidate skeleton segment representing a portion of the plant located near a ground surface of the field captured in the image;
building a virtual model of the plant by connecting each candidate skeleton segment to a neighboring skeleton segment, wherein each neighboring skeleton segment represents an adjacent portion of the plant positioned farther from the ground surface than the portion of the plant represented by the candidate skeleton segment connected to the neighboring skeleton segment;
determining a treatment for the plant using the virtual model; and
generating instructions to apply the determined treatment to the plant.
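The model-building step in this claim (connecting near-ground candidate segments to neighbouring segments farther from the ground) can be sketched as below. The 2-D node format, the near-ground height cutoff, and the nearest-neighbour choice are illustrative assumptions, not the patented method.

```python
def build_plant_model(segments, near_ground=0.1):
    """Connect each near-ground candidate skeleton segment to its nearest
    neighbouring segment positioned farther from the ground.

    Each segment is a list of (x, y) nodes ordered root-to-tip, with y as
    height above the ground surface. Returns (candidate, neighbour) edges
    of the virtual plant model.
    """
    def height(seg):
        return min(y for _, y in seg)

    candidates = [s for s in segments if height(s) < near_ground]
    edges = []
    for cand in candidates:
        # Neighbours must sit farther from the ground than the candidate.
        above = [s for s in segments if height(s) > height(cand)]
        if not above:
            continue
        tip_x, tip_y = cand[-1]  # connect from the candidate's tip
        neighbour = min(
            above,
            key=lambda s: (s[0][0] - tip_x) ** 2 + (s[0][1] - tip_y) ** 2,
        )
        edges.append((cand, neighbour))
    return edges
```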

US Pat. No. 11,030,803

METHOD AND APPARATUS FOR GENERATING RASTER MAP

Baidu Online Network Tech...

1. A method for generating a raster map, the method comprising:
generating a first raster map having a first resolution based on an acquired laser point cloud;
generating a second raster map having a second resolution by merging every preset number of rasters in the first raster map, the second resolution being lower than the first resolution; and
storing the first raster map and an association between the first raster map and the second raster map;
wherein the generating the first raster map comprises:
generating a multi-dimensional attribute of each of the rasters in the first raster map for providing navigation and positioning capabilities, the multi-dimensional attribute comprising an occupancy attribute indicating whether a raster is occupied, an average reflectivity attribute for determining a material of an obstacle in an environment, a color attribute representing a color of the obstacle in the environment, a density attribute for determining a type of the obstacle in the environment, and a curvature attribute for fitting a characteristic of a curved surface of the obstacle in the environment.
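The merging step (forming a lower-resolution map by combining every preset number of rasters) can be sketched as below. The 2-D boolean grid, the k x k block interpretation of "preset number of rasters", and OR-combination of only the occupancy attribute are illustrative assumptions; the claim merges rasters carrying the full multi-dimensional attribute.

```python
def merge_rasters(grid, k):
    """Build a coarser raster map by merging every k x k block of rasters
    in the fine map into one raster (here: occupancy attribute only,
    combined with a logical OR)."""
    rows, cols = len(grid) // k, len(grid[0]) // k
    return [
        [any(grid[r * k + i][c * k + j]
             for i in range(k) for j in range(k))
         for c in range(cols)]
        for r in range(rows)
    ]

# A 4 x 4 first raster map merged into a 2 x 2 second raster map.
fine = [
    [1, 0, 0, 0],
    [0, 0, 0, 0],
    [0, 0, 1, 1],
    [0, 0, 1, 0],
]
coarse = merge_rasters(fine, 2)
```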

US Pat. No. 11,030,802

DYNAMIC MAP UPDATE DEVICE, DYNAMIC MAP UPDATE METHOD, AND NON-TRANSITORY COMPUTER READABLE MEDIUM RECORDING DYNAMIC MAP UPDATE PROGRAM

TOYOTA JIDOSHA KABUSHIKI ...

1. A dynamic map update device comprising a processor configured to
acquire a captured image from a plurality of vehicles, each of the plurality of vehicles having a camera configured to capture surroundings, the captured image being captured by the camera;
update a dynamic map of a predetermined area based on the captured image;
vary an update frequency of the dynamic map depending on a position among a plurality of positions in the predetermined area; and
acquire the captured image from the plurality of vehicles according to the update frequency that varied depending on the position among the plurality of the positions in the predetermined area.
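Varying the update frequency by position can be illustrated with a per-position polling rate, as in the sketch below; the position names and rates are hypothetical, and the actual acquisition of images from vehicle cameras is elided.

```python
class DynamicMapUpdater:
    """Acquire captured images at a per-position update frequency:
    positions flagged with a higher rate (e.g. a busy intersection)
    are polled more often than quiet ones."""

    def __init__(self, frequencies_hz):
        self.freq = frequencies_hz          # position -> updates per second
        self.last = {p: 0.0 for p in frequencies_hz}

    def due_positions(self, now):
        """Return positions whose update interval has elapsed at `now`,
        and mark them as updated."""
        due = [p for p, f in self.freq.items()
               if now - self.last[p] >= 1.0 / f]
        for p in due:
            self.last[p] = now
        return due
```

A position with frequency 2.0 Hz becomes due every 0.5 s, while one at 0.5 Hz becomes due every 2 s, so the map of the busier area is refreshed four times as often.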

US Pat. No. 11,030,801

THREE-DIMENSIONAL MODELING TOOLKIT

Standard Cyborg, Inc., S...

1. A system comprising:
a memory; and
at least one hardware processor coupled to the memory and comprising instructions that cause the system to perform operations comprising:
accessing a first data stream at a client device, the first data stream comprising image data that comprises a set of image bits that comprise attributes, the image data depicting an object;
accessing a bit mask that corresponds with the object depicted by the image data from among a plurality of bit masks, the bit mask identifying a portion of the set of image bits of the image data based on the attributes of the portion of the set of image bits;
accessing a second data stream at the client device, the second data stream comprising depth data associated with the portion of the image data identified by the bit mask;
generating a point cloud based on the depth data, the point cloud comprising a set of data points that define surface features of an object depicted in the first data stream; and
causing display of a visualization of the point cloud at the client device.

US Pat. No. 11,030,800

RENDERING IMAGES USING MODIFIED MULTIPLE IMPORTANCE SAMPLING

Chaos Software Ltd., Sof...

1. A computer-implemented method, comprising:
receiving data describing a scene, wherein the scene comprises one or more light sources and one or more objects having different surface optical properties;
receiving a request to render an image of the scene using a multiple importance sampling method that combines a plurality of sampling techniques, wherein each sampling technique uses a different probability distribution to sample a respective fraction of a total number of samples, wherein the total number of samples includes different fractions corresponding to different sampling techniques of the plurality of sampling techniques, wherein each sampling technique is suitable for rendering different regions of the scene, and wherein each sampling technique defines a different probability that a data point is to be sampled;
modifying a particular one of the probability distributions corresponding to a particular sampling technique to reduce a variance of the multiple importance sampling while holding the respective fractions and the other probability distributions fixed;
rendering the scene using the multiple importance sampling using the modified particular probability distribution and the other probability distributions; and
outputting the rendered scene in response to the request.
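For context, the standard multiple-importance-sampling combination that this claim builds on can be sketched with the balance heuristic, where each technique i draws n_i samples and contributions are weighted by w_i(x) = n_i p_i(x) / Σ_j n_j p_j(x). The one-dimensional setting is illustrative, and the claim's variance-reducing modification of one probability distribution while holding the fractions fixed is not shown.

```python
import random

def mis_estimate(f, pdfs, samplers, fractions, n_total):
    """Estimate the integral of f by multiple importance sampling with
    the balance heuristic. Each technique i draws fractions[i] * n_total
    samples from its own pdf; with the balance-heuristic weight folded
    in, each sample x contributes f(x) / sum_j(n_j * p_j(x))."""
    counts = [max(1, int(fr * n_total)) for fr in fractions]
    total = 0.0
    for sample, n_i in zip(samplers, counts):
        for _ in range(n_i):
            x = sample()
            denom = sum(n * p(x) for n, p in zip(counts, pdfs))
            total += f(x) / denom
    return total
```

With two techniques that both sample uniformly on [0, 1] (pdf = 1), the estimate of the integral of a constant f is exact regardless of the random draws, which makes the combination easy to sanity-check.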

US Pat. No. 11,030,799

IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD AND STORAGE MEDIUM, WITH ESTIMATION OF PARAMETER OF REAL ILLUMINATION BASED ON NORMAL INFORMATION ON PIXEL INCLUDED IN HIGH LUMINANCE AREA

Canon Kabushiki Kaisha, ...

1. An image processing apparatus comprising:
one or more processors and one or more memories, configured to function as a plurality of units comprising:
(1) a first acquisition unit configured to acquire normal information corresponding to an image;
(2) an estimation unit configured to estimate a parameter of a real illumination at the timing of capturing the image based on a high-luminance area of an object included in the image;
(3) a first setting unit configured to set a parameter of a virtual illumination based on the parameter of the real illumination; and
(4) a lighting processing unit configured to perform lighting processing for the image based on the normal information and the parameter of the virtual illumination,
wherein the estimation unit estimates the parameter of the real illumination based on the normal information on a pixel included in the high-luminance area.

US Pat. No. 11,030,798

SYSTEMS AND METHODS FOR VIRTUAL APPLICATION OF MAKEUP EFFECTS BASED ON LIGHTING CONDITIONS AND SURFACE PROPERTIES OF MAKEUP EFFECTS

PERFECT MOBILE CORP., Ne...

1. A method implemented in a computing device, comprising:
obtaining a digital image depicting an individual;
determining lighting conditions of the content in the digital image, wherein determining the lighting conditions comprises estimating at least one of: an angle of lighting incident on the individual depicted in the digital image; a lighting intensity; and a color of the lighting incident on the individual depicted in the digital image by:
comparing a shadow effect on the individual depicted in the digital image with predefined three-dimensional (3D) models having varying shadow effects, each of the 3D models having corresponding information relating to lighting conditions;
identifying a closest matching 3D model based on comparing the shadow effect on the individual depicted in the digital image with the predefined 3D models having varying shadow effects; and
retrieving the corresponding information relating to the lighting conditions of the identified closest matching 3D model, the corresponding information comprising at least one of: the angle of lighting incident on the individual depicted in the digital image, the lighting intensity, and the color of the lighting incident on the individual depicted in the digital image;
wherein identifying the closest matching 3D model based on comparing the shadow effect on the individual depicted in the digital image with the predefined 3D models having varying shadow effects comprises:
converting the digital image depicting the individual to a luminance-only image;
constructing a 3D mesh model from the luminance-only image;
for each of the 3D models having varying shadow effects, determining a degree of correlation in luminance values between each 3D model and the constructed 3D mesh model; and
identifying the closest matching 3D model based on a 3D model having a highest degree of correlation with the 3D mesh model;
obtaining selection of a makeup effect from a user;
determining surface properties of the selected makeup effect;
applying a facial alignment technique to a facial region of the individual and defining a region of interest corresponding to the makeup effect;
extracting lighting conditions of the region of interest;
adjusting visual characteristics of the makeup effect based on the surface properties of the makeup effect and the lighting conditions of the region of interest; and
performing virtual application of the adjusted makeup effect to the region of interest in the digital image.
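The "highest degree of correlation" step can be sketched as a Pearson correlation over per-vertex luminance values; the flat luminance lists and the model dictionaries below are illustrative stand-ins for the claim's comparison between a constructed 3-D mesh and the predefined shadow-effect models.

```python
def closest_matching_model(mesh_luma, models):
    """Pick, from candidate shadow-effect models, the one whose
    luminance values correlate most strongly (Pearson) with the
    luminance of the mesh built from the input image."""
    def pearson(a, b):
        n = len(a)
        ma, mb = sum(a) / n, sum(b) / n
        cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
        va = sum((x - ma) ** 2 for x in a) ** 0.5
        vb = sum((y - mb) ** 2 for y in b) ** 0.5
        return cov / (va * vb)

    return max(models, key=lambda m: pearson(mesh_luma, m["luma"]))
```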

US Pat. No. 11,030,797

PRIMITIVE FRAGMENT PROCESSING IN THE RASTERIZATION PHASE OF A GRAPHICS PROCESSING SYSTEM

Imagination Technologies ...

1. A system for processing primitive fragments in a rasterization phase of a graphics processing system wherein a rendering space is subdivided into a plurality of tiles, the system comprising:
a priority queue for storing primitive fragments;
a non-priority queue for storing primitive fragments;
logic configured to:
receive a plurality of primitive fragments, each primitive fragment corresponding to a pixel sample in a tile,
determine whether a depth buffer read is to be performed for hidden surface removal processing of one or more of the primitive fragments, and
sort the primitive fragments into the priority queue and the non-priority queue based on the depth buffer read determinations; and
hidden surface removal logic configured to perform hidden surface removal processing on the primitive fragments in the priority and non-priority queues wherein priority is given to the primitive fragments in the priority queue.
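A minimal sketch of the sorting step follows, assuming (as the claim suggests but does not state) that fragments needing a depth-buffer read go to the non-priority queue so that the rest can be depth-tested immediately; the fragment dictionaries and the `needs_depth_read` predicate are illustrative.

```python
from collections import deque

def sort_fragments(fragments, needs_depth_read):
    """Sort incoming primitive fragments into a priority queue (no
    depth-buffer read required) and a non-priority queue (must wait
    for a depth-buffer read), preserving arrival order within each."""
    priority, non_priority = deque(), deque()
    for frag in fragments:
        (non_priority if needs_depth_read(frag) else priority).append(frag)
    return priority, non_priority
```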

US Pat. No. 11,030,796

INTERFACES AND TECHNIQUES TO RETARGET 2D SCREENCAST VIDEOS INTO 3D TUTORIALS IN VIRTUAL REALITY

ADOBE Inc., San Jose, CA...

1. A method comprising:
intercepting, by a virtual reality (VR)-embedded video application, a rendered three-dimensional (3D) environment transmitted by a VR design application to a VR display before the VR display receives the rendered 3D environment;
rendering, by the VR-embedded video application, a composite 3D environment by rendering a VR-embedded widget on top of the rendered 3D environment;
outputting, by the VR-embedded video application, the composite 3D environment to the VR display;
evaluating, by the VR-embedded video application, VR inputs transmitted to the VR design application before the VR design application receives the VR inputs;
intercepting, by the VR-embedded video application, a first set of the VR inputs that interact with the VR-embedded widget in the composite 3D environment; and
determining, by the VR-embedded video application, not to intercept a second set of the VR inputs that do not interact with the VR-embedded widget in the composite 3D environment,
wherein the VR-embedded widget is configured to present at least one of an external two-dimensional (2D) screencast video or a three-dimensional (3D) simulation scene associated with the external 2D screencast video.

US Pat. No. 11,030,795

SYSTEMS AND METHODS FOR SOFT SHADOWING IN 3-D RENDERING CASTING MULTIPLE RAYS FROM RAY ORIGINS

Imagination Technologies ...

1. A machine-implemented method of graphics processing, comprising:
identifying visible surfaces of a scene for pixels of a frame of pixels;
determining origins for casting rays from the visible surfaces towards a light;
for one or more of the ray origins:
casting multiple test rays from the determined origin towards different points within the light;
determining whether each of the test rays are occluded from reaching the light; and
using the results of said determining whether the test rays are occluded from reaching the light to determine an extent of occlusion from the light for one or more pixels corresponding to the ray origin by:
determining a glancing ray which is the closest ray to an occlusion which is not occluded by the occlusion; and
using an angle between the determined glancing ray and a ray cast towards the centre of the light to determine the extent of occlusion from the light for said one or more pixels corresponding to the ray origin.
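The glancing-ray step can be sketched as follows, with test rays as 2-D unit vectors and their occlusion flags assumed to have been computed already; turning the returned angle into a normalised occlusion extent is left out, since the claim does not fix that mapping.

```python
import math

def glancing_angle(rays, occluded, centre_ray):
    """Find the 'glancing' ray: the unoccluded test ray nearest, in
    angle, to any occluded ray; return its angle to the ray cast towards
    the light centre. Rays are unit vectors, occluded is a list of
    per-ray flags."""
    def angle(u, v):
        dot = max(-1.0, min(1.0, sum(a * b for a, b in zip(u, v))))
        return math.acos(dot)

    occluded_rays = [r for r, occ in zip(rays, occluded) if occ]
    lit_rays = [r for r, occ in zip(rays, occluded) if not occ]
    if not occluded_rays:
        return 0.0            # fully lit
    if not lit_rays:
        return math.pi        # fully shadowed
    glancing = min(lit_rays,
                   key=lambda r: min(angle(r, o) for o in occluded_rays))
    return angle(glancing, centre_ray)
```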

US Pat. No. 11,030,794

IMPORTANCE SAMPLING FOR DETERMINING A LIGHT MAP

Imagination Technologies ...

1. A graphics processing unit configured to determine a bounce light map for use in rendering a scene, the graphics processing unit comprising:
processing logic configured to:
use an importance sampling technique to identify one or more positions within the scene based on values of corresponding elements of initial lighting indications which represent lighting at respective positions within the scene;
trace one or more sampling rays towards the one or more identified positions within the scene; and
determine a lighting value of the bounce light map using one or more results of tracing the one or more sampling rays.

US Pat. No. 11,030,793

STYLIZED IMAGE PAINTING

Snap Inc., Santa Monica,...

1. A stylized painting effect system for creating a stylized painting effect image, the system comprising:
an eyewear device including:
a frame having a temple connected to a lateral side of the frame; and
a depth-capturing camera configured to capture at least one of a left raw image or a right raw image;
an image display for presenting images, including an original image, wherein the original image is based on the left raw image, a left processed image, the right raw image, a right processed image, or combination thereof;
an image display driver coupled to the image display to control the image display to present the original image;
a user input device to receive mark-ups for the original image, a stylized painting effect selection, and a style selection from a user;
a memory and a processor coupled to the depth-capturing camera, the image display driver, the user input device, and the memory; and
programming in the memory, wherein execution of the programming by the processor configures the stylized painting effect system to perform functions, including functions to:
capture, via the depth-capturing camera, at least one of the left raw image or the right raw image;
present, via the image display, the original image;
receive, via the user input device, the mark-ups, the stylized painting effect selection, and the style selection from the user;
create at least one stylized painting effect image with a stylized painting effect scene;
apply the stylized painting effect image to the mark-ups in: (i) the left raw image or the left processed image to create a left stylized painting effect image, (ii) the right raw image or the right processed image to create a right stylized painting effect image, or (iii) combination thereof;
generate a stylized painting effect image having an appearance of a spatial movement or rotation around the stylized painting effect scene of the at least one stylized painting effect image, by blending together the left stylized painting effect image and the right stylized painting effect image; and
present, via the image display, the stylized painting effect image.

US Pat. No. 11,030,792

SYSTEM AND METHOD FOR PACKING SPARSE ARRAYS OF DATA WHILE PRESERVING ORDER

Parallel International Gm...

1. A method for packing stream outputs of a geometry shader into an output buffer, comprising:
generating, using vertices of primitives received from one or more geometry shaders, stream output data together with an index buffer, where each absent vertex is replaced with a primitive restart;
rebuilding the index buffer to a list format using T-vectors constructed for one-element ranges of the index buffer; and
unwrapping the index data of the rebuilt index buffer to a packed buffer; wherein
the packed buffer excludes incomplete primitives and cancelled primitives and contains only complete primitives.
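The unwrap step — producing a packed list-format buffer that keeps only complete primitives — can be sketched for a triangle strip with primitive-restart markers; the T-vector construction named in the claim is not reproduced, and the restart value and winding-order handling below are conventional assumptions.

```python
RESTART = 0xFFFFFFFF  # conventional primitive-restart index

def strip_to_list(indices):
    """Unwrap a triangle-strip index buffer containing primitive-restart
    markers into a packed triangle-list buffer; runs shorter than three
    indices (incomplete primitives) are dropped."""
    out, run = [], []
    for idx in indices + [RESTART]:       # trailing restart flushes last run
        if idx == RESTART:
            for k in range(len(run) - 2):
                a, b, c = run[k], run[k + 1], run[k + 2]
                # alternate winding order, as triangle strips do
                out += [a, c, b] if k % 2 else [a, b, c]
            run = []
        else:
            run.append(idx)
    return out
```

For example, a strip `[0, 1, 2, 3]` unwraps to two triangles, while a run of only two indices after a restart contributes nothing to the packed buffer.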

US Pat. No. 11,030,791

CENTROID SELECTION FOR VARIABLE RATE SHADING

Advanced Micro Devices, I...

1. A method for performing graphics rendering operations, the method comprising:
generating, by a processor, a partially covered fragment having a size that is larger than a size of a pixel of a render target for which the fragment is being processed;
identifying, by the processor, covered samples of the partially covered fragment;
identifying, by the processor, a closest sample of the covered samples to a center of the fragment;
setting, by the processor, as a centroid for evaluation of attributes of the fragment for a pixel shader stage, a position of the closest sample; and
shading, by the processor, the fragment based on the centroid.
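The centroid-selection steps reduce to a nearest-point search, sketched below with 2-D sample positions in fragment-relative coordinates (an illustrative representation of the coarse fragment's covered samples):

```python
def select_centroid(covered_samples, fragment_center):
    """Pick, as the attribute-evaluation centroid for the pixel shader
    stage, the covered sample position closest to the center of the
    partially covered coarse fragment."""
    cx, cy = fragment_center
    return min(covered_samples,
               key=lambda s: (s[0] - cx) ** 2 + (s[1] - cy) ** 2)

samples = [(0.1, 0.1), (0.6, 0.4), (0.9, 0.9)]
centroid = select_centroid(samples, (0.5, 0.5))
```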

US Pat. No. 11,030,790

APPARATUS AND METHOD FOR PERFORMING MOTION CAPTURE USING A RANDOM PATTERN ON CAPTURE SURFACES

Rearden Mova, LLC, Mount...

1. A system comprising:
a movie or video stored on a machine-readable medium with one or more scenes representing a rendered animated three-dimensional (3D) face;
at least one of the scenes including a rendering of a first plurality of 3D points on at least part of the rendered animated 3D face at a first plurality of time intervals correlated to a high resolution second plurality of 3D points on at least part of a surface of a performer's face at a second plurality of time intervals;
wherein the second plurality of 3D points on the surface of the performer's face were not identified by markers applied to the performer's face; and
wherein a processor automatically tracked the second plurality of 3D points.

US Pat. No. 11,030,789

ANIMATED CHAT PRESENCE

Snap Inc., Santa Monica,...

1. A method comprising:
receiving message content associated with a communication session at a client device, the message content including user profile data that defines a language preference, facial tracking data, and audio data;
transcribing the audio data to a first text string;
determining a first language of the message content based on the language preference;
determining a second language associated with the communication session;
translating the first text string to a second text string based on the second language;
generating an avatar based on the facial tracking data; and
causing display of a presentation of the message content at the client device, the presentation of the message content comprising a display of the avatar that includes the second text string.

US Pat. No. 11,030,788

VIRTUAL REALITY PRESENTATION OF BODY POSTURES OF AVATARS

LINDEN RESEARCH, INC., S...

1. A computing system, comprising:
a server system; and
a data storage device configured to store a three-dimensional model of a virtual reality world, including avatar models representing residents of the virtual reality world;
wherein the server system is configured to:
generate, from the three-dimensional model of the virtual reality world and the avatar models, a data stream to client devices that are connected to the server system via a computer network, the data stream providing views of the virtual reality world;
receive, from a client device, input data tracking position, orientation, and motion of a head of a user of the client device, wherein an avatar represents the user in the virtual reality world;
identify, in the input data, head motion patterns of the user representing head gestures;
predict, based on the patterns in the input data representing head gestures of the user, a body posture of an avatar of the user in the virtual reality world; and
present the predicted body posture of the avatar in the virtual reality world, wherein the body posture is predicted without the client device tracking relative positions or movements of portions of the user that represent a body posture of the user.

US Pat. No. 11,030,787

MOBILE-BASED CARTOGRAPHIC CONTROL OF DISPLAY CONTENT

Snap Inc., Santa Monica,...

1. A method comprising:
displaying a map on a display device of a client device;
displaying, within the map, a client device icon indicating a geographic location of the client device and a content icon indicating another geographic location on the map, the content icon encircled on the map by a visual perimeter element of an unlock area near the another geographic location in which the client device must be located in order to unlock content associated with the content icon;
receiving, by the client device, selection of the content icon to unlock the content associated with the content icon;
in response to the selection of the content icon, displaying a prompt to move the client device within the unlock area of the visual perimeter element;
displaying, on the client device, an updated map displaying the client device icon at a new geographic location that is within the unlock area of the visual perimeter element;
receiving, by the client device, another selection of the content icon to unlock the content associated with the content icon; and
in response to the another selection and the client device being in the unlock area of the visual perimeter element, displaying the content associated with the content icon on the display device of the client device.

US Pat. No. 11,030,786

HAIR STYLES SYSTEM FOR RENDERING HAIR STRANDS BASED ON HAIR SPLINE DATA

Snap Inc., Santa Monica,...

1. A method comprising:
receiving, by one or more processors, hair spline data comprising coordinates of a plurality of hair strands;
selecting, by the one or more processors, a first hair strand of the plurality of hair strands;
retrieving, by the one or more processors, coordinates of the first hair strand;
identifying, by the one or more processors, based on the respective coordinates of the plurality of hair strands, a second hair strand that is adjacent to the first hair strand, the identifying comprising determining that an angle between the first hair strand and the second hair strand is less than a threshold;
storing, by the one or more processors, a reference to the second hair strand in association with the coordinates of the first hair strand; and
generating, by the one or more processors, one or more additional hair strands between the first hair strand and the second hair strand based on the coordinates of the first hair strand and the reference to the second hair strand.
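The adjacency test in this claim — an angle between two strands below a threshold — can be sketched by comparing root-to-tip direction vectors; representing each strand by that single direction, and the 10-degree default threshold, are illustrative assumptions.

```python
import math

def strand_angle(a, b):
    """Angle (radians) between the root-to-tip direction vectors of two
    hair strands given as lists of (x, y, z) spline coordinates."""
    def direction(s):
        dx = [t - r for r, t in zip(s[0], s[-1])]
        n = math.sqrt(sum(d * d for d in dx))
        return [d / n for d in dx]

    da, db = direction(a), direction(b)
    dot = max(-1.0, min(1.0, sum(x * y for x, y in zip(da, db))))
    return math.acos(dot)

def adjacent(a, b, threshold=math.radians(10)):
    """Treat two strands as adjacent when the angle between them is
    below the threshold (hypothetical default of 10 degrees)."""
    return strand_angle(a, b) < threshold
```

Additional strands could then be generated between a strand and its stored adjacent neighbour, e.g. by interpolating their coordinates.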

US Pat. No. 11,030,785

DISPLAY DEVICE AND METHOD OF CONTROLLING SAME

CANON KABUSHIKI KAISHA, ...

1. A display device comprising:
a rear display unit configured to emit light by outputting an image;
a front display unit configured to display a displayed image by transmitting the light from the rear display unit;
an input unit configured to receive an instruction from a user; and
at least one processor and/or at least one circuit to perform the operations of:
switching a display mode between a wide viewing angle mode and a narrow viewing angle mode, based on the instruction inputted by the user;
controlling the front display unit so as to transmit the light based on first image data;
controlling the rear display unit so as to output a first image based on the first image data in a case that the display mode is the narrow viewing angle mode; and
controlling the rear display unit so as to output a second image based on the first image data in a case that the display mode is the wide viewing angle mode, wherein the second image is blurred more than the first image.

US Pat. No. 11,030,784

METHOD AND SYSTEM FOR PRESENTING A DIGITAL INFORMATION RELATED TO A REAL OBJECT

Apple Inc., Cupertino, C...

1. A method of presenting digital information related to a real object, comprising:determining a spatial relationship between a camera and a real object for which digital information is to be presented;
determining whether the spatial relationship indicates that a distance between the camera and the real object is below a threshold;
selecting a presentation mode from a plurality of presentation modes according to the spatial relationship, wherein the plurality of presentation modes comprises at least an augmented reality mode, and at least one alternative mode, wherein in response to determining that the spatial relationship indicates that the distance between the camera and the real object is below the threshold, the augmented reality mode is selected as the presentation mode; and
presenting at least one representation of the digital information using the selected presentation mode.
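The mode-selection rule of this claim reduces to a distance threshold. A minimal sketch, with the string labels and names as illustrative assumptions:

```python
def select_presentation_mode(distance_m, threshold_m):
    """AR mode when the camera-to-object distance is below the threshold,
    otherwise fall back to an alternative presentation mode."""
    return "augmented_reality" if distance_m < threshold_m else "alternative"
```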

US Pat. No. 11,030,783

HIDDEN SURFACE REMOVAL IN GRAPHICS PROCESSING SYSTEMS

Arm Limited, Cambridge (...

1. A method of operating a graphics processor, the graphics processor comprising:a rasteriser that rasterises input primitives to generate graphics fragments to be processed, each graphics fragment having one or more sampling points associated with it; and
a renderer that processes fragments generated by the rasteriser to generate output fragment data;
wherein the rasteriser, when it receives a primitive to be rasterised, for each of one or more patches representing respective different regions of a render output to be generated, tests the patch against the primitive to determine if the primitive at least partially covers the patch;
the graphics processor further comprising:
a patch early depth test circuit configured to perform an early depth test for a primitive in respect of a patch of a render output that the primitive has been found by the rasteriser at least partially to cover; and
a sample depth test circuit configured to perform depth tests for sampling positions that have been found to be covered by a primitive;
the method comprising, when processing primitives to generate a render output:
storing a per patch depth buffer for the render output, that stores for each of one or more patches representing respective different regions of the render output being generated, depth value information for the patch for use by the patch early depth test circuit when performing a patch early depth test for a primitive in respect of the patch; and
storing a per sample depth buffer for the render output, that stores a depth value for each of one or more sampling positions of the render output being generated for use by the sample depth test circuit when performing a depth test for a primitive in respect of a sampling position of the render output being generated;
the method further comprising:
the graphics processor stopping processing the render output, and when it does so:
writing the per sample depth values in the per sample depth buffer to storage so that those values can be restored when continuing processing of the render output, but discarding the per patch depth value information in the per patch depth buffer;
and
the graphics processor resuming processing of the render output; and when it does so:
loading the per sample depth buffer values written out to storage into a per sample depth buffer for use when continuing processing of the render output; and
using the loaded per sample depth buffer values to store a set of per patch depth value information in a per patch depth buffer for use by the patch early depth test circuit when performing patch early depth tests for primitives when continuing processing of the render output.

US Pat. No. 11,030,782

ACCURATELY GENERATING VIRTUAL TRY-ON IMAGES UTILIZING A UNIFIED NEURAL NETWORK FRAMEWORK

ADOBE INC., San Jose, CA...

6. A non-transitory computer readable medium comprising instructions that, when executed by at least one processor, cause a computing device to:determine, for a model digital image, coarse transformation parameters and fine transformation parameters for transforming a product digital image to fit the model digital image utilizing a coarse-to-fine warp neural network;
generate a fine warped product digital image by modifying the product digital image in accordance with the coarse transformation parameters and the fine transformation parameters;
generate a conditional segmentation mask indicating pixels of the model digital image to replace with the fine warped product digital image utilizing a conditional segmentation mask prediction neural network to process the product digital image and a set of digital image priors corresponding to a segmentation mask of the model digital image; and
utilize the fine warped product digital image to generate, in accordance with the conditional segmentation mask, a virtual try-on digital image comprising a depiction of a model from the model digital image by replacing pixels of the product digital image with the fine warped product digital image such that the model appears to be wearing a product from the product digital image.

US Pat. No. 11,030,781

SYSTEMS FOR COLLECTING, AGGREGATING, AND STORING DATA, GENERATING INTERACTIVE USER INTERFACES FOR ANALYZING DATA, AND GENERATING ALERTS BASED UPON COLLECTED DATA

Palantir Technologies Inc...

1. A computer system comprising:a non-transitory computer-readable storage medium storing program instructions; and
one or more computer processors configured to execute the program instructions to cause the computer system to:
access or receive sensor data from one or more sensors associated with physical assets, the sensor data comprising measurements over a time period;
using the sensor data, aggregate attribute values associated with a group of one or more physical assets to determine one or more aggregate attribute values for the time period; and
in response to a user selection of the group of the one or more physical assets, generate a first interactive user interface including:
indications of the one or more aggregate attribute values for the group for the time period, and
indications of attribute values for each of the one or more physical assets of the group for the time period.
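The aggregation step can be sketched with plain dictionaries. Using the mean as the aggregate function, and all names, are assumptions for illustration:

```python
from statistics import mean

def aggregate_group(sensor_data, group):
    """sensor_data maps asset id -> measurements over the time period.
    Returns per-asset attribute values and one aggregate value for the group."""
    per_asset = {asset: mean(sensor_data[asset]) for asset in group}
    return per_asset, mean(per_asset.values())
```

Both the per-asset values and the group aggregate would then feed the interactive user interface described in the claim.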

US Pat. No. 11,030,780

ULTRASOUND SPECKLE REDUCTION AND IMAGE RECONSTRUCTION USING DEEP LEARNING TECHNIQUES

The Board of Trustees of ...

1. A method for ultrasound image reconstruction using a convolutional neural network (CNN), the method comprising:(a) training the CNN with a dataset comprising simulated transducer array channel signals containing simulated speckle as inputs, and corresponding simulated speckle-free echogenicity estimates (B-mode images) as outputs; wherein the training uses a loss function involving a norm between estimated B-mode images and simulated speckle-free B-mode images; wherein the estimated B-mode images are estimated by the CNN from the simulated transducer array channel signals;
(b) measuring real-time RF signals taken directly from ultrasound transducer array elements prior to summation;
(c) preprocessing the measured real-time RF signals to apply time delays to focus the array at field points, and inputting the pre-processed measured real-time RF signals to the CNN;
(d) processing by the CNN the pre-processed measured real-time RF signals to produce as output an estimated real-time B-mode image with reduced speckle.

US Pat. No. 11,030,779

DEPTH-ENHANCED TOMOSYNTHESIS RECONSTRUCTION

KONINKLIJKE PHILIPS N.V.,...

1. An image processing system, comprising:a processor and memory configured to:
receive i) a 3D input image volume previously reconstructed from projection images of an imaged object acquired along different projection directions and ii) a specification of an image structure in the input volume;
form, based on said specification, a geometric surface 3D model for said structure in the 3D input image volume, the 3D model having a depth;
adapt, based on said 3D model, the input image volume to so form a 3D output image volume; and
reconstruct a new image volume based on said 3D output image volume.

US Pat. No. 11,030,778

METHODS AND APPARATUS FOR ENHANCING COLOR VISION AND QUANTIFYING COLOR INTERPRETATION

HEALTHY.IO LTD., Tel Avi...

1. A method comprising:capturing a plurality of digital color images of a reagent dipstick over a time period using a camera, the reagent dipstick including a color calibration bar and a plurality of reagent test pads exposed to a biological sample, wherein a color of a first reagent test pad of the plurality of reagent test pads changes over the time period in response to a concentration of a first analyte in the biological sample;
capturing a digital color image of a color chart using the camera, the color chart including a plurality of colors associated with concentration levels of analytes in the biological sample;
color correcting, using a processor, the captured plurality of digital color images of the reagent test pads to a lighting condition of the digital color image of the color chart based at least on the color calibration bar;
determining, using the processor, a gradient of color change over time of the first reagent test pad from the received plurality of color images of the reagent dipstick;
computing, using the processor, an expected gradient of color change over time for at least one test pad of the plurality of test pads from the captured digital image of the color chart; and
determining, using the processor, the concentration of the first analyte in the biological sample by comparing the determined gradient of color change of the first reagent test pad with the computed expected gradient of color change for the at least one test pad.
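The gradient comparison in this claim can be sketched as a least-squares slope of pad color over time, matched against expected gradients from the chart. The slope formulation and the nearest-match rule are assumptions, not the patent's exact method:

```python
def color_gradient(values, times):
    """Least-squares slope of a pad's color value over time."""
    n = len(times)
    mt, mv = sum(times) / n, sum(values) / n
    return (sum((t - mt) * (v - mv) for t, v in zip(times, values))
            / sum((t - mt) ** 2 for t in times))

def nearest_concentration(measured, expected_by_conc):
    """Pick the chart concentration whose expected gradient of color change
    is closest to the measured gradient."""
    return min(expected_by_conc, key=lambda c: abs(expected_by_conc[c] - measured))
```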

US Pat. No. 11,030,777

ADAPTIVE SUBBAND CODING FOR LIFTING TRANSFORM

Sony Group Corporation, ...

1. A method programmed in a non-transitory memory of a device comprising:multiplying a residual of a point cloud by a weight to generate a weighted residual;
quantizing the weighted residual to generate a quantized level;
dividing the quantized level by the weight to generate a reconstructed residual which is used for color compression of the point cloud;
dividing a plurality of lifting coefficients into a plurality of subbands; and
deriving a set of dead-zones for each subband for a set of color components, wherein the set of dead-zones includes dead-zones of Cb and Cr channels that are larger than a dead-zone of a Luma channel.
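The weight/quantize/divide round trip of the first three steps, with per-channel dead-zones in which chroma gets a wider zero region than luma, can be sketched as below. The step size, dead-zone values, and rounding rule are illustrative assumptions:

```python
DEAD_ZONE = {"Y": 0.5, "Cb": 1.0, "Cr": 1.0}  # Cb/Cr dead-zones larger than Luma

def reconstruct_residual(residual, weight, step, channel):
    """Multiply the residual by the weight, dead-zone quantize, then divide by
    the weight to obtain the reconstructed residual."""
    weighted = residual * weight
    if abs(weighted) <= DEAD_ZONE[channel]:
        quantized = 0.0                         # inside the dead-zone: coded as zero
    else:
        quantized = round(weighted / step) * step
    return quantized / weight
```

A small weighted residual survives quantization on the Luma channel but falls inside the wider chroma dead-zone.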

US Pat. No. 11,030,776

CALIBRATION OF A LIGHT-FIELD IMAGING SYSTEM

Molecular devices (Austri...

13. A light-field imaging system, comprising:a calibration target;
a light source to illuminate the calibration target;
an objective to collect light from the illuminated calibration target;
a microlens array downstream of the objective;
an image sensor to capture a light-field image of the calibration target; and
a computer configured to determine a total magnification and a microlens magnification from the light-field image,
wherein the calibration target includes at least one type of marker arranged to form a first periodic repeat and a second periodic repeat, wherein the computer is configured to determine the total magnification using the first periodic repeat, and wherein the computer is configured to determine the microlens magnification using the second periodic repeat.

US Pat. No. 11,030,775

MINIMAL USER INPUT VIDEO ANALYTICS SYSTEMS AND METHODS

FLIR Systems, Inc., Wils...

1. A surveillance camera system, comprising:an imaging sensor configured to generate a plurality of video image frames of a scene; and
a logic device communicatively coupled to the imaging sensor and configured to:
receive a user input indicating a location of a horizon;
perform a calibration to:
track an object captured across a first plurality of image locations in the plurality of video image frames;
determine an association between the first plurality of image locations and corresponding image sizes of the tracked object;
extrapolate and/or interpolate the association to obtain image sizes of the tracked object at a second plurality of image locations different from the first plurality of image locations, wherein the image sizes of the tracked object at the second plurality of image locations are further based on the horizon, wherein extrapolating and/or interpolating comprises determining a plurality of additional associations between changes of image sizes and changes of image locations based on the association, and wherein the determining the plurality of additional associations comprises associating no change of image size with a change of image locations parallel to the horizon of the scene;
determine a correlation between the first and second pluralities of image locations and the corresponding image sizes of the tracked object in the plurality of video image frames;
determine a physical size of the tracked object;
determine a second correlation between the image sizes and the physical size; and
perform video analytics based on the second correlation and the correlation between the image locations and the corresponding image sizes of the object determined by the calibration, wherein performing the video analytics comprises displaying, on an image frame, representations of the association between the first plurality of image locations and corresponding image sizes of the tracked object and/or representations associated with the image sizes of the tracked object at the second plurality of image locations.

US Pat. No. 11,030,774

VEHICLE OBJECT TRACKING

FORD GLOBAL TECHNOLOGIES,...

1. A method, comprising:determining an object location prediction based on a video stream data, wherein the object location prediction is based on processing cropped typicality and eccentricity data analytics (TEDA) data with a neural network by determining a first eccentricity image based on a per pixel average and a per pixel variance over a moving window of k video frames;
cropping the TEDA data based on a three-channel output image including a grayscale image, a positive eccentricity image determined by selecting pixels from the first eccentricity image when the pixels are greater than a per-pixel mean, and a negative eccentricity image determined by selecting pixels from the first eccentricity image when the pixels are less than the per-pixel mean; and
providing the object location prediction to a vehicle based on a location of the vehicle.
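The per-pixel statistics over a k-frame window and the positive/negative eccentricity split can be sketched as below. The normalized squared deviation used here is a simplified stand-in for the TEDA eccentricity, and all names are assumptions:

```python
import numpy as np

def eccentricity_channels(frames):
    """frames: (k, H, W) moving window of grayscale frames. Returns (positive,
    negative) eccentricity images for the latest frame, split about the
    per-pixel mean as in the claim."""
    mean = frames.mean(axis=0)
    var = frames.var(axis=0) + 1e-8          # avoid division by zero
    ecc = (frames[-1] - mean) ** 2 / var     # simplified eccentricity proxy
    positive = np.where(frames[-1] > mean, ecc, 0.0)
    negative = np.where(frames[-1] < mean, ecc, 0.0)
    return positive, negative
```

Together with the grayscale frame these two images would form the three-channel input the claim crops and feeds to the neural network.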

US Pat. No. 11,030,773

HAND TRACKING BASED ON ARTICULATED DISTANCE FIELD

Google LLC, Mountain Vie...

1. A method comprising:capturing, at a depth camera, a depth image of at least one hand of a user, the depth image comprising a plurality of pixels; and
identifying a current pose of the at least one hand by fitting an implicit surface model of the hand to a subset of the plurality of pixels, the fitting comprising:
interpolating a dense grid of precomputed signed distances to define a first signed distance function;
volumetrically deforming the first signed distance function based on a skinned tetrahedral mesh associated with a candidate pose to define an articulated signed distance field; and
estimating the current pose of the hand based on the articulated signed distance field.
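The first fitting step, interpolating a dense grid of precomputed signed distances, can be sketched in one dimension; the real model interpolates a 3-D grid, so this reduction and the names are illustrative only:

```python
def interp_signed_distance(grid, x):
    """Linearly interpolate a dense 1-D grid of precomputed signed distances at
    a continuous coordinate x, defining a continuous signed distance function."""
    i = max(0, min(len(grid) - 2, int(x)))   # clamp to a valid grid cell
    t = x - i
    return grid[i] * (1 - t) + grid[i + 1] * t
```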

US Pat. No. 11,030,772

POSE SYNTHESIS

Microsoft Technology Lice...

1. Enacted on a computing system, a method for synthesizing a novel pose of an object, the method comprising:receiving a reference image of an object corresponding to an original viewpoint;
translating the reference image of the object into a reference depth map of the object;
synthesizing a new depth map of the object corresponding to a new viewpoint; and
inputting the reference image of the object and the new depth map of the object into an identity recovery model to generate a new image of the object from the new viewpoint.

US Pat. No. 11,030,771

INFORMATION PROCESSING APPARATUS AND IMAGE GENERATING METHOD

SONY INTERACTIVE ENTERTAI...

1. An information processing apparatus comprising: a detecting section configured to detect an attitude of a head-mounted display device worn on a head of a user;
a visual line direction determining section configured to determine a visual line direction in accordance with the attitude of the head-mounted display device detected by the detecting section;
an image generating section configured to generate an image based on the determined visual line direction; and
an image providing section configured to provide the head-mounted display device with the generated image,
wherein the visual line direction determining section determines the visual line direction in such a manner that a rotation angle of the head-mounted display device detected relative to a horizontal reference direction is inverted, allowing the user to view the generated display image behind the user at a same height.
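One plausible reading of the inverted-rotation rule, reduced to yaw in degrees: the 180° offset, the default reference direction, and the names are assumptions, not taken from the claim.

```python
def rear_view_yaw(head_yaw_deg, reference_yaw_deg=0.0):
    """Invert the detected rotation relative to the horizontal reference so the
    rendered view lies behind the user at the same height: turning the head
    +30 deg pans the rear view from 180 deg to 150 deg."""
    relative = head_yaw_deg - reference_yaw_deg
    return (reference_yaw_deg + 180.0 - relative) % 360.0
```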

US Pat. No. 11,030,770

MARKER AND POSTURE ESTIMATING METHOD USING MARKER

National Institute of Adv...

1. A marker, comprising:a two-dimensional pattern code;
a first posture detection pattern emitting different light depending on an observation direction of the two-dimensional pattern code around a first axis on a two-dimensional plane formed by the two-dimensional pattern code;
wherein the first posture detection pattern emits mutually-different lights to two spaces divided by a plane including a line perpendicular to the two-dimensional plane and the first axis.

US Pat. No. 11,030,769

METHODS AND APPARATUS TO PERFORM IMAGE ANALYSES IN A COMPUTING ENVIRONMENT

Nielsen Consumer LLC, Ne...

1. An apparatus to classify a retail product tag, the apparatus comprising:a feature extractor to generate a first image descriptor based on a first image of a first retail product tag corresponding to a first retailer category, the first image descriptor representative of one or more visual features of the first retail product tag;
a feature descriptor generator to generate a feature descriptor corresponding to the first retail product tag based on the first image descriptor and a first category signature corresponding to the first retailer category, wherein the feature descriptor is at least twice a bit length of the first image descriptor; and
a classifier to:
generate a first probability value corresponding to a first type of promotional product tag and a second probability value corresponding to a second type of promotional product tag based on the feature descriptor; and
determine whether the first retail product tag corresponds to the first type of promotional product tag or the second type of promotional product tag based on the first probability value and the second probability value.

US Pat. No. 11,030,768

IMAGE PROCESSING FOR OCCLUDED ITEM RECOGNITION

NCR Corporation, Atlanta...

1. A method, comprising:receiving an image depicting an unknown item that is occluded in the image;
preprocessing the image and producing a modified image that includes item pixels associated with the unknown item, wherein the preprocessing further includes clustering the item pixels of the image by color and blacking out a first color associated with a tracked person in the modified image;
identifying a known item identifier for the unknown item from the item pixels of the modified image; and
providing the known item identifier.

US Pat. No. 11,030,767

IMAGING APPARATUS AND IMAGING SYSTEM

FANUC CORPORATION, Yaman...

1. An imaging apparatus for performing processing of machine learning related to estimation of a distance image closer to reality related to an object than a distance image related to the object captured by an imaging sensor from the distance image, the imaging apparatus comprising:a data acquisition unit for acquiring distance image data related to the object; and
a preprocessing unit for creating input data from the distance image data related to the object,
wherein
processing of machine learning for estimating distance image data close to reality related to the object from the distance image data related to the object is performed using the input data,
the preprocessing unit creates teacher data in which the distance image data related to the object is input data and the distance image data close to reality related to the object is output data, and
the imaging apparatus further comprises
a learning unit for performing supervised learning related to the processing of the machine learning based on the teacher data and generating a learned model for estimating the distance image data close to reality related to the object from the distance image data related to the object.

US Pat. No. 11,030,766

AUTOMATED MANIPULATION OF TRANSPARENT VESSELS

Dishcraft Robotics, Inc.,...

1. A method comprising:receiving, by a computer system, one or more images from one or more cameras having a surface in the field of view of each camera of the one or more cameras, the one or more images being only two-dimensional images;
identifying, by the computer system, a transparent object in the one or more images and using only the one or more images without use of either of a stereoscopic camera and a lidar sensor;
determining, by the computer system, a pose of the transparent object using only the one or more images, the pose including one or more dimensions of a six-dimensional (6D) pose of the transparent object, the 6D pose including three positional dimensions and three angular dimensions;
determining, by the computer system, grasping configuration parameters according to the pose;
invoking, by the computer system, grasping of the transparent object by an end effector coupled to an actuator according to the grasping configuration parameters, the actuator being both a positional and rotational actuator;
wherein identifying, by the computer system, the transparent object in the one or more images comprises:
inputting, by the computer system, the one or more images to an object configuration classifier that assigns the transparent object to a selected category of a finite number of object configuration categories, the classifier not determining dimensions of the 6D pose of the transparent object;
wherein the finite number of object configuration categories includes a first subset of categories and a second subset of categories;
wherein the first subset of the plurality of categories includes:
(i) the one or more transparent objects include a vessel positioned upright on the surface;
(ii) the one or more transparent objects include a vessel lying on its side on the surface;
wherein the second subset includes:
(v) the one or more transparent objects include one or more vessels that are neither upright on the surface nor lying on their sides on the surface;
wherein, when the selected category is one of the first subset of categories, the method further comprises:
in response to determining that the selected category is in the first subset of categories, determining, by the computer system, the grasping configuration parameters according to less than all of the dimensions of the 6D pose of the transparent object;
wherein, when the selected category is one of the second subset of categories, the method further comprises:
in response to determining that the selected category is in the second subset of categories, performing, by the computer system:
determining the 6D pose of the transparent object; and
determining the grasping configuration parameters according to the 6D pose.

US Pat. No. 11,030,765

PREDICTION METHOD FOR HEALTHY RADIUS OF BLOOD VESSEL PATH, PREDICTION METHOD FOR CANDIDATE STENOSIS OF BLOOD VESSEL PATH, AND BLOOD VESSEL STENOSIS DEGREE PREDICTION DEVICE

BEIJING KEYA MEDICAL TECH...

1. A computer-implemented prediction method for a candidate stenosis of a blood vessel path, the method comprising:extracting a blood vessel path and its centerline based on the image of the blood vessel;
determining a candidate stenosis for each blood vessel path by:
obtaining a blood vessel radius of the blood vessel path;
detecting, by a processor, a radius peak and a radius valley of the blood vessel radius of the blood vessel path;
predicting a reference healthy radius of the blood vessel path by performing a linear regression on the radius peak in the blood vessel radius;
replacing the radius peak among the radius peaks in the blood vessel radius that is lower than the corresponding reference healthy radius with the corresponding reference healthy radius;
predicting, by the processor, the healthy radius of the blood vessel path by performing a quadratic regression on the radius peak of the blood vessel radius; and
determining the candidate stenosis based on the radius valley and the healthy radius of the blood vessel path;
setting a range of the candidate stenosis for each blood vessel path based on the determined candidate stenosis;
obtaining image blocks along the centerline within the range of candidate stenosis for each of the blood vessel path; and
based on the obtained image blocks, determining the degree of stenosis for each blood vessel path by using a trained learning network comprising a convolutional neural network and a recurrent neural network.
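The two-stage healthy-radius fit in this claim can be sketched with NumPy polynomial fits. The valley-flagging ratio and all names are illustrative assumptions, not from the patent:

```python
import numpy as np

def healthy_radius(positions, peak_pos, peak_radii):
    """A linear fit to the radius peaks gives a reference healthy radius; peaks
    below their reference are replaced by it; a quadratic fit to the adjusted
    peaks then gives the healthy radius along the blood vessel path."""
    reference = np.polyval(np.polyfit(peak_pos, peak_radii, 1), peak_pos)
    adjusted = np.maximum(peak_radii, reference)
    return np.polyval(np.polyfit(peak_pos, adjusted, 2), positions)

def candidate_stenoses(valley_radii, healthy_at_valleys, ratio=0.5):
    """Flag radius valleys well below the healthy radius (the 0.5 ratio is an
    illustrative threshold)."""
    return [v / h < ratio for v, h in zip(valley_radii, healthy_at_valleys)]
```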

US Pat. No. 11,030,764

METHOD AND SYSTEM FOR TRAILER SIZE ESTIMATING AND MONITORING

DENSO INTERNATIONAL AMERI...

1. A vehicle system for estimating a trailer size, comprising:a plurality of sensors arranged on a vehicle and configured to detect objects external to the vehicle and provide trailer data indicative of a trailer location behind a vehicle;
a memory configured to maintain a virtual grid including a plurality of cells;
a controller in communication with the sensors and memory and configured to:
determine, based on the trailer data received within a predefined amount of time, an occupancy frequency for each of the plurality of cells, the occupancy frequency being incremented each time an object is detected within the respective cell;
determine a threshold distribution based on the occupancy frequency of each cell;
determine a trailer size based on the cells having an occupancy frequency exceeding the threshold distribution;
receive subsequent trailer data; and
determine, based on the subsequent trailer data, a subsequent occupancy frequency for each cell over a subsequent period of time subsequent to the predefined amount of time.
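The occupancy-grid portion of this claim can be sketched as below. The fixed frequency threshold (the claim derives a threshold distribution), the cell size, and the names are illustrative assumptions:

```python
from collections import Counter

def trailer_cells(detections, threshold):
    """detections: (row, col) grid cells hit over the predefined window. A
    cell's occupancy frequency is how often an object was detected in it;
    cells whose frequency exceeds the threshold are taken as trailer extent."""
    frequency = Counter(detections)
    return {cell for cell, count in frequency.items() if count > threshold}

def trailer_size(cells, cell_len_m=0.5):
    """Bounding extent (length, width) in metres, assuming square cells."""
    rows = [r for r, _ in cells]
    cols = [c for _, c in cells]
    return ((max(rows) - min(rows) + 1) * cell_len_m,
            (max(cols) - min(cols) + 1) * cell_len_m)
```

A cell hit only once (e.g. a passing pedestrian) falls below the threshold and is excluded from the size estimate.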

US Pat. No. 11,030,763

SYSTEM AND METHOD FOR IDENTIFYING ITEMS

Mashgin Inc., Palo Alto,...

1. A method for item identification, comprising:determining an image set of a set of items, wherein the image set is captured by a set of cameras;
generating a point cloud for the set of items;
generating a height map of the items using the point cloud;
determining a region mask for each item based on the height map using a segmentation classifier;
generating a coarse mesh for each item using the respective region mask;
determining an image segment set for each item by projecting the respective coarse mesh into each image of the image set; and
determining an item identifier for each item using the respective image segment set.

US Pat. No. 11,030,762

METHOD AND APPARATUS FOR IMPROVING 3D IMAGE DEPTH INFORMATION AND UNMANNED AERIAL VEHICLE

AUTEL ROBOTICS CO., LTD.,...

1. A method for improving 3D image depth information, wherein the method comprises:acquiring a raw 3D image, performing first-time exposure on the raw 3D image, and generating a first exposure image;
acquiring an effective pixel point in the first exposure image, and if a quantity of effective pixel points meets a preset condition, extracting the effective pixel point in the first exposure image; otherwise, performing second-time exposure on the raw 3D image, and generating a second exposure image; and
if a quantity of effective pixel points in the second exposure image meets a preset condition, extracting the effective pixel point in the second exposure image; otherwise, continuously performing exposure and generating a corresponding exposure image, determining whether the quantity of effective pixel points of the exposure image meets the preset condition until an exposure time reaches an exposure time threshold or a quantity of effective pixel points in the corresponding exposure image generated through exposure meets the preset condition, and extracting an effective pixel point in the exposure image; and
demarcating and calibrating the effective pixel point, and generating a 3D depth information map;
wherein the acquiring an effective pixel point in the first exposure image specifically comprises:
acquiring the pixel point of the first exposure image, and determining whether the pixel point meets a preset signal quality parameter; and
if the pixel point meets the preset signal quality parameter, marking the pixel point as an effective pixel point, and acquiring all effective pixel points in the first exposure image.
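The re-exposure loop of this claim can be sketched as follows. `expose(t)` returning a list of pixel signals, the linear exposure-time schedule, and all names are illustrative assumptions:

```python
def capture_effective_pixels(expose, quality_ok, needed, t_max, t_step):
    """Expose, count effective pixels (those meeting the preset signal quality
    parameter), and re-expose with a longer time until enough are found or the
    exposure-time threshold is reached."""
    t = t_step
    while t <= t_max:
        frame = expose(t)
        effective = [p for p in frame if quality_ok(p)]
        if len(effective) >= needed:
            return effective          # used to build the 3D depth information map
        t += t_step
    return None                       # time limit hit without enough good pixels
```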

US Pat. No. 11,030,761

INFORMATION PROCESSING DEVICE, IMAGING DEVICE, APPARATUS CONTROL SYSTEM, MOVABLE BODY, INFORMATION PROCESSING METHOD, AND COMPUTER PROGRAM PRODUCT

Ricoh Company, Ltd., Tok...

1. An information processing device, comprising:processing circuitry configured to
set an initial object region within image information obtained by capturing an imaging range, the initial object region corresponding to an object existing in the imaging range;
acquire luminance information indicating luminance in the imaging range; and
adjust the initial object region to obtain an adjusted object region based on the acquired luminance information within a luminance detection region that is set in a lower part of the initial object region, wherein the processing circuitry is further configured to
acquire distance information indicating a distance between the object and an apparatus equipped with a stereo camera that acquires the image information and generates look-down view information based on the distance information, wherein the distance information is disparity information,
set the initial object region based on the look-down view information,
set the initial object region in a disparity image indicating a distribution of disparities in the imaging range based on the disparity information,
generate U map data indicating a U map, in which a combination of a first axial position, a second axial position, and a disparity in the disparity image is converted to two-dimensional histogram information, based on the disparity information,
generate real U map data indicating a real U map, in which a first axis in the U map is converted to an actual distance and the disparity on a second axis is converted to a decimated disparity value at a decimation rate corresponding to the actual distance, based on the U map data, and
detect an isolated region corresponding to the object in the real U map, and set the initial object region based on the isolated region.

US Pat. No. 11,030,760

IMAGE PROCESSING DEVICE, RANGING DEVICE AND METHOD

KABUSHIKI KAISHA TOSHIBA,...

1. An image processing device comprising:storage which stores a statistical model generated by learning bokeh produced in a first image affected by aberration of an optical system, the bokeh changing nonlinearly in accordance with a distance to a subject in the first image, and
a processor which
obtains a second image affected by the aberration of the optical system, and
inputs the second image to the statistical model and obtains distance information indicating a distance to a subject in the second image.

US Pat. No. 11,030,759

METHOD FOR CONFIDENT REGISTRATION-BASED NON-UNIFORMITY CORRECTION USING SPATIO-TEMPORAL UPDATE MASK

ASELSAN ELEKTRONIK SANAYI...

1. A method for a scene-based non-uniformity correction to achieve a fixed-pattern noise reduction and eliminate ghosting artifacts in an infrared imagery, comprising the steps of:assessing an input infrared image frame for scene detail for registrations to prevent false registrations that originated from low-detail scenes,
calculating horizontal and vertical translations between the input infrared image frames to find a shift,
introducing a scene-adaptive registration quality metric to eliminate erroneous parameter updates resulting from unreliable registrations,
applying a Gaussian mixture model (GMM)-based temporal consistency restriction to mask out unstable updates in non-uniformity correction parameters;
wherein the horizontal and vertical translations between the input infrared image frames are calculated by using 1-D horizontal and vertical projections of edge maps generated from original input infrared image frames using an edge extraction filter and matching 1-D projection vectors using a cross-correlation;
wherein the 1-D horizontal and vertical projections are calculated with the equations:

wherein p_n^x(j) is the horizontal projection vector and p_n^y(i) is the vertical projection vector;
where En represents an edge image calculated as:

where x_n is the true response for the n-th input infrared image frame and h is an edge filter of size r×r used in a scene detail calculation.
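The registration step described above — 1-D projections of edge maps matched by cross-correlation — can be sketched as follows. This is a minimal illustration, not the patented method: the edge extraction filter, normalization, and sign convention of the recovered shift are all assumptions.

```python
import numpy as np

def horizontal_projection(edge: np.ndarray) -> np.ndarray:
    # Collapse the 2-D edge map to 1-D: sum edge magnitudes over rows,
    # giving one value per column.
    return edge.sum(axis=0)

def estimate_shift(proj_a: np.ndarray, proj_b: np.ndarray) -> int:
    # Full cross-correlation of the zero-meaned projection vectors;
    # the offset of the peak gives the horizontal translation.
    corr = np.correlate(proj_a - proj_a.mean(),
                        proj_b - proj_b.mean(), mode="full")
    return int(np.argmax(corr)) - (len(proj_b) - 1)
```

A negative return value means the feature in the first frame lies to the left of its position in the second frame; the vertical shift is estimated the same way from column-wise projections.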

US Pat. No. 11,030,758

METHOD AND APPARATUS FOR REGISTERING IMAGES OF HISTOLOGICAL SECTIONS

Strateos, Inc., San Fran...

1. A method, comprising:digitally downsampling a set of full-resolution tissue section images to create a set of downsampled tissue section images;
calculating one or more transform matrices that register the downsampled tissue section images;
scaling the one or more transform matrices to apply to the set of full-resolution tissue section images; and
registering the set of full-resolution tissue section images by applying the scaled one or more transform matrices to the set of full-resolution tissue section images.
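The scaling step in the claim has a standard form for homogeneous transforms: a matrix T estimated between images downsampled by a factor s is lifted to full resolution by conjugating with the scaling matrix, S·T·S⁻¹. A minimal sketch (assuming 3×3 homogeneous 2-D transforms):

```python
import numpy as np

def scale_transform(T: np.ndarray, factor: float) -> np.ndarray:
    """Lift a 3x3 homogeneous transform estimated on images downsampled
    by `factor` so it applies to the full-resolution images:
    T_full = S @ T @ inv(S), with S = diag(factor, factor, 1)."""
    S = np.diag([factor, factor, 1.0])
    return S @ T @ np.linalg.inv(S)
```

For a pure translation this multiplies the offset by the factor while leaving the rotation/shear part unchanged, which is exactly the behavior needed to reuse a downsampled-space registration at full resolution.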

US Pat. No. 11,030,757

QUEUE ANALYZING METHOD AND IMAGE MONITORING APPARATUS

VIVOTEK INC., New Taipei...

1. A queue analyzing method of determining whether a rear object belongs to a queue of a front object, the queue analyzing method comprising:computing an angle difference and an interval between the rear object and the front object;
transforming an original interval threshold into an amended interval threshold via the angle difference;
comparing the interval with the amended interval threshold; and
determining the rear object and the front object belong to the same queue when the interval is smaller than the amended interval threshold.
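The claim leaves the mapping from angle difference to amended threshold unspecified. A sketch with one plausible rule — shrinking the allowed interval linearly as the rear object deviates from the queue direction — purely to illustrate the comparison flow:

```python
def amended_threshold(original: float, angle_diff_deg: float,
                      max_angle: float = 90.0) -> float:
    # Hypothetical amendment rule: the claim only says the original
    # threshold is transformed "via the angle difference"; here we
    # scale it down linearly toward zero at max_angle.
    scale = max(0.0, 1.0 - angle_diff_deg / max_angle)
    return original * scale

def same_queue(interval: float, original: float,
               angle_diff_deg: float) -> bool:
    # The claim's test: same queue when the measured interval is
    # smaller than the amended threshold.
    return interval < amended_threshold(original, angle_diff_deg)
```

Under this rule two objects one unit apart count as the same queue when aligned, but not when the rear object sits at a large angle off the queue axis.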

US Pat. No. 11,030,756

SYSTEM AND METHOD FOR POSITION TRACKING USING EDGE COMPUTING

7-Eleven, Inc., Irving, ...

1. A system comprising:an array of cameras positioned above a space, wherein:
each camera of the array of cameras is operatively coupled with a camera client from an array of camera clients;
each camera of the array of cameras is configured to capture a video of a portion of the space, the space containing a person;
the array of camera clients operably coupled with the array of cameras; wherein:
a first camera client of the array of camera clients is operably coupled with a first camera and configured to:
receive a first plurality of frames of a first video from the first camera, wherein each frame of the first plurality of frames shows the person within the space, the first plurality of frames comprises a first plurality of color frames and a first plurality of depth frames, wherein:
the first plurality of color frames corresponds to visual colors of objects in the space; and
the first plurality of depth frames corresponds to distances of objects in the space from the first camera;
generate a timestamp when each corresponding color and depth frame is received by the first camera client;
send the first plurality of frames labeled with one or more corresponding timestamps and an identifier number of the first camera client to a first server from among a plurality of cluster servers;
generate a first plurality of tracks by performing a local position tracking of the person in the first plurality of depth frames;
for a first depth frame of the first plurality of depth frames, generate a first track of the first plurality of tracks by:
detecting a first contour associated with the person;
determining, based on pixel coordinates of the first contour, a first bounding area around the person shown in the first depth frame;
determining, based on the first bounding area, first coordinates of the person in the first depth frame; and
associating a first tracking identification to the person, wherein the first tracking identification is linked to historical detections associated with the person, wherein the historical detections associated with the person comprise at least one of a contour, a bounding area, and a segmentation mask associated with the person;
for a second depth frame of the first plurality of depth frames, generate a second track of the first plurality of tracks by:
detecting a second contour associated with the person;
determining, based on pixel coordinates of the second contour, a second bounding area around the person shown in the second depth frame;
determining, based on the second bounding area, second coordinates of the person in the second depth frame;
determining whether the second bounding area corresponds to the first bounding area; and
in response to determining that the second bounding area corresponds to the first bounding area, associating the first tracking identification to the person;
send the first plurality of tracks labeled with one or more corresponding timestamps, the identifier number of the first camera, the historical detections associated with the person, and the first tracking identification associated with the person to a second server from among the plurality of cluster servers;
a second camera client of the array of camera clients is operably coupled with a second camera and separate from the first camera client, the second camera client configured to:
receive a second plurality of frames of a second video from the second camera, wherein each frame of the second plurality of frames shows the person within the space, the second plurality of frames comprises a second plurality of color frames and a second plurality of depth frames, wherein:
the second plurality of color frames corresponds to visual colors of objects in the space; and
the second plurality of depth frames corresponds to distances of objects in the space from the second camera;
generate a timestamp when each corresponding color and depth frame is received by the second camera client;
send the second plurality of frames labeled with one or more corresponding timestamps and an identifier number of the second camera to the first server from among the plurality of cluster servers;
generate a second plurality of tracks by performing a local position tracking of the person in the second plurality of depth frames;
for a third depth frame of the second plurality of depth frames, generate a third track of the second plurality of tracks by:
detecting a third contour associated with the person;
determining, based on pixel coordinates of the third contour, a third bounding area around the person shown in the third depth frame;
determining, based on the third bounding area, third coordinates of the person in the third depth frame; and
associating a second tracking identification to the person, wherein the second tracking identification is linked to the historical detections associated with the person;
for a fourth depth frame of the second plurality of depth frames, generate a fourth track of the second plurality of tracks by:
detecting a fourth contour associated with the person;
determining, based on pixel coordinates of the fourth contour, a fourth bounding area around the person shown in the fourth depth frame;
determining, based on the fourth bounding area, fourth coordinates of the person in the fourth depth frame;
determining whether the fourth bounding area corresponds to the third bounding area; and
in response to determining that the fourth bounding area corresponds to the third bounding area, associating the second tracking identification to the person;
send the second plurality of tracks labeled with one or more corresponding timestamps, the identifier number of the second camera, the historical detections associated with the person, and the second tracking identification associated with the person to the second server from among the plurality of cluster servers; and
each server from among the plurality of cluster servers configured to:
receive the first plurality of frames and the first plurality of tracks from the first camera client;
receive the second plurality of frames and the second plurality of tracks from the second camera client;
store the first and second plurality of frames such that a particular frame from the first and second plurality of frames is retrievable using one or more corresponding labels comprising an identifier number of a camera associated with the particular frame and a timestamp associated with the particular frame; and
store the first and second plurality of tracks such that a particular track from the first and second plurality of tracks is retrievable using one or more corresponding labels comprising an identifier number of a camera associated with the particular track, a timestamp associated with the particular track, a particular historical detection associated with a person detected in the particular track, and a particular tracking identification detected in the particular track.
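The per-frame tracking core of the claim — contour pixels to bounding area to a correspondence test that decides whether to reuse a tracking identification — can be sketched as below. The claim does not say how "corresponds to" is decided; intersection-over-union with a threshold is one common choice and is an assumption here.

```python
import numpy as np

def bounding_area(contour_pixels: np.ndarray):
    # contour_pixels: (N, 2) array of (row, col) pixel coordinates.
    rmin, cmin = contour_pixels.min(axis=0)
    rmax, cmax = contour_pixels.max(axis=0)
    return (int(rmin), int(cmin), int(rmax), int(cmax))

def iou(a, b) -> float:
    # Intersection-over-union of two inclusive (r0, c0, r1, c1) boxes.
    r0, c0 = max(a[0], b[0]), max(a[1], b[1])
    r1, c1 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, r1 - r0 + 1) * max(0, c1 - c0 + 1)
    area = lambda x: (x[2] - x[0] + 1) * (x[3] - x[1] + 1)
    return inter / (area(a) + area(b) - inter)

def corresponds(a, b, thresh: float = 0.5) -> bool:
    # Hypothetical correspondence test: keep the same tracking ID when
    # the boxes from consecutive depth frames overlap strongly enough.
    return iou(a, b) >= thresh
```

When `corresponds` holds between a new bounding area and the previous one, the earlier tracking identification is re-associated with the person, as in the claim's second- and fourth-frame branches.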

US Pat. No. 11,030,755

MULTI-SPATIAL SCALE ANALYTICS

CISCO TECHNOLOGY, INC., ...

1. A method of object detection, the method comprising:generating one or more object trackers for tracking at least one object detected from one or more images;
generating one or more blobs for the at least one object based on tracking motion associated with the at least one object from the one or more images;
generating one or more tracklets for the at least one object based on associating the one or more object trackers and the one or more blobs, the one or more tracklets including one or more scales of object tracking data for the at least one object;
determining one or more uncertainty metrics based on the one or more object trackers and an embedding of the one or more tracklets; and
generating a training module for tracking the at least one object using the embedding and the one or more uncertainty metrics.

US Pat. No. 11,030,754

COMPUTER IMPLEMENTED PLATFORM, SOFTWARE, AND METHOD FOR DRAWING OR PREVIEW OF VIRTUAL IMAGES ON A REAL WORLD OBJECTS USING AUGMENTED REALITY

Sketchar, UAB, Vilnius (...

1. A computer-implemented method for drawing using augmented reality, wherein the method is implemented by a processor executing a marker-less tracking algorithm stored in a memory of a mobile computing device, the method comprising:detecting, by a page detector, an image of a drawing area;
initializing a marker-less tracker, wherein said initializing comprises:
(a) capturing, via the page detector, a frame of the drawing area, (b) displaying, via a graphical user interface (GUI) of the mobile computing device, the frame of the drawing area, and (c) uniformly distributing template patches over the frame of the drawing area, wherein the template patch is a fragment of texture of a template image used for surface tracking; and
executing template patches tracking, wherein a perspective transformation of the template image to the frame as a current image is evaluated in video streaming on the GUI of the mobile computing device;
updating an initial template patch with an initial descriptor to an updated template patch with a new descriptor;
monitoring the updated template patch to validate it; and
responsive to a failure to validate the updated patch, switching back to the initial template patch with the initial descriptor.

US Pat. No. 11,030,753

IMAGE SEGMENTATION AND MODIFICATION OF A VIDEO STREAM

Snap Inc., Santa Monica,...

1. A method comprising:determining, using one or more processors of a client device, an approximate location of an object of interest within a video stream comprising a first set of images and a second set of images;
identifying, by the one or more processors, an area of interest comprising a plurality of pixels within the one or more images of the first set of images, the area of interest being a portion of the one or more images encompassing the approximate location of the object of interest;
generating, by the client device, a binarization matrix by performing operations comprising:
for each pixel of the plurality of pixels within the area of interest:
retrieving a set of color values associated with the pixel;
determining a binary value for the pixel by comparing a first value of a first portion of the set of color values with a second value of a second portion of the set of color values;
storing the binary value for the pixel in the binarization matrix;
modifying each pixel in the plurality of pixels within the area of interest using the binarization matrix to create a binarized area of interest;
identifying, by the one or more processors, a first set of pixels and a second set of pixels within the binarized area of interest; and
modifying, by the one or more processors, a color value for the first set of pixels within the second set of images of the video stream.
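The claim's binarization matrix compares a "first portion" of each pixel's color values with a "second portion" without fixing the split. A sketch assuming one plausible split — the red channel against the mean of the green and blue channels:

```python
import numpy as np

def binarization_matrix(area: np.ndarray) -> np.ndarray:
    """area: (H, W, 3) RGB block covering the area of interest.
    Hypothetical reading of the claim's 'portions': first portion =
    red channel, second portion = mean of green and blue. The binary
    value is 1 where the first value exceeds the second."""
    first = area[..., 0].astype(float)
    second = area[..., 1:].astype(float).mean(axis=-1)
    return (first > second).astype(np.uint8)
```

The resulting 0/1 matrix is then used to modify the pixels of the area of interest, separating the two pixel sets the claim goes on to identify.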

US Pat. No. 11,030,752

SYSTEM, COMPUTING DEVICE, AND METHOD FOR DOCUMENT DETECTION

United Services Automobil...

1. A computing device, comprising:a processor;
a computer-readable medium; and
instructions stored on the computer-readable medium, the instructions configured to, when executed by the processor, cause the processor to:
receive a digital image file, the digital image file including a digital image and primary edge coordinates corresponding to a check depicted in the digital image;
determine a plurality of supplemental edge coordinates corresponding to the check depicted in the digital image;
compare the primary edge coordinates to the supplemental edge coordinates;
select one of the supplemental edge coordinates or the primary edge coordinates based on a predetermined criteria;
detect a check image based on the selected edge coordinates; and
extract check data from the check image.

US Pat. No. 11,030,751

CELL IMAGE EVALUATION DEVICE, METHOD, AND PROGRAM

FUJIFILM Corporation, To...

19. A cell image evaluation method comprising:determining whether a captured image obtained by capturing an inside of a container that contains a cell is an image obtained by capturing a meniscus region within the container or an image obtained by capturing a non-meniscus region within the container; and
evaluating the image of the meniscus region and the image of the non-meniscus region by different evaluation methods to evaluate a state of the cell included in the captured image.

US Pat. No. 11,030,750

MULTI-LEVEL CONVOLUTIONAL LSTM MODEL FOR THE SEGMENTATION OF MR IMAGES

MSD International GmbH, ...

1. A computer-implemented method of image segmentation comprising:receiving magnetic resonance (MR) images in groups;
applying a first neural network block to the MR images to produce feature maps at two or more levels of resolution;
applying a second neural network block to the feature maps at the two or more levels of resolution to produce two or more output tensors at corresponding levels of resolution;
applying a segmentation block to the two or more output tensors to generate results; and
obtaining the results of the segmentation block as a probability map.

US Pat. No. 11,030,749

IMAGE-PROCESSING APPARATUS, IMAGE-PROCESSING METHOD, AND STORAGE MEDIUM STORING IMAGE-PROCESSING PROGRAM

OLYMPUS CORPORATION, Tok...

1. An image-processing apparatus comprising:an image processor comprising circuitry or a hardware processor that operates under control of a stored program, the image processor being configured to execute processes comprising:
a saliency-map calculating process that calculates saliency maps based on at least one type of feature quantity obtained from an input image;
a salient-region-identifying process that identifies a salient region by using the saliency maps;
a salient-region-score-calculating process that calculates a score of the salient region by comparing a distribution of values of the saliency map in the salient region and a distribution of values of the saliency map in a region other than the salient region; and
a saliency-evaluating process that evaluates the saliency of the salient region based on the score,
wherein the salient-region-score-calculating process calculates the score based on a difference between a weighted sum of an average value and a maximum value of the saliency map in the salient region and an average value of the saliency map in the region other than the salient region.
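The score defined in the final clause — a weighted sum of the average and maximum saliency inside the salient region, minus the average saliency outside it — is direct to compute. A sketch (the weight values are illustrative; the claim does not fix them):

```python
import numpy as np

def salient_region_score(saliency: np.ndarray, mask: np.ndarray,
                         w_avg: float = 0.5, w_max: float = 0.5) -> float:
    """Score per the claim: (w_avg * mean + w_max * max) of the saliency
    map inside the salient region, minus the mean saliency outside it."""
    inside = saliency[mask]
    outside = saliency[~mask]
    return float(w_avg * inside.mean() + w_max * inside.max()
                 - outside.mean())
```

A high score therefore means the region is both strongly and consistently salient relative to its surroundings, which is what the saliency-evaluating process then acts on.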

US Pat. No. 11,030,748

AUTOMATIC CT DETECTION AND VISUALIZATION OF ACTIVE BLEEDING AND BLOOD EXTRAVASATION

KONINKLIJKE PHILIPS N.V.,...

1. An image processing system, comprising:an input interface for receiving an earlier input image and a later input image acquired of an object whilst a fluid is present within the object, wherein said object is a human or animal anatomy and said liquid is blood, and wherein the earlier input image and the later input image have been acquired after injection of a contrast agent;
a differentiator configured to form a difference image from the at least two input images;
an image structure identifier operable to identify in the difference image one or more locations of internal bleedings, where blood escapes through a leakage point in a vessel, based on a respective feature descriptor computed to describe a respective neighborhood around said one or more locations, if a region in the difference image with negative values is adjacent to a region with positive values; and
an output interface for outputting a feature map that includes the one or more locations.

US Pat. No. 11,030,747

SYSTEM AND METHOD FOR AUTOMATIC THORACIC ORGAN SEGMENTATION FROM CT USING A DEEP LEARNING FRAMEWORK

7. An apparatus for automatic thoracic organ segmentation, comprising:one or more processors;
a display; and
a non-transitory computer readable memory storing instructions executable by the one or more processors, wherein the instructions are configured to:
receive three-dimensional (3D) images obtained by a computed tomography (CT) system;
process the 3D images to have the same spatial resolution and matrix size;
build a two-stage deep learning framework using convolutional neural networks (CNNs) for organ segmentation;
adapt the deep learning framework to be compatible with incomplete training data;
improve the CNNs upon arrival of new training data;
post-process the output from the deep learning framework to obtain final organ segmentation; and
display the organ segmentations;
receive 3D images and their corresponding information such as pixel spacing, slice thickness and matrix size;
resize the 3D images to have the same pixel spacing, matrix size; and
apply lower and upper thresholds on the image intensities.

US Pat. No. 11,030,746

ASSISTED DENTAL BEAUTIFICATION METHOD AND APPARATUS FOR IMPLEMENTING THE SAME

Chengdu Besmile Medical T...

1. An assisted dental beautification method, comprising:obtaining information for dental beautification by shooting or receiving a video of a patient's mouth;
selecting a representative picture from the video of the patient's mouth, selecting corresponding positions of teeth to be beautified from the representative picture, and generating an image of beautified teeth at the corresponding positions, to obtain a first beautified reference picture, wherein the representative picture comprises a front view, a left side view, and a right side view, selected automatically according to a process comprising:
comparing a face of each frame with a standard facial model, and calculating a face offset angle, to obtain candidate pictures for the front, left, and right side views;
extracting an oral cavity area in the candidate pictures, calculating gray scale pictures, multiplying two gray scale differences in each pixel area of the gray scale pictures, and accumulating pixel by pixel to calculate a score according to the formula:
D(f)=Σ_y Σ_x |f(x,y)−f(x+1,y)|*|f(x,y)−f(x,y+1)|;
wherein x and y represent an abscissa and an ordinate of each pixel; and
selecting one of the front, left, and right views with a highest score from the candidate pictures to obtain the representative picture;
transmitting the video of the patient's mouth to a server, calculating information about the teeth to be beautified in the video of the patient's mouth using the server, and sending the information about the teeth to be beautified to a user terminal, to obtain a second beautified reference picture; and
displaying the first and second beautified reference pictures to the patient for selection.
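The frame-selection score D(f) in the claim — the product of the two gray-scale differences at each pixel, accumulated over the picture — implements a sharpness measure. A direct sketch:

```python
import numpy as np

def sharpness_score(f: np.ndarray) -> float:
    """D(f) = sum over x, y of |f(x,y)-f(x+1,y)| * |f(x,y)-f(x,y+1)|,
    computed on a grayscale picture f. Treats the first array axis as x
    and the second as y; boundary rows/columns without a neighbor are
    simply excluded from the sum."""
    f = f.astype(float)
    dx = np.abs(f[:-1, :-1] - f[1:, :-1])   # |f(x,y) - f(x+1,y)|
    dy = np.abs(f[:-1, :-1] - f[:-1, 1:])   # |f(x,y) - f(x,y+1)|
    return float((dx * dy).sum())
```

A flat (blurred or featureless) picture scores zero, while a sharply focused oral-cavity crop scores high, so the candidate view with the highest score becomes the representative picture.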

US Pat. No. 11,030,745

IMAGE PROCESSING APPARATUS FOR ENDOSCOPE AND ENDOSCOPE SYSTEM

OLYMPUS CORPORATION, Tok...

1. An image processing apparatus for an endoscope, the apparatus comprising processing circuitry configured to:acquire a pair of a left image and right image;
generate a three-dimensional display image for three-dimensional observation by a user based on the pair of the left image and right image;
set a first region in a part of one of the left image and the right image, the first region being a region having a blur;
set a second region in a part of each of the left image and the right image, the second region being a region where the user is performing treatment, wherein the processing circuitry is configured to set the second region based on a third region set by using at least one of motion information of a treatment instrument used by the user, position information of the treatment instrument on each of the left image and the right image, and shape information of the treatment instrument;
determine whether or not the first region at least partially overlaps with the second region;
in response to determining that the first region at least partially overlaps with the second region, outputting a notification to the user; and
in response to determining that the first region does not at least partially overlap with the second region, not outputting the notification to the user,
wherein the outputting the notification to the user comprises generating information for notifying the user that the second region has a blur, when the user performs three-dimensional observation.

US Pat. No. 11,030,744

DEEP LEARNING METHOD FOR TUMOR CELL SCORING ON CANCER BIOPSIES

AstraZeneca Computational...

1. A method of generating a score of a histopathological diagnosis of a cancer patient, comprising:loading a first image patch into a processing unit, wherein the first image patch is cropped from a digital image of a slice of tissue, and wherein the slice of tissue has been immunohistochemically stained using a diagnostic antibody;
determining how many pixels of the first image patch belong to a first tissue that has been positively stained by the diagnostic antibody, wherein the determining is performed by processing each pixel of the first image patch using a convolutional neural network to recognize whether the processed pixel belongs to the first tissue based on other pixels in the first image patch;
processing additional image patches that have been cropped from the digital image to determine how many pixels of each additional image patch belong to the first tissue;
computing the score of the histopathological diagnosis based on a total number of pixels that belong to the first tissue; and
displaying the digital image and the score on a graphical user interface.
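The scoring step aggregates per-patch counts of positively stained pixels into one number. The claim only says the score is "based on a total number of pixels that belong to the first tissue"; a stained-fraction score is one simple reading and is an assumption here:

```python
def positivity_score(patch_positive_counts, patch_pixel_counts) -> float:
    """Combine per-patch pixel counts into a whole-slide score: the
    fraction of all processed pixels that the network assigned to the
    positively stained tissue class."""
    return sum(patch_positive_counts) / sum(patch_pixel_counts)
```

With counts of 10 and 30 positive pixels out of two 100-pixel patches, the score is 0.2 — i.e. 20% of the tissue stained positive.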

US Pat. No. 11,030,743

SYSTEM AND METHOD FOR CORONARY CALCIUM DEPOSITS DETECTION AND LABELING

TENCENT AMERICA LLC, Pal...

1. A method implemented by one or more processors, a memory, and one or more programs, the one or more programs being stored in the memory, the program comprising one or more modules each corresponding to a set of instructions, the one or more processors being configured to execute the instructions, and the method comprising:receiving image data of one or more coronary arteries;
generating a binary segmentation indicating presence of calcium in the one or more coronary arteries from the image data;
generating, in parallel with generating the binary segmentation, a branch density of the one or more coronary arteries; and
assigning a coronary artery label from the branch density to the binary segmentation such that at least one indication of presence of calcium of the binary segmentation is labeled as present in a specific one of the one or more coronary arteries.

US Pat. No. 11,030,742

SYSTEMS AND METHODS TO FACILITATE REVIEW OF LIVER TUMOR CASES

GE Precision Healthcare L...

1. An apparatus comprising:at least one processor; and
at least one computer readable storage medium including instructions which, when executed, cause the at least one processor to at least:
process an image to reduce noise in the image;
identify at least one of an organ of interest or a region of interest in the image;
analyze values in at least one of the organ of interest or the region of interest;
process the at least one of the organ of interest or the region of interest based on the analyzed values to provide a processed object in the at least one of the organ of interest or the region of interest; and
display the processed object for interaction via an interface, the display to include exposing at least one of the organ of interest or the region of interest by at least:
removing voxels from the image outside a soft tissue range to produce a modified image;
applying a morphology closure filter to the modified image;
selecting a largest connected component in the modified image;
selecting a first slice in the largest connecting component having a largest number of pixels in the largest connected component;
selecting a plurality of second slices within a thickness of the first slice to form an object;
eroding the object to select internal voxels inside the object to expose the region of interest;
computing deviation in the region of interest; and
applying a look up table to display values of the region of interest.
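The exposure steps in the claim — range thresholding, a morphology closure filter, then keeping the largest connected component — map onto standard `scipy.ndimage` operations. A sketch (the intensity bounds are inputs here, not values taken from the patent, and the default 3×3 structuring element is an assumption):

```python
import numpy as np
from scipy import ndimage

def expose_soft_tissue(volume: np.ndarray, lo: float, hi: float) -> np.ndarray:
    """Keep voxels inside a soft-tissue intensity range, close small
    gaps, and retain only the largest connected component, mirroring
    the claim's remove/close/select steps."""
    mask = (volume >= lo) & (volume <= hi)       # drop out-of-range voxels
    mask = ndimage.binary_closing(mask)          # morphology closure filter
    labels, n = ndimage.label(mask)              # connected components
    if n == 0:
        return np.zeros_like(mask)
    sizes = ndimage.sum(mask, labels, range(1, n + 1))
    return labels == (int(np.argmax(sizes)) + 1) # largest component only
```

The remaining slice selection and erosion steps of the claim would then operate on this component mask.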

US Pat. No. 11,030,741

DENTAL THREE-DIMENSIONAL DATA PROCESSING DEVICE AND METHOD THEREOF

MEDIT CORP., Seoul (KR)

1. A dental three-dimensional data processing device, comprising:an input unit for receiving teeth data, and image data of a human face;
a control unit for generating three-dimensional alignment data by combining and aligning the teeth data and the image data;
a movement extraction unit for generating trajectory data by analyzing the movement of teeth or temporomandibular joint from the image data;
a simulation unit for simulating the movement of the teeth or the temporomandibular joint by applying the trajectory data to the alignment data; and
an output unit for outputting the results of simulating so that the alignment data moves according to the trajectory data in the simulation unit,
wherein the simulation unit displays the movement trajectory of the teeth data and an interference surface on a screen, and
wherein the control unit detects the rotation shaft of the temporomandibular joint through the simulation.

US Pat. No. 11,030,740

DIGITAL ANALYSIS OF A DIGITAL IMAGE REPRESENTING A WOUND FOR ITS AUTOMATIC CHARACTERISATION

1. A method for digital analysis of a wound for automatic determination of information relating to the wound, comprising:acquiring a digital image of a zone of skin containing the wound;
determining a first likelihood map associated with a first characteristic of the zone using a first classifier on the image;
determining a second likelihood map associated with a second characteristic of the zone, using a second classifier, distinct from the first classifier, on the image;
segmenting the image into regions of homogeneous color, wherein (a) the segmenting the image into regions of homogeneous color and (b) the determining the first likelihood map and the determining the second likelihood map are performed independently of each other;
matching the regions with the first and second likelihood maps, and adapting the regions so as to determine regions corresponding substantially to the wound based at least in part on the matching; and
determining the information based on the regions corresponding substantially to the wound.

US Pat. No. 11,030,739

HUMAN DETECTION DEVICE EQUIPPED WITH LIGHT SOURCE PROJECTING AT LEAST ONE DOT ONTO LIVING BODY

Panasonic Intellectual Pr...

1. A device comprising:at least one light source that projects, onto a subject including face, dots formed by first light;
an image sensor that detects second light resulting from the projection of the dots and outputs an image signal denoting an image of the subject on which the dots are projected, the image signal including a plurality of pixels; and
a circuit;
wherein the second light includes scattered light component which is scattered inside the subject and directly reflected light component which is reflected by a surface of the subject, and
wherein the circuit
extracts, from the pixels of the image signal, first pixels corresponding to a first region of the face by performing face recognition process,
removes directly reflected component by performing a lowpass filtering process on the first pixels,
calculates an average of values of the first pixels, the values corresponding to the scattered light component, and
generates a biological information of the subject based on period of amplitude of change in the average with respect to time.
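The last step — biological information from the period of the change in the averaged pixel value over time — amounts to period estimation on a 1-D signal. A sketch using the dominant FFT frequency (one common choice; the claim does not specify the estimator):

```python
import numpy as np

def pulse_period(frame_means: np.ndarray, fps: float) -> float:
    """Estimate the period of the averaged skin-pixel signal (the
    claim's 'period of amplitude of change in the average with respect
    to time') from the dominant frequency of its spectrum. `fps` is the
    image sensor's frame rate in frames per second."""
    x = frame_means - frame_means.mean()
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
    peak = int(np.argmax(spectrum[1:])) + 1   # skip the DC bin
    return 1.0 / freqs[peak]
```

For a pulse signal the reciprocal of this period gives the heart rate, which is the kind of biological information the claim's circuit generates.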

US Pat. No. 11,030,738

IMAGE DEFECT IDENTIFICATION

International Business Ma...

1. A computer-implemented method for image processing, comprising:determining, by one or more processors, whether a first image indicates a defect associated with a target object;
in response to determining that the first image indicates the defect:
generating, based on a heatmap, a mask covering a portion of the defect; and
generating the second image by removing, from the first image, the portion of the defect; and
identifying, by the one or more processors, the defect by comparing the first image with the second image.
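The comparison flow of the claim — mask part of the defect using a heatmap, "remove" those pixels to form a second image, then locate the defect by differencing — can be sketched as below. The removal step is unspecified in the claim; replacing masked pixels with the image mean is a simple stand-in used only for illustration:

```python
import numpy as np

def identify_defect(image: np.ndarray, heatmap: np.ndarray,
                    thresh: float = 0.5) -> np.ndarray:
    """Threshold the heatmap into a mask covering a portion of the
    defect, build the second image by overwriting the masked pixels
    (here with the global mean, an assumed removal), and identify the
    defect as the pixels where the two images disagree."""
    mask = heatmap > thresh
    repaired = image.astype(float).copy()
    repaired[mask] = image.mean()            # remove the defect portion
    diff = np.abs(image.astype(float) - repaired)
    return diff > 0                          # defect = changed pixels
```

In practice the removal would use an inpainting or generative model, but the compare-first-and-second-image logic is the same.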

US Pat. No. 11,030,737

HARDWARE TROJAN SCANNER

UNIVERSITY OF FLORIDA RES...

1. A method of detecting hardware Trojans in an integrated circuit (IC), the method comprising:providing a first Scanning Electron Microscope (SEM) image at a first dwelling-time on SEM, wherein the first SEM image is taken from a Trojan-free sample of the IC;
providing a target sample of the IC;
thinning a backside of the target sample to a predetermined thickness;
capturing a second SEM image of the target IC from its back side at a second dwelling-time, wherein the second dwelling time is shorter than the first dwelling time;
aligning the first SEM image and second SEM image at sample level by using image registration settings;
enhancing the captured second SEM image for increased feature detection;
comparing the enhanced second SEM image with the first SEM image by applying an image-processing algorithm to detect Trojan-caused-changes; and
producing a hardware Trojan map by grouping regions suspected of having hardware Trojans.

US Pat. No. 11,030,736

METHOD FOR APPLYING AUTOMATIC OPTICAL INSPECTION TO COPPER COILS THINNED BY LASER ETCHING AND APPARATUS THEREFOR

Laser Tek Taiwan Co., Ltd...

1. A method for applying automatic optical inspection (AOI) to copper coils thinned by laser etching, comprising the steps of:(a) placing a half-finished product under a scanning unit;
(b) activating the scanning unit to optically scan the half-finished product to generate a digital image of the half-finished product;
(c) sending the digital image to an image analysis unit;
(d) activating the image analysis unit to analyze the digital image, identify cutting boundaries of the half-finished product, compare the cutting boundaries of the half-finished product with an original laser processing path file, and identify defects of the half-finished product based on the comparison;
(e) activating the image analysis unit to find points around the half-finished product and distances;
(f) activating the image analysis unit to simulate an optimum path with respect to the defects of the half-finished product based on the points around the half-finished product and the distances;
(g) activating the image analysis unit to convert the optimum path into an optimum processing path file;
(h) activating the image analysis unit to send the optimum processing path file to a program unit;
(i) conveying the half-finished product to a predetermined position under a laser processing unit; and
(j) activating the program unit to instruct the laser processing unit to process the half-finished product based on the optimum processing path file, thereby producing a finished product.

US Pat. No. 11,030,735

SUBTERRANEAN DRILL BIT MANAGEMENT SYSTEM

ExxonMobil Upstream Resea...

1. A method comprising:training a first neural network of a supervised learning model to identify a location, an extent, a type, and a consistency of damage to a drill bit or bottom hole assembly based on images depicting damaged components of the drill bit or bottom hole assembly;
using the trained first neural network to identify the location, the extent, the type, and the consistency of damage to the drill bit or bottom hole assembly in at least one image obtained of the drill bit or bottom hole assembly;
based on the identified location, the extent, the type, and the consistency of damage to the drill bit or bottom hole assembly, training a second neural network of the supervised learning model to identify a cause of damage to the drill bit or bottom hole assembly;
using the trained second neural network to identify the cause of damage to the drill bit or bottom hole assembly; and
generating a graphical output based on the identified location, extent, type and consistency of damage to the drill bit or bottom hole assembly.

US Pat. No. 11,030,734

MIRROR DIE IMAGE RECOGNITION SYSTEM, REFERENCE DIE SETTING SYSTEM, AND MIRROR DIE IMAGE RECOGNITION METHOD

FUJI CORPORATION, Chiryu...

1. A mirror die image recognition system configured to perform recognition, from many dies on a wafer diced into many separated dies, of a mirror die having a same quadrilateral shape as a production die having a pattern, in a manner that distinguishes the mirror die from the production die, the mirror die image recognition system comprising:a camera configured to image at least a portion of the wafer in a field of view; and
processing circuitry configured to
process the image to acquire a brightness level of a region at each of at least five locations, including regions corresponding to four corner portions and a center portion of each die in an image captured by the camera,
determine whether the brightness levels of the regions of the at least five locations are uniform, and
recognize the die for which the brightness levels of the regions of the at least five locations are uniform as the mirror die having the same quadrilateral shape as the production die from other dies in the image.
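The uniformity test at the five sampled regions can be sketched as follows (illustrative Python/NumPy; the sampled-region size and brightness tolerance are assumptions, as the claim leaves them unspecified):

```python
import numpy as np

def is_mirror_die(die_img, tolerance=10):
    """Sample brightness at four corners and the center of a die image;
    a blank mirror die is uniform, a patterned production die is not."""
    h, w = die_img.shape
    r = 2  # half-size of each sampled region (assumed)
    spots = [(r, r), (r, w - r), (h - r, r), (h - r, w - r), (h // 2, w // 2)]
    means = [die_img[y - r:y + r, x - r:x + r].mean() for y, x in spots]
    return max(means) - min(means) <= tolerance

mirror = np.full((20, 20), 200.0)       # uniform brightness everywhere
production = mirror.copy()
production[8:12, 8:12] = 50.0           # a pattern darkens the center
```

A die passing this check would be recognized as the mirror die; one failing it is treated as a production die.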

US Pat. No. 11,030,733

METHOD, ELECTRONIC DEVICE AND STORAGE MEDIUM FOR PROCESSING IMAGE

Beijing Dajia Internet In...

1. A method for processing an image, comprising:receiving an instruction for a preset fly-away special effect;
creating a facial grid and facial feature grids in the image to be processed;
determining a facial image in an image region covered by the facial grid;
setting a pixel value of each pixel in the facial image to a target pixel value;
extracting facial feature images from an image region covered by the facial feature grids; and
obtaining a target image by mapping the facial feature images onto the facial image based on a preset triangular mapping algorithm and a preset offset.

US Pat. No. 11,030,732

INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING SYSTEM, AND IMAGE PROCESSING METHOD FOR GENERATING A SUM PICTURE BY ADDING PIXEL VALUES OF MULTIPLE PICTURES

SONY INTERACTIVE ENTERTAI...

1. An information processing device comprising:a picture data acquisition unit configured to sequentially acquire picture data of frames of a moving picture obtained by photographing;
a picture adding unit configured to generate a sum picture obtained by adding, to pixel values of a picture of a current frame newly acquired, pixel values of a picture of a past frame acquired earlier, the pixel values added together being those of pixels at corresponding positions;
an output unit configured to output data representing a result of a predetermined process performed using the sum picture; and
an image analysis unit configured to perform an analysis process by extracting a feature point from the sum picture, and acquire information about a subject, wherein the output unit outputs data representing a result of information processing performed on a basis of the information about the subject, and wherein the image analysis unit performs an analysis process on a basis of the feature point extracted from the sum picture, performs an analysis process by extracting a feature point from the picture of the current frame, and integrates results of both.
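The sum-picture idea is simple to sketch: adding corresponding pixels across frames makes a dim but persistent subject stand out for feature extraction. The frame contents below are invented for illustration.

```python
import numpy as np

def sum_picture(frames):
    """Add pixel values of successive frames at corresponding positions."""
    acc = np.zeros_like(frames[0], dtype=np.float64)
    for f in frames:
        acc += f
    return acc

base = np.ones((6, 6))
base[2, 2] = 1.5                 # a dim subject, barely above background
frames = [base.copy() for _ in range(3)]
total = sum_picture(frames)
# The accumulated subject is now the clear maximum, i.e. a feature point.
feature = np.unravel_index(np.argmax(total), total.shape)
```

Analysis on `total` can then be integrated with analysis on the current frame alone, as the claim describes.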

US Pat. No. 11,030,731

SYSTEMS AND METHODS FOR FUSING INFRARED IMAGE AND VISIBLE LIGHT IMAGE

ZHEJIANG DAHUA TECHNOLOGY...

1. An image fusion system, comprising:a processor coupled to a storage; and
the storage configured to store instructions, the instructions, when executed by the processor, causing the image fusion system to effectuate a method comprising:
obtaining a visible light image and an infrared image relating to a same scene;
performing a first decomposition to the visible light image to obtain a first high-frequency component of the visible light image and a first low-frequency component of the visible light image;
performing a first decomposition to the infrared image to obtain a first high-frequency component of the infrared image and a first low-frequency component of the infrared image;
fusing the first high-frequency component of the visible light image and the first high-frequency component of the infrared image based on a first algorithm to generate a first fused high-frequency component;
fusing, based on a threshold and a difference between the first low-frequency component of the visible light image and the first low-frequency component of the infrared image, the first low-frequency component of the visible light image and the first low-frequency component of the infrared image to generate a first fused low-frequency component; and
performing reconstruction based on the first fused high-frequency component and the first fused low-frequency component to generate a fused image.
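The decompose/fuse/reconstruct pipeline can be sketched in NumPy. This is a hedged approximation: a box blur stands in for the (unspecified) low-frequency decomposition, and the max-magnitude rule for high frequencies and the threshold rule for low frequencies are assumptions about the claimed "first algorithm" and threshold-based fusion.

```python
import numpy as np

def box_blur(img, k=3):
    """Box filter standing in for the low-pass half of the decomposition."""
    pad = k // 2
    p = np.pad(img, pad, mode='edge')
    out = np.zeros_like(img, dtype=np.float64)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def fuse(visible, infrared, thresh=30.0):
    vis_lo, ir_lo = box_blur(visible), box_blur(infrared)
    vis_hi, ir_hi = visible - vis_lo, infrared - ir_lo
    # High frequencies: keep whichever component is stronger (assumed rule).
    fused_hi = np.where(np.abs(vis_hi) >= np.abs(ir_hi), vis_hi, ir_hi)
    # Low frequencies: average when the bands agree, keep the visible band
    # when their difference exceeds the threshold (assumed rule).
    diff = np.abs(vis_lo - ir_lo)
    fused_lo = np.where(diff > thresh, vis_lo, (vis_lo + ir_lo) / 2)
    # Reconstruction: recombine the fused bands into one image.
    return fused_lo + fused_hi

img = np.random.default_rng(0).uniform(0, 100, (10, 10))
```

When both inputs are identical the pipeline is a no-op, which is a useful sanity check for any band-split fusion scheme.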

US Pat. No. 11,030,730

COMPOSITE GROUP IMAGE

Shutterfly, LLC, Redwood...

1. A method of generating a composite group image from subgroup images, the method comprising:accessing the subgroup images, each of the subgroup images having a common background;
determining boundaries of a subgroup area within each of the subgroup images;
determining at least one horizontal shift factor and at least one vertical shift factor using the determined boundaries;
determining a gap distance between the determined boundaries of the subgroup areas of the subgroup images using the at least one horizontal shift factor and the at least one vertical shift factor;
generating an arrangement for the subgroup images based on the at least one horizontal shift factor, the at least one vertical shift factor, and the gap distance; and
generating the composite group image by blending the subgroup images arranged in the arrangement.

US Pat. No. 11,030,729

IMAGE PROCESSING METHOD AND APPARATUS FOR ADJUSTING A DYNAMIC RANGE OF AN IMAGE

HUAWEI TECHNOLOGIES CO., ...

1. An image processing method, comprising:determining a maximum value in nonlinear primary color values of all components of each pixel of a first to-be-processed image;
determining dynamic parameters of a first transfer function comprising a reversed S-shaped transfer curve, wherein a form of the reversed S-shaped transfer curve is as follows:
wherein L is the maximum value in the nonlinear primary color values of all the components of each pixel of the first to-be-processed image, wherein L′ is the transfer value, and wherein the parameters a, b, p, and m are dynamic parameters of the reversed S-shaped transfer curve;
converting the maximum value of each pixel into a transfer value based on the first transfer function for which the dynamic parameters are determined;
calculating a ratio between the transfer value and the maximum value of each pixel; and
adjusting a dynamic range for the nonlinear primary color values of all the components of each pixel based on the ratio to obtain nonlinear primary color values of all components of each corresponding pixel of a first target image.
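The claimed max/transfer/ratio/scale structure can be sketched in NumPy. The patent's reversed S-shaped curve appears only as a figure that did not survive extraction, so the curve below is an invented stand-in; only the surrounding ratio-scaling logic follows the claim.

```python
import numpy as np

def tone_map(rgb):
    """Per-pixel dynamic-range adjustment: take the maximum nonlinear
    component L, map it through a transfer curve to L', and scale all
    components of the pixel by the ratio L'/L."""
    L = rgb.max(axis=-1, keepdims=True)
    Lp = L / (L + 0.2) * 1.2                   # stand-in transfer curve,
                                               # NOT the patented formula
    ratio = np.where(L > 0, Lp / np.maximum(L, 1e-9), 1.0)
    return rgb * ratio

pixel = np.array([[0.5, 0.25, 0.1]])
mapped = tone_map(pixel)
```

Scaling every component by the same ratio compresses the dynamic range while preserving each pixel's hue (the component proportions).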

US Pat. No. 11,030,728

TONE MAPPING TECHNIQUES FOR INCREASED DYNAMIC RANGE

Apple Inc., Cupertino, C...

1. An electronic device, comprising:a camera that captures an image having associated image metadata;
a display; and
control circuitry configured to:
apply a first tone mapping to the image;
generate tone mapping parameters for the image using the image metadata, wherein the tone mapping parameters are based on at least one of: whether a face is present in the image and camera settings in the image metadata file;
apply a second tone mapping to the image using the tone mapping parameters; and
display the image on the display after applying the second tone mapping.

US Pat. No. 11,030,727

DISPLAY APPARATUS AND DISPLAY METHOD

Canon Kabushiki Kaisha, ...

1. A display apparatus comprising:an input unit configured to acquire data of an image;
a correcting unit configured to correct the data of the image; and
a display unit configured to display the image on a screen,
wherein in a case where an SDR image and an HDR image are displayed on the screen so as to be arranged side by side, the correcting unit corrects data of the SDR image so that an upper limit in display brightness of the SDR image is lower than an upper limit in display brightness of the HDR image,
wherein data of the SDR image and data of the HDR image are generated from the same image data, and
wherein the same image data is data with PQ characteristics or HLG characteristics.

US Pat. No. 11,030,726

IMAGE CROPPING WITH LOSSLESS RESOLUTION FOR GENERATING ENHANCED IMAGE DATABASES

Shutterstock, Inc., New ...

1. A computer-implemented method, comprising:selecting, in a server, a first image portion from an image;
identifying one or more known similar images associated with the first image portion;
determining a first score for enhancing the first image portion based on the one or more known similar images;
increasing a pixel resolution in the first image portion according to a scale to form an enhanced image portion;
identifying a synthetic value for the enhanced image portion; and
storing the enhanced image portion in a database when the synthetic value is below a tolerance value.

US Pat. No. 11,030,725

NOISE-CANCELLING FILTER FOR VIDEO IMAGES

Searidge Technologies Inc...

1. A computer-implemented method of processing a video stream in real-time, the method comprising:providing a hardware graphics processing unit (GPU) configured with a shader configured to implement a bilateral filter;
receiving video images of the video stream;
using the GPU shader to apply the bilateral filter to the video images of the video stream to generate a filtered video stream in real-time; and
transmitting the filtered video stream for display on a display device or storage in a storage device,
wherein the GPU computes the bilateral filter according to:
wherein ks is:
wherein s are the coordinates of a pixel at the center of window Ω, p are the coordinates of a current pixel, Js is a resulting pixel intensity, Ip, Is are pixel intensities at p and s respectively, I(Is, Ip) is defined as:
wherein Ip and Is are vectors defining pixel RGB colour values, R(p,s) is defined as:
wherein px, py are coordinates of the current pixel with respect to a kernel size and dimension, and,
which is valid for:
which is a first half of the kernel, wherein a second half of the kernel is symmetrical to the first half of the kernel.
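The claim's exact kernel expressions were figures that did not survive extraction, but the textbook bilateral filter they build on can be sketched directly (NumPy, single-channel; the sigma values are illustrative, and a GPU shader would evaluate the same per-pixel sum in parallel):

```python
import numpy as np

def bilateral(img, radius=2, sigma_s=2.0, sigma_r=0.2):
    """Textbook bilateral filter: each output pixel Js is a normalized sum
    of neighbors Ip, weighted by spatial distance to the window center s
    and by intensity difference, so edges are preserved while noise is
    smoothed."""
    h, w = img.shape
    out = np.zeros_like(img, dtype=np.float64)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(ys**2 + xs**2) / (2 * sigma_s**2))
    p = np.pad(img, radius, mode='edge')
    for y in range(h):
        for x in range(w):
            win = p[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            rng = np.exp(-(win - img[y, x])**2 / (2 * sigma_r**2))
            wgt = spatial * rng                 # combined weight per neighbor
            out[y, x] = (wgt * win).sum() / wgt.sum()   # the ks normalizer
    return out
```

A constant image passes through unchanged, since every range weight is 1 and the normalizer cancels the spatial kernel.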

US Pat. No. 11,030,724

METHOD AND APPARATUS FOR RESTORING IMAGE

SAMSUNG ELECTRONICS CO., ...

1. An image restoration method comprising:acquiring a target image by rearranging an input image of an object; and
restoring an output image from the acquired target image, based on an image restoration model comprising a convolutional layer corresponding to a plurality of kernels corresponding to a plurality of dilation gaps, respectively,
wherein the plurality of dilation gaps are determined based on a configuration of lenses included in an image sensor and a plurality of sensing elements included in the image sensor.
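A dilation gap spaces kernel taps apart, widening the receptive field without adding weights; a 1-D sketch makes the mechanism concrete (illustrative only, with an invented difference kernel):

```python
import numpy as np

def dilated_conv1d(x, kernel, gap):
    """1-D convolution with a dilation gap: taps are `gap` samples apart,
    so a short kernel spans a wide neighborhood."""
    reach = gap * (len(kernel) - 1)          # span covered by the kernel
    out = np.zeros(len(x) - reach)
    for i, k in enumerate(kernel):
        out += k * x[i * gap: i * gap + len(out)]
    return out

signal = np.arange(10.0)
# A [1, -1] kernel with gap 3 differences samples three apart.
result = dilated_conv1d(signal, [1.0, -1.0], 3)
```

In the claim, several such kernels with different gaps share one convolutional layer, matching the lens/sensing-element geometry of the sensor.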

US Pat. No. 11,030,723

IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND PROGRAM

SONY CORPORATION, Tokyo ...

1. An image processing apparatus comprising:an image correction unit that repeatedly performs an image correction process using a plurality of processing units in at least two stages which include first-stage to final-stage processing units, wherein
the image correction unit inputs a low-quality image which is an image to be corrected and a high-quality image which is a reference image,
each of the plurality of processing units in each stage performs a correction process for the low-quality image, using a class correspondence correction coefficient classified in accordance with a class corresponding to a feature amount extracted from the high-quality image or a degraded image of the high-quality image,
the class correspondence correction coefficient is generated by a learning process,
the first-stage processing unit performs the correction process for the low-quality image, using a class correspondence correction coefficient corresponding to a feature amount extracted from a degraded image of the high-quality image having a degradation level that is substantially equal to a degradation level of the low-quality image which is the image to be corrected, and
wherein the image correction unit and the plurality of processing units are each implemented via at least one processor.

US Pat. No. 11,030,722

SYSTEM AND METHOD FOR ESTIMATING OPTIMAL PARAMETERS

FotoNation Limited, Galw...

21. A device comprising: one or more processors; memory storing device instructions that, when executed by the one or more processors, cause the device to perform operations comprising:providing a neural network, the neural network comprising:
a layer;
a first head downstream from the layer, the first head outputting a first output value associated with a first control parameter; and
a second head downstream from the layer and in parallel with the first head, the second head outputting a second output value associated with a second control parameter;
inputting first control parameter training images into the layer of the neural network;
receiving the first output value from the first head;
comparing the first output value and a first modified parameter value associated with at least one of the first control parameter training images;
generating, based at least in part on the comparing, a first control parameter error value;
adjusting the layer and the first head based at least in part on the first control parameter error value;
inputting second control parameter training images into the neural network;
receiving the second output value from the second head;
comparing the second output value and a second modified parameter value associated with at least one of the second control parameter training images;
generating, based at least in part on the comparing, a second control parameter error value; and
adjusting the second head based at least in part on the second control parameter error value.
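The selective-update scheme (train shared layer plus first head, then freeze the shared layer and train only the second head) can be shown with a toy scalar "network". Everything here is an invented miniature, not the patented architecture: the shared layer and each head are single weights, and the two tasks are y = 2x and y = -x.

```python
import numpy as np

rng = np.random.default_rng(0)
w, a1, a2 = 0.5, 0.5, 0.5   # shared-layer weight and two head weights
lr = 0.05

# Phase 1: first-task images adjust the shared layer AND the first head.
for _ in range(2000):
    x = rng.uniform(-1, 1)
    feat = w * x
    err1 = a1 * feat - 2 * x        # first control-parameter error value
    a1 -= lr * 2 * err1 * feat      # gradient step on the first head
    w -= lr * 2 * err1 * a1 * x     # gradient step on the shared layer

# Phase 2: second-task images adjust ONLY the second head; the shared
# layer (and first head) stay frozen.
for _ in range(2000):
    x = rng.uniform(-1, 1)
    feat = w * x
    err2 = a2 * feat - (-x)         # second control-parameter error value
    a2 -= lr * 2 * err2 * feat      # gradient step on the second head only
```

Because phase 2 never touches `w`, the first head's behavior is preserved while the second head learns its own control parameter from the shared features.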

US Pat. No. 11,030,721

EFFICIENT PARALLEL OPTICAL FLOW ALGORITHM AND GPU IMPLEMENTATION

Snap Inc., Santa Monica,...

1. A method comprising:receiving, by a computing device, image data from a camera of the computing device;
generating, from the image data, by the computing device, an image pyramid comprising multiple levels of an image in the image data subsampled at various resolutions for each level;
determining, by the computing device, one or more predetermined levels of the image pyramid comprising higher resolution and one or more predetermined levels of the image pyramid comprising coarse levels of the image pyramid;
transferring, by the computing device, image data corresponding to the one or more predetermined levels of the image pyramid comprising higher resolution to the graphics processing unit (GPU) of the computing device,
during transfer, to the GPU of the computing device, of the image data corresponding to the one or more predetermined levels of the image pyramid comprising higher resolution, calculating, by the central processing unit (CPU) of the computing device, optical flow of the one or more predetermined coarse levels of the image pyramid;
transferring, by the CPU of the computing device, the calculated optical flow of the one or more predetermined coarse levels of the image pyramid to the GPU;
calculating, by the GPU of the computing device, optical flow of the one or more predetermined levels of the image pyramid comprising higher resolution using the optical flow of the one or more predetermined coarse levels of the image pyramid calculated by the CPU to generate an optical flow of the image data; and
outputting, by the GPU of the computing device, the optical flow of the image data.
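The pyramid construction and the coarse/fine split that enables the CPU/GPU overlap can be sketched in NumPy (illustrative: a 2x2 mean stands in for proper low-pass filtering, and the split point between "coarse" and "fine" levels is an assumption):

```python
import numpy as np

def build_pyramid(img, levels=4):
    """Image pyramid: each level is the previous one subsampled 2x,
    using a 2x2 mean as a simple anti-aliasing filter."""
    pyr = [img.astype(np.float64)]
    for _ in range(levels - 1):
        a = pyr[-1]
        h, w = a.shape[0] // 2 * 2, a.shape[1] // 2 * 2
        a = a[:h, :w]
        pyr.append((a[0::2, 0::2] + a[1::2, 0::2] +
                    a[0::2, 1::2] + a[1::2, 1::2]) / 4)
    return pyr

img = np.arange(64 * 64, dtype=np.float64).reshape(64, 64)
pyr = build_pyramid(img)
# Fine (high-resolution) levels are transferred to the GPU; while that
# transfer is in flight, the CPU computes optical flow on the coarse
# levels, whose result seeds the GPU's fine-level flow computation.
fine, coarse = pyr[:2], pyr[2:]
```

Overlapping the coarse-level CPU work with the fine-level transfer is what hides the CPU-to-GPU copy latency in the claimed method.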

US Pat. No. 11,030,720

DIRECT RETINAL PROJECTION APPARATUS AND METHOD

Varjo Technologies Oy, H...

1. A direct retinal projection apparatus comprising:means for detecting a gaze direction of a user;
at least one projector;
at least one first optical element comprising at least a first optical portion and a second optical portion having different optical properties with respect to magnification, wherein the at least one first optical element comprises an optical axis and is asymmetrical with respect to the optical axis, and the second optical portion is substantially ellipsoidal in shape;
at least one first actuator associated with the at least one first optical element; and
a processor configured to render a warped image having a spatially-uniform angular resolution via the at least one projector, whilst adjusting an orientation of the at least one first optical element via the at least one first actuator, based on the detected gaze direction of the user, to direct a projection of the warped image from the at least one first optical element towards a retina of a user's eye, wherein the asymmetrical first optical element with the elliptical second optical portion differently magnifies projections of a first portion and a second portion of the warped image, to produce on the retina of the user's eye a de-warped image having different spatially-variable angular resolutions at least along orthogonal axes of the de-warped image.

US Pat. No. 11,030,719

IMAGING UNIT, DISPLAY APPARATUS AND METHOD OF DISPLAYING

Varjo Technologies Oy, H...

1. A display apparatus comprising:an imaging unit comprising:
at least one camera, the at least one camera is to be used to capture an image of a given real-world scene; and
at least one optical element arranged on an optical path of a projection of the given real-world scene, wherein the at least one optical element comprises a first optical-element portion and a second optical-element portion having different optical properties with respect to magnification, wherein the projection of the given real-world scene is differently magnified by the first optical-element portion and the second optical-element portion in a manner that the image captured by the at least one camera has a variable angular resolution across a field of view of the at least one optical element, an angular resolution of a first portion of the captured image being greater than an angular resolution of a second portion of the captured image;
at least one image renderer; and
a processor coupled to the at least one camera and the at least one image renderer, wherein the processor is configured to:
process the captured image of the given real-world scene to generate an output image; and
render the output image via the at least one image renderer, wherein a shape of the first optical-element portion and a shape of the second optical-element portion are based on an aspect ratio of the output image.

US Pat. No. 11,030,718

IMAGE STITCHING METHOD AND RELATED MONITORING CAMERA APPARATUS

VIVOTEK INC., New Taipei...

1. An image stitching method applied to a monitoring camera apparatus with a first image receiver and a second image receiver for respectively acquiring a first image and a second image, the image stitching method comprising:detecting a plurality of first features in the first image and a plurality of second features in the second image;
dividing the plurality of first features at least into a first group and a second group in accordance with intervals between the plurality of first features and further dividing the plurality of second features at least into a third group in accordance with intervals between the plurality of second features;
analyzing the plurality of first features and the plurality of second features via an identification condition to determine whether one of the first group and the second group is matched with the third group; and
utilizing two matched groups to stitch the first image and the second image;
wherein the plurality of first features and the plurality of second features are human-made patterns respectively located inside the first image and the second image;
wherein the identification condition is selected from a group consisting of color, a dimension, a shape, an amount, an arrangement and a combination of the plurality of first features and the plurality of second features.

US Pat. No. 11,030,717

APPARATUS AND METHODS FOR MULTI-RESOLUTION IMAGE STITCHING

GoPro, Inc., San Mateo, ...

1. A computerized apparatus for providing a panoramic image, the computerized apparatus comprising:a non-transitory computer-readable medium comprising a plurality of computer-readable instructions configured to, when executed by one or more processor apparatus, cause the computerized apparatus to:
obtain a plurality of high-resolution input images from a plurality of capture devices;
transform the plurality of high-resolution input images to produce at least a first downsampled image, a second downsampled image, a first residual image, and a second residual image;
combine the first downsampled image with the second downsampled image to produce a combined downsampled image;
combine the first residual image with the second residual image to produce a combined residual image; and
combine the combined downsampled image with the combined residual image to produce the panoramic image.
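The downsample-plus-residual transform is lossless by construction: the residual is exactly what upsampling the base fails to restore. A minimal NumPy sketch (nearest-neighbor upsampling is an assumption; the claim does not specify the resampling kernel):

```python
import numpy as np

def split(img):
    """Split an image into a 2x-downsampled base and a residual that
    restores full resolution when added back after upsampling."""
    down = (img[0::2, 0::2] + img[1::2, 0::2] +
            img[0::2, 1::2] + img[1::2, 1::2]) / 4
    up = np.repeat(np.repeat(down, 2, axis=0), 2, axis=1)
    return down, img - up

a = np.random.default_rng(1).uniform(size=(8, 8))
down, res = split(a)
# Stitching can blend the small bases cheaply, then add the combined
# residuals back to recover a full-resolution panorama.
restored = np.repeat(np.repeat(down, 2, axis=0), 2, axis=1) + res
```

Combining in two bands lets the expensive blending run on quarter-size images while the residuals carry the fine detail.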

US Pat. No. 11,030,716

IMAGE PROCESSING APPARATUS

Canon Kabushiki Kaisha, ...

1. An image processing apparatus comprising:a memory containing instructions; and
a controller executing the instructions to operate as:
an acquisition unit configured to acquire a panoramic image; and
a moving image generation unit configured to, based on a plurality of cropped images cropped from the panoramic image by sequentially moving a position of a cropping area, generate a moving image in which the plurality of cropped images is sequentially reproduced,
wherein according to a size in a predetermined direction or an aspect ratio of the panoramic image, the moving image generation unit switches whether to crop the panoramic image with an aspect ratio of the moving image generated by the moving image generation unit, or crop the panoramic image with an aspect ratio different from the aspect ratio of the moving image generated by the moving image generation unit.

US Pat. No. 11,030,715

IMAGE PROCESSING METHOD AND APPARATUS

HUAWEI TECHNOLOGIES CO., ...

1. An image processing method, comprising:obtaining a plurality of first images each based on application of a different set of scaling operations to an original image;
obtaining a plurality of difference maps based on the first images and the original image;
obtaining an edge feature parameter of each first pixel in an intermediate image based on pixel values of pixels in the plurality of difference maps, wherein each edge feature parameter comprises an edge direction and an edge strength, wherein a resolution of the intermediate image is the same as a common resolution of the difference maps that is an integer multiple of resolution of the original image, and wherein the integer multiple is the same as a scaling multiple used when the plurality of difference maps are obtained;
obtaining, based on the edge feature parameter of each first pixel in the intermediate image, an edge feature parameter of each second pixel whose pixel value is unknown in a target image, wherein a resolution of the target image is any multiple of the resolution of the intermediate image; and
determining a pixel value of each second pixel based on the edge feature parameter of each second pixel to obtain the target image.

US Pat. No. 11,030,714

WIDE KEY HASH TABLE FOR A GRAPHICS PROCESSING UNIT

MICROSOFT TECHNOLOGY LICE...

1. A system comprising:a graphics processing unit (GPU) having a plurality of processors in communication with a memory, wherein the GPU includes one or more paired hash tables configured in a multi-level tree configuration, a paired hash table having an upper portion and a lower portion, the upper portion and the lower portion indexed by a same key to two distinct entries of the paired hash table, the upper portion of the paired hash table including a first portion of an address, the lower portion of the paired hash table including a second portion of the address,
wherein the GPU includes a first module including instructions that when executed on the GPU performs a key-value mapping using the one or more paired hash tables to access an original value associated with a wide hash key that exceeds a word size of the GPU memory.
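The paired-table idea, one key indexing two entries that each hold half of a value wider than the machine word, can be sketched with two Python dicts (the 32-bit word size and the class shape are illustrative assumptions):

```python
# A wide value (e.g., a 64-bit address) is split across two sub-tables
# that share one key: the upper table holds the high half, the lower
# table the low half, so each entry fits a narrow machine word.
WORD = 32
MASK = (1 << WORD) - 1

class PairedHashTable:
    def __init__(self):
        self.upper = {}   # first portion of the address
        self.lower = {}   # second portion of the address

    def insert(self, key, address):
        self.upper[key] = (address >> WORD) & MASK
        self.lower[key] = address & MASK

    def lookup(self, key):
        # The same key reaches two distinct entries; reassemble the halves.
        return (self.upper[key] << WORD) | self.lower[key]

t = PairedHashTable()
t.insert("wide-key", 0x1234_5678_9ABC_DEF0)
```

On a GPU the two halves would live in flat device memory and be probed with the same hash, which is what lets a wide key or value exceed the word size without widening the table entries.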

US Pat. No. 11,030,713

EXTENDED LOCAL MEMORY INCLUDING COMPRESSED ON-CHIP VERTEX DATA

Intel Corporation, Santa...

1. An electronic processing system, comprising:an application processor;
persistent storage media communicatively coupled to the application processor; and
a graphics subsystem communicatively coupled to the application processor, the graphics subsystem including:
a local memory;
a memory extender communicatively coupled to the local memory to extend the local memory, wherein the local memory comprises a first decode unit and a second decode unit, the first decode unit provides address decoding for local access to the local memory, and the second decode unit selectively provides address decoding for non-local access to the embedded local memory; and
a scheduler communicatively coupled to the memory extender to determine if a graphics workload utilizes the local memory for local access, wherein a selection is changed from the first decode unit to the second decode unit as an alternate address decoder to provide address decoding for non-local access to the local memory in response to a determination by the scheduler that the local memory is currently unused for a particular context.

US Pat. No. 11,030,712

MULTI-RESOLUTION SMOOTHING

Intel Corporation, Santa...

1. A computing system, comprising:a graphics processor to:
identify, using image data of visual content to be rendered by the graphics processor at different resolutions at different regions of a frame, pixels at a boundary between pixels of the different resolutions; and
selectively smooth, in response to identifying the pixels at the boundary, only the identified pixels at the boundary.
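The boundary-only smoothing can be sketched in NumPy: given a per-pixel resolution tag map, only pixels adjacent to a differently-tagged neighbor are blurred. The 4-neighbor boundary test and 3x3 mean are illustrative assumptions.

```python
import numpy as np

def smooth_boundary(frame, res_map):
    """Smooth only pixels on the boundary between regions rendered at
    different resolutions; res_map holds a per-pixel resolution tag."""
    h, w = frame.shape
    out = frame.astype(np.float64).copy()
    # A pixel is on the boundary if any 4-neighbor has a different tag.
    boundary = np.zeros((h, w), dtype=bool)
    boundary[:-1, :] |= res_map[:-1, :] != res_map[1:, :]
    boundary[1:, :] |= res_map[1:, :] != res_map[:-1, :]
    boundary[:, :-1] |= res_map[:, :-1] != res_map[:, 1:]
    boundary[:, 1:] |= res_map[:, 1:] != res_map[:, :-1]
    # 3x3 mean blur, applied only where boundary is True.
    p = np.pad(frame.astype(np.float64), 1, mode='edge')
    blur = sum(p[dy:dy + h, dx:dx + w]
               for dy in range(3) for dx in range(3)) / 9
    out[boundary] = blur[boundary]
    return out, boundary

frame = np.full((4, 6), 7.0)
res_map = np.zeros((4, 6), dtype=int)
res_map[:, 3:] = 1                    # right half rendered at another resolution
out, boundary = smooth_boundary(frame, res_map)
```

Restricting the blur to the seam keeps the rest of each region sharp, which is the point of smoothing only the identified boundary pixels.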

US Pat. No. 11,030,711

PARALLEL PROCESSING IMAGE DATA HAVING TOP-LEFT DEPENDENT PIXELS

Intel Corporation, Santa...

1. A method of processing image data comprising:receiving the image data in a graphics processing unit, wherein the image data is associated with a graphics application and includes one or more dependent pixels;
identifying a plurality of blocks in the image data;
selecting the plurality of blocks for processing;
partitioning at least one block into an upper left section and a lower right section, wherein the upper left section and the lower right section use a vector reference to process image data in a matrix format; and
processing a plurality of pixels in a wavefront order, wherein the processing the plurality of pixels includes dispatching one or more parallel processing instructions to parallel process the upper left section and the lower right section using the vector reference.
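Wavefront order means sweeping anti-diagonals: every pixel on one diagonal depends only on already-finished pixels above and to the left, so the whole diagonal can be dispatched in parallel. A sequential sketch with a toy top-left-dependent computation (a 2-D prefix sum; the inner `y` loop is what a parallel dispatch would fan out):

```python
import numpy as np

def wavefront_process(img):
    """Process pixels in anti-diagonal (wavefront) order. The toy
    per-pixel rule depends on the top, left, and top-left neighbors,
    which yields the 2-D inclusive prefix sum."""
    h, w = img.shape
    out = np.zeros_like(img, dtype=np.float64)
    for d in range(h + w - 1):                    # one wavefront per diagonal
        for y in range(max(0, d - w + 1), min(h, d + 1)):
            x = d - y
            top = out[y - 1, x] if y else 0.0
            left = out[y, x - 1] if x else 0.0
            tl = out[y - 1, x - 1] if y and x else 0.0
            out[y, x] = img[y, x] + top + left - tl
    return out

img = np.random.default_rng(2).uniform(size=(5, 7))
result = wavefront_process(img)
```

Splitting a block into upper-left and lower-right sections, as the claim does, lets two such wavefronts advance concurrently once the shared boundary is resolved.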

US Pat. No. 11,030,710

SYSTEM AND METHOD FOR RIDESHARING

1. A system, comprising:a computing system including a processor configured to:
receive, from a network device, a request for a ride at a scheduled time;
without user intervention, start to look for a candidate ride to serve the received request at a predetermined time prior to the scheduled time;
without user intervention, book the candidate ride to serve the received request upon determining that the candidate ride can serve the received request;
determine a potential failure by the candidate ride to serve the received request;
look, for a fallback duration of time, for a replacement ride to serve the received request in response to determining the potential failure;
without user intervention, stop looking for the replacement ride in response to reaching an end of the fallback duration, which occurs at a given time in relation to the scheduled time; and
without user intervention, book the replacement ride to serve the received request upon determining that the replacement ride can serve the received request.
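The booking logic reduces to a small decision function. This sketch is illustrative only: the 60-minute search lead and 10-minute fallback cutoff are invented placeholders, since the claim leaves both durations unspecified.

```python
# Times are minutes remaining before the scheduled pickup.
SEARCH_LEAD = 60       # start looking this long before the scheduled time
FALLBACK_CUTOFF = 10   # stop looking for a replacement at T-minus 10

def plan(minutes_to_pickup, candidate_ok, replacement_found_at=None):
    """Return the booking decision for one ride request, with no user
    intervention at any step."""
    if minutes_to_pickup > SEARCH_LEAD:
        return "waiting"                      # too early to search
    if candidate_ok:
        return "booked-candidate"             # candidate can serve the request
    # Candidate may fail: look for a replacement until the cutoff.
    if replacement_found_at is not None and replacement_found_at >= FALLBACK_CUTOFF:
        return "booked-replacement"
    return "stopped-looking"                  # fallback window has elapsed
```

The fixed cutoff bounds how late a rebooking can happen relative to the scheduled time, which is the claim's fallback-duration mechanism.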

US Pat. No. 11,030,709

METHOD AND SYSTEM FOR AUTOMATICALLY CREATING AND ASSIGNING ASSEMBLY LABOR ACTIVITIES (ALAS) TO A BILL OF MATERIALS (BOM)

DIGIBILT, Inc., Skokie, ...

1. A method for automatically creating and assigning assembly labor activities to a bill of materials, comprising:receiving a first request message on a SaaS service on a cloud server network device with one or more processors via a cloud communications network from a network device with one or more processors for automatically creating a Bill of Materials (BOM) for a created Building Information Modeling (BIM) electronic drawing created in a three-dimensional (3D) BIM modeling program on the network device for a desired physical structure;
creating automatically in real-time with the SaaS service on the cloud server network device a BOM in a pre-determined layout in a non-transitory computer readable medium in a database to store digital information for the created BIM electronic drawing for the desired physical structure with steps comprising:
accessing with the SaaS service on the cloud server network device an existing set of a plurality of master tables including a plurality of BOM field identifiers to store digital information from the created BIM electronic drawing for digital representations of a plurality of physical components used to build the desired physical structure,
populating with the SaaS service on the cloud server network device the set of the plurality of master tables with digital information from a link to the created BIM electronic drawing included in the first request message, including the digital information from the digital representations of the plurality of physical components used to build the desired physical structure,
importing with the SaaS service on the cloud server network device from the link to the created BIM electronic drawing from the first request message, a plurality of 3D modeling program tables and filters from the 3D BIM modeling program used to create the created BIM electronic drawing in the 3D BIM modeling program,
filtering with the SaaS service on the cloud server network device the plurality of 3D modeling program tables to exclude non-essential tables and non-essential information and grouping the filtered plurality of 3D BIM modeling tables by instance and type of the plurality of physical components from the created BIM electronic drawing,
comparing with the SaaS service on the cloud server network device a plurality of 3D modeling program field identifiers from the imported and filtered 3D BIM modeling program tables with a plurality of master field identifiers in the plurality of master tables and as long as matches occur,
copying with the SaaS service on the cloud server network device digital information from a matched 3D BIM modeling program table field into a corresponding master table field, and
storing in the database with the SaaS service on the cloud server network device the set of the plurality of master tables including the plurality of BOM field identifiers with digital information populated from the created BIM electronic drawing and copied from the imported and filtered 3D modeling program tables, and
creating with the SaaS service on the cloud server network device from the database, a BOM for the created BIM electronic drawing for the desired physical structure,
the automatically created BOM including an electronic report produced in a standard and repeatable format and including a plurality of digital representations of a plurality of individual components used to build a desired physical structure to a resolution of a smallest individual piece level,
the automatically created BOM including the electronic report produced in the standard and repeatable format with a calculated quantity, purchase cost, installation time, installation cost and waste factor for the plurality of physical components used to build the desired physical structure and including other BOM components which cannot be drawn in the 3D BIM modeling program, including fastening components and covering components for the desired physical structure,
the automatically created BOM reducing risk, reducing costs and ensuring a trackable level of quality for the builders of the desired physical structure, by eliminating any need for manually estimating of any quantity, cost, installation time, installation cost or waste factor for any one of the plurality of physical components used to build the desired physical structure;
sending a first response message including the automatically created BOM from the SaaS service on the cloud server network device to the network device via the cloud communications network;
receiving a second request message on the SaaS service on the cloud server network device via the cloud communications network from the network device to automatically obtain a plurality of Assembly Labor Activity (ALA) templates for the automatically created BOM for the created BIM electronic drawing for the desired physical structure with steps comprising:
selecting automatically with the SaaS service on the cloud server network device a plurality of blank ALA templates for a plurality of desired BOM items from the automatically created BOM in the database in another pre-determined layout in the non-transitory computer readable medium on the cloud server network device to store ALA information for the created BIM electronic drawing for the desired physical structure,
populating with the SaaS service on the cloud server network device the selected plurality of blank ALA templates associated with a set of BOM Items in the database with ALA information from the automatically created BOM for the plurality of desired BOM items,
filtering with the SaaS service on the cloud server network device the plurality of populated ALA templates to include only ALA information for the plurality of desired BOM items, and
grouping the ALA information in the plurality of populated ALA templates into a first quality assurance category including ALA information that has been assigned to any of the plurality of desired BOM items and into a second quality assurance category including ALA information that has not yet been assigned to any BOM item in the automatically created BOM;
sending a second response message including the selected, populated, filtered and grouped plurality of ALA templates for the automatically created BOM for the created BIM electronic drawing for the desired physical structure from the SaaS service on the cloud server network device to the network device via the cloud communications network;
receiving a third request message on the SaaS service on the cloud server network device via the cloud communications network from the network device to automatically create a Critical Path Method (CPM)—Work Time Schedule (WTS) from the automatically created BOM and the automatically selected, populated, filtered and grouped plurality of ALA templates;
automatically creating on the SaaS service on the cloud server network device the CPM-WTS including a cost estimate report for building the desired physical structure by automatically calculating labor costs using labor productivity information and labor rates stored in a database associated with the SaaS service on the cloud communications network;
sending a third response message including the automatically created CPM-WTS from the SaaS service on the cloud server network device to the network device via the cloud communications network;
receiving a plurality of fourth request messages on the SaaS service on the cloud server network device via the cloud communications network from an interactive messaging system configured by the network device for a pre-determined time period,
the pre-determined time period including a time period that the interactive messaging system queries the SaaS service on the cloud server network device via the cloud communications network with a plurality of query messages to request progress information including: (1) which ALA items for the automatically created BOM are scheduled to occur for a specific timeframe in the automatically created CPM-WTS, (2) a scheduled crew size for building trades completing the ALA items, (3) contact information for contractors, subcontractors, suppliers or inspectors that will supply labor, materials or non-construction activities associated with the scheduled ALA items for the automatically created BOM, and (4) electronic instructional text, audio or visual information, or electronic links thereto, for installing the scheduled ALA items for the automatically created BOM in the automatically created CPM-WTS;
sending a plurality of fourth response messages for the pre-determined time period including the determined progress information from the SaaS service on the cloud server network device to the interactive messaging system via the cloud communications network,
the interactive messaging system using the determined progress information in the plurality of fourth response messages received from the SaaS service on the cloud server network device to send one or more electronic messages during the pre-determined time period to one or more other mobile network devices with one or more processors for contractors, subcontractors, suppliers, inspectors or other parties that will supply labor, materials or non-construction activities associated with the scheduled ALA items for the automatically created BOM, and
the one or more electronic messages including which scheduled ALA items for the automatically created BOM are scheduled to occur for a specific timeframe in the automatically created CPM-WTS and electronic instructional text, audio or visual information, or the electronic links thereto, for installing the scheduled ALA items for the automatically created BOM in the automatically created CPM-WTS;
displaying on a display component on the network device from the SaaS service on the cloud server network device via the cloud communications network the automatically created BOM with the scheduled ALA items and automatically created CPM-WTS; and
automatically updating from the SaaS service via the cloud communications network in real-time the automatically created CPM-WTS on the display component on the network device with actual labor productivity information, crew size information and delay information collected from a job site for the desired physical structure to predict any schedule impacts for building the desired physical structure and to optimize information in the automatically created CPM-WTS.
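The compare-and-copy step in the claim above (matching 3D-modeling-program field identifiers against master field identifiers, then copying values only where a match occurs) can be sketched in a few lines. The table contents and field names here are hypothetical, not taken from the patent:

```python
def populate_master_table(master_fields, modeling_table):
    """Copy values from an imported 3D-modeling-program table into a master
    table wherever the field identifiers match; unmatched modeling fields
    (e.g. render-only attributes) are effectively filtered out."""
    master = {field: None for field in master_fields}
    for field, value in modeling_table.items():
        if field in master:        # field identifiers match
            master[field] = value  # copy into the corresponding master field
    return master

# Hypothetical BIM export row for one framing component
bim_row = {"Type": "2x4 Stud", "Length": 2438, "RenderColor": "#aaaaaa"}
master_row = populate_master_table(["Type", "Length", "Material"], bim_row)
print(master_row)  # {'Type': '2x4 Stud', 'Length': 2438, 'Material': None}
```

Fields present only in the master layout (here, `Material`) stay unpopulated, which is where the claim's later BOM steps would fill in components that cannot be drawn in the modeling program.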

US Pat. No. 11,030,708

METHOD OF AND DEVICE FOR IMPLEMENTING CONTAGIOUS ILLNESS ANALYSIS AND TRACKING

1. A method comprising:using a wearable device to detect an illness and/or symptoms of the illness in a user, wherein the wearable device comprises:
at least one body fluid detector configured to detect one or more body fluids used to generate body fluid analysis information, wherein the at least one body fluid detector comprises a moisture sensor including a sweat collection implementation to collect sweat from the user,
at least one microphone configured to receive audio from the user,
at least one temperature sensor configured to measure the temperature of the user, and
at least one motion sensor configured to detect an amount of motion of the user, wherein the wearable device detects the illness and/or symptoms of the illness in the user based on the detected sweat from the user, the received audio from the user, the measured temperature of the user, the detected amount of motion of the user, and a search history of the user,
wherein the detected amount of motion of the user is compared with previously stored amounts of motion to determine lethargy,
wherein the search history of the user is analyzed by detecting specified keywords to determine the illness and/or the symptoms of the illness in the user;
determining, with the wearable device, additional devices of users who come within a specified distance of the wearable device; and
sending an alert regarding a diagnosis and/or analysis of the symptoms of the illness to a central server and/or a cloud device, wherein the central server and/or the cloud device share the diagnosis and/or the analysis of the symptoms of the illness with the additional devices of users.
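A toy version of the multi-signal detection logic claimed above, combining the motion comparison (lethargy), temperature, and search-history keyword checks. The thresholds, keyword list, and 2-of-3 voting rule are invented for illustration; the claim does not specify how the signals are fused:

```python
LETHARGY_FACTOR = 0.5                            # assumed: half of baseline motion
SYMPTOM_KEYWORDS = {"fever", "cough", "chills"}  # hypothetical keyword list

def detect_illness(motion, baseline_motions, temperature, search_terms):
    """Combine motion, temperature and search-history signals, per the claim."""
    baseline = sum(baseline_motions) / len(baseline_motions)
    lethargic = motion < baseline * LETHARGY_FACTOR        # vs. stored amounts
    feverish = temperature >= 38.0                         # degrees Celsius
    searched = bool(SYMPTOM_KEYWORDS & set(search_terms))  # keyword detection
    return sum([lethargic, feverish, searched]) >= 2       # simple 2-of-3 vote

print(detect_illness(200, [900, 1000, 1100], 38.5, ["fever", "remedies"]))  # True
```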

US Pat. No. 11,030,707

INTEGRATING AN APPLICATION INTO OPERATING SYSTEM COMPONENTS OF A MOBILE COMPUTING PLATFORM

Microsoft Technology Lice...

15. A computer system, comprising:a processor; and
a storage device holding an operating system executable by the processor, an application executable by the processor, user data managed by the operating system, instructions for the application to be integrated with the operating system, wherein operating system instructions of the operating system are executable by the processor to:
recognize a security policy for governing access rights, by the application, to the user data managed by the operating system;
recognize a first identifying data provided by the application indicating an entity that matches a second identifying data of the entity, wherein the first identifying data provided by the application that matches the second identifying data of the entity determines a specific accessible portion of the user data according to the security policy, wherein the first identifying data is matched with the second identifying data by computing a first value for the first identifying data provided by the application and comparing the first computed value to a second computed value associated with the entity;
selectively permit the application to access the user data managed by the operating system, by the operating system permitting the application to access the specific accessible portion of the user data according to the security policy, and the operating system denying the application access to other portions of the user data, the specific accessible portion of the user data including at least contact data for the entity; and
present an operating system component including application content associated with the contact data for the entity, wherein the application content includes a user interface control that launches the application in response to interaction with the user interface control.
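The matching step in this claim compares computed values for the two pieces of identifying data rather than the raw identifiers. A hash is one natural choice of computed value; the normalization shown (trim and lowercase) is an assumption, not something the claim specifies:

```python
import hashlib

def id_value(identifying_data: str) -> str:
    """Compute a comparable value for identifying data; a SHA-256 digest over
    normalized input is one plausible reading of 'computed value'."""
    return hashlib.sha256(identifying_data.strip().lower().encode()).hexdigest()

def entity_matches(app_provided: str, os_stored: str) -> bool:
    # The OS compares the two computed values rather than the raw strings.
    return id_value(app_provided) == id_value(os_stored)

print(entity_matches("Alice@example.com", "alice@example.com"))  # True
```

On a match, the operating system would unlock only the entity's portion of the user data (e.g. that contact's record), per the security policy.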

US Pat. No. 11,030,706

SYSTEMS AND METHODS OF ACCESS CONTROL AND SYSTEM INTEGRATION

Xero Limited, Wellington...

1. A method comprising:at an accounting system, receiving, from a financial system, an authorization to link a financial account at the financial system with a bookkeeping account at the accounting system, the accounting system maintaining bookkeeping accounts for a plurality of organizations, the authorization identifying a user associated with a first organization of the plurality of organizations;
based on the receiving of the authorization, linking the bookkeeping account associated with the first organization to the authorized financial account;
verifying, by the accounting system, that the financial system supports a third-party payment service for the financial account;
submitting, from the accounting system to the third-party payment service, a batch file comprising a batch of payments drawn on the financial account, the batch of payments comprising a plurality of account identifiers for a plurality of financial accounts receiving the payments in the batch of payments;
receiving, by the accounting system from the financial system, a confirmation that all payments in the batch of payments completed successfully; and
updating, by the accounting system and in response to the confirmation, accounting data of the bookkeeping account to show that the payments in the batch of payments were made.
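The submit-then-confirm flow above can be sketched as follows. The comma-separated batch format and the ledger shape are invented, since the claim does not specify the third-party payment service's file format:

```python
from dataclasses import dataclass

@dataclass
class Payment:
    account_id: str   # identifier of the receiving financial account
    amount: int       # cents

def build_batch_file(payments):
    """Serialize a batch of payments into a simple line-based batch file."""
    return "\n".join(f"{p.account_id},{p.amount}" for p in payments)

def apply_confirmation(ledger, payments, all_succeeded):
    """Update the bookkeeping account only after the financial system confirms
    that all payments in the batch completed successfully."""
    if all_succeeded:
        for p in payments:
            ledger.append(("payment", p.account_id, p.amount))
    return ledger

batch = [Payment("AC-001", 12500), Payment("AC-002", 900)]
print(build_batch_file(batch))
```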

US Pat. No. 11,030,705

QUICK SERVE TAX APPLICATION

Intuit Inc., Mountain Vi...

1. A method, comprising:receiving an email from a user device, where the email comprises an image of a third-party tax form;
importing information from the image of a third-party tax form associated with a taxpayer;
assigning, based at least in part on the information, the taxpayer to one of a plurality of user classes,
where the user class of the taxpayer determines at least one subsequent interaction between the taxpayer and an online tax application, and
where the assigning is performed by a trained binomial or trained multinomial classifier that uses at least one of logistic regression, random forest, support vector machines, naïve Bayes, and stochastic gradient descent;
pre-populating one or more fields of an online tax return for the taxpayer based on at least some of the information; and
sending a subsequent email to the user device with a notification and a link,
where the notification indicates that the online tax return is pre-populated and is available for review on a webpage upon selection of the link, and
where the online tax return is presented in a view in a graphical user interface displayed by the online tax application upon selection of the link;
where each of the method steps is performed by a hardware processor executing software instructions stored in a memory.
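The claim names logistic regression as one admissible classifier for the class-assignment step. A minimal hand-rolled binomial version, with untrained illustrative weights and hypothetical user-class labels and features:

```python
import math

def logistic(z):
    return 1.0 / (1.0 + math.exp(-z))

def assign_user_class(features, weights, bias=0.0):
    """Binomial classifier sketch: score taxpayer features and map the
    probability to one of two hypothetical user classes. The weights are
    illustrative, not trained values."""
    z = sum(w * x for w, x in zip(weights, features)) + bias
    return "simple-return" if logistic(z) >= 0.5 else "full-interview"

# hypothetical features: [number of W-2 forms, has itemized deductions (0/1)]
print(assign_user_class([1, 0], weights=[0.8, -2.0], bias=0.5))  # simple-return
```

The assigned class would then drive the subsequent interactions with the online tax application, as the claim recites.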

US Pat. No. 11,030,704

AUTOMATED DAMAGE ASSESSMENT AND CLAIMS PROCESSING

Allstate Insurance Compan...

1. A method comprising:directing, at a claims processor associated with an enhanced claims processing server, a plurality of cameras to capture images of an insured item;
determining, at the claims processor, at least one damaged area of the insured item by applying an edge filter to the captured images of the insured item and a reference image of another item of a same type as the insured item and subtracting image data of the insured item from the reference image of the other item of the same type as the insured item to isolate an image portion containing only damage to the insured item;
further processing the image portion containing only the damage to the insured item to fill any missing links in edges of the at least one damaged area;
based on the determination, capturing additional data of the at least one damaged area, the additional data including raw depth data;
mapping, based on the additional data and the further processed image portion, a depth of damage to the at least one damaged area and identifying, based on the mapping, a type of damage to the at least one damaged area; and
outputting, through a communication module associated with the enhanced claims processing server, damage information to an insurance consumer associated with the insured item, the damage information including the depth of damage and type of damage.
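The subtraction step (differencing the insured-item image against a reference image of an undamaged item of the same type, to isolate an image portion containing only damage) reduces to a per-pixel difference with a noise threshold. Grayscale nested lists stand in for real image data, and the threshold is illustrative:

```python
def isolate_damage(insured, reference, threshold=30):
    """Subtract the insured-item image from a reference image of an undamaged
    item of the same type, keeping only pixels that differ by more than the
    noise threshold; nonzero pixels mark the damaged area."""
    return [
        [abs(a - b) if abs(a - b) > threshold else 0
         for a, b in zip(row_i, row_r)]
        for row_i, row_r in zip(insured, reference)
    ]

reference = [[200, 200], [200, 200]]
insured   = [[200, 90], [200, 200]]   # one damaged pixel
print(isolate_damage(insured, reference))  # [[0, 110], [0, 0]]
```

The claim's edge filter and gap-filling would then run on this isolated portion before depth mapping.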

US Pat. No. 11,030,703

SYSTEM AND METHOD FOR MOBILE DEVICE DISABLING AND VERIFICATION

HARTFORD FIRE INSURANCE C...

1. A system for providing for implementation and verification of use of a mobile device disabling technology in a vehicle, comprising:a mobile device, wherein the mobile device is configured to:
execute instructions of an installed mobile device disabling application, the mobile device disabling application: (a) causing the mobile device to communicate with a vehicle computer system; and (b) disabling one or more communications capabilities of the mobile device; and
execute instructions of an installed verification application, the installed verification application: (a) causing the mobile device to compare, to verification rules, results of checking for one or more empty logs, including at least an empty text message log, on the mobile device, to verify absence of tampering; and (b) causing the mobile device to transmit results of the verification, the transmitted results of the verification, responsive to determining that the text message log is empty, constituting an indication of tampering;
a central computer system, in communication with the mobile device, comprising:
one or more data storage devices storing data indicative of: remote users; a selected mobile device disabling technology associated with each of remote users and third parties; a plurality of mobile device disabling technologies; a plurality of discount levels; and correlations between each of the mobile device disabling technologies and the plurality of discount levels; and
a rules processor configured to:
initiate a communication to a third party system having data indicative of whether one or more of the mobile device disabling technologies is activated or has been disabled;
correlate a remote user's selected mobile device disabling technology to one of the plurality of discount levels;
determine a premium for a risk coverage policy based on the correlated discount level;
transmit the determined premium to the remote user;
receive a result of the verification from the mobile device;
based on the result of the verification, maintain the determined premium, or modify the determined premium by discontinuing the determined premium or applying a different one of the plurality of discount levels; and
transmit, by a communications interface to the mobile device, data indicative of a modified discount level.
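The rules processor's correlation of a disabling technology to a discount level, and the discontinuation of that discount on a failed verification, can be sketched with a plain lookup table. The technology names and discount values are invented:

```python
# Hypothetical correlations between disabling technologies and discount levels
DISCOUNT_LEVELS = {"app-based": 0.05, "hardware": 0.10, "none": 0.0}

def determine_premium(base_premium, technology, verification_passed):
    """Correlate the user's selected disabling technology to a discount level,
    then discontinue the discount if verification indicates tampering."""
    discount = DISCOUNT_LEVELS.get(technology, 0.0)
    if not verification_passed:   # e.g. the log check signaled tampering
        discount = 0.0            # modify premium by discontinuing the discount
    return round(base_premium * (1 - discount), 2)

print(determine_premium(1000.0, "hardware", True))   # 900.0
print(determine_premium(1000.0, "hardware", False))  # 1000.0
```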

US Pat. No. 11,030,702

MOBILE INSURANCE PLATFORM SYSTEM

PROGRESSIVE CASUALTY INSU...

1. A non-transitory machine-readable medium encoded with machine-executable instructions, wherein execution of the machine-executable instructions by a processor determines a cost of vehicle insurance comprising:enabling a telematics native application associated with a vehicle in a mobile device;
receiving telematics data from a plurality of local and remote sensors through a personal area network generated by the mobile device and in communication with the vehicle and a local network;
the plurality of local and remote sensors includes a first sensor that is a unitary part of the mobile device;
monitoring mobile client data comprising an amount of time a mobile user spends on the mobile device, a number of messages processed through the mobile device, or a number of Web sites visited through the mobile device while the mobile user is engaged in an insured activity;
generating a corrective action alert triggered by the telematics data that represents a status of the insured activity that the mobile device is monitoring;
processing mobile content through an adaptive transmission controller configured to optimize the mobile content for a plurality of screen sizes, including a screen size and a resolution, and an operating system of the mobile device; and
adjusting an insurance policy premium or an insurance policy discount in response to the mobile client data and the telematics data to reward the mobile user engaged in the insured activity.
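The premium adjustment from the monitored mobile-client data reduces to a function of the usage counts. The per-unit surcharges below are invented for illustration; the claim does not disclose an adjustment formula:

```python
def adjust_premium(base, minutes_on_device, messages, websites):
    """Adjust a premium from monitored mobile-client data: device time,
    messages processed, and Web sites visited during the insured activity.
    The surcharge rates are hypothetical."""
    surcharge = 0.5 * minutes_on_device + 0.25 * messages + 0.1 * websites
    return round(base + surcharge, 2)

print(adjust_premium(100.0, minutes_on_device=10, messages=4, websites=0))  # 106.0
```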

US Pat. No. 11,030,701

SYSTEMS AND METHODS FOR ELECTRONICALLY MATCHING ONLINE USER PROFILES

STATE FARM MUTUAL AUTOMOB...

1. A matching computer system for determining a trust score for a user based upon at least telematics data, social media data, and insurance data, the matching computer system including at least one processor in communication with at least one memory device, wherein the at least one processor is configured to:register, with the matching computer system, one or more users;
receive consent from the one or more users to capture the social media data associated with social media activities of each respective user, and the telematics data associated with usage of one or more items that each respective user is interested in at least one of renting and offering for rent;
collect the social media data and the insurance data from each registered user, wherein the social media data is collected from at least one social media platform and the insurance data is collected from at least one insurance provider server;
collect, via one or more sensors associated with the one or more items, the telematics data;
store, within the at least one memory device, the social media data, the insurance data, and the telematics data;
retrieve the telematics data, the social media data, and the insurance data associated with each registered user;
apply a scoring algorithm to each respective telematics data, each respective social media data, and each respective insurance data, wherein the scoring algorithm is automatically updated and refined, by the matching computer system, by utilizing a feedback data transmission from a user computing device associated with each registered user, and wherein the feedback data transmission is associated with at least one previous transaction conducted by each registered user; and
determine a trust score for each registered user based, at least in part, upon the application of the scoring algorithm to each respective telematics data, each respective social media data, and each respective insurance data, wherein the trust score represents a level of trustworthiness of the user.
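A weighted-sum stand-in for the claimed scoring algorithm, plus a feedback nudge for the refinement step. The sub-score scale, weights, and feedback rule are assumptions, not the patented algorithm:

```python
def trust_score(telematics, social, insurance, weights=(0.5, 0.2, 0.3)):
    """Apply a scoring algorithm to the three data sources; each input is a
    0-100 sub-score derived from the respective data (weights illustrative)."""
    wt, ws, wi = weights
    return round(wt * telematics + ws * social + wi * insurance, 1)

def refine_weights(weights, feedback):
    """Nudge the weights with transaction feedback (+1 good, -1 bad) as a
    stand-in for the claim's feedback-driven refinement."""
    wt, ws, wi = weights
    delta = 0.01 * feedback
    return (wt + delta, ws, wi - delta)

print(trust_score(80, 60, 90))  # 79.0
```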

US Pat. No. 11,030,700

SYSTEMS AND METHODS FOR SURFACE SEGMENT DATA

The Travelers Indemnity C...

1. A system, comprising:a specially-programmed electronic controller device;
an electronic telematics device coupled to a vehicle and in communication with the specially-programmed electronic controller device via a wireless electronic network; and
a non-transitory electronic memory device in communication with the specially-programmed electronic controller device, the non-transitory electronic memory device storing (i) risk parameter data in relation to surface segment types, and (ii) specially-programmed instructions that when executed by the specially-programmed electronic controller device result in:
identifying, based on data received from the electronic telematics device and via the wireless electronic network, first information descriptive of an amount of time the vehicle spends on a first type of surface segment, the first information being recorded by the electronic telematics device based on a measuring, by the electronic telematics device, of a first physical parameter that is indicative of the first type of surface segment;
identifying, based on data received from the electronic telematics device and via the wireless electronic network, second information descriptive of an amount of time the vehicle spends on a second type of surface segment, the second information being recorded by the electronic telematics device based on a measuring, by the electronic telematics device, of a second physical parameter that is indicative of the second type of surface segment;
identifying, by accessing the risk parameter data stored in the non-transitory electronic memory device, a first risk metric of the first type of surface segment;
identifying, by accessing the risk parameter data stored in the non-transitory electronic memory device, a second risk metric of the second type of surface segment;
calculating, based on (i) the amount of time the vehicle spends on the first type of surface segment and (ii) the first risk metric, a first risk exposure;
calculating, based on (i) the amount of time the vehicle spends on the second type of surface segment and (ii) the second risk metric, a second risk exposure; and
calculating, based at least in part on the first and second risk exposures, an insurance rate for the vehicle.
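The exposure arithmetic in this claim is explicit: each surface type contributes time-on-segment multiplied by its risk metric, and the rate is derived from the combined exposures. A sketch, with an assumed linear scaling rule and invented risk metrics:

```python
def insurance_rate(base_rate, segments):
    """Per the claim: each surface type's risk exposure is its time-on-segment
    times its risk metric; the rate is derived from the summed exposures.
    The linear scaling rule here is an assumption."""
    total_exposure = sum(hours * risk for hours, risk in segments)
    return round(base_rate * (1 + total_exposure / 100), 2)

# (hours, risk metric) per surface segment type -- illustrative values
segments = [(40, 0.2),   # paved: 40 h at risk metric 0.2 -> exposure 8
            (10, 1.5)]   # gravel: 10 h at risk metric 1.5 -> exposure 15
print(insurance_rate(500.0, segments))  # 615.0
```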

US Pat. No. 11,030,699

BLOCKCHAIN CONTROLLED MULTI-CARRIER AUCTION SYSTEM FOR USAGE-BASED AUTO INSURANCE

State Farm Mutual Automob...

1. A computer system for generating and managing usage-based insurance contracts using blockchains, the computer system including at least one processor in communication with at least one memory device, the at least one processor is programmed to:store a smart insurance contract for a current trip, wherein the smart insurance contract is for insuring at least one of a rider of a vehicle and the vehicle itself during the current trip, wherein the smart insurance contract includes one or more terms, wherein the smart insurance contract is stored in a first block in a blockchain structure along with a digital signature based on a vehicle identifier, and wherein a plurality of nodes each store a copy of the smart insurance contract in the blockchain structure;
receive, from the rider, a requested modification of at least one of the one or more terms of the smart insurance contract;
transmit the requested modification to an insurance server associated with the smart insurance contract;
receive a response to the requested modification from the insurance server; and
store the response in a second block of the blockchain structure along with the smart insurance contract to facilitate providing usage-based trip insurance for the at least one of the rider and the vehicle in a transparent manner, wherein the second block is subsequent to the first block, and wherein the response includes the digital signature and the vehicle identifier to associate the response with the smart insurance contract.
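The two-block structure above (the smart contract in a first block, the response chained into a subsequent block carrying the same vehicle identifier) can be sketched with stdlib hashing. The plain SHA-256 "signature" is a simplification of a real digital signature, and the block shape is hypothetical:

```python
import hashlib, json

def make_block(prev_hash, payload, vehicle_id):
    """Build one block: link to the previous block's hash and sign the payload
    with a digest over the vehicle identifier (a stand-in for the claim's
    digital signature based on a vehicle identifier)."""
    digest = hashlib.sha256(
        (vehicle_id + json.dumps(payload, sort_keys=True)).encode()
    ).hexdigest()
    return {"prev": prev_hash, "payload": payload,
            "vehicle_id": vehicle_id, "sig": digest}

contract = make_block("0" * 64, {"terms": {"per_mile": 0.05}}, "VIN123")
response = make_block(contract["sig"],
                      {"modification": "per_mile", "approved": True}, "VIN123")
print(response["prev"] == contract["sig"])  # True: second block follows first
```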

US Pat. No. 11,030,698

SERVER FOR REAL-TIME ACCIDENT DOCUMENTATION AND CLAIM SUBMISSION

STATE FARM MUTUAL AUTOMOB...

1. An insurance claim processing server comprising:a non-transitory computer readable media having stored thereon an insurance claim processing application; and
a processor, wherein said processor upon execution of the insurance claim processing application, is configured to:
initiate an insurance claim based on a vehicle and a claim location;
receive, over a network, video images associated with the insurance claim from a user's mobile device, the video images comprising the vehicle that is the subject of the insurance claim captured by the user's mobile device;
analyze and combine the video images comprising the vehicle with stored data regarding the claim location and generate a model of a physical scene associated with the insurance claim based on the received video images comprising the vehicle, the model of the physical scene including a model of the vehicle, the model of the physical scene being a three-dimensional rendering of the claim location and the model of the vehicle being a three-dimensional rendering of the vehicle;
analyze, using a feature extraction method comprising at least an edge detection algorithm or a corner detection algorithm, the model of the physical scene including the model of the vehicle and automatically identify a damaged portion of the vehicle;
determine, using at least the edge detection algorithm or the corner detection algorithm, a portion of the vehicle where there is insufficient data from the received video images to identify a damage;
automatically add (1) a damage tag to the model of the physical scene including the model of the vehicle thereto, the damage tag indicating the automatically identified damaged portion of the vehicle and (2) an insufficient-data tag to the model of the vehicle, the insufficient-data tag indicating the portion of the vehicle where there is insufficient data from the received video images to identify the damage;
transmit, over the network to the user's mobile device, the model of the physical scene including the model of the vehicle, including the automatically added damage tag and the insufficient-data tag, and a notification instructing the user to provide replacement video images corresponding to at least the portion of the vehicle where the insufficient-data tag was automatically added;
receive, over the network from the user's mobile device, damage-related data from the user's mobile device regarding at least the portion of the vehicle where there was insufficient data from the received video images to identify the damage as indicated by the insufficient-data tag;
automatically update the model of the physical scene including the model of the vehicle based on the damage-related data received from the user's mobile device regarding at least the portion of the vehicle where there was insufficient data from the received video images to identify the damage as indicated by the insufficient-data tag;
analyze features extracted using the feature extraction method from the model of the physical scene including the model of the vehicle and estimate a force of impact on an occupant of the vehicle;
identify a hidden damage to the vehicle and a potential injury to the occupant based on the estimated force of impact estimated from the model of the vehicle; and
validate a medical claim of the occupant associated with the insurance claim based on the potential injury to the occupant identified.
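The automatic tagging step attaches one of two tags to each portion of the vehicle model: a damage tag where damage was identified, and an insufficient-data tag where the video images could not support a determination. A minimal sketch with hypothetical part names:

```python
def tag_model(model_parts, damaged, insufficient):
    """Attach damage and insufficient-data tags to portions of the vehicle
    model, mirroring the claim's automatic tagging; untagged parts are
    considered undamaged with adequate data."""
    tags = {}
    for part in model_parts:
        if part in damaged:
            tags[part] = "damage"
        elif part in insufficient:
            tags[part] = "insufficient-data"
    return tags

print(tag_model(["hood", "door", "bumper"],
                damaged={"door"}, insufficient={"bumper"}))
# {'door': 'damage', 'bumper': 'insufficient-data'}
```

Parts carrying the insufficient-data tag are exactly those for which the server would request replacement video from the user's mobile device.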

US Pat. No. 11,030,697

SECURE DOCUMENT EXCHANGE PORTAL SYSTEM WITH EFFICIENT USER ACCESS

Maximus, Inc., Reston, V...

1. A method of managing a secure exchange portal system for medical review, comprising:receiving a data stream from at least a first source, wherein the data stream includes documents having data for a medical claim being evaluated under a medical review, and wherein the data stream includes at least two documents;
dividing the received data stream into a plurality of chunks, wherein the chunks include sets of documents having the data for the medical claim being evaluated;
individually encrypting at least two of the plurality of chunks, wherein individually encrypting each of the chunks includes generating one or more encryption keys for each chunk;
creating metadata for the at least two documents, wherein the metadata includes data for the encryption keys, wherein creating the metadata includes creating a first set of metadata and a second set of metadata, the first set of metadata being used to restrict access to at least one of the chunks to a first group of users and the second set of metadata being used to restrict access to at least one of the chunks to a second group of users, wherein the first group of users is different from the second group of users, and wherein the first group of users accesses a specific portion of the at least one of the documents using the first set of metadata; and
evaluating the medical claim based on the data for the medical claim in at least one of the plurality of chunks.
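Per-chunk key generation with group-restricted metadata can be sketched as below. The XOR "cipher" is a placeholder for illustration only (it is not secure, and the patent does not prescribe a cipher); a real system would use an authenticated cipher:

```python
import secrets

def xor_bytes(data, key):
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def encrypt_chunks(chunks, allowed_group):
    """Individually encrypt each chunk with its own freshly generated key and
    record the key in metadata that restricts access to one user group."""
    encrypted, metadata = [], []
    for chunk in chunks:
        key = secrets.token_bytes(16)              # one key per chunk
        encrypted.append(xor_bytes(chunk, key))
        metadata.append({"key": key, "allowed_group": allowed_group})
    return encrypted, metadata

def decrypt_chunk(ciphertext, meta, user_group):
    if user_group != meta["allowed_group"]:        # metadata gates access
        raise PermissionError("group not authorized for this chunk")
    return xor_bytes(ciphertext, meta["key"])

docs = [b"claim form", b"medical record"]
enc, meta = encrypt_chunks(docs, "reviewers")
print(decrypt_chunk(enc[1], meta[1], "reviewers"))  # b'medical record'
```

Two metadata sets with different `allowed_group` values would realize the claim's first and second user groups.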

US Pat. No. 11,030,696

METHODS OF PROVIDING INSURANCE SAVINGS BASED UPON TELEMATICS AND ANONYMOUS DRIVER DATA

STATE FARM MUTUAL AUTOMOB...

1. A computer-implemented method for using anonymous driver data to adjust driving risk, comprising:determining, by one or more processors, a plurality of anonymous drivers of a plurality of drivers based at least in part upon driver experience data;
collecting, at the one or more processors via a communication network from a plurality of mobile computing devices associated with the plurality of anonymous drivers, anonymous driver data associated with driving behavior of the plurality of anonymous drivers, the anonymous driver data including a plurality of data combinations of (i) geolocation data and (ii) anonymous driver telematics data associated with anonymous driver behavior;
identifying, by the one or more processors, a plurality of road segments based upon the geolocation data for each of the plurality of data combinations;
determining, by the one or more processors, one or more anonymous driver behaviors based upon the anonymous driver telematics data for each of the plurality of data combinations, wherein the one or more anonymous driver behaviors indicate one or more of the following: speed, braking, or acceleration;
connecting, at the one or more processors via the communication network from a mobile computing device associated with an insured driver, to the mobile computing device associated with the insured driver through an application executed on the mobile computing device associated with the insured driver;
collecting, by the one or more processors via the communication network from the mobile computing device associated with the insured driver, insured driving behavior data associated with the driving behavior of the insured driver, wherein the insured driving behavior data is associated with one or more of the plurality of road segments and includes telematics data generated by one or more sensors of the mobile computing device associated with the insured driver;
determining, by the one or more processors, one or more insured driver behaviors based upon the telematics data associated with each of the one or more of the plurality of road segments and indicating one or more of the following: speed, braking, or acceleration;
determining, by the one or more processors, a driving risk score associated with the insured driver by comparing the one or more anonymous driver behaviors with the one or more insured driver behaviors for each of the one or more of the plurality of road segments, wherein comparing the one or more anonymous driver behaviors with the one or more insured driver behaviors comprises comparing one or more of the following: speed, braking, or acceleration;
determining, by the one or more processors, an adjustment to an insurance policy associated with the insured driver based upon the determined driving risk score;
adjusting, by the one or more processors, the insurance policy according to the adjustment to the insurance policy; and
sending, via the communication network, an indication of the adjustment to the insurance policy to the mobile computing device associated with the insured driver for presentation to the insured driver.
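
The per-segment comparison of anonymous and insured driver behaviors can be sketched as below. The mean-relative-deviation metric, function name, and sample segment values are illustrative assumptions, not State Farm's actual scoring model.

```python
def driving_risk_score(anonymous_baseline, insured_behavior):
    """Compare insured behavior against the anonymous baseline per road segment.

    Both inputs map road_segment -> {"speed": ..., "braking": ..., "acceleration": ...};
    the score is the mean relative deviation from the anonymous baseline.
    """
    deviations = []
    for segment, behavior in insured_behavior.items():
        baseline = anonymous_baseline[segment]
        for metric in ("speed", "braking", "acceleration"):
            deviations.append(abs(behavior[metric] - baseline[metric]) / baseline[metric])
    return sum(deviations) / len(deviations)

# One segment where the insured driver exceeds the anonymous average speed by 10%.
baseline = {"I-55/mi-12": {"speed": 60.0, "braking": 2.0, "acceleration": 1.5}}
driver   = {"I-55/mi-12": {"speed": 66.0, "braking": 2.0, "acceleration": 1.5}}
score = driving_risk_score(baseline, driver)
```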

US Pat. No. 11,030,695

METHOD AND SYSTEM RELATING TO SOCIAL MEDIA TECHNOLOGIES CONFIGURED FOR PERMISSIONED ACTIVITIES ON A NETWORK PLATFORM HOSTING SHAREHOLDER FORUM AND MEETINGS

Broadridge Financial Solu...

1. A method comprising:receiving, by a processor, an identifying data concerning an ownership or management of at least one brokerage account or at least one security by at least one user;
wherein the identifying data of the at least one user comprises a proxy control number associated with the at least one user and an investor type identifier of the at least one user;
obtaining, by the processor, based on the identifying data, positional information associated with the at least one user;
validating, by the processor, the at least one user as at least one permissioned user based at least in part on the positional information associated with the at least one user;
providing, by the processor and over a computer network, to the at least one permissioned user, a permissioned access on a computing device associated with the permissioned user to a network platform hosting a shareholder forum and a presentation of a shareholder meeting;
generating, by a processor, within the network platform, a representation of the positional information associated with the at least one permissioned user without providing personal identifying information of the at least one permissioned user, so as to allow at least one other user accessing the network platform to observe the representation of the positional information of the at least one permissioned user, who remains anonymous;
permitting, by the processor, the at least one permissioned user to perform at least one activity within the network platform based at least in part on the positional information of the at least one permissioned user, the investor type identifier of the at least one permissioned user, or both; and
wherein the at least one activity is at least one of i) interacting with the at least one other user within the network platform and ii) accessing a particular content within the network platform.

US Pat. No. 11,030,694

PROCESS FOR PROVIDING TIMELY QUALITY INDICATION OF MARKET TRADES

NYSE Group, Inc., New Yo...

1. A computer system, comprising:one or more processors configured to execute machine-readable instructions;
a message intercept module operatively coupled to the one or more processors, the message intercept module in communication with a trader system and an executing venue system via one or more communication links and configured to:
detect electronic transmissions between the trader system and the executing venue system, and
copy at least one of electronic order information and order execution details from the electronic transmissions without impeding any of said electronic transmissions;
an execution quality calculation module (EQCM) operatively coupled to the one or more processors, the EQCM in communication with the message intercept module and configured to:
receive the electronic order information and the order execution details from the message intercept module,
receive real-time market data contemporaneously with the order execution details, the real-time market data originating from an external data source,
determine an execution quality based on the real-time market data and the order execution details, and
transmit the execution quality to said trader system; and
an interactive graphic user interface (GUI) operatively coupled to the one or more processors and configured to generate a display comprising:
a selectable first region configured to display the execution quality in real-time in a graphical format simultaneously with the electronic order information, and
a second region and at least one input area on the display, wherein responsive to a selection of the selectable first region, the second region displays one or more indications of the execution quality of the electronic order.
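
The EQCM's core calculation can be illustrated with a common proxy: the fill price compared against the midpoint of the contemporaneous quote. This is only one possible execution-quality measure; the function name and penny values are assumptions for the example, not NYSE's exact metric.

```python
def execution_quality(side: str, fill_price: float, bid: float, ask: float) -> float:
    """Signed price improvement versus the contemporaneous midpoint.

    Positive means a better-than-mid fill for the given side ("buy" or "sell").
    """
    mid = (bid + ask) / 2.0
    return mid - fill_price if side == "buy" else fill_price - mid

# A buy filled a penny below the 10.02 midpoint shows 0.01 of price improvement.
improvement = execution_quality("buy", fill_price=10.01, bid=10.00, ask=10.04)
```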

US Pat. No. 11,030,693

SYSTEM AND METHOD FOR MATCHING TRADING ORDERS BASED ON PRIORITY

BGC PARTNERS, INC., New ...

1. A system comprising:at least one processor of at least one computer; and
a memory including instructions which, when executed by the processor, control to:
receive, over a communication network, a first trading order for a trading product via a computing device of a first trader, in which the first trading order includes a display portion and a reserve portion, and wherein the display portion of the first trading order is displayed on at least one interface screen of at least one display device of at least one first trading workstation,
receive, over the communication network, a second trading order for the trading product via a computing device of a second trader, in which the second trading order is received subsequently to the first trading order, in which the second trading order includes a display portion and a reserve portion, and wherein the display portion of the second trading order is displayed on at least one interface screen of at least one electronic display device of at least one second trading workstation,
receive, over the communication network, from a computing device of a counterparty trader an electronic message comprising a counterorder for the trading product;
use the counterorder to automatically fill the display portion of the first trading order;
use the counterorder to automatically fill the display portion of the second trading order;
after automatically filling the display portion of the second trading order and based on the first trading order being received before the second trading order, exclusively offer, over the communication network, through a user interface of a remote client device of a plurality of remote client devices configured to communicate trading commands to the system, at least a portion of the counterorder to the first trader for a configurable period of time without offering any portion of the counterorder to the second trader until at least the configurable period of time expires and prevent the reserve portion of the first trading order from being disclosed to given traders with the exception of the counterparty trader;
receive from the first trader an acceptance of at least a part of the at least portion of the counterorder during the exclusive offer period of time;
responsive to receiving from the first trader an acceptance of at least a part of the at least portion of the counterorder during the exclusive offer period of time, extend the exclusive offer period of time; and
exclusively offer a second part of a remaining portion of the counterorder to the first trader for the extended exclusive offer period of time.
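
The priority rule in the claim can be sketched as follows: display portions fill in arrival order, and any remaining counterorder quantity is offered exclusively to the earliest trader. The data model and quantities are illustrative; the exclusive-period timer and reserve refresh are omitted.

```python
from dataclasses import dataclass

@dataclass
class Order:
    trader: str
    display: int
    reserve: int  # never disclosed to other traders
    arrival: int

def match_counterorder(orders, counter_qty):
    """Fill display portions in arrival order; offer any remainder exclusively
    to the earliest trader (a sketch of the priority rule only)."""
    fills = {}
    for order in sorted(orders, key=lambda o: o.arrival):
        take = min(order.display, counter_qty)
        fills[order.trader] = take
        counter_qty -= take
    exclusive_to = min(orders, key=lambda o: o.arrival).trader if counter_qty > 0 else None
    return fills, counter_qty, exclusive_to

first = Order("first", display=100, reserve=400, arrival=1)
second = Order("second", display=100, reserve=300, arrival=2)
fills, remainder, exclusive_to = match_counterorder([first, second], counter_qty=300)
```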

US Pat. No. 11,030,692

SYSTEM AND METHOD FOR A SEMI-LIT MARKET

IEX Group, Inc., New Yor...

1. A computer-implemented method for electronic trading in a semi-lit market environment, the method comprising:maintaining an order book in an electronic trading system, said electronic trading system comprising: (a) a matching engine configured to receive trading orders via a communication interface or client gateway and to execute said trading orders, (b) a storage medium configured to record said order book, and (c) an interface for communicating electronic data to trade participants;
imposing an additional latency on incoming trading orders such that an arrival or processing of said trading orders at the matching engine is delayed by a period of time;
regulating, by at least one computer processor, a conditional distribution of electronic information of said order book to the trade participants by:
determining, based on a comparison of at least one price parameter of an order submitted by a trade participant with respect to a predetermined threshold price point, whether to permit the trade participant to access a selected portion of said order book; and
selectively disclosing order book data to or withholding order book data from the trade participant, on the interface for communicating electronic data to trade participants, based on the step of determining, thereby facilitating a semi-lit market environment in said electronic trading system.
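
The two mechanisms above, the imposed latency and the conditional disclosure, can be sketched independently. The microsecond delay, threshold rule, and data shapes are assumptions for illustration, not IEX's actual parameters.

```python
def delayed_arrivals(sent_orders, delay_us):
    """sent_orders: list of (send_time_us, order_id). The imposed latency delays
    each order's arrival at the matching engine; results are in arrival order."""
    return sorted((t + delay_us, oid) for t, oid in sent_orders)

def disclose_book(order_book, participant_order_price, threshold_price):
    """Semi-lit rule sketch: a participant sees the selected portion of the book
    only when its own order is priced at or beyond the threshold price point."""
    return order_book if participant_order_price >= threshold_price else None

arrivals = delayed_arrivals([(10, "A"), (12, "B")], delay_us=350)
seen = disclose_book({"bids": [(9.99, 500)]},
                     participant_order_price=10.00, threshold_price=10.00)
hidden = disclose_book({"bids": [(9.99, 500)]},
                       participant_order_price=9.90, threshold_price=10.00)
```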

US Pat. No. 11,030,691

DECISION TREE DATA STRUCTURE BASED PROCESSING SYSTEM

Chicago Mercantile Exchan...

17. A computer system for processing a file suspected to contain data generated by a data transaction processing system in which data items are transacted by a hardware matching processor that matches electronic data transaction request messages for the same one of the data items based on multiple transaction parameters from different client computers over a data communication network, the computer system comprising:a processor and a non-transitory memory coupled therewith wherein the memory stores computer executable instructions that when executed by the processor, cause the processor to implement:
a file array generator that:
receives a training data set comprising a plurality of labeled files, each labeled file including an outcome label indicating whether or not the labeled file includes data generated by the data transaction processing system;
identifies whether each labeled file contains one or more attributes; and
generates a file array, the file array including, for each labeled file, the outcome label associated with the labeled file and, for each attribute, data indicating the absence or presence of the attribute in the labeled file;
a decision tree generator coupled to the file array generator that:
evaluates the file array to determine a relationship between the attributes and the outcome labels, the relationship defining hierarchical levels of the attributes and interconnections between the attributes and the outcome labels, the interconnections representing the absence or presence of the attribute in a labeled file; and
generates, based on the evaluation, a decision tree data structure that represents the determined relationship between the attributes and the outcome labels,
wherein the decision tree data structure comprises a plurality of interconnected nodes, wherein at least some of the nodes represent the one or more attributes,
wherein the determined relationship represented by the decision tree data structure is based on a decision tree algorithm; and
stores the decision tree data structure in a memory;
a file processor coupled to the decision tree generator that, for each of a plurality of test files, each of which comprises one of a parent website or a webpage associated with a child website link identified by the file processor having crawled the parent website, accesses the decision tree data structure from the memory and evaluates whether the test file includes the one or more attributes in a sequence defined by the decision tree data structure, the evaluating resulting in a determination of whether the test file contains data generated by the data transaction processing system;
an entity checker coupled to the file processor that, for a test file containing data generated by the data transaction processing system:
identifies an entity associated with the test file; and
determines if the entity name is in a database of authorized entities; and
a message transmitter coupled to the entity checker that, upon determining that the identified entity name is not in the database of authorized entities, generates and transmits a warning message to an address of the identified entity.
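
The file array and decision tree components can be sketched with a tiny greedy learner over binary attribute-presence rows. This is an ID3-like sketch under simplifying assumptions (misclassification count as the split criterion, invented attribute names), not CME's actual algorithm.

```python
from collections import Counter

def build_tree(rows, attributes, default=False):
    """rows: (attribute_presence: dict, label: bool) pairs, where the label says
    whether the file contains data generated by the transaction system."""
    if not rows:
        return default
    labels = [label for _, label in rows]
    if len(set(labels)) == 1 or not attributes:
        return Counter(labels).most_common(1)[0][0]

    def misclassified(attr):
        # cost of splitting on attr: items not matching the majority on each side
        cost = 0
        for present in (True, False):
            side = [l for a, l in rows if a[attr] == present]
            if side:
                cost += len(side) - Counter(side).most_common(1)[0][1]
        return cost

    best = min(attributes, key=misclassified)
    rest = [a for a in attributes if a != best]
    return {
        "attr": best,
        True:  build_tree([(a, l) for a, l in rows if a[best]], rest, default),
        False: build_tree([(a, l) for a, l in rows if not a[best]], rest, default),
    }

def classify(tree, attrs):
    """Evaluate attributes in the sequence defined by the decision tree."""
    while isinstance(tree, dict):
        tree = tree[attrs[tree["attr"]]]
    return tree

training = [
    ({"price_table": True,  "disclaimer": False}, True),
    ({"price_table": True,  "disclaimer": True},  True),
    ({"price_table": False, "disclaimer": True},  False),
    ({"price_table": False, "disclaimer": False}, False),
]
tree = build_tree(training, ["price_table", "disclaimer"])
```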

US Pat. No. 11,030,690

METHOD AND APPARATUS FOR MATCHING BUYERS WITH SELLERS IN A MARKETPLACE TO FACILITATE TRADE

City University of Hong K...

1. A method for facilitating trade of mobile data in an online marketplace provided by a service provider, the method comprising steps of:receiving from one or more sellers who are customers of the service provider, and via a user interface, respective seller bids to sell unused mobile data, each respective seller bid including information related to an amount of unused mobile data the corresponding seller wants to sell and a corresponding price the corresponding seller wants to receive,
receiving from one or more buyers who are customers of the service provider, and via the user interface, respective buyer bids to buy unused mobile data, each respective buyer bid including an amount of unused mobile data the corresponding buyer wants to buy and a corresponding price the corresponding buyer wants to pay,
facilitating, using a processor operably connected with the user interface, matching of at least a portion of a buyer bid with at least a portion of a seller bid based on: a revenue maximization function to maximize the service provider's revenue based on a relationship between an administration revenue and a bid revenue, the administration revenue being generated by the service provider charging an administration fee for matching the one or more buyers with the one or more sellers, the administration fee being proportional to an amount of unused mobile data to be traded, and the bid revenue being generated by the difference between the buyer bid and the seller bid that are matched;
wherein the facilitating step comprises steps of:
determining, using the processor, an optimal seller bid for each of the one or more sellers by respectively:
determining a seller utility that relates to utility from selling mobile data, taking into account anticipated future usage and usage habit of the seller; and
determining a seller utility maximization that relates to the maximum amount of data the respective seller is willing to sell as a function of a bid value;
wherein the seller utility maximization is mathematically determined using the seller utility, and the seller utility maximization is determined as an amount of data to be sold by the seller, using a mathematical relationship relating at least the administration fee, seller price, mobile data cap of the corresponding seller, and leftover mobile data of the corresponding seller prior to selling; and
determining, using the processor, an optimal buyer bid for each of the one or more buyers by respectively:
determining a buyer utility that relates to the utility of the respective buyer from buying mobile data, taking into account anticipated future usage and usage habit of the buyer; and
determining a buyer utility maximization that relates to the maximum amount of mobile data the respective buyer is willing to buy as a function of the bid value;
wherein the buyer utility maximization is determined as an amount of data to be purchased, using a mathematical relationship relating at least the amount of data to be purchased with a buyer's price, a mobile data cap of the buyer and the buyer's expected monthly usage;
providing, via the user interface, the optimal buyer bid as a suggested bid to the respective one or more sellers;
providing, via the user interface, the optimal seller bid as a suggested bid to the respective one or more buyers;
receiving, from one or more sellers, via the user interface, a respective updated seller bid; and
receiving, from one or more buyers, via the user interface, a respective updated buyer bid;
matching, using the processor, based on the facilitating step, at least the portion of the buyer bid with at least the portion of the seller bid;
transferring, using the processor, the amount of unused mobile data to be traded from the one or more sellers to the one or more buyers, following the matching of the one or more buyers with the one or more sellers, through the service provider;
updating, at a storage device operably connected with the processor, account information of the one or more matched buyers and the one or more matched sellers based on the step of transferring, to correspondingly increase an amount of useable mobile data of the one or more matched buyers and correspondingly decrease an amount of useable mobile data of the one or more matched sellers; and
facilitating, using the processor, a transfer of funds between the respective one or more sellers and the respective one or more buyers, following the matching of the one or more buyers with the one or more sellers,
wherein the processor is in electronic communication with a web server, the online marketplace being an online web based marketplace that is hosted by the web server, and wherein the user interface is a web interface that allows interaction between the one or more buyers, the one or more sellers, and the processor, and
further comprising a step of tracking, using the processor, trading dynamics of the online marketplace, wherein the step of tracking trading dynamics comprises steps of:
determining the bid value of the respective buyer bids supplied by the one or more buyers and the bid value of the respective seller bids supplied by the one or more sellers; and
determining a number of the matchings of the one or more buyers and the one or more sellers.
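
The matching and revenue components can be illustrated with a greedy double-auction sketch: cheapest sellers are paired with highest-paying buyers while the buyer's price covers the seller's price, and the provider earns the per-unit administration fee plus the bid spread. This simplifies the claim's utility-maximization formulation considerably; the fee and prices are invented for the example.

```python
def match_bids(seller_bids, buyer_bids, fee_per_unit):
    """Greedy match of seller and buyer bids for unused mobile data.

    Each bid is {"id": ..., "price": per-unit price, "amount": units}.
    Provider revenue = administration fee (proportional to traded amount)
    plus the difference between matched buyer and seller prices.
    """
    sellers = sorted(seller_bids, key=lambda b: b["price"])
    buyers = sorted(buyer_bids, key=lambda b: -b["price"])
    trades, revenue = [], 0.0
    si = bi = 0
    while si < len(sellers) and bi < len(buyers):
        s, b = sellers[si], buyers[bi]
        if b["price"] < s["price"]:
            break  # no remaining profitable match
        qty = min(s["amount"], b["amount"])
        revenue += qty * (fee_per_unit + (b["price"] - s["price"]))
        trades.append((s["id"], b["id"], qty))
        s["amount"] -= qty
        b["amount"] -= qty
        if s["amount"] == 0:
            si += 1
        if b["amount"] == 0:
            bi += 1
    return trades, revenue

sellers = [{"id": "s1", "price": 1.0, "amount": 2},
           {"id": "s2", "price": 2.0, "amount": 3}]
buyers = [{"id": "b1", "price": 3.0, "amount": 4}]
trades, revenue = match_bids(sellers, buyers, fee_per_unit=0.5)
```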

US Pat. No. 11,030,689

AUCTIONING MECHANISMS FOR DARK ORDER BLOCK TRADING

NYSE Euronext Holdings LL...

1. A computer-implemented method of controlling an auction by a programmed computer, said method comprising:automatically initiating an online auction, at a pre-determined auction time, said pre-determined auction time being unknown to market participants;
determining, by a randomized timer of the programmed computer, a randomized auction duration, said randomized auction duration being unknown to market participants;
transmitting a notification of the auction to one or more computer devices associated with a plurality of market participants, said notification excluding information as to the pre-determined auction time and the randomized auction duration;
receiving one or more firm limit orders from the one or more computer devices;
storing, in an order book, only those orders among the one or more firm limit orders that are received during the randomized auction duration;
upon an expiration of the randomized auction duration, terminating the auction and preventing the storage of any additional orders among the one or more firm limit orders that are received after said terminating;
determining a reference price based on market data received from the one or more computer devices during said auction;
determining whether each stored firm limit order is eligible or ineligible to be filled based on the reference price;
removing, from the order book, each firm limit order determined to be ineligible to be filled; and
filling each firm limit order determined to be eligible to be filled based on the reference price.
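
The randomized-duration and eligibility steps can be sketched as below. The order tuple shape, the limit-crossing eligibility rule, and the seeded randomness (used so the example is reproducible) are illustrative assumptions; a production system would draw the duration from a secure source unknown to participants.

```python
import random

def run_dark_auction(orders, start_time, max_duration, reference_price, seed=None):
    """orders: list of (arrival_time, side, limit_price, qty).

    Orders arriving after the randomized duration expires are never stored;
    stored orders are eligible only if their limit crosses the reference price.
    """
    duration = random.Random(seed).uniform(0, max_duration)
    stored = [o for o in orders if o[0] <= start_time + duration]
    eligible = [o for o in stored
                if (o[1] == "buy" and o[2] >= reference_price)
                or (o[1] == "sell" and o[2] <= reference_price)]
    return duration, stored, eligible

# With max_duration=0 the auction window closes immediately, so only orders
# present at the start are stored; the sell at 50.20 is ineligible at 50.00.
orders = [(0.0, "buy", 50.10, 100), (0.0, "sell", 50.20, 100)]
duration, stored, eligible = run_dark_auction(
    orders, start_time=0.0, max_duration=0.0, reference_price=50.00)
```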

US Pat. No. 11,030,688

ELECTRONIC OUTCRY MESSAGING FOR ELECTRONIC TRADING

Chicago Mercantile Exchan...

1. A computer implemented method for automatic restricted distribution of electronic messages among a subset of a plurality of market participants in an electronic communications system, the method comprising:receiving, by a specifically configured processor coupled with the electronic communications system, an electronic request from each of the subset of the plurality of market participants to be enabled by the electronic communications system to generate unsolicited electronic messages not responsive to another electronic message, copies of which will be transmitted via the electronic communications system to each other market participant of the subset of the plurality of market participants, and to receive copies of unsolicited electronic messages generated by any other market participant of the subset of the plurality of market participants that has been enabled by the electronic communications system to generate unsolicited electronic messages, copies of which will be transmitted via the electronic communications system to each other market participant of the subset of the plurality of market participants;
receiving, by the processor via the electronic communications system, a first electronic message generated by a market participant of the subset of the plurality of market participants, the first electronic message not being responsive to another message previously communicated to the market participant;
determining, by the processor, that the received first electronic message is unsolicited based on the first electronic message not being responsive to another previously received electronic message;
transmitting, automatically by the processor via the electronic communications system based on the determination that the received first electronic message is unsolicited, the received first electronic message to all other market participants of the subset of the plurality of market participants;
receiving, by the processor via the electronic communications system, a second electronic message generated by another of the subset of the plurality of market participants responsive to the first electronic message;
determining, by the processor, that the second electronic message is a solicited message based on the second electronic message being responsive to the first electronic message; and
transmitting, automatically by the processor via the electronic communications system subsequent to the receipt thereof based on the determination that the second electronic message is a solicited message responsive to the first electronic message, the second electronic message only to the market participant who generated the first electronic message, the second electronic message not being transmitted via the electronic communications system to the others of the subset of the plurality of market participants.
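
The routing rule reduces to a simple dispatch: unsolicited messages broadcast to every other enabled participant, solicited replies go only to the originator. The message dict shape and participant names are assumptions for illustration.

```python
def route(message, enabled_participants):
    """Return the distribution list for a message in the restricted subset.

    An unsolicited message (no "reply_to") is transmitted to all other enabled
    participants; a solicited reply is delivered only to the original sender.
    """
    if message.get("reply_to") is None:
        return [p for p in enabled_participants if p != message["sender"]]
    return [message["reply_to"]]

enabled = ["mm1", "mm2", "mm3"]
broadcast = route({"sender": "mm1", "text": "bid 101.5 for 200"}, enabled)
reply = route({"sender": "mm2", "reply_to": "mm1", "text": "sold"}, enabled)
```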

US Pat. No. 11,030,687

BID/OFFER SPREAD TRADING

BGC PARTNERS, INC., New ...

1. A method comprising the steps of:by a computer of a computer trading system,
receiving, over a communication network, a bid/offer spread order placed at and provided from a first communication device of an originating party, the bid/offer spread order identifying an instrument to be traded, a first spread, and a second spread;
transmitting, over the communication network, to a plurality of second communication devices respectively of a plurality of second parties, bid/offer spread order information describing the bid/offer spread and serving as a request to counterparties for two-sided counterorders for the instrument, a two-sided order for an instrument being a bid to buy the instrument at a bid price and an offer to sell the instrument at an offer price, the two-sided order to be satisfied by a party executing on either the bid or the offer, in which the bid/offer spread order obligates the originating party to trade on one side of a two-sided counterorder for the instrument received from a counterparty, in the event that a received two-sided counterorder has a bid price and an offer price that differ by no more than the first spread identified in the originating party's bid/offer spread order, and
in which the bid/offer spread order information solicits from counterparties an acceptance of the second spread, a counterparty's acceptance to obligate the originating party to issue a two-sided order to the accepting counterparty, the bid price and offer price of the originating party's two-sided order to differ by no more than the second spread, and the originating party's two-sided order to obligate the accepting counterparty to trade one side of the originating party's two-sided order;
responsive to receiving, over the communication network, a command from a second communication device of a given second party of the second parties, in which the command includes at least one of an acceptance of the first spread and an acceptance of the second spread, transmitting, over the communication network, display information to cause to refresh a display, on a graphical user interface of the second communication device of the given second party, to display timer indicia indicating time remaining in which the second party can select on the graphical user interface a second command to be transmitted over the communication network indicating at least one of a buy and a sell of the instrument; and
enforcing the obligations created by the responding counterparty's counterorder or acceptance and the originating party's two-sided order.
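
The first-spread trigger in the claim is a simple width test on the two-sided counterorder, sketched below. The function name and price values are illustrative; the second-spread acceptance path and timer mechanics are omitted.

```python
def obligated_to_trade(counter_bid: float, counter_offer: float,
                       first_spread: float) -> bool:
    """The originating party must trade one side of a two-sided counterorder
    whose bid and offer prices differ by no more than the first spread."""
    return (counter_offer - counter_bid) <= first_spread

# A 0.05 counterorder width triggers the obligation under a 0.05 first spread;
# a 0.10 width does not.
tight = obligated_to_trade(99.00, 99.05, first_spread=0.05)
wide = obligated_to_trade(99.00, 99.10, first_spread=0.05)
```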

US Pat. No. 11,030,686

SECURE SYSTEM

Capital One Services, LLC...

1. A method, comprising:receiving, by a device and from a first group of devices, credit information for an individual;
receiving, by the device and from a second group of devices, rating information that includes one or more ratings associated with the individual based on historical transactions with one or more particular organizations;
determining, by the device, a credit worthiness score as a function of the credit information and the rating information;
storing, by the device, the credit worthiness score in a distributed ledger as a particular transaction associated with the individual;
receiving, by the device and from a user device associated with the individual, a particular request to remove particular credit worthiness information, of the credit information or the rating information, from the distributed ledger,
wherein the particular request includes a private key and a blockchain identifier for the individual;
identifying, by the device and in the distributed ledger, the particular credit worthiness information using the blockchain identifier; and
performing, by the device, one or more actions associated with the private key to remove the particular credit worthiness information from the distributed ledger.
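
The scoring and ledger-storage steps can be sketched as a weighted blend plus a hash-chained record. The 0.8 weight, the star-to-score mapping, and the chain layout are invented for the example; a real distributed ledger would add consensus and key-based removal, which are omitted here.

```python
import hashlib
import json

def credit_worthiness(credit_score, ratings, weight=0.8):
    """Blend a bureau-style credit score with merchant ratings (1-5 stars).

    The weight and the mapping of stars onto an 850-point scale are illustrative.
    """
    rating_component = (sum(ratings) / len(ratings)) * 170  # 5 stars -> 850
    return weight * credit_score + (1 - weight) * rating_component

def append_block(chain, record):
    """Store a record as a transaction in a toy hash-chained ledger."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    payload = json.dumps(record, sort_keys=True)
    block = {"prev": prev_hash, "record": record,
             "hash": hashlib.sha256((prev_hash + payload).encode()).hexdigest()}
    chain.append(block)
    return block

ledger = []
score = credit_worthiness(700, ratings=[4])
append_block(ledger, {"individual": "blockchain-id-123", "score": score})
```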

US Pat. No. 11,030,685

REFINANCING TOOLS FOR PURCHASING TRANSACTIONS

AFFIRM, INC., San Franci...

1. A method for displaying refinancing information for eligible prior purchases on a client computer in communication with a finance server, comprising:determining by the finance server, that a checking account for the client meets predetermined account requirements of predetermined minimum assets, a predetermined FICO score, and predetermined minimum transaction activity;
displaying on the client computer, approval of the client for refinancing by the finance server based upon the bank checking account information having the predetermined minimum assets and the predetermined minimum transaction activity;
determining by the finance server, credit card transactions that are the eligible prior purchases for refinancing and the credit card transactions that are ineligible prior purchases for refinancing, wherein the eligible prior purchases for refinancing are within a predetermined refinance value range;
displaying on the client computer, a first eligible prior purchase for a first credit card and a first transfer button for refinancing the first eligible prior purchase;
displaying on the client computer, a second eligible prior purchase for a second credit card and a second transfer button for refinancing the second eligible prior purchase, wherein the first eligible prior purchase, the first transfer button, the second eligible prior purchase, and the second transfer button are displayed simultaneously;
actuating the first transfer button for refinancing the first eligible prior purchase;
displaying on the client computer, an authorization to refinance the first eligible prior purchase by the finance server; and
displaying on the client computer, a payment plan for repayment of the first eligible prior purchase, wherein the payment plan includes multiple monthly payments at an interest rate.
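
The eligibility determination reduces to partitioning prior purchases by the predetermined refinance value range. The range bounds and transaction shapes below are placeholders for the finance server's configured values.

```python
def split_eligibility(transactions, refi_low, refi_high):
    """Partition credit-card transactions into eligible and ineligible
    prior purchases based on the predetermined refinance value range."""
    eligible, ineligible = [], []
    for t in transactions:
        (eligible if refi_low <= t["amount"] <= refi_high else ineligible).append(t)
    return eligible, ineligible

transactions = [{"card": "visa", "amount": 450.0},
                {"card": "mc", "amount": 25.0}]
eligible, ineligible = split_eligibility(transactions, refi_low=100.0, refi_high=2000.0)
```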

US Pat. No. 11,030,684

METHOD FOR MANAGING AN ELECTRONIC ACCOUNT ASSOCIATED WITH A PIGGY BANK

Capital One Services, LLC...

1. A method, comprising:assigning, by a device, an identifier to a piggy bank associated with a first user,
wherein the piggy bank is provided with a security mechanism to detect tampering with the piggy bank;
associating, by the device, the identifier with:
a first transaction account associated with the first user, and
a second transaction account associated with a second user,
wherein the first transaction account is a sub-account of the second transaction account;
receiving, by the device and from one or more sensors associated with the piggy bank, first information indicating a value of money deposited into the piggy bank;
updating, by the device, the first transaction account, based on the first information indicating the value of the money, to generate an updated balance for the first transaction account;
providing, by the device and to the piggy bank or a user device associated with the piggy bank, second information indicating the updated balance for the first transaction account;
receiving, by the device, third information indicating a measurement associated with the piggy bank,
wherein the measurement includes a quantity of money provided in the piggy bank, and
wherein the third information is received from one or more of:
a financial kiosk,
a first financial institution associated with the first transaction account and the second transaction account,
a second financial institution not associated with the first transaction account and the second transaction account,
a courier delivery service,
a postal service, or
an ecommerce service;
verifying, by the device, the updated balance for the first transaction account based on comparing the measurement associated with the piggy bank and the value of the money associated with the first information,
wherein the updated balance for the first transaction account is adjusted based on a difference between the measurement associated with the piggy bank and the value of the money associated with the first information; and
causing, by the device and based on verifying the updated balance, the updated balance for the first transaction account to be associated with a third transaction account.
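
The verification step of this claim — comparing an independently reported measurement of money in the piggy bank against the value reported by the bank's own sensors, and adjusting the balance by the difference — can be sketched as follows. This is a minimal illustration; the function name and signature are assumptions, not from the patent.

```python
def verify_balance(updated_balance: float, sensed_value: float,
                   measured_value: float) -> float:
    """Adjust the account balance by the difference between the
    independently measured quantity of money in the piggy bank
    (e.g. as counted by a financial kiosk) and the value reported
    by the piggy bank's own deposit sensors."""
    difference = measured_value - sensed_value
    return updated_balance + difference

# Example: sensors reported $12.50 deposited, but the kiosk counted
# only $12.00, so the $0.50 shortfall is deducted from the balance.
adjusted = verify_balance(112.50, sensed_value=12.50, measured_value=12.00)
```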

US Pat. No. 11,030,683

SYSTEMS AND METHODS FOR AGGREGATING AND MANAGING FINANCIAL SERVICE ACCOUNTS

Capital One Services, LLC...

1. A system for providing an aggregated financial account for a user, the system comprising:one or more memory devices storing software instructions; and one or more processors executing the instructions to perform operations comprising:
providing, via a financial service provider server, an aggregated account record for an aggregated financial account, the aggregated financial account comprising a first financial account associated with a first merchant and a second financial account associated with a second merchant, the aggregated account record comprising:
a first set of parameters associated with the first financial account, a second set of parameters associated with the second financial account, and a third set of parameters based on an aggregation of at least one of the first set of parameters or the second set of parameters;
providing commands to a user device to display an aggregated account graphical user interface, wherein the interface is configured to communicate with the financial service provider server and further receive input and provide requests for information to the user device;
receiving, from a merchant server associated with a third merchant in response to a user request by the user via an online application web page associated with the third merchant, an account modification request to add a third financial account of the user;
modifying the third set of parameters based on a set of factors associated with the third merchant;
dynamically modifying, via the financial service provider server, the first set of parameters and the second set of parameters based on the user request by the user via an online application web page associated with the third merchant; and
providing, via the financial service provider server and based on the modified third set of parameters, commands to the user device to modify the aggregated account graphical user interface to display at least one or more of: a logo, a merchant-branded text, a graphic, an electronic endorsement, or a hyperlink, such that the aggregated account graphical user interface appears to be associated with and provided by at least the third merchant.

US Pat. No. 11,030,682

SYSTEM AND METHOD FOR PROGRAMMATICALLY ACCESSING FINANCIAL DATA

Plaid Inc., San Francisc...

1. A computer system comprising:one or more hardware computer processors configured to execute a plurality of computer executable instructions to cause the computer system to:
receive, from a first computing device, a request for data associated with a user, the request including authentication credentials associated with the user;
initiate a simulated instance of a software application, the software application being associated with an institution associated with the request, the software application specifically configured to interface via an API of the institution with computing devices associated with the institution, wherein:
the simulated instance of the software application is also configured to interface, via the API of the institution, with computing devices associated with the institution, and
the simulated instance of the software application is configured to appear to the computing devices of the institution to be the software application executing on a physical computing device of the user;
request, by the simulated instance of the software application and via the API, data associated with the user from a second computing device of the institution; and
receive the data associated with the user from the second computing device,
wherein the computer system is configured to initiate simulated instances of any of a plurality of software applications, the simulated instances of the plurality of software applications being associated with different institutions or users, and the simulated instances of the plurality of software applications being configured to interface, via APIs of the different institutions, with computing devices associated with the different institutions.
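
The core idea of this claim — a simulated instance of an institution's own application that presents the user's credentials to the institution's API exactly as the genuine app would — can be sketched loosely as below. All class and method names here are illustrative assumptions; the patent does not specify an implementation.

```python
from dataclasses import dataclass, field


class FakeInstitutionAPI:
    """Stand-in for the institution's computing devices: returns a
    user's records when presented with matching credentials."""
    def __init__(self, records: dict):
        self._records = records

    def fetch(self, credentials: dict):
        return self._records.get(credentials["user"])


@dataclass
class SimulatedAppInstance:
    """Simulated instance of the institution's software application.
    It carries the user's authentication credentials and interfaces
    with the institution's API in the same way the real application
    running on the user's own device would."""
    institution: str
    credentials: dict = field(default_factory=dict)

    def request_user_data(self, api: FakeInstitutionAPI):
        # The simulated instance issues the same API call the genuine
        # application would, so the institution's response is identical.
        return api.fetch(self.credentials)
```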

US Pat. No. 11,030,681

INTERMEDIATE BLOCKCHAIN SYSTEM FOR MANAGING TRANSACTIONS

International Business Ma...

1. A method, comprising:identifying a first blockchain request of a first user on a first blockchain and a second blockchain request of a second user on a second blockchain which are capable of satisfying each other;
transmitting an identifier of a temporary address controlled by a software agent on the first blockchain to the first user and an identifier of a temporary address controlled by the software agent on the second blockchain to the second user;
monitoring, via the software agent, the temporary address on the first blockchain and the temporary address on the second blockchain;
determining, via the software agent, that a data value requested by the second blockchain request has been stored at the monitored temporary address on the first blockchain and a data value requested by the first blockchain request has been stored at the monitored temporary address on the second blockchain; and
releasing, via the software agent, the data value stored at the monitored temporary address on the first blockchain to an address of the second user on the first blockchain and the data value stored at the monitored temporary address on the second blockchain to an address of the first user on the second blockchain.
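
The monitor-and-release condition in this claim resembles an atomic-swap pattern: the software agent releases both values only once each temporary address holds the value requested by the counterpart request. A minimal sketch, with all names assumed for illustration:

```python
from dataclasses import dataclass


@dataclass
class TemporaryAddress:
    """A temporary address controlled by the software agent on one chain."""
    stored_value: object = None


def monitor_and_release(addr_chain1: TemporaryAddress,
                        addr_chain2: TemporaryAddress,
                        requested_by_first, requested_by_second):
    """Release both stored values only when the temporary address on
    each chain holds the value requested by the *other* user's request;
    otherwise release nothing."""
    if (addr_chain1.stored_value == requested_by_second
            and addr_chain2.stored_value == requested_by_first):
        # Value on chain 1 goes to the second user's address on chain 1;
        # value on chain 2 goes to the first user's address on chain 2.
        return addr_chain1.stored_value, addr_chain2.stored_value
    return None
```

Releasing both values in one step (or neither) is what keeps the exchange safe for both users even though the two deposits arrive independently.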

US Pat. No. 11,030,680

AUTOMATED FEE-BASED DATA PROCESSING FROM REMOTELY-BASED IMAGE MERGING

PANGRAM ACQUISITIONS L.L....

1. A host application server, comprising:at least one computer processor; and
a non-transitory computer-readable storage medium containing stored programming instructions that, when executed by the at least one computer processor, instruct the at least one computer processor to perform operations comprising:
providing an image merging feature that includes:
accessing one or more seller websites to obtain images of articles offered for sale or rent,
providing an interface that allows selection, by a customer, of the obtained images,
accessing image data of a foundational structure,
merging data of the selected images with the image data of the foundational structure by image processing to generate merged image data, and
providing the merged image data for viewing by the customer;
providing a predetermined amount of memory storage space to a virtual closet;
receiving a selection by the customer of one or more of the articles for storage in the virtual closet for access by the customer; and
electronically linking the customer and at least one merchant of the articles selected for storage in the virtual closet.

US Pat. No. 11,030,679

DISPLAYING AN ONLINE PRODUCT ON A PRODUCT SHELF

Advanced New Technologies...

1. A computer-implemented method comprising:obtaining, by one or more processors, data representing online products and priorities of the online products, wherein the online products are products to be displayed by a display screen on a product shelf of the display screen;
arranging, by the one or more processors, display slots on the display screen according to whether an aspect ratio of the display screen falls within a first ratio range, a non-overlapping second ratio range, or neither, comprising:
in response to determining that the aspect ratio of the display screen falls within the first ratio range, arranging, by the one or more processors, the display slots using a 3×3 nine-rectangular-grid,
in response to determining that the aspect ratio of the display screen falls within the second ratio range, arranging, by the one or more processors, the display slots using a 2×4 eight-rectangular-grid, or
in response to determining that the aspect ratio of the display screen falls neither within the first ratio range nor the second ratio range, arranging, by the one or more processors, the display slots using a 3×2 six-rectangular-grid;
determining, by the one or more processors, a respective attention rank of each display slot based on a distance between each display slot and a visual center of the product shelf; and
displaying, by a particular display screen of a particular display slot, an online product with a highest obtained priority in the particular display slot having a highest determined attention rank.
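
The two decisions this claim makes — grid selection by aspect-ratio range, and slot ranking by distance to the shelf's visual center — are simple to sketch. The numeric ratio ranges below are illustrative assumptions; the claim only requires that the two ranges not overlap.

```python
from math import dist


def choose_grid(aspect_ratio: float,
                first_range=(0.9, 1.1), second_range=(1.5, 2.0)):
    """Pick a slot grid for the screen. The range bounds are assumed
    values for illustration, not taken from the patent."""
    lo1, hi1 = first_range
    lo2, hi2 = second_range
    if lo1 <= aspect_ratio <= hi1:
        return (3, 3)   # nine-rectangular-grid
    if lo2 <= aspect_ratio <= hi2:
        return (2, 4)   # eight-rectangular-grid
    return (3, 2)       # six-rectangular-grid


def attention_rank(slot_centers, visual_center):
    """Order display slots by attention rank: the slot closest to the
    shelf's visual center comes first (highest rank), so the product
    with the highest priority is shown there."""
    return sorted(slot_centers, key=lambda s: dist(s, visual_center))
```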

US Pat. No. 11,030,678

USER-ADAPTIVE RESTAURANT MANAGEMENT SYSTEM

Toast, Inc., Boston, MA ...

1. A user-adaptive order processing terminal, comprising:a display, configured to display electronic menu items in a first area for selection by a user, wherein said electronic menu items are displayed when said user selects one or more sub-menu hyperlinks on said display;
a microphone, configured to detect speech spoken by said user;
a configuration manager, coupled to said display and said microphone, configured to capture said speech from said microphone, to transmit said speech via first messages to a backend server, to receive second messages from said backend server providing one or more keywords that correspond to said speech, to access suggested menu items that correspond to said one or more keywords, and to modify a second area of said display to present said suggested menu items for selection, wherein said suggested menu items would otherwise be presented in said first area through selection of said one or more sub-menu hyperlinks, wherein said backend server is not on-premise with the terminal; and
a motion sensor, coupled to said configuration manager, configured to subsequently detect distance to and movements performed by said user, wherein said configuration manager captures said movements from said motion sensor, transmits said movements via third messages to said backend server, receives fourth messages from said backend server providing 3-dimensional (3D) gestures that correspond to said movements, and to access and execute commands corresponding to said 3D gestures to add one or more of said suggested menu items to one of a plurality of electronic orders, wherein said 3D gestures do not require said user to have physical contact with said motion sensor.
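
The keyword-to-suggestion step in this claim — mapping keywords returned by the backend server to menu items that would otherwise only be reachable through sub-menu hyperlinks — can be sketched with a simple inverted index. The index shape (keyword to list of items) is an assumption for illustration.

```python
def suggest_items(keywords, menu_index):
    """Return suggested menu items for the recognized keywords,
    preserving keyword order and dropping duplicates. `menu_index`
    maps a lowercase keyword to the menu items it corresponds to."""
    seen = set()
    suggestions = []
    for kw in keywords:
        for item in menu_index.get(kw.lower(), []):
            if item not in seen:
                seen.add(item)
                suggestions.append(item)
    return suggestions
```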

US Pat. No. 11,030,677

INTERACTIVE PRODUCT REVIEW INTERFACE

eBay Inc., San Jose, CA ...

1. A system comprising:one or more processors;
memory; and
one or more programs stored in the memory, the one or more programs comprising instructions that when executed by the one or more processors, cause the one or more processors to perform operations comprising:
receiving a purchase request for a product from a current user;
generating a user feedback page in response to receiving the purchase request, the generating the user feedback page comprising:
identifying a set of user feedback questions associated with the product;
generating a set of aspect cards to receive user feedback, each aspect card in the set of aspect cards comprising a graphical product feedback element linked to a user feedback question from the set of user feedback questions and a product image window, the graphical product feedback element comprising a first graphic indicator arranged at a first bottom side portion of the aspect card and a second graphic indicator arranged at a second bottom side portion of the aspect card, the product image window comprising a graphical image of the product associated with the purchase request arranged at a top portion of each aspect card above the first and second graphic indicator;
determining an order of the set of aspect cards based on an importance of each user feedback question;
causing display of a first aspect card from the set of aspect cards based on the determined order of the set of aspect cards, the first aspect card associated with a first user feedback question; and
generating a user feedback graphic based on the first user feedback question;
causing presentation of the generated user feedback page comprising the user feedback graphic;
receiving, via the first graphic indicator, a user selection to provide user feedback for the first user feedback question;
generating, by the one or more processors, a second user feedback graphic, the second user feedback graphic being arranged at the top portion of the aspect card above the first graphic indicator and the second graphic indicator of the graphical product feedback element and graphically representing the received user feedback and a stored user feedback for the first user feedback question in response to the received user feedback; and
transmitting the second user feedback graphic to a client device for display.

US Pat. No. 11,030,676

SYSTEMS AND METHODS FOR PRIORITIZING LOCAL SHOPPING OPTIONS

eBay Inc., San Jose, CA ...

1. A method comprising:receiving, by a computing device, a list of items from a client device associated with a user profile;
determining a current location of the computing device using a location determination application, the location determination application executing at a server system that is separate from the computing device;
identifying a first geographic location associated with a merchant based on the current location of the computing device;
accessing a set of data entries posted to at least a first social network service that is accessible to the computing device via a communication network, the set of data entries being associated with location information identifying geographic locations of client devices used to post the data entries to the first social network service;
identifying, based on the location information associated with the set of data entries, a subset of data entries that were posted by client devices while located within a threshold distance of the first geographic location;
generating, based on a number of data entries in the subset of data entries, busyness data describing a traffic level of the first geographic location associated with the merchant identified by the merchant identifier;
correlating the busyness data to the merchant; and
causing display of a visualization of the busyness data by the client device, the visualization of the busyness data comprising a map image that depicts one or more buildings within the threshold distance of the first geographic location, and that includes a portion of the map image that is color coded based on the busyness data to convey the traffic level of the first geographic location, the portion of the map image representative of a building associated with the merchant among the one or more buildings.
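
The busyness computation in this claim — counting geotagged posts made within a threshold distance of the merchant's location and turning the count into a traffic level — can be sketched as follows. The distance threshold and the bucket boundaries are illustrative assumptions; the claim specifies neither.

```python
from math import radians, sin, cos, asin, sqrt


def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371.0 * asin(sqrt(a))


def busyness(post_locations, merchant_location, threshold_km=0.2):
    """Count posts made within the threshold distance of the merchant
    and bucket the count into a coarse traffic level. The bucket
    boundaries (5, 20) are assumed values for illustration."""
    nearby = [p for p in post_locations
              if haversine_km(p[0], p[1], *merchant_location) <= threshold_km]
    if len(nearby) >= 20:
        return "high"
    if len(nearby) >= 5:
        return "medium"
    return "low"
```

The resulting level could then drive the color coding of the merchant's building on the map visualization the claim describes.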