US Pat. No. 10,796,287

SYSTEMS AND METHODS FOR PROCESSING TRAILER REPAIR REQUESTS SUBMITTED BY CARRIERS

Walmart Apollo, LLC, Ben...

1. A system for facilitating submission of repair requests by carriers having to repair trailers to domicile facilities associated with the trailers, the system comprising:a plurality of computing devices of a plurality of carriers, the computing devices of the carriers configured to send and receive signals over a communication network;
a plurality of computing devices of a plurality of domicile facilities associated with the trailers, the computing devices of the domicile facilities associated with the trailers configured to send and receive signals over the communication network;
a central computing device including a processor-based control circuit and configured for communication with the computing devices of the carriers and with the computing devices of the domicile facilities associated with the trailers over the communication network;
an electronic database in communication with the central computing device, the computing devices of the plurality of carriers, and with the computing devices of the domicile facilities associated with the trailers over the communication network;
wherein the central computing device is configured to:
provide a first graphical interface accessible on the central computing device over the communication network by a computing device of a carrier having to repair a trailer, the first graphical interface including a plurality of drop-down menus, text input fields, and clickable graphical buttons configured to permit the carrier to submit, to the central computing device and via the computing device of the carrier, a repair request indicating repair needed for the trailer;
generate, based on the repair request submitted by the carrier, an electronic invoice for the repair to the trailer, the electronic invoice being directed to a domicile facility associated with the trailer;
transmit an alert including the electronic invoice over the communication network to a computing device of the domicile facility associated with the trailer;
receive from the computing device of the domicile facility associated with the trailer and over the communication network, a response of the domicile facility to the repair request submitted by the carrier; and
transmit, over the communication network to the computing device of the carrier, a notification indicating whether the repair request submitted by the carrier has been approved by the domicile facility associated with the trailer.

US Pat. No. 10,796,286

LIVE MEETING OBJECT IN A CALENDAR VIEW

Microsoft Technology Lice...

1. A system comprising:one or more data processing units; and
a computer-readable medium having encoded thereon computer-executable instructions to cause the one or more data processing units to:
render a representation of a calendar view on a user interface (UI), the calendar view indicative of one or more calendar days, the calendar days comprising a time span including a plurality of sequential time slots;
render, within the calendar view comprising the time span including the plurality of sequential time slots, a representation of a scheduled meeting in at least one of the sequential time slots, wherein the scheduled meeting is rendered without an icon or button operative to provide an interactive control to join the scheduled meeting;
determine that a start time for the scheduled meeting is within a threshold time; and
in response to determining that the start time for the scheduled meeting is within the threshold time, convert the representation of the scheduled meeting to a live meeting object and replace, within the calendar view, the representation of the scheduled meeting with the converted live meeting object, the live meeting object including an icon or button operative to provide an interactive control to join the scheduled meeting.
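
A minimal Python sketch of the conversion step recited above. The 15-minute threshold, the dict-based meeting record, and the render_calendar_slot helper are assumptions chosen for illustration; the claim fixes none of them.

    from datetime import datetime, timedelta

    JOIN_THRESHOLD = timedelta(minutes=15)   # assumed threshold; the claim leaves the value open

    def render_calendar_slot(meeting, now):
        """Return a dict describing how the meeting should be drawn in its time slot."""
        if meeting["start"] - now <= JOIN_THRESHOLD:
            # Convert the plain representation into a "live meeting object"
            # that carries an interactive join control.
            return {"title": meeting["title"], "kind": "live", "join_button": True,
                    "join_url": meeting.get("join_url")}
        # Before the threshold the meeting is drawn without any join icon or button.
        return {"title": meeting["title"], "kind": "scheduled", "join_button": False}

    meeting = {"title": "Design review", "start": datetime(2020, 10, 6, 10, 0),
               "join_url": "https://example.invalid/join/123"}
    print(render_calendar_slot(meeting, datetime(2020, 10, 6, 9, 50)))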

US Pat. No. 10,796,285

RESCHEDULING EVENTS TO DEFRAGMENT A CALENDAR DATA STRUCTURE

Microsoft Technology Lice...

1. A computer-implemented calendar system, comprising:a data store storing a data structure representing a plurality of calendars;
a processing device; and
a storage resource storing machine-readable instructions which, when executed by the processing device, cause the processing device to:
while the data structure is in an unlocked state, add events to the data structure in response to messages received over a wide-area or local-area network from a plurality of remote end-user devices;
identify a triggering event that triggers defragmentation of the data structure, the triggering event being received when the data structure has a first level of fragmentation with respect to contiguous blocks of free time;
in response to the triggering event, place the data structure stored in the data store into a locked state that prevents modification of the data structure by the plurality of remote end-user devices;
while the data structure is in the locked state, defragment the data structure by:
identifying candidate time slots that satisfy participant-related constraints associated with individual events in the data structure;
determining respective numbers of contiguous blocks of free time for the candidate time slots;
based at least on the respective numbers of contiguous blocks of free time, determining new time slots for the individual events; and
modifying the data structure by moving the individual events to the new time slots in the data structure, the modifying resulting in a defragmented version of the data structure that, as a whole, exhibits a second level of fragmentation with respect to the contiguous blocks of free time, the second level of fragmentation being reduced relative to the first level of fragmentation of the data structure;
store the defragmented version of the data structure in the data store;
place the data structure in the data store in the unlocked state after the defragmenting; and
after placing the data structure in the unlocked state, add further events to the data structure in response to further messages received over the wide-area or local-area network from the plurality of remote end-user devices.
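
The defragmentation loop can be illustrated with a toy model: a day of discrete slots, one slot per event, and per-event candidate slots standing in for the participant-related constraints. Everything below (contiguous_free_blocks, the single-slot events, the greedy pass) is an assumption for illustration rather than the patented method; the locked/unlocked states are noted only in a comment.

    def contiguous_free_blocks(busy, n_slots):
        """Count maximal runs of free slots in a day of n_slots discrete slots."""
        blocks, in_block = 0, False
        for s in range(n_slots):
            free = s not in busy
            if free and not in_block:
                blocks += 1
            in_block = free
        return blocks

    def defragment(events, n_slots=10):
        """events: {name: (current_slot, allowed_slots)}. The data structure would be
        locked before this runs and unlocked afterwards; that is left implicit here."""
        placement = {name: cur for name, (cur, _) in events.items()}
        for name, (cur, allowed) in events.items():
            others = {slot for evt, slot in placement.items() if evt != name}
            # move the event to the candidate slot that leaves the fewest free-time blocks
            placement[name] = min((c for c in allowed if c not in others),
                                  key=lambda c: contiguous_free_blocks(others | {c}, n_slots))
        return placement

    events = {"standup": (0, {0, 1, 2}), "one_on_one": (5, {1, 2, 5, 6}), "review": (8, {2, 3, 8})}
    print(defragment(events))   # e.g. {'standup': 0, 'one_on_one': 1, 'review': 2}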

US Pat. No. 10,796,284

COLLABORATIVE SCHEDULING

FUJITSU LIMITED, Kawasak...

1. A method of collaborative scheduling, the method comprising:setting, by a calendar server, an initial value for at least one of a selflessness trait or a selfishness trait of each member of a plurality of members who are included in a group to which a collaborative schedule pertains, the at least one of the selflessness trait or the selfishness trait related to behavior of a given member as indicated by preferences of the given member;
receiving, at the calendar server, group inputs that include individual member tasks, mutual member tasks, individual member constraints, and mutual member constraints;
receiving, at the calendar server, environmental data collected by a data collection engine;
calculating, by the calendar server, first collaborative schedule information based on the environmental data, the initial value for the at least one of the selflessness trait and the selfishness trait, and the group inputs, the first collaborative schedule information including a number of first feasible schedules for a first task, event, or responsibility and total schedule costs that are associated with the first feasible schedules for each member, the total schedule costs for each member being based on an individual member performance cost, a flexible task violation cost, and a behavior cost, the behavior cost being based on the initial value for the at least one of the selflessness trait and the selfishness trait;
receiving, by the calendar server, feedback from each member, the feedback including a selection of a feasible schedule of the first feasible schedules as a preferred schedule;
determining, by the calendar server, an updated value for the at least one of the selflessness trait and the selfishness trait of each member, the selfishness trait being assumed when the feasible schedule that is selected by the member places a higher total schedule cost on other members of the plurality of members, and the selflessness trait being assumed when the feasible schedule that is selected by the member places a lower total schedule cost on the other members;
generating, by the calendar server, a collaborative schedule based on the feasible schedules selected by the plurality of members;
storing, by the calendar server, the updated value for the at least one of the selflessness trait and the selfishness trait of each member based on the received feedback; and
calculating, by the calendar server, second collaborative schedule information based on the updated value for at least one of the selflessness trait and the selfishness trait of at least one member, the second collaborative schedule information including second feasible schedules for a second task, event, or responsibility.
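
A sketch of how the per-member total schedule cost and the trait update might be computed. The cost components, the 0.1 adjustment step, and the comparison against the average cost imposed on the other members are assumptions; the claim only requires that selfishness be inferred when the chosen schedule places a higher total cost on the other members and selflessness when it places a lower one.

    def total_cost(member, schedule, trait_value):
        """Per-member cost of a feasible schedule: performance + flexible-task violation
        + a behavior cost scaled by the member's current trait value (assumed scaling)."""
        perf = schedule["performance_cost"][member]
        violation = schedule["violation_cost"][member]
        behavior = trait_value * schedule["behavior_cost"][member]
        return perf + violation + behavior

    def update_trait(member, chosen, alternatives, traits, step=0.1):
        """Nudge the trait toward selfishness (+) or selflessness (-) depending on whether
        the chosen schedule costs the *other* members more or less than the average option."""
        def others_cost(schedule):
            return sum(total_cost(m, schedule, traits[m]) for m in traits if m != member)
        avg_alternative = sum(others_cost(s) for s in alternatives) / len(alternatives)
        traits[member] += step if others_cost(chosen) > avg_alternative else -step
        return traits[member]

    schedules = [
        {"performance_cost": {"a": 1, "b": 3}, "violation_cost": {"a": 0, "b": 1},
         "behavior_cost": {"a": 1, "b": 2}},
        {"performance_cost": {"a": 2, "b": 1}, "violation_cost": {"a": 1, "b": 0},
         "behavior_cost": {"a": 2, "b": 1}},
    ]
    traits = {"a": 0.5, "b": 0.5}
    print(update_trait("a", schedules[0], schedules, traits))   # member "a" chose the costlier-for-"b" option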

US Pat. No. 10,796,283

DYNAMICALLY DELETING RECEIVED DOCUMENTS BASED ON A GENERATED EXPIRATION DEADLINE FOR AN EVENT LAPSING

International Business Ma...

1. A method for dynamically managing an electronic mail (e-mail) message, the method comprising:evaluating text included in the e-mail message, wherein evaluating text included in the e-mail message includes evaluating text of an attachment included with the e-mail message;
identifying a future event based on the evaluated text included in the e-mail message;
generating an expiration deadline for the identified future event based on the evaluated text included in the e-mail message;
determining if the generated expiration deadline for the identified future event has lapsed;
in response to determining the generated expiration deadline for the identified future event has lapsed, dynamically adjusting a status of the e-mail message; and
in response to determining the generated expiration deadline for the identified future event has lapsed, automatically deleting the attachment included with the e-mail message from the e-mail message, wherein determining if the generated expiration deadline for the identified future event has lapsed includes determining if the e-mail message has been responded to.
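
A minimal sketch of the deadline logic, assuming ISO-formatted dates in the evaluated text, a one-day grace period after the identified event, and a responded flag on the message; the claim leaves the text-evaluation technique and the deadline rule open.

    import re
    from datetime import datetime, timedelta

    DATE_RE = re.compile(r"\b(\d{4}-\d{2}-\d{2})\b")   # assumed ISO-date convention

    def expiration_deadline(email):
        """Look for a future-event date in the body and attachment text and derive an
        expiration deadline shortly after the event (the +1 day is an assumption)."""
        text = email["body"] + " " + email.get("attachment_text", "")
        match = DATE_RE.search(text)
        if not match:
            return None
        event = datetime.strptime(match.group(1), "%Y-%m-%d")
        return event + timedelta(days=1)

    def manage(email, now):
        deadline = expiration_deadline(email)
        # treating a responded-to message as lapsed is one reading of the last clause
        lapsed = deadline is not None and (now > deadline or email.get("responded", False))
        if lapsed:
            email["status"] = "expired"          # dynamically adjusted status
            email.pop("attachment_text", None)   # delete the attachment from the message
        return email

    msg = {"body": "Webinar on 2020-10-01, agenda attached.",
           "attachment_text": "Agenda ...", "responded": False}
    print(manage(msg, datetime(2020, 10, 5)))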

US Pat. No. 10,796,282

ASSEMBLING A PRESENTATION BY PROCESSING SELECTED SUB-COMPONENT PARTS LINKED TO ONE OTHER SUB-COMPONENT PART

1. A method of assembling a presentation, the method comprising:structuring a plurality of inputs of sub-component parts comprising an ordered-linear series of the sub-component parts,
storing each of the sub-component parts in a non-transitory memory,
wherein each of the sub-component parts is comprised of change data,
wherein the sub-component parts are further comprised of a reference that provides a link to one of: another one of the sub-component parts and a position in presentation data,
wherein the change data provides information on how to modify a part of presentation data responsive to the associated reference by one of:
the presentation data created by another one of the sub-component parts and
said position in the presentation data;
assembling the sub-component parts for a selected set of sub-component parts from the non-transitory memory into presentation data responsive to said change data and the associated said reference,
wherein the selected set of sub-component parts is comprised of at least two of the sub-component parts having a link to another one of the sub-component parts in the selected set of sub-component parts; and,
displaying the presentation for viewing by a user responsive to the presentation data.
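
One way to read the assembly step is as an ordered series of parts, each carrying change data and a reference to either a position in the presentation data or an earlier part's output. The insert-after-the-linked-part behaviour and the list-of-strings presentation data below are assumptions made purely for illustration.

    def assemble(parts):
        """parts: ordered list of {'change': text, 'ref': ('position', i) or ('part', j)}.
        Returns the assembled presentation data as a list of strings."""
        presentation = []
        produced = {}                              # index in the presentation written by each part
        for i, part in enumerate(parts):
            kind, target = part["ref"]
            # a 'part' reference drops the change right after the linked part's output;
            # a 'position' reference names an explicit position in the presentation data
            pos = produced[target] + 1 if kind == "part" else target
            presentation.insert(pos, part["change"])
            for j, p in list(produced.items()):    # earlier outputs at or after pos shift by one
                if p >= pos:
                    produced[j] = p + 1
            produced[i] = pos
        return presentation

    slides = assemble([
        {"change": "Title slide", "ref": ("position", 0)},
        {"change": "Agenda", "ref": ("part", 0)},
        {"change": "Summary", "ref": ("part", 1)},
    ])
    print(slides)   # ['Title slide', 'Agenda', 'Summary']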

US Pat. No. 10,796,281

COMPUTER IMPLEMENTED SYSTEM FOR MONITORING MEETINGS AND ACTION ITEMS AND METHOD THEREOF

ZENSAR TECHNOLOGIES LTD.,...

1. A computer implemented system (100) for monitoring meetings and action items comprising:a repository (10) configured to store agenda identification rules, action item identification rules, action item prioritization rules, and action item assignment rules;
a registration module (20) configured to receive employee information data from a plurality of employees and further configured to facilitate registration of the plurality of employees;
a login module (22) configured to receive login details from the plurality of employees and further configured to facilitate login of the plurality of the employees;
an agenda receiver (26) configured to receive agenda inputs from the plurality of the logged-in employees;
an agenda identifier (28) configured to cooperate with the agenda receiver (26) to receive agenda inputs and further configured to identify at least one agenda by re-arranging and shortlisting the received agenda inputs based on agenda identification rules;
an audio recorder (30) configured to record audio of a currently held meeting with respect to the identified agenda and to generate audio data;
a speech to text converter (32) configured to cooperate with the audio recorder (30) to receive the audio data of the currently held meeting and further configured to convert the audio data into a text format to generate minutes of meeting data;
a meeting database (34) configured to cooperate with the agenda identifier (28) and the speech to text converter (32) to receive and store the identified agenda and the minutes of meeting data, respectively;
an action item identifier (36) configured to cooperate with the meeting database (34) to receive the minutes of meeting data and further configured to identify a plurality of action items from the minutes of meeting data, based on the action item identification rules;
an action item priority assignor (38) configured to assign priority to each of the identified action items in the currently held meeting, based on the action item prioritization rules including a task impact and a closure of action item;
an action item assignor (40) configured to assign each of the action items to at least one of the employees from the plurality of the employees, based on the action item assignment rules;
an action item tracker (42) configured to track each of the action items to determine a progress status of the action item, and further configured to send a reminder to the employee with respect to the assigned action item;
a dashboard (50) configured to cooperate with the agenda identifier (28), the meeting database (34), the action item identifier (36), the action item assignor (40) and the action item tracker (42), and further configured to display identified agendas, minutes of the meeting data, identified action items, assignment of each of the action items, and the progress status of action items;
wherein the registration module (20), the login module (22), the agenda receiver (26), the agenda identifier (28), the speech to text converter (32), the action item identifier (36), the action item priority assignor (38), the action item assignor (40), the action item tracker (42), and the dashboard (50) are configured to be implemented using one or more processors.

US Pat. No. 10,796,280

SYSTEM FOR PREPARATION OF MODIFIABLE RECIPE-BASED PRODUCTS

Simplified Technologies, ...

1. A method of preparing recipe-based products, comprising:receiving, via a terminal, order information for a recipe-based product;
transmitting, via the terminal, the order information to a server computer;
receiving, via the terminal, a recipe for the recipe-based product from the server computer;
generating, using a token generator coupled to the terminal, a machine-readable, physical token having the recipe stored therein;
reading, via a first token reader at a first station, the generated physical token;
communicating, via a first human-machine interface (HMI) at the first station, only the one or more steps of the recipe that are to be completed at the first station;
reading, via a second token reader at a second station, the generated physical token; and
communicating, via a second HMI at the second station, only the one or more steps of the recipe that are to be completed at the second station.
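
A small sketch of the token round trip, assuming the recipe is serialized as JSON for the machine-readable physical token (the claim does not specify the encoding) and that each step records the station at which it is to be completed.

    import json

    def make_token(recipe):
        """Serialize the recipe so it can be written to a machine-readable physical
        token (e.g. printed as a barcode or QR label); the encoding is an assumption."""
        return json.dumps(recipe)

    def steps_for_station(token_payload, station):
        """What a station's HMI should display: only the steps assigned to that station."""
        recipe = json.loads(token_payload)
        return [step["text"] for step in recipe["steps"] if step["station"] == station]

    recipe = {"product": "latte", "steps": [
        {"station": 1, "text": "Pull double espresso shot"},
        {"station": 2, "text": "Steam 8 oz milk to 140 F"},
        {"station": 2, "text": "Pour milk and finish"},
    ]}
    token = make_token(recipe)
    print(steps_for_station(token, 2))   # only the second station's steps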

US Pat. No. 10,796,279

SYSTEMS AND METHODS FOR AUTOMATED OUTBOUND PROFILE GENERATION

1. A computer-implemented system for generating an automated outbound profile, comprising:at least one processor;
at least one database; and
a memory comprising instructions that, when executed by the at least one processor, performs steps comprising:
receiving data comprising a capacity of a fulfillment center (FC);
receiving a plurality of product identifiers associated with a plurality of incoming products to the FC;
periodically collecting and storing transactional logs for the plurality of products at the FC using the product identifier;
determining a current inventory for the plurality of products stored at the FC using the product identifier;
storing in a database a plurality of transactional logs and current inventories from a plurality of FCs, the plurality of transactional logs and current inventories containing transactional data;
dividing transactional data into a training dataset and a validation dataset, the training dataset having more data than the validation dataset;
generating, using a machine learning algorithm, a predictive model based on the training dataset;
validating the predictive model using the validation dataset;
generating an outbound profile for the FC by applying the predictive model to the associated transactional logs and current inventory, wherein the outbound profile comprises an expected percentage of outgoing products for a plurality of categories of products; and
managing network outbound using the generated outbound profile of the FC by comparing the outbound profile to actual outbound capacity of the FC, wherein managing network outbound comprises assigning orders to different FCs in order to prevent one or more FCs from being assigned to fulfill orders outside of their capacity.
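
A toy end-to-end pass over the steps recited above: split the transactional data, fit a "model" (here just observed category shares, standing in for whatever machine learning algorithm is used), validate it on the held-out portion, and emit an outbound profile of expected percentages per category. The 80/20 split, the tolerance check, and the category-share model are all assumptions.

    import random
    from collections import Counter

    def fit_category_shares(records):
        """Stand-in predictive model: observed share of outbound units per category."""
        counts = Counter(r["category"] for r in records)
        total = sum(counts.values())
        return {cat: n / total for cat, n in counts.items()}

    def validate(model, records, tolerance=0.15):
        """Crude validation: the modelled shares should roughly hold on held-out data."""
        observed = fit_category_shares(records)
        return all(abs(model.get(cat, 0.0) - share) <= tolerance
                   for cat, share in observed.items())

    logs = [{"category": c} for c in
            ["grocery"] * 60 + ["apparel"] * 25 + ["electronics"] * 15]
    random.Random(0).shuffle(logs)
    split = int(len(logs) * 0.8)               # training dataset larger than validation dataset
    train, val = logs[:split], logs[split:]
    model = fit_category_shares(train)
    print("validated:", validate(model, val))
    outbound_profile = {cat: round(share, 2) for cat, share in model.items()}
    print(outbound_profile)                    # expected % of outgoing products per category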

US Pat. No. 10,796,278

OPTIMIZING PALLET LOCATION IN A WAREHOUSE

Lineage Logistics, LLC, ...

1. A system for managing a plurality of pallets in a warehouse, the system comprising:a plurality of storage racks having a plurality of rack openings;
a database that is programmed to store pallet allocation data that associate pallet storage duration with a plurality of sections of the storage racks, the plurality of sections being arranged by distance from an entrance of the warehouse and mapped to the pallet storage durations, wherein the plurality of storage racks includes one or more horizontal bars adjustable along a plurality of elevations on the storage racks to define the plurality of rack openings within the storage racks; and
a computer system including one or more processors that are programmed to perform operations including:
identifying a pallet delivered to the warehouse;
determining an expected storage duration of the pallet in the warehouse;
determining a height of the pallet;
determining a storage location of the pallet in the warehouse based on the expected storage duration and the height of the pallet;
transmitting information identifying the storage location to equipment for placement of the pallet;
identifying a plurality of candidate rack openings that are available in the storage racks;
calculating optimization values for the candidate rack openings based on the expected storage duration and the height of the pallet; and
determining a rack opening from the candidate rack openings having an optimization value exceeding a threshold value, the rack opening being the storage location for the pallet,
wherein each of the optimization values includes a combination of a duration match value and a height match value for each of the candidate rack openings, wherein the duration match value for a candidate rack opening represents proximity in distance between a section of the storage racks suited for the expected storage duration of the pallet and a section of the storage racks to which the candidate rack opening belongs, and wherein the height match value for the candidate rack opening represents proximity in measurement between the height of the pallet and a height of the candidate rack opening.
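
A sketch of the scoring recited in the last clause, where each candidate rack opening gets an optimization value combining a duration match value and a height match value. The reciprocal-distance scoring, the section_for_duration mapping, and the 1.0 threshold are assumptions chosen for illustration.

    def duration_match(opening_section, ideal_section):
        """Sections closer (by index from the warehouse entrance) to the ideal score higher."""
        return 1.0 / (1 + abs(opening_section - ideal_section))

    def height_match(opening_height, pallet_height):
        """Openings only slightly taller than the pallet score higher; too short scores 0."""
        if opening_height < pallet_height:
            return 0.0
        return 1.0 / (1 + (opening_height - pallet_height))

    def choose_opening(openings, pallet, section_for_duration, threshold=1.0):
        """openings: list of dicts with 'id', 'section', 'height'. Returns the best-scoring
        opening whose combined optimization value exceeds the threshold, or None."""
        ideal = section_for_duration(pallet["expected_days"])
        scored = [(duration_match(o["section"], ideal) + height_match(o["height"], pallet["height"]), o)
                  for o in openings]
        for value, opening in sorted(scored, reverse=True, key=lambda pair: pair[0]):
            if value > threshold:
                return opening["id"], round(value, 2)
        return None

    section_for_duration = lambda days: 0 if days <= 7 else (1 if days <= 30 else 2)
    openings = [{"id": "A1", "section": 0, "height": 60},
                {"id": "C4", "section": 2, "height": 52},
                {"id": "B2", "section": 1, "height": 55}]
    print(choose_opening(openings, {"expected_days": 20, "height": 50}, section_for_duration))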

US Pat. No. 10,796,277

SYSTEMS AND METHODS FOR ELECTRONIC PLATFORM FOR TRANSACTIONS OF WEARABLE ITEMS

CaaStle, Inc., New York,...

1. A computer-implemented method for dynamically managing data associated with electronic transactions of wearable items, the method comprising:receiving, by one or more processors, wearable item data, the wearable item data describing one or more wearable items made available for physical shipment to users of a subscription-based wearable item distribution service via electronic transactions;
hosting, by the one or more processors, an electronic retailer portal for a plurality of retailers, the electronic retailer portal comprising one or more user interfaces allowing each retailer to create, modify, or update one or more wearable item catalogs for wearable items of the subscription-based wearable item distribution service;
hosting, by the one or more processors, a plurality of retailer storefronts associated with respective wearable item catalogs, the retailer storefronts each comprising one or more of a web site, a web-based application, and a mobile device application, the retailer storefronts each having an interface customized for a respective retailer of the plurality of retailers;
receiving, by the one or more processors, one or more electronic user transactions initiated at one or more user platforms, including an electronic user transaction initiated by an interaction with one of the retailer storefronts to order a wearable item, each of the one or more electronic user transactions associated with at least one unique user identifier and at least one unique item identifier identifying a wearable item described in the received wearable item data and contained in one of the wearable item catalogs, wherein the one or more user platforms comprise one or more user interfaces accessible from one or more user devices over the one or more networks;
in response to receiving the electronic user transaction for the ordered wearable item contained in one of the wearable item catalogs, updating, by the one or more processors, one or more transaction databases based on the one or more electronic user transactions, the one or more transaction databases comprising one or more data sets comprising flags indicative of whether previously-shipped wearable items were actually worn by a subscribing user associated with the unique user identifier of the subscription-based wearable item distribution service;
receiving, by the one or more processors, a performance report for one or more of the retailer storefronts including the retailer storefront for the wearable item catalog that includes the ordered wearable item, the performance report including an analysis of user segmentation information corresponding to visitors of the one or more retailer storefronts;
in response to receiving the one or more electronic user transactions for the wearable item, initiating retailer storefront jobs including:
calling a size information component of a data service application programming interface (API), based on the wearable item data corresponding to the ordered wearable item and data associated with the unique user identifier; and
calling a recommendation component of the data service API for generating a recommendation for one or more wearable items of the subscription-based wearable item distribution service based on the flags included in the one or more data sets indicative of whether previously-shipped wearable items of the subscription-based wearable item distribution service were actually worn by the subscribing user associated with the unique user identifier of the subscription-based wearable item distribution service;
receiving, by the one or more processors, one or more wearable item operations requests to initiate order processing of the ordered wearable item identified by the unique user identifier and the unique item identifier for a user identified by the received unique user identifier;
in response to receiving the one or more wearable item operations requests, initiating one or more services to fulfill the one or more wearable item operations requests, the services including the size information component of the data service API; and
updating at least one of the one or more transaction databases based on completion of the one or more wearable item operations requests.

US Pat. No. 10,796,276

SYSTEMS AND METHODS FOR ELECTRONIC PLATFORM FOR TRANSACTIONS OF WEARABLE ITEMS

CaaStle, Inc., New York,...

1. A computer-implemented method for dynamically managing data associated with electronic transactions of wearable items, the method comprising:receiving, by one or more processors, wearable item data from one or more electronic tenant interfaces, the wearable item data describing one or more wearable items made available for physical shipment to users via electronic transactions, wherein the one or more electronic tenant interfaces comprise one or more user interfaces accessible from one or more tenant devices over one or more networks;
hosting, by the one or more processors, a wearable items warehouse operations portal, a customer service portal, and a marketing portal, each associated with providing the wearable items as a service, the wearable items warehouse operations portal, the customer service portal, and the marketing portal each comprising a user interface accessible from one or more employee devices over the one or more networks;
receiving, by the one or more processors, one or more electronic user transactions initiated at one or more user platforms for subscribing to, purchasing, or renting one or more of the wearable items provided as a service, each of the one or more electronic user transactions associated with at least one unique user identifier and at least one unique item identifier identifying wearable items described in the received wearable item data, wherein the one or more user platforms comprise one or more user interfaces accessible from one or more user devices over the one or more networks;
in response to receiving the one or more electronic user transactions for the one or more wearable items provided as a service, updating, by the one or more processors, one or more transaction databases and one or more analytics databases, based on the one or more electronic user transactions;
periodically synchronizing, by the one or more processors, a state of one or more microservices for providing the wearable items as a service and one or more external services, in response to tasks triggered by each of the wearable items warehouse operations portal, the customer service portal, and the marketing portal, by an action of the one or more employee devices, the one or more external services including a storefront service for managing a storefront for the wearable items provided as a service for a particular tenant;
maintaining one or more data warehouse systems, by the one or more processors, comprising data consolidated from the one or more transaction databases, the one or more analytics databases, and one or more external systems;
generating, by the one or more processors, an electronic display based on the consolidated data and based on a request for an analytics report received from the one or more employee devices, the consolidated data including data from the one or more updated analytics databases which was updated based on at least an action of the one or more employee devices via the wearable items warehouse portal, wherein the electronic display is accessible from the one or more employee devices;
receiving, by one or more processors, one or more wearable item operations requests associated with providing the wearable items as a service, from at least one of the wearable items warehouse operations portal, the customer service portal, the marketing portal, and the one or more electronic tenant interfaces to initiate order processing of a wearable item identified by the unique user identifier and the unique item identifier for a user identified by the received unique user identifier;
in response to receiving the one or more wearable item operations requests to initiate order processing of the wearable item identified by the unique user identifier and the unique item identifier, initiating the one or more microservices to fulfill the one or more wearable item operations requests received from at least one of the wearable items warehouse operations portal, the customer service portal, the marketing portal, and the one or more electronic tenant interfaces; and
updating the one or more transaction databases and the one or more analytics databases based on completion of the one or more wearable item operations requests.

US Pat. No. 10,796,275

SYSTEMS AND METHODS FOR INVENTORY CONTROL AND DELIVERY USING UNMANNED AERIAL VEHICLES

Amazon Technologies, Inc....

1. A method comprising:receiving, from a central control at an unmanned aerial vehicle (UAV), a signal to survey an orchard;
flying to a first location within the orchard to assess a first example of a product in the first location;
activating one or more sensors on the UAV to collect first data regarding a first readiness for harvesting the first example of the product;
flying to a second location within the orchard to assess a second example of the product in the second location;
activating the one or more sensors on the UAV to collect second data regarding a second readiness for harvesting the second example of the product; and
determining, based at least in part on the first readiness or the second readiness, to harvest at least one of the first example of the product or the second example of the product; and
causing the UAV to harvest the at least one of the first example of the product or the second example of the product.

US Pat. No. 10,796,274

CONSUMABLE ITEM ORDERING SYSTEM

Walmart Apollo, LLC, Ben...

1. A retail customer consumption tracking and automated ordering system comprising:an item tracking system comprising a tracking processor configured to track purchases of at least one item purchased for a user, and for each purchase updating a supply of a corresponding one of the at least one item available for use at the user's residence by a home appliance;
a consumption system communicatively coupled with the item tracking system and configured to receive notifications from a networking-enabled device operating at the user's residence, the networking-enabled device being the appliance used in combination with the at least one item and each of the notifications comprising a notification of use of one of the at least one item;
receive, from a user computing device associated with the user, (I) a unique item identifier for each of the at least one item captured by the user using the user computing device in scanning the at least one item to capture the unique item identifier, and (II) a corresponding subscription notification of a request from the user to subscribe to start a process of having consumption of the at least one item tracked over time and authorize automatically making subsequent purchases of the at least one item on behalf of the customer; and
initiate, in response to receiving the unique item identifier and the subscription notification of the request to start tracking the consumption of the at least one item, an automatic tracking over time of the consumption of the at least one item based on the unique item identifier;
a profile system communicatively coupled with the tracking system and the consumption system, and configured to:
obtain a first set of rules that when applied determine item consumption by the appliance as a function of usage per instance;
apply the first set of rules to determine consumption of the at least one item based on the notifications; and
obtain a second set of rules that when applied determine a runout date for the at least one item as a function of the consumption of the at least one item; and
apply the second set of rules to determine a runout date for the at least one item according to the consumption of the at least one item; and
cause a listing of a plurality of items, comprising the at least one item, being tracked to be displayed to the user, receive a selection from the listing of a first item, and in response to the selection from the user regarding the first item, distribute a graphical user interface to be displayed on the user computing device pictorially illustrating consumed and remaining amounts of the first item, a first item identifier of the first item, and a scheduled shipment date;
wherein the consumption system in applying the first set of rules is configured to determine a quantity per use by the appliance based on tracking over time a number of uses between at least one cycle of a purchase of the at least one item and a subsequent purchase of the same item, wherein the number of uses is based on the received notifications from the networking-enabled device;
an ordering system communicatively coupled with the profile system and configured to: obtain a third set of rules that when applied invoke automatic ordering of the at least one item as a function of the runout date; and apply the third set of rules to invoke automatic ordering of the at least one item causing delivery of the at least one item at least by the runout date.
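
The three rule sets can be pictured as small functions: consumption per appliance use derived from the uses observed between purchases, a projected runout date, and an ordering trigger that leaves time for delivery. The uniform-consumption assumption, the 3-day lead time, and the example quantities are illustrative only.

    from datetime import date, timedelta

    def quantity_per_use(package_size, uses_between_purchases):
        """Rule set 1: how much of the item one appliance cycle consumes (assumed uniform)."""
        return package_size / uses_between_purchases

    def runout_date(last_purchase, package_size, per_use, uses_per_week):
        """Rule set 2: when the current supply is expected to run out."""
        weeks_left = package_size / (per_use * uses_per_week)
        return last_purchase + timedelta(weeks=weeks_left)

    def should_order(today, runout, lead_time_days=3):
        """Rule set 3: order early enough that delivery arrives by the runout date."""
        return today >= runout - timedelta(days=lead_time_days)

    per_use = quantity_per_use(package_size=100, uses_between_purchases=40)   # e.g. detergent (oz)
    runout = runout_date(date(2020, 9, 1), 100, per_use, uses_per_week=5)
    print(per_use, runout, should_order(date(2020, 10, 26), runout))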

US Pat. No. 10,796,273

PLATFORM FOR MANAGEMENT AND ORGANIZATION OF PERSONAL PROPERTY

Livible, Inc., Seattle, ...

1. A method for managing an inventory of items over a network using a network computer that includes one or more processors that perform actions, comprising:instantiating an inventory platform to perform actions, including:
employing item information associated with each of one or more items that is provided by an owner of the one or more items to a memory to store the item information, wherein the item information includes at least a location, owner information, item dimensions, and a unique label identifier, and wherein the location and the unique label identifier are mapped to each other;
in response to a selection of a version of the item information, employing a preference to restrict communication for each selected version of the item information having a text based media format to one or more available cellular networks and employing another preference to restrict communication for each selected version of the item information having one or more of an image or audio based media format to one or more available wifi networks, wherein the one or more cellular or wifi networks are periodically tested for qualifications to determine which networks are available to communicate item information having one or more types of media formats that include one or more of text, images, or audio, and wherein a user includes one or more of the owner or a new owner of the one or more items; and
in response to a request by the owner to transfer the one or more items at a current location of the one or more items to an off-premises storage location, perform further actions, including:
providing one or more scheduling options to collect the one or more items from the current location and transfer them to the off-premises storage location for the owner of the one or more items;
providing collection instructions to a distribution organization, wherein the collection instructions are based on a scheduling option selected by the owner and the current location of the one or more items, wherein the collection instructions include that portion of the item information that includes, the location, the item dimensions and the unique label identifier;
in response to a notification that the one or more items are delivered to the off-premises storage location, generating a new current location and updating the location in the corresponding item information for the one or more items to indicate that they are stored at the off-premises storage location which is geographically different than an old current location, and wherein a machine vision system and machine learning based classifiers and models are employed to identify the one or more items and provide additional item information, including one or more of a size, a volume, a name, a brand name, a value, or a related item; and
employing geolocation information provided by a Global Positioning System (GPS) device on a client computer to modify a visual presentation of a client application and one or more of a database, a user interface, an internal process, or a report based on a location of the client computer employed by the user, wherein the modifications include one or more of time zones, languages, or calendar formatting.
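
The media-format routing preference described in this claim (text over cellular, image or audio over Wi-Fi, restricted to networks that passed the periodic qualification test) reduces to a small filter; the dict shape of the network records below is an assumption.

    def allowed_networks(media_format, available):
        """Route text over cellular and image/audio over Wi-Fi, per the stated preferences;
        'available' lists only networks that passed the periodic qualification test."""
        wanted = "cellular" if media_format == "text" else "wifi"
        return [net for net in available if net["type"] == wanted]

    networks = [{"name": "LTE-1", "type": "cellular"}, {"name": "home-ap", "type": "wifi"}]
    print(allowed_networks("image", networks))   # -> only the Wi-Fi network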

US Pat. No. 10,796,272

INVENTORY DELIVERY SYSTEM FOR MOBILE STOREFRONTS USING AUTONOMOUS CONVEYANCE

MAIN GRADE ASSETS, LLC, ...

1. An order and delivery system to be deployed over a regional delivery area comprising:a plurality of mobile storefronts operating within the regional delivery area; and
an order server configured to communicate over a closed network with said plurality of mobile storefronts through a GPS-based routing program, with said order server further configured to perform the following:
receive an order from a consumer via an Internet connected device,
determine the consumer's geolocation corresponding to the order based on the consumer signing into an account,
verify whether or not the consumer's geolocation is the same as an address associated with the account;
verify that the consumer's geolocation is within the regional delivery area by plotting the consumer's geolocation using a mapping program,
calculate via the GPS-based routing program distances between the consumer's geolocation and a current location of each of said plurality of mobile storefronts, and
send the order to one of said plurality of mobile storefronts over the closed network based on the calculated distances;
each mobile storefront comprising
a transceiver configured to receive the order from said order server and to communicate with the consumer,
a satellite receiver configured to determine the current location of the mobile storefront,
a navigation terminal, and
an onboard computer configured to
evaluate the current location and heading of said mobile storefront,
provide navigation and routing on the consumer's geolocation, and
send a message via said transceiver directly to the device used by the consumer to place the order, with the message including an arrival notification when said mobile storefront that received the order is within a pre-determined proximity to the consumer's geolocation;
said mobile storefront receiving the order facilitates preparation of the order with its stored inventory for delivery to the consumer at the consumer's geolocation.
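
A sketch of the distance calculation and routing decision, assuming plain latitude/longitude geolocations and a haversine great-circle distance standing in for whatever the GPS-based routing program actually computes.

    from math import radians, sin, cos, asin, sqrt

    def haversine_miles(a, b):
        """Great-circle distance between two (lat, lon) points in miles."""
        lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
        h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
        return 2 * 3958.8 * asin(sqrt(h))

    def route_order(consumer_loc, storefronts):
        """Pick the mobile storefront currently closest to the consumer's geolocation."""
        return min(storefronts, key=lambda s: haversine_miles(consumer_loc, s["location"]))

    trucks = [{"id": "truck-1", "location": (37.78, -122.41)},
              {"id": "truck-2", "location": (37.74, -122.47)}]
    print(route_order((37.77, -122.42), trucks)["id"])   # -> truck-1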

US Pat. No. 10,796,271

COMMUNICATION SYSTEM FOR MOBILE STOREFRONTS USING ARTIFICIAL INTELLIGENCE

MAIN GRADE ASSETS, LLC, ...

1. An order and delivery system to be deployed over a regional delivery area comprising:a plurality of mobile storefronts operating within the regional delivery area; and
an order server configured to communicate over a closed network with said plurality of mobile storefronts through a GPS-based routing program, with said order server further configured to perform the following:
receive an order from a consumer via an Internet connected device,
determine the consumer's geolocation corresponding to the order based on the consumer signing into an account,
verify whether or not the consumer's geolocation is the same as an address associated with the account;
verify that the consumer's geolocation is within the regional delivery area by plotting the consumer's geolocation using a mapping program,
calculate via the GPS-based routing program distances between the consumer's geolocation and a current location of each of said plurality of mobile storefronts,
send the order to one of said plurality of mobile storefronts over the closed network based on the calculated distances, and
store a transaction history each time the consumer places an order, and send to the consumer at least one promotional message based on the transaction history as part of a rewards program;
each mobile storefront comprising
a transceiver configured to receive the order from said order server and to communicate with the consumer,
a satellite receiver configured to determine the current location of the mobile storefront,
a navigation terminal, and
an onboard computer configured to evaluate the current location and heading of said mobile storefront, and
send a message via said transceiver directly to the device used by the consumer to place the order, with the message including an arrival notification when said mobile storefront that received the order is within a pre-determined proximity to the consumer's geolocation;
said mobile storefront receiving the order facilitates preparation of the order with its stored inventory for delivery to the consumer.

US Pat. No. 10,796,270

SYSTEMS AND METHODS FOR SYNCHRONIZED DELIVERY

UNITED PARCEL SERVICE OF ...

1. A computer-implemented method comprising:detecting, via one or more Global Positioning System (GPS) devices, geocode of one or more travel paths traveled by a vehicle, the geocode corresponds to one or more latitude and longitude coordinates detected along the one or more travel paths;
based at least in part on the detecting of the geocode, determining one or more street segments by associating the geocode with the one or more street segments;
storing, in a data structure within computer memory, serviceable point data for each of a plurality of serviceable points, the serviceable point data for each serviceable point comprising data identifying a street segment identifier for the corresponding serviceable point, the street segment identifier identifies a particular portion of a street that the corresponding serviceable point is within based on the one or more street segments, the street segment identifier corresponding to a name of the street and at least one cross street that intersects with the street, each serviceable point of the plurality of serviceable points corresponding to a specific location to deliver one or more parcels to within a particular street segment of the one or more street segments;
receiving, over a computer network, first electronic shipping data indicating that a first shipment is to be delivered to a first serviceable point of the plurality of serviceable points; and
responsive to the receiving, over the computer network, the first electronic shipping data indicating that the first shipment is to be delivered to the first serviceable point, determining, based at least on a processor that accesses the data structure within the computer memory, whether a second shipment to be delivered to a second serviceable point is available for synchronized delivery with the first shipment, the first serviceable point and the second serviceable point being different delivery locations, the determining includes:
determining whether a first street segment identifier corresponding to a first street that the first serviceable point is on is connected to a second street segment identifier corresponding to a second street that the second serviceable point is on, wherein the first street and the second street are different streets; and
at least partially responsive to determining that the first street segment identifier of the first street is connected to the second street segment identifier of the second street, providing an indication that the first shipment and the second shipment are available for synchronized delivery.
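
A minimal model of the connectivity test: a street segment is identified by its street name and the cross streets that bound it, and two serviceable points on different streets qualify for synchronized delivery when their segments intersect. The set-based cross-street representation is an assumption.

    def segments_connected(seg_a, seg_b):
        """Two street segments on different streets are 'connected' when one segment's
        street appears among the other's cross streets (i.e. the segments intersect)."""
        return (seg_a["street"] in seg_b["cross_streets"]
                or seg_b["street"] in seg_a["cross_streets"])

    def synchronized_delivery_available(point_a, point_b, segments):
        seg_a, seg_b = segments[point_a["segment_id"]], segments[point_b["segment_id"]]
        return seg_a["street"] != seg_b["street"] and segments_connected(seg_a, seg_b)

    segments = {
        "S1": {"street": "Main St", "cross_streets": {"1st Ave", "2nd Ave"}},
        "S2": {"street": "2nd Ave", "cross_streets": {"Main St", "Oak St"}},
    }
    print(synchronized_delivery_available({"segment_id": "S1"}, {"segment_id": "S2"}, segments))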

US Pat. No. 10,796,269

METHODS FOR SENDING AND RECEIVING NOTIFICATIONS IN AN UNMANNED AERIAL VEHICLE DELIVERY SYSTEM

UNITED PARCEL SERVICE OF ...

1. A method for providing a notification regarding delivery of a parcel by an unmanned aerial vehicle (UAV), the method comprising:after navigating a UAV to a serviceable point, establishing, via a UAV computing entity, a direct communication link between the UAV computing entity and a user computing entity, wherein (a) a UAV comprises the UAV computing entity, a UAV chassis, and a parcel carrier coupled to the UAV chassis, (b) the parcel carrier comprises an engagement housing selectively coupled to the UAV chassis, (c) a parcel carrying mechanism is engaged with and securing a parcel to the engagement housing, and (d) the user computing entity is associated with the serviceable point;
releasing the parcel from the parcel carrying mechanism of the parcel carrier; and
after releasing the parcel from the parcel carrying mechanism of the parcel carrier, providing, via the UAV computing entity, a notification to the user computing entity through the direct communications link, wherein the notification comprises information indicative of the release of the parcel at the serviceable point.

US Pat. No. 10,796,268

APPARATUS AND METHOD FOR PROVIDING SHIPMENT INFORMATION

GTJ VENTURES, LLC, Yonke...

1. An apparatus, comprising:a shipment conveyance device, wherein the shipment conveyance device is a shipping container, a pallet, or a piece of luggage;
a global positioning device, wherein the global positioning device is located in, on, or at, the shipment conveyance device, and further wherein the global positioning device determines a position or location of the shipment conveyance device;
a processor, wherein the processor generates a message in response to an occurrence of an event, or in response to a request for information regarding the shipment conveyance device which is automatically received by a receiver, wherein the message contains information regarding a shipment of the shipment conveyance device; and
a transmitter, wherein the transmitter is located in, on, or at, the shipment conveyance device, and further wherein the transmitter transmits the message to a communication device associated with an owner of the shipment conveyance device or an individual authorized to receive the message.

US Pat. No. 10,796,267

INSTRUMENT INVENTORY SYSTEM AND METHODS

RST AUTOMATION LLC, Bron...

1. An instrument inventory system, comprising:a hood creating an enclosed space within which each of a plurality of instruments is selectively receivable;
a conveyer to load each of the selectively received instruments into the enclosed space;
an instrument interface having at least one instrument sensing element for sensing characteristics of each of the selectively received instruments;
a catch and release mechanism for catching and releasing each of the selectively received instruments for which the characteristics of the selectively received instrument correspond to identified characteristics for a known group of instruments for retention and for which a catch and release characteristic is present; and
a computer having a database, the database storing instrument identification data and characteristics of each of the selectively received instruments, the database having an instrument inventory, the computer further having a hardware microprocessor, the hardware microprocessor being configured to execute:
an instrument data analyzer that analyzes the characteristics data received from the instrument interface for each of the selectively received instruments and compares the received data with the stored instrument identification data to determine whether each selectively received instrument is confirmed for catching and release, and
an instrument processor that receives instrument identifications from the instrument data analyzer, controls the database, and determines data to be recorded in the database thereby creating instrument records in the database for an instrument inventory, and wherein the instrument processor further compares an identity of each selectively received instrument to identities for each of said known set of grouped instruments and determines whether each selectively received instrument is of a type that is included in said known set of grouped instruments, wherein said known group of instruments includes two or more distinct types of instruments and wherein at least one of the two or more distinct types of instruments is not a sub-type of at least one other instrument type in the known group.

US Pat. No. 10,796,266

AUTOMATED CONTEXT DRIVEN BUILD PLAN LIFECYCLE

The Boeing Company, Chic...

1. A control system for updating a context-driven build plan for production of a physical vehicle, the control system comprising:a design engineering database having a plurality of design digital data objects associated with a particular physical vehicle;
a manufacturing database having a plurality of manufacturing digital data objects that include process-related information associated with the plurality of design digital data objects;
a production database having a plurality of production digital data objects that include production information associated with the plurality of design digital data objects;
a criterion module comprising a data processing system configured to assign a context criterion to any of the plurality of design digital data objects, the plurality of manufacturing digital data objects, or the plurality of production digital data objects, wherein the criterion module is further configured to assign the context criterion to a particular design digital data object based on properties or rules associated with the particular design digital data object;
a user interface comprising the data processing system configured to receive an input of a requested change for the particular design digital data object associated with the physical vehicle;
a mapping module comprising the data processing system configured to, in response to receiving the input of the requested change, use the assigned context criterion to establish a mapping between the particular design digital data object and any other of the plurality of design digital data objects, the plurality of manufacturing digital data objects, or the plurality of production digital data objects, wherein the mapping module is configured to establish the mapping based on the properties or rules associated with the particular design digital data object to which the requested change pertains, and wherein the mapping module is further configured to automatically send a digital change request to an authority associated with whichever of the plurality of design digital data objects, the plurality of manufacturing digital data objects, or the plurality of production digital data objects to which the requested change for the particular digital data object pertains, wherein the digital change request includes information based on the properties or rules associated with the particular design digital data object and also on information derived from the mapping; and
a change module comprising the data processing system configured, upon approval by the authority, to update the context-driven build plan with the digital change request to reflect a change to process-related information or production-related information associated with the particular digital data object, whereby an updated build plan is generated.

US Pat. No. 10,796,265

METHOD AND SYSTEM FOR EVALUATING PERFORMANCE OF ONE OR MORE EMPLOYEES OF AN ORGANIZATION

Wipro Limited, Bangalore...

1. A computer-implemented method of evaluating performance of one or more employees of an organization, the method comprising:providing, by a performance evaluating system, a review matrix corresponding to one of a plurality of review contexts assigned to one or more recommenders on corresponding end user devices;
receiving, by the performance evaluating system via a communication network, a feedback in the review matrix for each of the one or more employees from the one or more recommenders, wherein the feedback comprises a recommender's review score and review comments;
assigning, by the performance evaluating system, a unique ID for the recommender's review score and review comments in the feedback received for each of the one or more employees;
generating, by the performance evaluating system, a system review score for each of the one or more employees by analysing the review comments using a Natural Language Processing (NLP) technique;
computing, by the performance evaluating system, a compound review score for each of the one or more employees for each of the plurality of review contexts using the recommender's review score, the system review score, and the unique ID, wherein computing the compound review score comprises:
computing, by the performance evaluating system, a Square Error (SE) value between the recommender's review score and the system review score corresponding to the review context for each of the one or more recommenders;
comparing, by the performance evaluating system, the SE value with a predefined SE threshold stored in a memory to identify a first weightage value corresponding to the recommender's review score and a second weightage value corresponding to the system review score; and
correlating, by the performance evaluating system, the recommender's review score with the first weightage value and the system review score with the second weightage value to generate the compound review score;
computing, by the performance evaluating system, a cumulative evaluation score for each of the one or more employees using at least one of a linear weighting technique or an exponential-down weighting technique on the compound review score generated for each of the plurality of review contexts, predefined organizational weights and a historical evaluation score of each of the one or more employees that are stored in the memory; and
analysing, by the performance evaluating system, the cumulative evaluation score of each of the one or more employees to evaluate performance of each of the one or more employees, wherein the evaluated performance of each of the one or more employees is used for project management and task assignment in the organization.
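
A sketch of the compound and cumulative scoring, assuming fixed weight pairs selected by comparing the squared error against the threshold and a simple linear blend with the historical score; the actual weightage values and organizational weights are left open by the claim.

    def compound_review_score(recommender_score, system_score, se_threshold=1.0):
        """Weight the two scores by how much they disagree: a large squared error shifts
        weight toward the NLP-derived system score (the weight pairs are assumptions)."""
        se = (recommender_score - system_score) ** 2
        w_recommender, w_system = (0.7, 0.3) if se <= se_threshold else (0.4, 0.6)
        return w_recommender * recommender_score + w_system * system_score

    def cumulative_score(context_scores, context_weights, history, history_weight=0.2):
        """Linear weighting over review contexts blended with the historical score."""
        current = sum(context_weights[c] * s for c, s in context_scores.items())
        return (1 - history_weight) * current + history_weight * history

    contexts = {"delivery": compound_review_score(4.5, 3.0),
                "teamwork": compound_review_score(4.0, 4.2)}
    print(round(cumulative_score(contexts, {"delivery": 0.6, "teamwork": 0.4}, history=3.8), 2))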

US Pat. No. 10,796,264

RISK ASSESSMENT IN ONLINE COLLABORATIVE ENVIRONMENTS

International Business Ma...

1. A method comprising:receiving, by one or more computer processors, text content provided to one or more content providers that contains one or more textual elements describing an issue experienced by a user interacting with a product in development, wherein the text content is reported by a first user in a first online forum used by a plurality of users for collaboration;
parsing, by one or more computer processors, the received text content into one or more textual elements of interest associated with the issue,
wherein the one or more textual elements are weighted based on one or more conditions to identify one or more textual elements of interest by utilizing a term frequency-inverse document frequency (TFIDF) weighting mechanism, one or more textual elements not of interest are identified and discarded by utilizing the TFIDF weighting mechanism, and wherein the one or more conditions include a pre-determined importance of the text elements, a frequency of text elements, a plurality of views for an identified instance, a plurality of replies to an identified instance, or a plurality of responses to an identified instance;
determining, by one or more computer processors, whether there exist issue entries in a repository that include one or more of the textual elements of interest, wherein each entry includes a predetermined risk level associated with the issue that indicates a likelihood that the issue will hinder the user interaction with the product;
responsive to determining that there exists a plurality of entries in a repository that include one or more of the textual elements of interest:
identifying, by one or more computer processors, an entry in the repository that most closely matches the one or more textual elements of interest,
wherein a cosine similarity evaluation of a plurality of results generated from a database is automatically triggered to compare and identify the most closely matched entry from the plurality of entries, and the database includes a plurality of known error message strings utilized to poll one or more online community forums;
determining, by one or more computer processors, a risk level for the issue, based, in part, on the predetermined risk level of the identified entry, wherein the determined risk level for the issue is stored in the database, and when two or more of the entries of the plurality of entries match the one or more textual elements of interest, determining a different risk level for each of the two or more entries;
assigning, by one or more computer processors, a work item to an administrative user, wherein the work item indicates to the administrative user to resolve the issue, wherein the work item is assigned based on the nature of the work item, the availability of the administrative user, and the experience of the administrative user to a plurality of previous work items associated with the identified entry;
modifying, by the administrative user, the module of the product in development based on the received text content associated with the issue; and
responsive to determining that no entries in the repository include one or more of the textual elements of interest, determining, by one or more computer processors, a risk level based, in part, on the one or more textual elements of interest by:
identifying, by one or more computer processors, one or more metrics of the content providers, including at least a number of unique views received in the online forum for the issue reported to the content providers, and
assigning, by one or more computer processors, a risk level to the issue, based, at least in part on, the one or more identified metrics.
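
A hedged sketch of the TF-IDF plus cosine-similarity matching step described in the claim, not IBM's implementation: the repository entries, risk labels, and similarity cutoff are invented for illustration.

# Illustrative sketch: match reported issue text against repository entries via
# TF-IDF weighting and cosine similarity, then reuse the closest entry's
# predetermined risk level; otherwise fall back to forum-based metrics.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

repository = [
    {"text": "login page times out under heavy load", "risk": "high"},
    {"text": "tooltip misaligned on settings screen", "risk": "low"},
]

def assess_risk(report_text, min_similarity=0.2):
    corpus = [e["text"] for e in repository] + [report_text]
    tfidf = TfidfVectorizer(stop_words="english").fit_transform(corpus)
    n = len(repository)
    # Similarity of the report (last row) against every repository entry.
    sims = cosine_similarity(tfidf[n], tfidf[:n]).ravel()
    best = sims.argmax()
    if sims[best] >= min_similarity:
        return repository[best]["risk"], float(sims[best])
    return "unassessed", 0.0  # no matching entry; use forum metrics instead

print(assess_risk("users report the login page timing out"))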

US Pat. No. 10,796,263

SYSTEM AND METHOD FOR ASSESSING CLIENT PROCESS HEALTH

GENPACT LUXEMBOURG S.A.R....

1. A computer-implemented method executed by at least one computer processor for determining health of at least one internal process within an organization, the method comprising:communicating, via a network, with an external storage unit connected to at least one client system, wherein the client system uploads data pertaining to the process, for:
receiving at least one set of sub-processes comprising at least one sub-process, from the external storage unit, wherein the at least one sub-process is assessable across a plurality of dimensions;
computing a process health index value using a sub-process health index value and a sub-process weight value, of at least one sub-process selected automatically from said at least one set of sub-processes, wherein the sub-process health index value is computed in response to the automatic selection of the at least one sub-process, using:
a set of dynamically received responses to a set of unique evaluators associated with at least one dimension in the plurality of dimensions, wherein each response is a weighted option selected from a plurality of weighted options and the weight associated with each weighted option corresponds to a maturity level of the process, and
a dimensional weight value in the set of dimensional weight values assigned to said at least one dimension;
comparing said process health index with a target health index and comparing said process health index with a best-in-class health index, wherein the target health index and the best-in-class health index are values computed for at least one of a best-in-class process and a process that the organization wants to achieve for said evaluated process;
storing said process health index value, said sub-process health index value, and result of the comparisons in the storage unit;
generating at least one of a graphical and statistical output based on said process health index value and the result of the comparison process to determine health of said internal process; and
determining areas to facilitate improvement in the health of the internal process based on said output.
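
A minimal sketch of the two-level weighted roll-up the claim outlines, under assumed names and invented numbers; the actual evaluator sets, weights, and maturity scales are defined by the claimed system, not here.

# Illustrative sketch: evaluator responses roll up into a sub-process health
# index per dimension, then sub-process indices roll up into a process health
# index, which is compared against target and best-in-class values.
def sub_process_health(responses_by_dimension, dimension_weights):
    """responses_by_dimension: {dimension: [weighted option values]}"""
    score = 0.0
    for dim, responses in responses_by_dimension.items():
        score += dimension_weights[dim] * (sum(responses) / len(responses))
    return score

def process_health(sub_processes):
    """sub_processes: list of (sub_process_health_index, sub_process_weight)."""
    return sum(h * w for h, w in sub_processes)

phi = process_health([(3.2, 0.6), (2.5, 0.4)])
target, best_in_class = 3.5, 4.2
print(phi, phi - target, phi - best_in_class)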

US Pat. No. 10,796,262

INTERACTIVE PRODUCT AUDITING WITH A MOBILE DEVICE

The Nielsen Company (US),...

1. An interactive product auditing method comprising:performing, with a processor of an auditing device, image recognition based on a first set of candidate patterns accessed by the auditing device to identify a first product depicted in a first region of interest of a segmented image;
presenting, via a display of the auditing device, a message requesting user input associated with a first grid of the first region of interest displayed on a display of the auditing device, the first grid including the first product;
selecting, with the processor of the auditing device, a second set of candidate patterns from a pattern database based on the user input and a group of products identified in the segmented image in a neighborhood of the first region of interest;
identifying a second product in a second region of interest of the segmented image based on the second set of candidate patterns; and
displaying, on the display of the auditing device, an image-based result for the first and second products identified in the first and second regions of interest.

US Pat. No. 10,796,261

AGRICULTURAL ENTERPRISE MANAGEMENT METHOD AND SYSTEM

Decisive Farming Corp., ...

1. A computer-implemented method for management of an agricultural enterprise comprising the steps for:(a) collecting and inputting into one or more cloud-based databases and being processed by a first data processing module for each individual agricultural field selected from an agricultural producer's farmlands, a set of historical annual plurality of physicochemical data sets and a plurality of topographical data sets collected from a set of predetermined locations in each of selected individual agricultural fields comprising the farmlands, and a set of current annual plurality of physicochemical data sets and a plurality of topographical data sets for each of said selected individual agricultural fields;
(b) obtaining and inputting into one or more cloud-based databases and being processed by a second data processing module for each selected individual agricultural field, a set of current annual pre-sowing crop production planning data records and crop selection data records, and optionally, a set of historical annual pre-sowing crop production planning data records and crop selection data records;
(c) obtaining and inputting into one or more cloud-based databases and being processed by a third data processing module for each selected individual agricultural field, a set of historical annual crop production data records, said production data records including identification of the crop produced, crop growth rate data, harvested crop biomass yield data and/or harvested crop seed yield data, chemical fertilizer input data, pesticide input data, growth modulating product input data, and a set of current annual crop production data records;
(d) obtaining and inputting into one or more cloud-based databases and being processed by a fourth data processing module for each selected individual agricultural field, a set of historical annual data set of agronomy service providers and cost data records listing each agronomy service delivered prior to and during each crop production cycle, and a set of current annual data set of agronomy service providers and cost data records;
(e) obtaining and inputting into one or more cloud-based databases and being processed by a fifth data processing module for each selected individual agricultural field, a set of historical annual data records listing harvested crop inventory records, sales records, and revenue records, and a set of current annual data records listing harvested crop inventory records, sales records, and revenue records;
(f) obtaining and inputting into one or more cloud-based databases and being processed by a sixth data processing module, data records pertaining to overhead expenditures incurred during one or more historical crop production cycle(s);
(g) automatically performing a computer-implemented analysis of the set of historical annual data and the set of current annual data for each of the data processing modules and producing therefrom one or more analysis summaries for each of said data processing modules;
(h) automatically creating with a computer-implemented program an agronomic prescription for each of two or more selected crops being considered for a next crop production cycle on a first selected field and generate therefrom, harvested crop yield projection, a crop production cost projection, and a return-on-investment revenue projection for each of the selected crops on the first selected field;
(i) automatically performing a computer-implemented analysis of the analysis summaries in reference to each of the crop production prescriptions for the first selected field, wherein the analysis provides a comparison of the harvested crop yield projection and the crop production cost projection for each of the two or more selected crops;
(j) repeating (1) the creation of an agronomic prescription, a harvested crop yield projection and a crop production cost projection for each of two or more selected crops being considered for a next crop production cycle on a second selected field, and (2) the computer-implemented analysis of the analysis summaries in reference to each of the agronomic prescriptions for the second selected field;
(k) from the inputted selections of a crop for the first selected field and a crop for a second selected field, generating with a computer-implemented program a work order comprising one or more of a supply of seed, a supply of fertility products, a supply of pesticides, performance of agronomic services, performance of equipment maintenance services, and performance of overhead services;
(l) electronically transmitting the work order over a network to one or more selected suppliers and/or one or more selected service providers pertaining to planting and growing of said selected crops;
(m) generating a series of alerts associated with the work order to enable tracking of delivery of the ordered products and/or services; and
(n) generating a series of current status reports for each of the data processing modules, said current status reports electronically accessible by the producer and by an authorized and authenticated supplier or a service provider.

US Pat. No. 10,796,260

PRIVACY MANAGEMENT SYSTEMS AND METHODS

OneTrust, LLC, Atlanta, ...

1. A data processing system for determining readiness to comply with a set of privacy regulations, the system comprising:one or more processors; and
computer memory storing computer-executable instructions that, when executed by the one or more processors, cause the one or more processors to perform operations comprising:
generating a master compliance readiness questionnaire comprising a plurality of questions;
detecting, on a graphical user interface, a user selection of a first territory;
at least partially in response to detecting the user selection of the first territory:
determining a first set of regulations based at least in part on the first territory;
and generating a first compliance readiness questionnaire based at least in part on the first set of regulations, the first compliance readiness questionnaire comprising a plurality of questions;
detecting, on the graphical user interface, a user selection of a second territory;
at least partially in response to detecting the user selection of the second territory:
determining a second set of regulations based at least in part on the second territory; and
generating a second compliance readiness questionnaire based at least in part on the second set of regulations, the second compliance readiness questionnaire comprising a plurality of questions;
generating an ontology mapping a first question of the plurality of questions of the master compliance readiness questionnaire to a first question of the plurality of questions of the first compliance readiness questionnaire for the first set of regulations and to a first question of the plurality of questions of the second compliance readiness questionnaire for the second set of regulations, wherein the first question of the plurality of questions of the master compliance readiness questionnaire solicits information regarding one or more privacy policies;
receiving a request to determine an extent of compliance with a plurality of sets of regulations, wherein the plurality of sets of regulations comprises the set of regulations;
at least partially in response to receiving the request to determine the extent of compliance with the plurality of sets of regulations, generating a prompt to a user requesting an answer to the first question of the plurality of questions of the master compliance readiness questionnaire;
receiving input from the user indicating the answer to the first question of the plurality of questions of the master compliance readiness questionnaire;
storing the answer to the first question of the plurality of questions of the master compliance readiness questionnaire;
accessing the ontology;
populating the first question of the plurality of questions of the first compliance readiness questionnaire for the first set of regulations with the answer to the first question of the plurality of questions of the master compliance readiness questionnaire using the ontology;
determining, based at least in part on the answer to the first question of the plurality of questions of the master compliance readiness questionnaire, an extent of compliance with the first set of regulations; and
automatically generating a notification of the extent of compliance with the first set of regulations.
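
A hedged sketch of the ontology-driven answer propagation described above; the question identifiers, regulation-set names, and dictionary layout are assumptions for illustration, not OneTrust's data model.

# Illustrative sketch: one master answer is reused, via the ontology, to
# populate the mapped question in each territory-specific questionnaire.
ontology = {
    # master question id -> per-regulation-set question ids
    "M1": {"GDPR": "G7", "CCPA": "C3"},
}

answers_master = {"M1": "Yes, a published privacy policy exists."}
questionnaires = {"GDPR": {}, "CCPA": {}}

def populate(ontology, answers_master, questionnaires):
    for master_q, mapped in ontology.items():
        if master_q not in answers_master:
            continue
        for reg_set, local_q in mapped.items():
            # Reuse the master answer for each mapped territory question.
            questionnaires[reg_set][local_q] = answers_master[master_q]

populate(ontology, answers_master, questionnaires)
print(questionnaires)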

US Pat. No. 10,796,259

RISK AND DEPENDENCY TRACKING AND CONTROL SYSTEM

Microsoft Technology Lice...

1. A computing system, comprising:at least one processor; and
memory storing instructions executable by the at least one processor, wherein the instructions, when executed, provide:
a connection detection system that identifies a set of connected deliverables among a plurality of different deliverables in a data store, the set of connected deliverables comprising a first deliverable connected to a second deliverable by a given connection indicating a dependency in which the first deliverable is dependent on the second deliverable;
timeline generator logic that generates a representation of a timeline having nodes connected by edges, each node representing one of the connected deliverables, in the set of connected deliverables, and each edge connecting a pair of nodes corresponding to an identified connection between the deliverables represented by the pair of nodes connected by the edge;
a user interaction system that controls interaction with the set of connected deliverables and comprises link reverse logic that generates a reverse dependency user interface mechanism; and
surfacing logic that generates a display control signal to control a display device to display a timeline display pane comprising:
node display elements representing the first and second deliverables, and
an edge display element that:
visually represents the given connection between the first deliverable and the second deliverable, and
includes the reverse dependency user interface mechanism;
wherein the link reverse logic is configured to:
in response to actuation of the reverse dependency user interface mechanism, generate a link control signal to reverse the given connection between the first deliverable and the second deliverable.
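
A minimal sketch of what reversing a dependency connection between two deliverables could look like when the reverse-dependency control is actuated; the edge representation and names are assumptions, not Microsoft's data structures.

# Illustrative sketch: an edge (A, B) meaning "A depends on B" is flipped.
dependencies = {("feature-A", "service-B")}   # feature-A depends on service-B

def reverse_dependency(deps, first, second):
    if (first, second) in deps:
        deps.remove((first, second))
        deps.add((second, first))             # now service-B depends on feature-A
    return deps

print(reverse_dependency(dependencies, "feature-A", "service-B"))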

US Pat. No. 10,796,258

DECORRELATING EFFECTS IN MULTIPLE LINEAR REGRESSION TO DECOMPOSE AND ATTRIBUTE RISK TO COMMON AND PROPER EFFECTS

Triad National Security, ...

1. A computer program for controlling an amount of unexplained correlation that remains in data after accounting for common hidden variables, the program embodied on a non-transitory computer-readable storage medium, the program configured to cause at least one processor to:determine residual matrices R1 and R2 comprising a first residual part and a second residual part for a first set of risk factors and a second set of risk factors, respectively;
when R1^T R2 = 0:
perform a three-way risk decomposition approach enforcing orthogonality of the first residual part and the second residual part that also decomposes risk into a common part associated with a set of common hidden variables common to R1 and R2 that minimize a correlation between the first set of risk factors and the second set of risk factors, the common hidden variables modeled using a hidden factor model, and
generate a computer-based data structure corresponding to linear vector spaces of unobserved latent variables, wherein the unobserved latent variables are represented as two matrices A and B whose inner product A^T B = 0;
when R1^T R2 ≠ 0, perform a generalized risk decomposition approach without enforcing orthogonality of the first residual part and the second residual part;
quantify how correlated the terms of the risk decomposition are based on the performed risk decomposition approach; and
output the quantification, wherein
a maximum correlation is used between linear combinations of explanatory variables for each of the first set of risk factors and the second set of risk factors, given by:

where 𝒳1 and 𝒳2 are the linear spaces spanned by the columns of the two design matrices X1 and X2 associated with the explanatory variables of the first set of risk factors and the second set of risk factors, respectively.
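
The expression for this maximum correlation did not survive extraction. A hedged reconstruction, written as the first canonical correlation between the two column spaces (the patent's exact formula may differ in detail):

% Maximum correlation between linear combinations drawn from the two
% column spaces (first canonical correlation); a reconstruction, not a
% verbatim reproduction of the claimed expression.
\rho_{\max} \;=\; \max_{u \in \mathcal{X}_1,\; v \in \mathcal{X}_2}
  \frac{\langle u, v \rangle}{\lVert u \rVert \, \lVert v \rVert}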

US Pat. No. 10,796,257

METHOD FOR PROVIDING BUSINESS PROCESS ANALYSES

CELONIS SE, Munich (DE)

1. A computer-implemented method for providing at least one analytics package to a process mining system, wherein the analytics package provides analyses of business processes to a user of the process mining system, wherein the method is executed in a computer system having a processor and a memory device operatively coupled to the processor, and wherein the method comprises(a) providing at least one event log to the processor, the event log comprising process data of the business processes, the process data being derived from raw data stored in a source system comprising at least one data table of a database system, wherein the process data comprise at least one process element and the at least one process element comprises at least one process step, the event log being stored with the memory device according to a predetermined data structure, the predetermined data structure comprising at least
a first attribute for storing a unique identifier of the process element of the respective process step,
a second attribute for storing an identifier of the respective process step, and
a third attribute for storing an order of the process steps within the process element;
(b) providing auxiliary data to the processor, the auxiliary data being stored in a set of tables, where the auxiliary data belong to the event log;
(c) providing a data model to the processor, the data model describing the predetermined data structure, the set of tables and the relations between the tables and the predetermined data structure; and
(d) creating, based on the data model, the at least one analytics package, the at least one analytics package comprising executable program modules for creating and displaying at least one graphical analysis of the process data stored with the event log and of the auxiliary data stored with the set of tables;
wherein the method further comprises providing an event log package to the processor, the event log package comprising at least one process sensor, wherein the at least one process sensor
derives the process data from the raw data, and
generates from the derived process data
the unique identifiers of process elements,
the identifiers of process steps which are assigned to the process elements, and
the order of the process steps,
wherein the at least one process sensor is executed by the processor and wherein the processor stores the generated data as the event log with the memory device according to the predetermined data structure, and
wherein the relationship between the event log packages and the analytics package is a one-to-many relationship and wherein the execution of the program modules of the analytics package on the process mining system depends on whether the corresponding event log package is available on the process mining system.
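
A minimal sketch of the three-attribute event-log structure the claim specifies (process-element identifier, process-step identifier, and step order within the process element); the field names and sample rows are illustrative assumptions, not CELONIS's schema.

# Illustrative sketch of the predetermined event-log data structure.
from dataclasses import dataclass

@dataclass
class EventLogRow:
    process_element_id: str   # first attribute: unique id of the process element
    process_step_id: str      # second attribute: id of the process step
    order: int                # third attribute: position within the process element

event_log = [
    EventLogRow("PO-1001", "create_order", 1),
    EventLogRow("PO-1001", "approve_order", 2),
    EventLogRow("PO-1001", "ship_goods", 3),
]
print(sorted(event_log, key=lambda r: (r.process_element_id, r.order)))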

US Pat. No. 10,796,256

PROCESS VALIDATION AND ELECTRONIC SUPERVISION SYSTEM

Paragon Health, Dallas, ...

1. A system for automating a compounding process selection based on the content of a prescription order received by the system and remote safe-guarding workflow in a pharmacy comprising:a management system connectable to an external database;
at least one mobile device having image capture capabilities selectively connected to the management system and the at least one mobile device selectively disposed inside a pharmacy compounding facility including a clean room;
the management system receiving video chat data, images and records from the at least one mobile device and sensor input from a sensor array to form a storable quality assurance record for at least one workflow, the workflow automatically determined and selected by the system based on validation requirements of a prescription received and detected;
the management system configured to initiate and store selective remote requests including at least one of entry of override information, selective connection of process review requests, and selective initiation of an in process approval signal generated in response to a pharmacist device located outside the clean room, the pharmacist device configured to enable selection of a signal for at least one of confirming or modifying the workflow automatically selected by the system, wherein any selective requests generated from the pharmacist device are transmitted to and received by the system, and generate one or more updates in the system that selectively modify the selected workflow, and store the one or more updates in the storable record for the workflow.

US Pat. No. 10,796,255

MANAGING PROJECT TASKS USING CONTENT ITEMS

Dropbox, Inc., San Franc...

1. A method comprising:receiving, by a content management system, an identification of a first project;
identifying, by the content management system, multiple content items associated with the first project;
determining, by the content management system, a first set of tasks defined in a first content item of the multiple content items associated with the first project, wherein the first set of tasks includes at least one task assigned to a first user;
determining, by the content management system, a second set of tasks defined in a second content item of the multiple content items, wherein the second set of tasks comprises at least one task assigned to another user; and
creating, by the content management system, a first project task list for the first project, wherein the first project task list includes:
a first heading based on the first content item;
a second heading based on the second content item;
the first set of tasks organized under the first heading; and
the second set of tasks organized under the second heading.
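
A hedged sketch of assembling a project task list with one heading per content item and that item's tasks grouped under it; the content items, tasks, and assignees are invented examples, not Dropbox's implementation.

# Illustrative sketch: build a task list keyed by content-item headings.
content_items = {
    "design_notes.paper": [("Draft mockups", "alice")],
    "launch_plan.paper": [("Book venue", "bob"), ("Send invites", "alice")],
}

def build_task_list(items):
    task_list = []
    for item_name, tasks in items.items():
        task_list.append({"heading": item_name,
                          "tasks": [{"title": t, "assignee": a}
                                    for t, a in tasks]})
    return task_list

print(build_task_list(content_items))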

US Pat. No. 10,796,254

SYSTEMS, METHODS AND APPARATUS FOR INTEGRATED OPTIMAL OUTAGE COORDINATION IN ENERGY DELIVERY SYSTEMS

SIEMENS INDUSTRY, INC., ...

1. A method of coordinating scheduled maintenance outages for one or more generator resources and one or more transmission resources of an energy delivery system, to control a shutdown of the one or more generator resources or the one or more transmission resources, by the energy delivery system, comprising:determining a set of input and data validation functions, the determining of the set of input and data validation functions comprising receiving at least one maintenance outage request from at least one market participant of the energy delivery system for at least one of the one or more generator resources and the one or more transmission resources, the at least one maintenance outage request including an outage cost as a function of time, duration, or time and duration, one or more allowed repair windows with penalties for window violations, or a combination thereof;
obtaining, by an integrated optimal outage coordination (IOOC) system, initial resource schedules for network analysis using the set of input and data validation functions;
performing, by the IOOC system, a first network analysis with a full AC power flow in the energy delivery system;
executing, by the IOOC system, a first security constraint unit commitment function using transmission constraints output from the network analysis;
determining, by the IOOC system, a system-wide optimized maintenance outage schedule using an output of the first security constraint unit commitment function;
distributing, by the IOOC system, the system-wide optimized maintenance outage schedule to a generation control and load management system; and
controlling operation of the one or more generator resources, by the energy delivery system, such that the one or more generator resources are shut down in accordance with a first date, a first time, and a first duration of the system-wide optimized maintenance outage schedule, and controlling operation of the one or more transmission resources such that the one or more transmission resources are shut down in accordance with a second date, a second time, and a second duration of the system-wide optimized maintenance outage schedule;
wherein the system-wide optimized maintenance outage schedule specifies a schedule of planned maintenance outages.

US Pat. No. 10,796,253

SYSTEM FOR RESOURCE USE ALLOCATION AND DISTRIBUTION

BANK OF AMERICA CORPORATI...

1. A system for resource use allocation for a shared use service, the system comprising:a memory device with computer-readable program code stored thereon;
a communication device;
a processing device operatively coupled to the memory device and the communication device, wherein the processing device is configured to execute the computer-readable program code to:
link a smart device system to an object of a shared use service, wherein the object is a product;
generate a communication area within the object of the shared use service;
identify one or more user devices within the communication area;
receive historic data from a user device, wherein the historic data comprises a unique device identifier, and wherein the historic data comprises time, date, and end destinations where the user device has traveled;
identify, using the historic data, a pattern of end destinations for a user of the user device;
based on the identified pattern of end destinations for the user, determine an end destination of the user associated with the user device;
identify one or more transfers to a second object of the shared use service for the user to reach the end destination;
communicate, via a secure communication linkage established in the communication area, an interface with a resource requirement for the end destination, wherein communicating the interface with the resource requirement comprises displaying the resource requirement to the user;
lock the screen of the user device until the resource requirement is accepted by the user for the end destination and triggering of representative action;
receive resource requirement from the user from a shared account, wherein the shared account provides a distribution of resources required for the shared use services;
trigger representative action upon identification of the user end destination; and
transmit, to a representative operating the object, the user end destination for the shared use service for termination of the service at the end destination.

US Pat. No. 10,796,252

INDUCED MARKOV CHAIN FOR WIND FARM GENERATION FORECASTING

Arizona Board of Regents ...

1. A method for forecasting power generation in a wind farm, the method comprising:utilizing, by a processor, an induced Markov chain model to generate a forecast of power generation of the wind farm, wherein the forecast is at least one of a point forecast or a distributional forecast; and
modifying at least one of: (i) a generation of electricity at a power plant coupled to a common power grid as the wind farm; or (ii) a distribution of electricity in the common power grid based on the forecast of power generation of the wind farm, wherein utilizing the induced Markov chain model to generate the forecast of the power generation of the wind farm comprises:
determining a series of time adjacent power output measurements based on historical wind power measurements of the wind farm;
transforming time adjacent power output measurements into discrete states, the discrete states comprising ranges of power, the transforming comprising determining at least one discrete state for each time adjacent power output measurement, wherein the discrete states comprise at least one overlapping state, the overlapping state having a first range of power overlapping with a second range of power of another state; and
calculating a time series of difference values based on the series of time adjacent power output measurements including calculating a difference value between adjacent power output measurements of the series of time adjacent power output measurements.
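
A minimal sketch of the discretization and differencing steps named in the claim, under assumed state ranges and invented measurements; the induced Markov chain itself (transition estimation and forecasting) is not shown.

# Illustrative sketch: map time-adjacent power measurements onto discrete,
# deliberately overlapping power ranges and compute first differences.
states = [(0, 30), (20, 60), (50, 100)]   # overlapping MW ranges

def discretize(power):
    # A measurement can fall into more than one overlapping state.
    return [i for i, (lo, hi) in enumerate(states) if lo <= power <= hi]

measurements = [12.0, 25.0, 58.0, 41.0]
state_series = [discretize(p) for p in measurements]
differences = [b - a for a, b in zip(measurements, measurements[1:])]
print(state_series, differences)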

US Pat. No. 10,796,251

SYSTEM AND METHOD FOR MOBILE SOCIAL NETWORKING WITHIN A TARGET AREA

CAPITAL ONE SERVICES, LLC...

1. A method for updating a mobile social network, comprising:receiving a first social networking profile of a user of a first mobile computing device, the first social networking profile comprising at least one user preference;
determining a target area based on at least one of a current location of the user of the first mobile computing device, a previous location of the user, or a predicted location of the user;
receiving, via a communications network, one or more second social networking profiles associated with respective one or more members of a social network, each social networking profile associated with the one or more members including respective preference data of the one or more members;
receiving location information corresponding to a current location of respective members of the one or more members;
comparing the received location information and preference data of the one or more members of the social network to the target area and the at least one user preference to determine a mobile social network member of the mobile social network within the target area having a preference corresponding to the at least one user preference of the user of the first mobile computing device;
automatically transmitting a notification to the mobile social network member;
receiving updated location information corresponding to an updated current location of respective members of the one or more members;
dynamically removing a first mobile social network member of the mobile social network from the mobile social network in response to determining that the updated current location of the first mobile social network member is not within the target area;
dynamically adding a new mobile social network member to the mobile social network within the target area in response to determining that the updated current location of the new mobile social network member is within the target area, the new mobile social network member having a preference corresponding to the at least one user preference of the user of the first mobile computing device; and
automatically transmitting a notification to the new mobile social network member within the target area.

US Pat. No. 10,796,250

USER INTERFACE FOR TRAVEL PLANNING

Google LLC, Mountain Vie...

1. A computer-implemented method to display travel planning information on a display device, the method comprising:receiving, using one or more computing devices, a user selected earliest departure date, a user selected latest departure date, and a user selected length of stay;
in response to receiving the user selected earliest departure date, the user selected latest departure date, and the user selected length of stay, transmitting, using the one or more computing devices, instructions causing to display a user interface on a display device to render date cells corresponding to the user selected earliest departure date, the user selected latest departure date, each possible departure date between the user selected earliest departure date and the user selected latest departure date, and a lowest fare value for each date cell based upon the user selected length of stay, wherein the displayed user interface comprises a depiction of a calendar view of at least a portion of a month including the date cells corresponding to calendar days of the month;
receiving, using the one or more computing devices, a selection of a particular one of the date cells corresponding to a user selected date of departure; and
in response to receiving the user selected date of departure, transmitting, using the one or more computing devices, instructions causing an update to the displayed user interface on the display device, the updated user interface comprising a visual indicator that starts at a date cell of the calendar view corresponding to the user selected date of departure and automatically extends across the date cells corresponding to calendar days that the user would be traveling according to the user selected length of stay.

US Pat. No. 10,796,249

TICKETING METHOD AND SYSTEM

FAIRTIQ AG, Bern (CH)

1. A ticketing method for charging a passenger for using a transport system, comprising:checking-in the passenger via a mobile device of the passenger upon accessing a vehicle of the transport system;
checking-out the passenger via the mobile device upon exiting the vehicle of the transport system;
a server computer automatically calculating a price for a travel of the passenger within the transport system by evaluating check-in data representing the checking-in of the passenger and check-out data representing the checking-out of the passenger;
a sensor of the mobile device of the passenger generating a sensor data signal; and
transferring the sensor data signal from the mobile device of the passenger to the server computer;
the server computer automatically calculating a travel movement pattern dataset based on sensor data associated to the transferred sensor data signal;
the server computer automatically comparing the travel movement pattern dataset to a transport system movement pattern dataset;
the server computer automatically identifying a non-compliance of the travel movement pattern dataset with regard to the transport system movement pattern dataset; and
the server computer generating the check-out data using the identified non-compliance.

US Pat. No. 10,796,248

RIDE-SHARING JOINT RENTAL GROUPS

Ford Global Technologies,...

1. A system comprising:a ride-sharing server configured to
receive, from a first user, a vehicle rental request including trip characteristics specifying an origin, a destination, and time constraints;
identify a second user having rental criteria matching the vehicle rental request;
send a rent-share request to the first and second users to form a joint rental group;
rent a vehicle to the joint rental group when the rent-share request is confirmed;
determine that a better-matched vehicle is now available for the joint rental group that was unavailable when the vehicle was rented; and
send a vehicle update request to the users of the joint rental group indicating that the better-matched vehicle is now available.

US Pat. No. 10,796,247

SYSTEM FOR MANAGING RISK IN EMPLOYEE TRAVEL

WorldAware Inc., Annapol...

1. A method for tracking a location of users of a travel risk management system during travel, the method comprising:storing travel itinerary information for a user of the travel risk management system in a user travel database of the system, wherein the travel itinerary information includes user identification information, travel date information for a time period in which the user is traveling, and geographic location information for at least one scheduled destination of the user during the travel time period;
receiving a location message from a mobile device of the user at an application server of the travel risk management system, the location message including an actual location of the user;
comparing the actual location of the user, as determined from the location message, to the at least one scheduled destination of the user included in the travel itinerary information; and
determining, at the application server, whether the actual location of the user is different from the at least one scheduled destination of the user;
updating the at least one scheduled destination of the user included in the stored travel itinerary information with the actual location of the user based on the determination that the actual location of the user is different from the at least one scheduled destination of the user, wherein each geographic location of a plurality of countries is assigned a risk level value, and wherein updating the at least one scheduled destination of the user comprises:
determining a geographic location in which the user is located based on the actual location of the user from the location message; and
determining whether the assigned risk level value for the geographic location in which the user is located is a high risk level value;
updating the at least one scheduled destination of the user with the actual location of the user from the location response message in response to determining that the assigned risk level value for the geographic location in which the user is located is a high risk level value;
determining, at the application server, an occurrence of a risk management event and a geographic area affected by the risk management event;
triggering a risk management response in response to the determination of the risk management event, the risk management response comprising determining, at the application server, whether the at least one updated scheduled location of the user is within the geographic area affected by the risk management event; and
transmitting a notification message to the mobile device of the user when it is determined that the at least one updated scheduled location of the user is within the geographic area affected by the risk management event.

US Pat. No. 10,796,246

BRAIN-MOBILE INTERFACE OPTIMIZATION USING INTERNET-OF-THINGS

Arizona Board of Regents ...

1. A Brain-Mobile Interface (BMoI) system comprising:an input interface coupled to a selected communication medium;
an output interface coupled to the selected communication medium; and
a control circuit coupled to the input interface and the output interface and configured to:
receive a first type sensory data via the input interface;
receive a second type sensory data via the input interface within a time window from receiving the first type sensory data;
extract a plurality of signal features from the received first type sensory data;
execute a predictive model to generate a defined number of predicted signal features in at least one prediction window based on the plurality of extracted signal features;
validate the defined number of predicted signal features based on the second type sensory data;
generate at least one predicted future mental state based on the defined number of predicted signal features; and
provide the at least one predicted future mental state to the output interface.

US Pat. No. 10,796,245

SYSTEMS AND METHODS FOR SELECTING CONTENT TO SEND TO LABELERS FOR PREVALENCE ESTIMATION

Facebook, Inc., Menlo Pa...

1. A computer-implemented method comprising:selecting an estimator of a prevalence of a class of content within an online system, wherein the estimator relies upon labeled content items that have been labeled by one or more human labelers as being of the class of content;
sampling a plurality of content items from the online system;
using, for each of the plurality of content items, a machine-learning classification model to generate a score for the content item, the score indicating a likelihood that the content item is of the class of content;
generating a plurality of buckets, wherein each of the plurality of buckets:
is assigned a range of scores from the machine-learning classification model; and
contains a subset of the plurality of content items whose scores fall within the range of scores;
determining a sampling rate for each of the plurality of buckets that minimizes a variance metric of the estimator;
selecting, from each of the plurality of buckets, a portion of content items according to the sampling rate of the bucket; and
sending the portion of content items from each of the plurality of buckets to the one or more human labelers for labeling.
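
A hedged sketch of bucket-wise sample allocation. As one concrete reading of "a sampling rate that minimizes a variance metric of the estimator", this uses Neyman-style allocation (rate proportional to bucket size times within-bucket score spread); the patent's actual criterion may differ, and the scores and budget are invented.

# Illustrative sketch: decide how many items per score bucket go to labelers.
import statistics

def allocate(buckets, budget):
    """buckets: {bucket name: [classifier scores of items in the bucket]}"""
    spread = {b: len(s) * (statistics.pstdev(s) or 1e-9)
              for b, s in buckets.items()}
    total = sum(spread.values())
    # Number of items to send to human labelers from each bucket
    # (rounding means the totals are approximate in this sketch).
    return {b: min(len(buckets[b]), round(budget * w / total))
            for b, w in spread.items()}

buckets = {"0.0-0.5": [0.1, 0.2, 0.4, 0.45], "0.5-1.0": [0.55, 0.9, 0.95]}
print(allocate(buckets, budget=4))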

US Pat. No. 10,796,244

METHOD AND APPARATUS FOR LABELING TRAINING SAMPLES

BAIDU ONLINE NETWORK TECH...

1. An apparatus for labeling training samples, comprising:one or more processors; and
a memory having one or more programs stored thereon to be executed by said one or more processors, the programs including instructions for:
inputting M unlabeled first training samples into a first classifier to obtain a first forecasting result of each first training sample in the M first training samples, M being an integer greater than or equal to 1;
selecting N first training samples as second training samples from the M first training samples according to the first forecasting result of each first training sample, N being an integer greater than or equal to 1 and less than or equal to M;
inputting the N second training samples into a second classifier to obtain a second forecasting result of each second training sample in the N second training samples, the first classifier and the second classifier being independent of each other;
selecting P second training samples from said N second training samples according to the second forecasting result of each second training sample, P being an integer greater than or equal to 1 and less than or equal to N;
selecting Q first training samples from other first training samples according to first forecasting results of the other first training samples in the M first training samples apart from the N second training samples and the value of P, Q being an integer greater than or equal to 1 and less than or equal to a difference of M−N; and
generating P labeled second training samples according to second forecasting results of the P second training samples and each of the second training samples; and
generating Q labeled first training samples according to first forecasting results of the Q first training samples and each of the first training samples therein.
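
A minimal sketch of the two-classifier selection-and-labeling flow, not Baidu's method: the classifiers here are stand-in callables returning (label, confidence), and the N/P/Q bookkeeping is simplified for illustration.

# Illustrative sketch: classifier 1 picks N candidates, the independent
# classifier 2 keeps P of them, and remaining samples are labeled from
# classifier 1's own predictions.
def select_and_label(samples, clf1, clf2, n, p):
    first = sorted(samples, key=lambda s: clf1(s)[1], reverse=True)
    seconds = first[:n]                                   # N second training samples
    agreed = sorted(seconds, key=lambda s: clf2(s)[1], reverse=True)[:p]
    labeled_p = [(s, clf2(s)[0]) for s in agreed]         # P labeled samples
    remaining = [s for s in samples if s not in seconds]
    labeled_q = [(s, clf1(s)[0]) for s in remaining[:p]]  # Q labeled samples
    return labeled_p + labeled_q

clf1 = lambda s: ("pos" if s > 0 else "neg", abs(s))
clf2 = lambda s: ("pos" if s > 0.5 else "neg", abs(s - 0.5))
print(select_and_label([0.9, -0.8, 0.3, 0.6, -0.1], clf1, clf2, n=3, p=2))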

US Pat. No. 10,796,243

NETWORK FLOW CLASSIFICATION

Hewlett Packard Enterpris...

1. A system, comprising:a processor;
a storage device coupled to the processor and storing instructions which when executed by the processor cause the processor to perform a method, the method comprising:
clustering a number of network flows within a network into a number of clusters;
computing a cluster centroid for each cluster, wherein the cluster centroid denotes a flow signature for the cluster, and wherein the flow signature comprises at least sizes of a predetermined number of packets;
determining a cluster size associated with each cluster based on the computed cluster centroid;
in response to determining that the cluster size of a particular cluster exceeds a predetermined threshold value, removing the particular cluster from the number of clusters and transferring network flows associated with the removed particular cluster to a residual database;
detecting a network flow that is not within any one of the remaining number of clusters of network flows;
calculating a distance between the network flow and the cluster centroid of each of the remaining number of clusters;
determining a threshold distance; and
classifying the network flow based on whether or not the distance between the network flow and each of the remaining number of clusters falls within the threshold distance.
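
A hedged sketch of the final centroid-distance classification step; the flow signatures are reduced to packet-size vectors, and the centroids and threshold are invented values rather than anything from the claimed system.

# Illustrative sketch: assign a flow to the nearest cluster centroid, or leave
# it unclassified if it is farther than the threshold from every centroid.
import math

centroids = {"web": [1500, 60, 1500], "voip": [160, 160, 160]}

def classify(flow_sizes, threshold=300.0):
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    best, d = min(((c, dist(flow_sizes, v)) for c, v in centroids.items()),
                  key=lambda t: t[1])
    return best if d <= threshold else "unknown"

print(classify([1400, 70, 1450]), classify([5000, 5000, 5000]))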

US Pat. No. 10,796,242

ROBUST TRAINING TECHNIQUE TO FACILITATE PROGNOSTIC PATTERN RECOGNITION FOR ENTERPRISE COMPUTER SYSTEMS

Oracle International Corp...

1. A method for training a prognostic pattern-recognition system to detect incipient anomalies that arise during execution of a computer system, comprising:gathering and storing telemetry data obtained from n sensors in the computer system during operation of the computer system;
using the telemetry data gathered from the n sensors to train a baseline model for the prognostic pattern-recognition system;
using the prognostic pattern-recognition system with the baseline model in a surveillance mode to detect incipient anomalies that arise during execution of the computer system;
using the stored telemetry data to train a set of additional models, wherein each additional model is trained to operate with one or more missing sensors;
storing the additional models to be used in place of the baseline model when one or more sensors fail in the computer system; and
when one or more of the n sensors in the computer system fails:
selecting a substitute model from the set of additional models, wherein the substitute model was trained without using telemetry data from the one or more failed sensors,
updating the prognostic pattern-recognition system to use the substitute model while operating in the surveillance mode, and
while the prognostic pattern-recognition system is operating using the substitute model, training supplemental models to be included in the set of additional models, wherein the supplemental models are trained without using telemetry data from the one or more failed sensors, and without using telemetry data from one or more other non-failed sensors.

US Pat. No. 10,796,241

FORECASTABLE SUPERVISED LABELS AND CORPUS SETS FOR TRAINING A NATURAL-LANGUAGE PROCESSING SYSTEM

International Business Ma...

1. A natural language processing training (NLP-training) system comprising a processor, a memory coupled to the processor, and a computer-readable hardware storage device coupled to the processor, the storage device containing program code configured to be run by the processor via the memory to implement a method for forecastable supervised labels and corpus sets for training a natural-language processing system, the method comprising:the training system selecting an oracle, wherein the oracle is a human expert or a computerized expert system that possesses expertise in a particular field of endeavor;
the training system receiving from the oracle identifications of a first label, a set of extrinsic electronic sources, and a set of rationales,
wherein the first label identifies an answer to a predictive question in the particular field of endeavor, given a set of conditions specified by the predictive question,
wherein each rationale of the set of rationales comprises an identification of at least one source of the set of extrinsic sources, and
wherein each source of the set of extrinsic sources is a source of initial natural-language content upon which the oracle based the selection of the first label;
the training system adding training datasets to a first set of corpora, wherein each training dataset of the training datasets is associated with a subset of natural-language content located at one or more sources of the set of extrinsic sources;
the training system retrieving from the one or more extrinsic sources, at a second time, later versions of the natural-language content;
the training system creating a second set of corpora by inserting into the first set of corpora the later versions of the natural-language content;
the training system communicating the second set of corpora to the oracle;
the training system accepting from the oracle, in response to the communicating, a second label;
the training system deleting at least a first training dataset from the second set of corpora when a degree of relevance of the first training dataset falls below a predetermined threshold,
wherein the degree of relevance of the first training dataset is proportional to a degree to which i) a difference between the initial and the later versions of the subset of natural-language content associated with the first training dataset influences ii) a difference between the first label and the second label; and
the training system training the natural-language processing system by submitting the second set of corpora to a training function of a machine-learning application.

US Pat. No. 10,796,240

PERFORMING FAULT TREE ANALYSIS ON QUANTUM COMPUTERS

QC Ware Corp., Palo Alto...

1. A method implemented on a digital computer system comprising a processor, the processor executing instructions to effect a method for fault tree analysis, the method comprising:receiving a description of a fault tree;
converting the fault tree to a polynomial unconstrained binary optimization (PUBO) form, the PUBO form including binary variables that represent primary events, intermediate events, and a top event associated with the fault tree;
converting the fault tree from the PUBO form to a quadratic unconstrained binary optimization (QUBO) form;
sending the fault tree in the QUBO form to a quantum annealing device;
embedding the fault tree in the QUBO form onto the quantum annealing device;
determining, by the quantum annealing device, a minimal cut set of the fault tree; and
based on the minimal cut set determined by the quantum annealing device, running the digital computer system to identify additional minimal cut sets.

US Pat. No. 10,796,239

METHOD AND/OR SYSTEM FOR RECOMMENDER SYSTEM

Oath Inc., New York, NY ...

1. A computer-implemented method for a recommendation system, comprising:identifying a first user in an online social network as a source of online content, the online social network being represented by a social network graph including a plurality of nodes, each of the nodes in the plurality of nodes representing a corresponding one of a plurality of users in the online social network, the first user corresponding to a first node of the plurality of nodes in the social network graph;
identifying a set of nodes of the plurality of nodes in the social network graph, each of the nodes in the set of nodes being connected, directly or indirectly, to the first node via one or more edges of the social network graph, each of the edges in the one or more edges representing a relationship or interaction between two different users of the plurality of users, the two different users corresponding to two different nodes of the set of nodes;
predicting diffusion of the online content in the online social network from the first node;
determining, for each of the nodes in the set of nodes, a probability that a corresponding user of the plurality of users will receive the online content from one or more sources different than the recommendation system based, at least in part, on the diffusion of the online content in the online social network;
determining, for each of the nodes in the set of nodes, an engagement weight that represents a likelihood of further diffusion of the online content in the online social network from the corresponding node;
deciding not to recommend the online content to a second user of the plurality of users in the online social network based, at least in part, on the probability determined for a second node in the set of nodes being higher than a first threshold, the second node in the set of nodes corresponding to the second user, wherein the probability determined for the second node is indicative of the second user likely receiving the online content from at least one source different than the recommendation system; and
recommending the online content to a third user of the plurality of users in the online social network based, at least in part, on the engagement weight determined for a third node in the set of nodes being higher than a second threshold and the probability determined for the third node being lower than the first threshold, the third node in the set of nodes corresponding to the third user, wherein the probability determined for the third node is indicative of the third user likely not receiving the online content from a source different than the recommendation system,
wherein at least one of:
predicting diffusion of the online content in the online social network is based, at least in part, on the nodes in the set of nodes having corresponding path lengths from the first node that are less than a threshold repost distance;
predicting diffusion of the online content in the online social network is based, at least in part, on the nodes in the set of nodes having corresponding retransmission probabilities greater than a threshold retransmission probability;
the engagement weight for the third node being higher than the second threshold is based, at least in part, on the third user exhibiting higher online content diffusion relative to one or more other users; or
recommending the online content to the third user is based, at least in part, on a specified maximum number of recommended online content items for the third user not being exceeded.
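
A minimal sketch of the thresholded recommendation decision the claim describes (skip when the user will likely receive the content anyway, recommend when the engagement weight is high enough); the probabilities, weights, and thresholds are invented for illustration.

# Illustrative sketch of the decision logic per candidate user.
def decide(p_receive_elsewhere, engagement_weight,
           p_threshold=0.7, engagement_threshold=0.5):
    if p_receive_elsewhere > p_threshold:
        return "skip"        # user will likely see the content from other sources
    if engagement_weight > engagement_threshold:
        return "recommend"   # likely to further diffuse the content
    return "hold"

print(decide(0.9, 0.8), decide(0.2, 0.8), decide(0.2, 0.1))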

US Pat. No. 10,796,238

COGNITIVE PERSONAL ASSISTANT

Cognitive Scale, Inc., A...

1. A cognitive method comprising:monitoring a user interaction of a user;
generating user interaction data based upon the user interaction;
receiving data from a plurality of data sources;
processing the user interaction data and the data from the plurality of data sources to perform a cognitive learning operation, the processing being performed via a cognitive inference and learning system, the cognitive learning operation comprising analyzing the user interaction data, the cognitive learning operation generating a cognitive learning result based upon the user interaction data, the cognitive learning operation implementing a cognitive learning technique according to a cognitive learning framework, the cognitive learning framework comprising a plurality of cognitive learning styles and a plurality of cognitive learning categories, each of the plurality of cognitive learning styles comprising a generalized learning approach implemented by the cognitive inference and learning system to perform the cognitive learning operation, each of the plurality of cognitive learning categories referring to a source of information used by the cognitive inference and learning system when performing the cognitive learning operation, an individual cognitive learning technique being associated with a primary cognitive learning style and bounded by an associated primary cognitive learning category, the learning operation applying the cognitive learning technique via a machine learning algorithm to generate the cognitive learning result, the cognitive inference and learning system comprising a cognitive platform, the cognitive platform and the information processing system performing a cognitive computing function, the cognitive platform comprising:
a cognitive graph, the cognitive graph being derived from the plurality of data sources, the cognitive graph comprising an application cognitive graph, the application cognitive graph comprising a cognitive graph associated with a cognitive application, interactions between the cognitive application and the application cognitive graph being represented as a set of nodes in the cognitive graph, and,
a cognitive engine, the cognitive engine comprising a dataset engine, a graph query engine and an insight/learning engine, the graph query engine accessing knowledge elements stored within the cognitive graph and the application cognitive graph when providing the cognitive learning result;
associating a cognitive profile with the user based on the cognitive learning result; and,
performing a cognitive personal assistant operation based upon the cognitive profile, the cognitive personal assistant operation assisting the user by performing a personal assistant task; and wherein
the plurality of cognitive learning techniques comprising a direct correlations cognitive learning technique, an explicit likes/dislikes cognitive learning technique, a patterns and concepts cognitive learning technique, a behavior cognitive learning technique, a concept entailment cognitive learning technique, and a contextual recommendation cognitive learning technique, the direct correlations cognitive learning technique being associated with a declared learning style and bounded by a data-based cognitive learning category, an explicit likes/dislikes cognitive learning technique being associated with the declared learning style and bounded by an interaction-based cognitive learning category, the patterns and concepts cognitive learning technique being associated with an observed learning style and bounded by the data-based cognitive learning category, the behavior cognitive learning technique being associated with the observed learning style and bounded by the interaction-based cognitive learning category, the concept entailment cognitive learning technique being associated with an inferred learning style and bounded by the data-based cognitive learning category, and a contextual recommendation cognitive learning technique being associated with the inferred learning style and bounded by the interaction-based cognitive learning category.

US Pat. No. 10,796,237

PATIENT-LEVEL ANALYTICS WITH SEQUENTIAL PATTERN MINING

INTERNATIONAL BUSINESS MA...

1. A computer-implemented method for patient-level analytics with sequential pattern mining, the method comprising:transforming, by a processing system, a patient record into a sequence table that identifies the event sets for a plurality of patients, the patient record comprising event sets for the plurality of patients;
transforming, by the processing system, the sequence table into a bitmap representation, wherein the bitmap representation displays each of the event sets, each of the event sets being displayed as one or more events, and wherein the bitmap representation further displays, for each of the event sets, a corresponding patientID and a corresponding timestamp, the corresponding timestamp being associated with a time at which a corresponding event set for the corresponding patient ID occurred; and
analyzing, by the processing system, the bitmap to identify a sequential pattern within the patient record on a per patient basis.
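A minimal sketch, assuming a simple in-memory encoding, of the pipeline in claim 1 above: a sequence table of (patientID, timestamp, event set) rows is turned into a bitmap with one column per event, and the bitmap is then scanned per patient for a sequential pattern. The event names, the two-event pattern, and the helper has_pattern are illustrative stand-ins, not IBM's implementation.

# Illustrative sketch (not the patented implementation) of turning patient
# event sets into a bitmap and scanning it per patient for a simple
# sequential pattern. Column names and the example pattern are assumptions.
import numpy as np

# sequence table: (patientID, timestamp, event set)
sequence_table = [
    ("p1", 1, {"A", "B"}),
    ("p1", 2, {"C"}),
    ("p2", 1, {"A"}),
    ("p2", 3, {"B", "C"}),
]

events = sorted({e for _, _, es in sequence_table for e in es})
col = {e: i for i, e in enumerate(events)}

# bitmap: one row per (patientID, timestamp), one column per event
bitmap = np.zeros((len(sequence_table), len(events)), dtype=np.uint8)
keys = []
for row, (pid, ts, es) in enumerate(sequence_table):
    keys.append((pid, ts))
    for e in es:
        bitmap[row, col[e]] = 1

def has_pattern(pid, first, second):
    """True if `first` occurs at some timestamp and `second` occurs later."""
    rows = [i for i, (p, _) in enumerate(keys) if p == pid]
    rows.sort(key=lambda i: keys[i][1])
    seen_first = False
    for i in rows:
        if seen_first and bitmap[i, col[second]]:
            return True
        if bitmap[i, col[first]]:
            seen_first = True
    return False

print(has_pattern("p1", "A", "C"))  # True: A at t=1, C at t=2
print(has_pattern("p2", "C", "A"))  # False: C never precedes A for p2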

US Pat. No. 10,796,236

MESSAGING SYSTEM

Microsoft Technology Lice...

1. A system configured to generate an automatic response to a communication between first and second users associated with first and second devices, respectively, the system comprising:a processor; and
a memory in communication with the processor, the memory comprising executable instructions that, when executed by the processor, cause the processor to control the device to perform functions of:
receiving, from the first device via a communication network, a first communication sent from the first user to the second user;
analyzing, using a machine-based language processing, a payload of the received first communication;
automatically determining, based on analyzing the payload of the received first communication, that the first communication includes a proposal to schedule a meeting at a future time;
in response to determining that the first communication includes the proposal to schedule the meeting, automatically searching a data storage containing schedule data of the second user to identify the second user's available future time slot for the meeting;
automatically generating, based on the identified second user's available future time slot, a second communication responding to the first communication on behalf of the second user, the second communication including an indication of the identified second user's available future time slot for the meeting; and
causing the second communication to be displayed via a user interface of at least one of the first and second devices.
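The following rough sketch approximates the claimed flow, with a keyword heuristic standing in for the machine-based language processing and a small in-memory set standing in for the second user's schedule store; the function names, the 9-to-5 availability rule, and the reply wording are assumptions.

# Rough sketch of the auto-response flow: detect a meeting proposal in the
# payload, find the second user's next free slot, and draft a reply on the
# second user's behalf. All names and rules here are illustrative.
import re
from datetime import datetime, timedelta

schedule = {  # second user's busy slots (start times of 1-hour meetings)
    datetime(2020, 10, 6, 9, 0),
    datetime(2020, 10, 6, 10, 0),
}

def proposes_meeting(payload: str) -> bool:
    return re.search(r"\b(meet|meeting|catch up|call)\b", payload, re.I) is not None

def next_free_slot(after: datetime) -> datetime:
    slot = after.replace(minute=0, second=0, microsecond=0) + timedelta(hours=1)
    while slot in schedule or not (9 <= slot.hour < 17):
        slot += timedelta(hours=1)
    return slot

payload = "Can we meet sometime tomorrow to go over the draft?"
if proposes_meeting(payload):
    slot = next_free_slot(datetime(2020, 10, 6, 8, 30))
    reply = f"I'm free at {slot:%H:%M on %b %d} - does that work for you?"
    print(reply)  # the auto-generated second communication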

US Pat. No. 10,796,235

COMPUTER SYSTEMS AND METHODS FOR PROVIDING A VISUALIZATION OF ASSET EVENT AND SIGNAL DATA

Uptake Technologies, Inc....

1. A computing system comprising:a network interface configured to communicatively couple the computing system to (a) a plurality of assets that are each located remote from the computing system and (b) a plurality of client stations that are each running a software application for visualizing asset data handled by the computing system;
at least one processor;
a non-transitory computer-readable medium; and
program instructions stored on the non-transitory computer-readable medium that are executable by the at least one processor to cause the computing system to:
receive, from a given client station of the plurality of client stations, visualization parameters comprising (i) an asset identifier for a given asset of the plurality of assets and (ii) an event identifier for a given type of asset event related to the operation of the given asset;
based on the visualization parameters, identify one or more instances of the given type of asset event that occurred at the given asset within a given timeframe in the past;
identify one or more signal sources of the given asset that are relevant to the identified one or more instances of the given type of asset event that occurred at the given asset;
for each respective signal source of the identified one or more signal sources, obtain a respective set of signal summary data that was generated by (i) segmenting a sequence of raw signal values generated by the respective signal source into a sequence of time windows of a given duration of time that each comprise multiple raw signal values from the sequence of raw signal values, (ii) for each respective time window in the sequence of time windows, summarizing the multiple raw signal values within the respective time window into a summarized value that is representative of the multiple raw signal values within the respective time window, and (iii) compiling the summarized value for each respective time window in the sequence of time windows into a sequence of summarized values that is less granular than the sequence of raw signal values generated by the respective signal source, wherein the sequence of summarized values comprises the respective set of signal summary data for the respective signal source; and
cause the given client station to display a visual representation of the identified one or more instances of the given type of asset event together with the respective set of signal summary data obtained for each respective signal source of the identified one or more signal sources.
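The three-step summarization in the claim, segment the raw values into fixed-duration windows, summarize each window into a representative value, and compile the summaries into a less granular sequence, can be sketched as follows; the mean is assumed as the summarizing function, since the claim only requires a value representative of each window.

# Minimal sketch of the windowed signal summarization.
import numpy as np

def summarize_signal(raw_values, window_size, reducer=np.mean):
    raw = np.asarray(raw_values, dtype=float)
    n_windows = len(raw) // window_size          # (i) segment into windows
    windows = raw[: n_windows * window_size].reshape(n_windows, window_size)
    return reducer(windows, axis=1)              # (ii)+(iii) summarize and compile

raw = np.sin(np.linspace(0, 10, 1000)) + np.random.normal(0, 0.1, 1000)
summary = summarize_signal(raw, window_size=50)
print(len(raw), "->", len(summary))              # 1000 raw values -> 20 summarized values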

US Pat. No. 10,796,234

RANKED INSIGHT MACHINE LEARNING OPERATION

Cognitive Scale, Inc., A...

1. A computer-implementable method for generating a cognitive insight comprising:receiving training data, the training data being based upon interactions between a user and a cognitive learning and inference system;
performing a cognitive learning operation via a cognitive inference and learning system using the training data, the cognitive learning operation implementing a cognitive learning technique according to a cognitive learning framework, the cognitive learning framework comprising a plurality of cognitive learning styles and a plurality of cognitive learning categories, each of the plurality of cognitive learning styles comprising a generalized learning approach implemented by the cognitive inference and learning system to perform the cognitive learning operation, each of the plurality of cognitive learning categories referring to a source of information used by the cognitive inference and learning system when performing the cognitive learning operation, an individual cognitive learning technique being associated with a primary cognitive learning style and bounded by an associated primary cognitive learning category, the cognitive learning operation applying the cognitive learning technique via a machine learning operation to generate a cognitive learning result;
performing a ranked insight machine learning operation on the training data, the machine learning operation comprising the ranked insight machine learning operation;
generating a cognitive profile based upon the information generated by performing the ranked insight machine learning operation; and,
generating a cognitive insight based upon the cognitive profile generated using the ranked insight machine learning operation.

US Pat. No. 10,796,233

SYSTEMS AND METHODS FOR SUGGESTING CONTENT

Facebook, Inc., Menlo Pa...

1. A computer-implemented method comprising:determining, by a social networking system, that a user of the social networking system is eligible for a cover photo suggestion;
providing, by the social networking system, a set of images associated with the user and one or more sets of words determined from content associated with the user along with each image as input to a machine learning model, wherein the content includes at least one of posts, comments, or groups of the user;
obtaining, by the social networking system, respective scores for the set of images as output of the machine learning model, wherein a score for a corresponding image measures a likelihood that the user selects the corresponding image as a cover photo in a social profile of the user; and
selecting, by the social networking system, among the set of images, an image having the highest score as a cover photo suggestion for use in a social profile of the user.
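A toy rendering of the scoring-and-selection step: a placeholder scoring function stands in for the machine learning model, each candidate image is scored from its tags and the user's words, and the highest-scoring image becomes the cover photo suggestion. The feature representation and the model are assumptions, not Facebook's system.

# Placeholder "model": overlap between image tags and the user's words.
def score(image_tags, words):
    return len(set(image_tags) & set(words)) / (len(set(words)) or 1)

candidates = {
    "beach.jpg":  ["beach", "sunset", "friends"],
    "office.jpg": ["desk", "laptop"],
}
user_words = ["friends", "travel", "beach"]

scores = {img: score(tags, user_words) for img, tags in candidates.items()}
suggestion = max(scores, key=scores.get)
print(scores, "->", suggestion)   # beach.jpg has the highest score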

US Pat. No. 10,796,232

EXPLAINING DIFFERENCES BETWEEN PREDICTED OUTCOMES AND ACTUAL OUTCOMES OF A PROCESS

salesforce.com, inc., Sa...

1. A method for analyzing differences between (x) an actual outcome of a process and (y) an outcome predicted by a model of the process, the method comprising a computer system automatically performing the following:processing a data set containing observations of the process, wherein:
each of the observations is expressed as values for a plurality of variables associated with the process and for the actual outcome of the process,
processing the data set comprises estimating contributions for each of multiple different variable combinations with respect to the difference between (x) the actual outcome and (y) the outcome predicted by the model of the process,
each of the variable combinations is defined by values for one or more of the variables, and at least some of the variable combinations are defined by values for at least two of the variables;
estimating the contribution of each of the different variable combinations with respect to the difference between (x) the actual outcome and (y) the outcome predicted by the model is based on
(a) a behavior of that variable combination with respect to affecting the outcome of the process, and
(b) a population of that variable combination within the data set of observations of the process; and
based on the estimated contributions of each of the variable combinations, automatically reporting which variable combinations have the largest estimated contributions on the difference between (x) the actual outcome and (y) the outcome predicted by the model, wherein the automatic reporting comprises an animated briefing comprising a sequence of graphs with overlays on the graphs describing which variable combinations have the largest estimated contributions.
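One plausible, simplified reading of the two-factor contribution estimate is sketched below: each variable combination's average residual (its behavior with respect to the outcome) is weighted by its share of the observations (its population). The column names and the multiplicative weighting are assumptions, not the claimed method.

# Toy contribution estimate per variable combination.
from itertools import combinations
import pandas as pd

df = pd.DataFrame({
    "region":    ["NA", "NA", "EU", "EU", "EU"],
    "segment":   ["smb", "ent", "smb", "ent", "ent"],
    "actual":    [10.0, 14.0, 9.0, 20.0, 22.0],
    "predicted": [11.0, 13.0, 10.0, 16.0, 17.0],
})
df["residual"] = df["actual"] - df["predicted"]

contributions = {}
for r in (1, 2):
    for combo in combinations(["region", "segment"], r):
        for values, group in df.groupby(list(combo)):
            behavior = group["residual"].mean()       # (a) effect on the outcome
            population = len(group) / len(df)         # (b) share of observations
            contributions[(combo, values)] = behavior * population

# report the combinations with the largest estimated contributions
for key, c in sorted(contributions.items(), key=lambda kv: -abs(kv[1]))[:3]:
    print(key, round(c, 2))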

US Pat. No. 10,796,231

COMPUTER-IMPLEMENTED SYSTEMS AND METHODS FOR PREPARING COMPLIANCE FORMS TO MEET REGULATORY REQUIREMENTS

INTUIT INC., Mountain Vi...

1. A system for preparing a plurality of types of compliance forms for submission to a respective responsible agency which reviews the compliance forms, comprising:a computing device having a computer processor and memory;
a data store in communication with the computing device, the data store configured to store entity-specific compliance data for a plurality of compliance form data fields and calculated compliance form data fields;
a compliance form software program executable by the computing device, the compliance software program having a calculation engine, a logic agent, a user interface manager, and a first domain model for preparing a first type of compliance form, the first domain model including a first calculation graph and a first completeness model;
the first calculation graph defining data dependent calculations and logic operations for processing the first type of compliance form, the first calculation graph comprising a plurality of interconnected nodes including one or more of input nodes, function nodes, and functional nodes;
the calculation engine configured to read the entity-specific compliance data from the shared data store, calculate a compliance calculation graph by performing calculations and logic operations based on the compliance calculation graph using the entity-specific compliance data, and write calculated compliance data to the shared data store;
the first completeness model including one or more decision tables representing questions and logic for determining missing compliance data required to complete the first type of compliance form;
the logic agent configured to read runtime data of the compliance form and utilize the first completeness model to evaluate missing compliance data needed to complete the compliance form and determine one or more suggested compliance questions for obtaining the missing compliance data;
the user interface manager configured to receive the one or more suggested compliance questions from the logic agent, analyze the one or more suggested compliance questions, determine a compliance question to present to a user, and present the compliance question to the user;
an error graph defining a plurality of error rules for identifying errors in the preparation of the compliance form, the error graph comprising a plurality of interconnected nodes including one or more of input nodes, function nodes, and functional nodes; and
an error check engine configured to process the error graph to identify one or more errors in the preparation of the compliance form.

US Pat. No. 10,796,230

CONTENT BASED REMOTE DATA PACKET INTERVENTION

PEARSON EDUCATION, INC., ...

1. A system for remote intervention comprising:memory comprising:
a user profile database comprising information identifying one or several attributes of a user;
a content database comprising data identifying predetermined content levels and data identifying some of the predetermined content levels as acceptable; and
a model database comprising data identifying a plurality of response demands and data identifying algorithms for determining the plurality of response demands;
a supervisor device comprising:
a network interface configured to exchange data via the communication network; and
an I/O subsystem configured to convert electrical signals to user-interpretable outputs via a user interface;
a content management server, wherein the content management server is configured to:
receive a first electrical signal from the supervisor device, wherein the first electrical signal comprises a request for access to a content authoring interface;
generate and send an electrical signal to the supervisor device directing the launch of the content authoring interface;
receive a second electrical signal from the supervisor device, wherein the second electrical signal comprises content received by the content authoring interface;
identify a plurality of response demands in the received content;
determine a level of the received content based on the identified plurality of response demands;
determine the acceptability of the received content based on the identified plurality of response demands; and
generate and send an alert to the supervisor device, wherein the alert comprises computer code to trigger activation of the I/O subsystem of the supervisor device to provide an indication of the acceptability of the received content and a change recommendation for the received content.

US Pat. No. 10,796,229

BUILDING AN INTERACTIVE KNOWLEDGE LIST FOR BUSINESS ONTOLOGIES

INSIDEVIEW TECHNOLOGIES, ...

1. A method comprising:receiving, using one or more processors, a first request from a user to generate a knowledge list of information corresponding to one or more business events associated with one or more business ontologies;
collecting, using the one or more processors, data analytics on the one or more business events;
receiving, using the one or more processors, a prior set of knowledge describing a set of conceptual dependencies between the one or more business events;
generating, using the one or more processors, for presentation to the user, the knowledge list of information, the knowledge list of information including a knowledge frame and a document organizer, the knowledge frame summarizing a chronological unfolding of an individual business event through a plurality of stages using a stream of text messages and the document organizer collating a set of non-repetitive documents corroborating the stream of text messages in the knowledge frame based on processing the data analytics;
generating, using the one or more processors, a current hypothesis corresponding to a sentiment about the one or more business events based on intersecting the prior set of knowledge, the knowledge frame in the knowledge list of information, and a previous hypothesis;
receiving, using the one or more processors, a natural language query from the user for interacting with the knowledge list of information; and
generating, using the one or more processors, for presentation to the user, a response from the knowledge list of information using the current hypothesis.

US Pat. No. 10,796,228

MACHINE-LEARNING-BASED PROCESSING OF DE-OBFUSCATED DATA FOR DATA ENRICHMENT

Oracle International Corp...

1. A computer-implemented method comprising:receiving, from a client system, a request to perform a machine-learning communication workflow;
receiving, in association with the request, a set of obfuscated identifiers for which processing via the machine-learning communication workflow is requested, wherein each obfuscated identifier of the set of obfuscated identifiers corresponds to an identification of an obfuscated version of a profile stored at a data management system, the obfuscated version lacking personally identifiable information (PII);
for each obfuscated identifier in the set of obfuscated identifiers:
mapping the obfuscated identifier to a non-obfuscated identifier that identifies a non-obfuscated version of the profile that includes PII;
retrieving, from the data management system, user data from the non-obfuscated version;
retrieving learned data generated by training a machine-learning model using other user data;
executing the machine-learning model configured with the learned data to process at least part of the user data;
identifying one or more communication specifications based on the execution of the machine-learning model configured with the learned data;
causing content to be transmitted to a destination address identified in the user data in accordance with the one or more communication specifications, wherein a time of the content transmission, a type of communication transmission used for the content transmission and/or part or all of the content correspond to the one or more communication specifications; and
generating non-obfuscated communication-activity data for the non-obfuscated profile identifier based on any communications detected in response to the content transmission;
obfuscating the non-obfuscated communication-activity data to generate a set of obfuscated data for the set of obfuscated identifiers;
transmitting at least part of the set of obfuscated data to the client system; and
causing, for each of at least some of the set of obfuscated identifiers, at least some of the non-obfuscated communication-activity data to be stored in association with the non-obfuscated profile identifier to the data management system.

US Pat. No. 10,796,227

RANKING OF PARSE OPTIONS USING MACHINE LEARNING

Cognitive Scale, Inc., A...

1. A system comprising:a processor;
a data bus coupled to the processor; and
a non-transitory, computer-readable storage medium embodying computer program code, the non-transitory, computer-readable storage medium being coupled to the data bus, the computer program code interacting with a plurality of computer operations and comprising instructions executable by the processor and configured for:
receiving data from a data source;
processing the data, the processing comprising performing a parsing operation on the data, the processing the data identifying a plurality of knowledge elements based upon the parsing operation, the parsing operation comprising ranking of parse options, the processing being performed via a cognitive inference and learning system, the cognitive inference and learning system executing on a hardware processor of an information processing system and interacting with the plurality of data sources, the cognitive inference and learning system and the information processing system providing a cognitive computing function, the cognitive inference and learning system comprising a cognitive platform, the cognitive platform comprising a cognitive engine, the cognitive engine processing the data from the data source; and,
storing the knowledge elements within the cognitive graph of a universal knowledge repository as a collection of knowledge elements, the storing universally representing knowledge obtained from the data, the cognitive graph comprising integrated machine learning functionality, the integrated machine learning functionality using extracted features of newly-observed data from user feedback received during a learning phase to improve accuracy of knowledge stored within the cognitive graph;
performing mapping operations on a query to generate query related knowledge elements based upon the user feedback, the mapping operations generating a set of parse trees using a parse rule set, the mapping operations comprising mapping structural elements to resolve ambiguity, the mapping operations comprising mapping structural elements of the query around a verb of the query, the mapping of the structural elements transforming the structural elements into words higher up an inheritance chain within the cognitive graph, the parse trees being ranked by a conceptualization ranking rule set, the parse trees representing ambiguous portions of the text of the query, the parse trees being used when performing the parsing operations;
submitting an insight agent query from an insight agent to the universal knowledge repository; and,
providing matching results to the insight agent responsive to the insight agent query based upon a matching rule set and a plurality of answer related knowledge elements in the universal knowledge repository.

US Pat. No. 10,796,226

LASER PROCESSING APPARATUS AND MACHINE LEARNING DEVICE

FANUC CORPORATION, Yaman...

1. A laser processing apparatus, comprising:a laser processing head configured to output laser light for processing a workpiece;
a laser power sensor configured to detect an output of the laser light for a predetermined time period; and
a controller configured to:
calculate a fluctuation in the output of the laser light detected by the laser power sensor; and
command an angle by which the laser processing head is to be inclined with respect to a position of the laser processing head perpendicular to the workpiece, based on the calculated fluctuation in the output of the laser light,
wherein the fluctuation is a deviation between a maximum value and a minimum value of an actual laser light output curve, which is captured for the predetermined time period when processing the workpiece.
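A small sketch of the fluctuation calculation and the resulting angle command; the linear fluctuation-to-angle rule and its gain are assumptions for illustration, since the claim only requires that the commanded inclination be based on the calculated fluctuation.

# Fluctuation = max - min of the detected output curve over the period;
# the mapping to a tilt angle below is an assumed linear rule.
def fluctuation(samples):
    return max(samples) - min(samples)

def commanded_angle(samples, gain_deg_per_watt=0.5, max_angle_deg=10.0):
    # larger output fluctuation -> larger inclination from perpendicular
    return min(max_angle_deg, gain_deg_per_watt * fluctuation(samples))

power_curve = [1000.0, 1012.0, 996.0, 1004.0]   # detected laser output samples
print(fluctuation(power_curve))                  # 16.0 W deviation
print(commanded_angle(power_curve))              # 8.0 degrees of tilt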

US Pat. No. 10,796,225

DISTRIBUTING TENSOR COMPUTATIONS ACROSS COMPUTING DEVICES

Google LLC, Mountain Vie...

1. A computer-implemented method comprising:receiving specification data that specifies a distribution of tensor computations among a plurality of computing devices,
wherein each tensor computation (i) is defined as receiving, as input, one or more respective input tensors each having one or more respective input dimensions, (ii) is defined as generating, as output, one or more respective output tensors each having one or more respective output dimensions, or (iii) is defined as both receiving, as input, one or more respective input tensors each having one or more respective input dimensions and generating, as output, one or more respective output tensors each having one or more respective output dimensions, and
wherein the specification data specifies a respective layout for each input and output tensor that assigns each dimension of the input or output tensor to one or more of the plurality of computing devices;
assigning, based on the layouts for the input and output tensors, respective device-local operations to each of the plurality of computing devices, comprising determining that the layout, if assigned, does not cause data to be lost for a respective tensor corresponding to the layout; and
causing the tensor computations to be executed by the plurality of computing devices by causing each of the plurality of computing devices to execute at least the respective device-local operations assigned to the computing devices.
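A hedged sketch of the layout idea: a layout maps tensor dimensions to devices, and the "does not cause data to be lost" check is made concrete here with an assumed divisibility rule; the function names are illustrative, not Google's API.

# A layout maps a tensor dimension index to the number of devices it is split over.
def check_layout(shape, layout, n_devices):
    for dim, split in layout.items():
        if split > n_devices:
            raise ValueError(f"cannot split dim {dim} over {split} of {n_devices} devices")
        if split > 1 and shape[dim] % split != 0:
            raise ValueError(f"dim {dim} of size {shape[dim]} cannot be split {split} ways without losing data")
    return True

def local_shape(shape, layout):
    """Shape of the device-local shard each assigned device operates on."""
    return tuple(s // layout.get(d, 1) for d, s in enumerate(shape))

shape = (128, 512)                 # an input tensor
layout = {0: 4}                    # dimension 0 split across 4 devices
check_layout(shape, layout, n_devices=4)
print(local_shape(shape, layout))  # (32, 512) processed by each device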

US Pat. No. 10,796,224

IMAGE PROCESSING ENGINE COMPONENT GENERATION METHOD, SEARCH METHOD, TERMINAL, AND SYSTEM

Alibaba Group Holding Lim...

1. A method comprising:obtaining a to-be-recognized target image;
performing image content information recognition processing upon the target image;
inputting the target image into an image feature vector conversion model;
acquiring image content feature information of the target image, the image content feature information including an image feature vector, the acquiring the image content feature information of the target image including acquiring image feature vectors of the target image;
designating the image feature vectors as the image content feature information of the target image; and
adding, while generating an image processing engine component, the image content feature information of the target image to index tables of the image processing engine component.

US Pat. No. 10,796,223

HIERARCHICAL NEURAL NETWORK APPARATUS, CLASSIFIER LEARNING METHOD AND DISCRIMINATING METHOD

MITSUBISHI ELECTRIC CORPO...

1. A hierarchical neural network apparatus comprising:a computer processor; and
a memory storing instructions which, when executed by the computer processor, perform a process including,
forming a hierarchical neural network, wherein the hierarchical neural network comprises an input layer, an intermediate layer, and an output layer, and wherein each layer comprises nodes;
generating loose couplings between nodes in the layers, wherein the loose couplings are generated in accordance with a sparse parity-check matrix comprising at least two error correcting codes, wherein the generation of said loose couplings is operative to eliminate couplings between respective nodes in said input and intermediate layers and to eliminate couplings between respective nodes in said intermediate and output layers, thereby eliminating the necessity to learn weights between said respective nodes in said input and intermediate layers and between said respective nodes in said intermediate and output layers, wherein each of the at least two error correcting codes is a pseudorandom number code, finite geometry code, cyclic code, pseudo-cyclic code, or spatially-coupled code, and wherein each of the at least two error correcting codes is a different error correcting code type;
learning weights between a plurality of nodes in the hierarchical neural network based on the loose couplings between nodes; and
solving a classification problem or a regression problem using the hierarchical neural network whose weights between the nodes coupled are updated by values of the learned weights.
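The loose-coupling idea can be sketched by using a sparse binary matrix as a mask on a weight matrix, so that eliminated couplings are never created or learned; the random sparse matrix below stands in for the error-correcting-code parity-check matrix named in the claim.

# Sketch of loose coupling between an input layer and an intermediate layer.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden = 8, 6

# stand-in for a sparse parity-check matrix (about 25% ones)
coupling_mask = (rng.random((n_in, n_hidden)) < 0.25).astype(float)

# weights exist (and would be learned) only where the mask keeps a coupling
weights = rng.normal(0.0, 0.1, size=(n_in, n_hidden)) * coupling_mask

def forward(x):
    # eliminated couplings stay exactly zero and need no learning
    return np.tanh(x @ weights)

x = rng.normal(size=(1, n_in))
print(int(coupling_mask.sum()), "of", n_in * n_hidden, "couplings kept")
print(forward(x).shape)   # (1, 6)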

US Pat. No. 10,796,222

CONTACTLESS POSITION/DISTANCE SENSOR HAVING AN ARTIFICIAL NEURAL NETWORK AND METHOD FOR OPERATING THE SAME

Balluff GmbH, Neuhausen ...

1. A device comprising:(a) a sensor module comprising first and second sensor elements;
(b) first and second artificial neural networks, wherein each of the first and second artificial neural networks is configured to jointly evaluate sensor signals delivered from the first and second sensor elements;
(c) a first pre-processing module connected to the sensor module;
wherein the first pre-processing module and the sensor module are configured to exchange signals and/or data therebetween;
wherein the first pre-processing module is configured to pre-process the sensor signals delivered from the sensor module into pre-processed signals and supply the pre-processed signals to the first artificial neural network;
wherein sensor signals delivered from the first and second sensor elements are supplied to the first artificial neural network, the output data of the first artificial neural network are supplied to a second pre-processing module and pre-processed into pre-processed data and the pre-processed data are supplied to the second artificial neural network;
wherein the sensor signals delivered from the first and second sensor elements are additionally supplied to the second pre-processing module;
wherein the device is configured to determine at least one of a distance and a spatial orientation of a target object relative to the device; and
wherein the first artificial neural network is trained by a calibration or learning process in respect of the sensor signals delivered from the first and second sensor elements.

US Pat. No. 10,796,221

DEEP LEARNING ARCHITECTURE FOR AUTOMATED IMAGE FEATURE EXTRACTION

General Electric Company,...

1. A convolutional neural network system, comprising:a memory that stores computer executable components;
a processor that executes computer executable components stored in the memory, wherein the computer executable components comprise:
a machine learning component that generates learned imaging output regarding imaging data based on a convolutional neural network that receives the imaging data, wherein the convolutional neural network comprises a plurality of sequential spring blocks in series with a plurality of parallel spring blocks, wherein a spring block comprises a sequence of downsampling layers in series with a corresponding sequence of upsampling layers.
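A guess at the structural shape of a "spring block" as the claim describes it, a sequence of downsampling layers in series with a corresponding sequence of upsampling layers, using average pooling and nearest-neighbor upsampling as stand-ins for learned layers; this is a structural illustration, not GE's architecture.

# Structural sketch of a spring block on a 1-D signal.
import numpy as np

def downsample(x):                       # halve the length (average pooling)
    return x.reshape(-1, 2).mean(axis=1)

def upsample(x):                         # double the length (nearest neighbor)
    return np.repeat(x, 2)

def spring_block(x, depth=3):
    for _ in range(depth):               # sequence of downsampling layers ...
        x = downsample(x)
    for _ in range(depth):               # ... in series with matching upsampling layers
        x = upsample(x)
    return x

signal = np.arange(32, dtype=float)
print(spring_block(signal).shape)        # (32,) - same length, coarser content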

US Pat. No. 10,796,220

SYSTEMS AND METHODS FOR VECTORIZED FFT FOR MULTI-DIMENSIONAL CONVOLUTION OPERATIONS

Marvell Asia Pte, Ltd., ...

1. A hardware-based programmable deep learning processor (DLP), comprising:an on-system memory (OSM) and one or more controllers configured to access a plurality of external memory resources via direct memory access (DMA);
a plurality of programmable tensor engines configured to perform a plurality of convolution operations by applying one or more kernels on multi-dimensional input data to generate deep learning processing results for pattern recognition and classification based on a neural network, wherein each of the plurality of tensor engines further comprises:
a data engine configured to prefetch the multi-dimensional input data and/or the kernels from the OSM and/or the external memory resources for the convolution operations;
one or more vector processing engines each configured to:
vectorize the multi-dimensional input data at each layer of the neural network to generate a plurality of vectors;
perform multi-dimensional fast Fourier transform (FFT) on the generated vectors and/or the kernels to create output for the convolution operations;
a programmable CPU having its own instruction cache and data cache configured to store a plurality of instructions from a host and the retrieved data from the OSM and/or the external memory resources, respectively.
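The reason a deep learning processor vectorizes data and applies an FFT for convolution is that convolution in the spatial domain becomes an elementwise product in the frequency domain; a software sketch of that equivalence, with numpy's FFT standing in for the DLP's hardware tensor engines, follows.

# FFT-based 2-D convolution versus a brute-force reference.
import numpy as np

def fft_conv2d(image, kernel):
    """Circular 2-D convolution via the FFT (pointwise product of spectra)."""
    fshape = image.shape
    spec = np.fft.rfft2(image, fshape) * np.fft.rfft2(kernel, fshape)
    return np.fft.irfft2(spec, fshape)

def direct_circular_conv2d(image, kernel):
    """Reference O(N^2 * K^2) circular convolution for checking the FFT path."""
    h, w = image.shape
    out = np.zeros_like(image)
    for i in range(h):
        for j in range(w):
            for ki in range(kernel.shape[0]):
                for kj in range(kernel.shape[1]):
                    out[i, j] += kernel[ki, kj] * image[(i - ki) % h, (j - kj) % w]
    return out

rng = np.random.default_rng(1)
image, kernel = rng.random((16, 16)), rng.random((3, 3))
print(np.allclose(fft_conv2d(image, kernel), direct_circular_conv2d(image, kernel)))  # True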

US Pat. No. 10,796,219

SEMANTIC ANALYSIS METHOD AND APPARATUS BASED ON ARTIFICIAL INTELLIGENCE

BAIDU ONLINE NETWORK TECH...

1. A semantic analysis method based on artificial intelligence, comprising:matching input information to be processed with a preset semantic template, wherein, the preset semantic template is generated according to semantic slot information and equipment information corresponding to an application scenario;
when the input information to be processed is successfully matched with the preset semantic template, converting the input information to formative data according to the semantic template;
normalizing the formative data into a data structure that is recognizable by target equipment, and generating a semantic analysis result corresponding to the input information;
when the input information is not matched with the semantic template, matching non-target equipment information in the input information with the semantic slot information, and processing successfully matched semantic slot information to obtain candidate semantic slot information;
when the input information contains target equipment information, matching the target equipment information with the equipment information;
when the target equipment information is successfully matched with the equipment information, selecting target semantic slot information from the candidate semantic slot information according to preset semantic slot information corresponding to the target equipment information; and
converting the input information to the formative data according to the target equipment information and the target semantic slot information.

US Pat. No. 10,796,218

COMMUNICATIONS SYSTEM WITH SMART AGENT ROBOTS FOR ACCESSING MESSAGE DATA

Gemtek Technology Co., Lt...

1. A communications system with smart agent robots comprising:a message interface configured to input message data;
a first agent robot connected to the message interface and configured to automatically integrate and transceive the message data after a first user accesses the first agent robot by using a first personal identity;
a second agent robot connected to the message interface and configured to automatically integrate and transceive the message data after the first user accesses the second agent robot by using a second personal identity;
a friend agent robot operated by a second user and connected to the first agent robot for communicating with the first agent robot; and
a friend message interface connected to the friend agent robot and configured to communicate with the first agent robot;
wherein each of the first agent robot, the second agent robot, and the friend agent robot comprises a chat bot for automatically responding to the message data, the first user has a plurality of identities, and the first agent robot automatically requests the second agent robot to provide a service of the second agent robot according to the message data.

US Pat. No. 10,796,217

SYSTEMS AND METHODS FOR PERFORMING AUTOMATED INTERVIEWS

Microsoft Technology Lice...

1. A system for automated interviewing of software engineers, the system comprising:at least one processor; and
a memory for storing and encoding computer executable instructions that, when executed by the at least one processor, are operative to:
receive a first answer to a first question given to a candidate, wherein the first question is a first technical question;
analyze the first answer to determine a time and space complexity of the first answer, the analyzing comprising:
determining that the first answer is not in code;
identifying related code to the first answer by comparing the first answer to a reference answer for the first question in a collection of technical question-reference answer pairs utilizing at least a deep semantic similarity model; and
analyzing the related code utilizing one or more heuristic rules to determine the time and space complexity of the first answer;
compare the time and space complexity of the first answer to a time and space complexity of a reference answer for the first question;
determine a relevance score of the first answer based on the comparison of the time and space complexity of the first answer to the time and space complexity of the reference answer;
analyze at least one of a voice input or a text input of the first answer to determine an emotional state of the candidate;
determine whether a first reply to the candidate should be in a chat domain or in a technical domain based on the relevance score and the emotional state to form a domain determination;
select the first reply from a collection of chat replies or from the collection of technical question-reference answer pairs based on the domain determination; and
provide the first reply to the candidate in response to the first answer,
wherein the next technical question provided to the candidate is selected from the collection of technical question-reference answer pairs, the next technical question having a difficulty level that is based at least on the relevance score, and
wherein a next chat reply provided to the candidate is selected from the collection of chat replies.

US Pat. No. 10,796,216

CONTEXT-AWARE DIGITAL PERSONAL ASSISTANT SUPPORTING MULTIPLE ACCOUNTS

Microsoft Technology Lice...

1. A system to implement a context-aware digital personal assistant application program having multiple accounts, the system comprising:a memory; and
one or more processors coupled to the memory, the one or more processors configured to:
determine to which of multiple accounts of a common digital personal assistant application program a user is signed-in;
selectively combine content from a plurality of content streams that are associated with a plurality of respective accounts of the common digital personal assistant application program to which the user is signed-in based on at least a determination to which of the multiple accounts of the common digital personal assistant application program the user is signed-in and further based on at least a context of the user who is signed-in with the plurality of accounts of the common digital personal assistant application program to generate a selectively combined content stream, wherein each of the plurality of accounts of the common digital personal assistant application program is associated with one or more preferences of the user that indicate which content the common digital personal assistant application program selectively includes in the respective content stream that is associated with the respective account; and
cause the common digital personal assistant application program to provide the selectively combined content stream for presentation to the user.

US Pat. No. 10,796,215

TAG ASSEMBLY METHODS

Interlake Research, LLC, ...

1. A method for assembling a radio frequency identification (RFID) tag, the method comprising:wire-bonding a first connection pad of an RFID die on a substrate to a wire at a first bonding location;
wire-bonding a second connection pad of the RFID die to the wire at a second bonding location; and
cutting the wire between the first bonding location and the second bonding location such that a first wire segment and a second wire segment severed by the cutting form segments of an antenna for the RFID tag.

US Pat. No. 10,796,214

TRANSACTION CARD HAVING AN ELECTRICALLY APPLIED COATING

Capital One Services, LLC...

1. A transaction card, comprising: a card component comprising: a substrate; an electrically conductive material applied to the substrate; and an electrically conductive surface defining an outer surface of the transaction card, and a coating material applied to the electrically conductive surface of the card component by positively or negatively charging at least one of the coating material or the electrically conductive surface, wherein the electrically conductive material comprises at least one of metals, metal alloys, or metal-containing materials, wherein the electrically applied coating comprises at least one of an applied metal, a metal alloy, a metal oxide, an electrostatically applied material, thermoplastic, or thermoset polymer; and a coating layer disposed on the electrically applied coating wherein the coating layer comprises a non-opaque electrically applied material.

US Pat. No. 10,796,213

METHOD AND APPARATUS FOR PROVIDING A COMMUNICATIONS SERVICE USING A LOW POWERED RADIO TAG

1. A radio tag operable to be attached to an item, the radio tag comprising:a first radio comprising a first antenna, wherein the first radio is for listening for a wake-up signal in an idle state of the radio tag;
a second radio, coupled to the first radio, the second radio comprising a second antenna, wherein the second radio is for transmitting or receiving data, wherein the radio tag operates using a carrier signal in a pre-determined frequency range for communicating with a device, wherein the second radio does not draw power in the idle state of the radio tag;
a power source;
a processor; and
a computer-readable storage medium storing a plurality of instructions which, when executed by the processor, cause the processor to perform operations, the operations comprising:
entering an active state of the radio tag and activating the second radio when the wake-up signal is received, wherein the second radio draws power from the power source in the active state of the radio tag; and
transmitting the data to the device or receiving the data from the device via the second radio when the radio tag is in the active state.

US Pat. No. 10,796,212

ORIENTATION-AGNOSTIC METHOD TO INTERFACE TO PRINTED MEMORY LABEL

XEROX CORPORATION, Norwa...

1. A memory device, comprising:a printed memory comprising a substrate, a plurality of contact pads overlying the substrate, a plurality of wiring lines electrically coupled to the plurality of contact pads, and a ferroelectric layer overlying the substrate; and
a printed circuit comprising a plurality of concentric, endless contact lines electrically coupled to the plurality of contact pads wherein, at least in plan view, each of the plurality of concentric, endless contact lines encloses an electrical insulator region.

US Pat. No. 10,796,211

GENERATING AUTHENTICATION IMAGE TO VERIFY A TWO-DIMENSIONAL CODE OFFLINE

Alibaba Group Holding Lim...

1. A computer-implemented method, comprising:parsing a two-dimensional (2D) code into a two-dimensional array, wherein the two-dimensional array comprises binary digits; and
generating an authentication image by using the two-dimensional array according to a target image, comprising:
parsing the target image into a binary image;
performing a bit operation on a corresponding value in the two-dimensional array according to a value of each pixel in the binary image, wherein the bit operation selects and flips some of the binary digits; and
generating the authentication image according to a result of the bit operation, wherein the target image appears when the authentication image overlaps with the 2D code, comprising:
generating black pixels in the authentication image for locations that are black in the target image but white in the 2D code,
generating white pixels in the authentication image for locations that are white in the target image but black in the 2D code, and
generating transparent pixels in the authentication image for locations having matching colors in the target image and in the 2D code.
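The per-pixel rule at the end of the claim translates almost directly into code; the sketch below uses 0 for black, 1 for white, and None for transparent, and tiny hand-written arrays rather than a real 2D code.

# Direct transcription of the per-pixel authentication-image rule.
BLACK, WHITE, TRANSPARENT = 0, 1, None

def authentication_image(target, code):
    out = []
    for t_row, c_row in zip(target, code):
        row = []
        for t, c in zip(t_row, c_row):
            if t == BLACK and c == WHITE:
                row.append(BLACK)        # black pixel where target is black, code is white
            elif t == WHITE and c == BLACK:
                row.append(WHITE)        # white pixel where target is white, code is black
            else:
                row.append(TRANSPARENT)  # matching colors stay transparent
        out.append(row)
    return out

target = [[0, 1], [1, 0]]
code   = [[1, 1], [0, 0]]
print(authentication_image(target, code))
# [[0, None], [1, None]] - overlaying this on the code reveals the target image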

US Pat. No. 10,796,210

PLOTTER, METHOD FOR DRAWING WITH PEN CONTAINING LIQUID USING PLOTTER, AND PEN MOUNTABLE ON PLOTTER

BROTHER KOGYO KABUSHIKI K...

1. A plotter comprising:a mounting portion configured to mount with a pen containing a liquid;
a first movement mechanism configured to relatively move the mounting portion and a workpiece in a movement direction, the movement direction being a direction for the mounting portion and the workpiece to move close to and away from each other;
a second movement mechanism configured to relatively move the mounting portion and the workpiece in a direction intersecting the movement direction by the first movement mechanism;
a processor; and
a memory configured to store computer-readable instructions that, when executed by the processor, instruct the processor to perform processes comprising:
acquiring plot data instructing a position at which drawing is performed on the workpiece using the pen;
acquiring information relating to a remaining amount of the liquid of the pen;
setting a relative movement speed of the mounting portion and the workpiece by the second movement mechanism, on the basis of the acquired information relating to the remaining amount; and
controlling the first movement mechanism and the second movement mechanism on the basis of the acquired plot data and the set movement speed, relatively moving the workpiece and the mounting portion at the movement speed, and performing drawing on the workpiece.

US Pat. No. 10,796,209

INK JET PRINT HEAD WITH STANDARD COMPUTER INTERFACE

XEROX CORPORATION, Norwa...

1. A print head, comprising:a standardized computer interface to allow the print head to connect directly to a standard computer;
an array of jets to deposit ink on a substrate in accordance with image data from the standard computer;
a processing element having a buffer, the buffer included in the processing element and under control of the processing element, the processing element programmed to:
receive image data through the standardized computer interface as data words corresponding to each jet;
store the words of image data received through the standardized computer interface in the buffer;
read the image data from the buffer as a subset of each word of image data in a different sequence than the image data was written;
transmit the subset of the word of image data to the array of jets when triggered by a dot clock, the buffer having a flexible depth of storage to store varying amounts of image data, wherein the buffer receives image data according to a clock from the standard computer and outputs data according to the dot clock; and
a driver to trigger individual ones of the array of jets in accordance with the subset of the word of image data, wherein the standardized computer interface, the array of jets, the processing element, the buffer and the driver all reside in the print head.

US Pat. No. 10,796,208

INLINE PRINTABLE DUPLEX COLOR FILTERS

Xerox Corporation, Norwa...

1. A method for inline rendering of a color filter within a print job, comprising:a program associated with a printing system is run for enabling selection of an overhead transparency media to serve as a color filter during inline print job finishing;
a duplex path and finishing for the overhead transparency media is enabled on the printing system by the program;
a first side of the overhead transparency media is printed on with one or more colors by the printing system by the program; and
a transform is used on a second side of the overhead transparency media to correct front to back registration and includes one or more colors printed directly underneath color printed on the first side of the overhead transparency media by the program.

US Pat. No. 10,796,207

AUTOMATIC DETECTION OF NOTEWORTHY LOCATIONS

Apple Inc., Cupertino, C...

1. A method for viewing images by a device, comprising:capturing a first image, the first image associated with location data corresponding to a location from which the first image was captured;
receiving a request to display the first image;
identifying a predefined three-dimensional representation based on the location data, wherein the predefined three-dimensional representation comprises a three-dimensional model;
detecting one or more features in the first image;
matching the detected one or more features in the first image with one or more features of the identified predefined three-dimensional representation;
determining a three-dimensional location from which the first image was captured based, at least in part, on the matched one or more features and the location data;
compositing the first image with a second image on the predefined three-dimensional model, the second image from a collection of images and associated with the detected one or more features in the first image; and
presenting the composited image along with at least a portion of the predefined three-dimensional model.

US Pat. No. 10,796,206

METHOD FOR INTEGRATING DRIVING IMAGES ACQUIRED FROM VEHICLES PERFORMING COOPERATIVE DRIVING AND DRIVING IMAGE INTEGRATING DEVICE USING SAME

StradVision, Inc., Gyeon...

1. A method for integrating driving images acquired from one or more vehicles performing a cooperative driving, comprising steps of:(a) a main driving image integrating device, installed on at least one main vehicle among said one or more vehicles, performing (i) a process of inputting at least one main driving image, acquired from at least one main camera installed on the main vehicle, into a main object detector, to thereby allow the main object detector to (i-1) generate at least one main feature map by applying at least one convolution operation to the main driving image via a main convolutional layer, (i-2) generate one or more main ROIs (Regions Of Interest), corresponding to one or more regions where one or more main objects are estimated as located, on the main feature map, via a main region proposal network, (i-3) generate one or more main pooled feature maps by applying at least one pooling operation to one or more regions, corresponding to the main ROIs, on the main feature map, via a main pooling layer, and (i-4) generate multiple pieces of main object detection information on the main objects located on the main driving image by applying at least one fully-connected operation to the main pooled feature maps via a main fully connected layer;
(b) the main driving image integrating device performing a process of inputting the main pooled feature maps into a main confidence network, to thereby allow the main confidence network to generate each of one or more main confidences of each of the main ROIs corresponding to each of the main pooled feature maps; and
(c) the main driving image integrating device performing a process of acquiring multiple pieces of sub-object detection information and one or more sub-confidences from each of one or more sub-vehicles in the cooperative driving, and a process of integrating the multiple pieces of the main object detection information and the multiple pieces of the sub-object detection information by using the main confidences and the sub-confidences as weights, to thereby generate at least one object detection result of the main driving image,
wherein the multiple pieces of the sub-object detection information and the sub-confidences are generated by each of one or more sub-driving image integrating devices, installed on each of the sub-vehicles,
wherein each of the sub-driving image integrating devices performs (i) a process of inputting each of sub-driving images into corresponding each of sub-object detectors, to thereby allow said each of the sub-object detectors to (i-1) generate each of sub-feature maps by applying at least one convolution operation to each of the sub-driving images via corresponding each of sub-convolutional layers, (i-2) generate one or more sub-ROIs, corresponding to one or more regions where one or more sub-objects are estimated as located, on each of the sub-feature maps, via corresponding each of sub-region proposal networks, (i-3) generate each of one or more sub-pooled feature maps by applying at least one pooling operation to one or more regions, corresponding to each of the sub-ROIs, on each of the sub-feature maps, via corresponding each of sub-pooling layers, (i-4) generate the multiple pieces of the sub-object detection information on the sub-objects located on each of the sub-driving images by applying at least one fully connected operation to each of the sub-pooled feature maps via corresponding each of sub fully connected layers, and (i-5) input each of the sub-pooled feature maps into corresponding each of sub-confidence networks, to thereby allow each of the sub-confidence networks to generate the sub-confidences of the sub-ROIs corresponding to each of the sub-pooled feature maps,
wherein the main object detector and the main confidence network have been learned by a learning device,
wherein the learning device has learned the main object detector by performing, if training data including one or more driving images for training are acquired, (i) a process of sampling (i-1) 1-st training data including a (1_1)-st driving image for training to a (1_m)-th driving image for training wherein m is an integer larger than 0 and (i-2) 2-nd training data including a (2_1)-st driving image for training to a (2_n)-th driving image for training wherein n is an integer larger than 0, from the training data, (ii) a process of inputting a (1_j)-th driving image for training, among the (1_1)-st driving image for training to the (1_m)-th driving image for training, into the main convolutional layer, to thereby allow the main convolutional layer to generate at least one 1-st feature map by applying at least one convolution operation to the (1_j)-th driving image for training, (iii) a process of inputting the 1-st feature map into the main region proposal network, to thereby allow the main region proposal network to generate one or more 1-st ROIs, corresponding to one or more objects for training, on the 1-st feature map, (iv) a process of instructing the main pooling layer to generate one or more 1-st pooled feature maps by applying at least one pooling operation to one or more regions, corresponding to the 1-st ROIs, on the 1-st feature map, (v) a process of instructing the main fully connected layer to generate multiple pieces of 1-st object detection information corresponding to the objects for training located on the (1_j)-th driving image for training by applying at least one fully-connected operation to the 1-st pooled feature maps or at least one 1-st feature vector corresponding to the 1-st pooled feature maps, (vi) a process of instructing a 1-st loss layer to calculate one or more 1-st losses by referring to the multiple pieces of the 1-st object detection information and at least one object ground truth of the (1_j)-th driving image for training, and (vii) a process of updating at least one parameter of the main fully connected layer and the main convolutional layer via backpropagation using the 1-st losses such that the 1-st losses are minimized, for each of the (1_1)-st driving image for training to the (1_m)-th driving image for training, and
wherein the learning device has learned the main confidence network by performing (i) a process of acquiring each of one or more 1-st confidences of each of the 1-st ROIs by referring to the object ground truth and the multiple pieces of the 1-st object detection information corresponding to each of the (1_1)-st driving image for training to the (1_m)-th driving image for training, (ii) a process of inputting a (2_k)-th driving image for training, among the (2_1)-st driving image for training to the (2_n)-th driving image for training, into the main convolutional layer, to thereby allow the main convolutional layer to generate at least one 2-nd feature map by applying at least one convolution operation to the (2_k)-th driving image for training, (iii) a process of inputting the 2-nd feature map into the main region proposal network, to thereby allow the main region proposal network to generate one or more 2-nd ROIs corresponding to the objects for training located on the 2-nd feature map, (iv) a process of instructing the main pooling layer to generate one or more 2-nd pooled feature maps by applying at least one pooling operation to one or more regions, corresponding to the 2-nd ROIs, on the 2-nd feature map, (v) a process of inputting the 2-nd pooled feature maps into the main confidence network, to thereby allow the main confidence network to generate one or more 2-nd confidences corresponding to the 2-nd pooled feature maps through deep learning, (vi) a process of instructing a 2-nd loss layer to calculate one or more 2-nd losses by referring to the 2-nd confidences and the 1-st confidences, and (vii) a process of updating at least one parameter of the main confidence network via backpropagation using the 2-nd losses such that the 2-nd losses are minimized, for each of the (2_1)-st driving image for training to the (2_n)-th driving image for training.
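Step (c)'s confidence-weighted integration can be sketched as a weighted average of matched bounding boxes, with the main and sub confidences as weights; how detections are matched across vehicles is not addressed here, so the toy example fuses a single shared object.

# Confidence-weighted fusion of detections of the same object.
def fuse_detections(detections):
    """detections: list of (box as [x1, y1, x2, y2], confidence)."""
    total = sum(conf for _, conf in detections)
    fused = [
        sum(conf * box[i] for box, conf in detections) / total
        for i in range(4)
    ]
    return fused, total / len(detections)

main_det = ([100, 50, 180, 120], 0.9)    # from the main driving image
sub_dets = [([104, 52, 182, 118], 0.6),  # from two sub-vehicles
            ([ 98, 49, 179, 121], 0.8)]

box, avg_conf = fuse_detections([main_det] + sub_dets)
print([round(v, 1) for v in box], round(avg_conf, 2))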

US Pat. No. 10,796,205

MULTI-VIEW VECTOR PROCESSING METHOD AND MULTI-VIEW VECTOR PROCESSING DEVICE

FUJITSU LIMITED, Kawasak...

1. A method of multi-view vector processing by a processor, where a multi-view vector x represents an object containing information on at least two non-discrete views, the method comprising:establishing a model of the multi-view vector x, where the model includes at least components of: a population mean μ of the multi-view vector x, a view component of a view among the at least two non-discrete views of the multi-view vector x and noise ε; and
using training data of the multi-view vector x to obtain the population mean μ, parameters of the view component and parameters of the noise ε,
where,
the multi-view vector is obtained by processing a feature vector with a classifier, and the feature vector is obtained by directly vectorizing the object, and
the classifier is configured to relatively separate the multi-view vector from the feature vector obtained by directly vectorizing the object to be represented, and a discreteness between an excluded view and the two non-discrete views of the multi-view vector x is higher than a discreteness between the two non-discrete views of the multi-view vector x.
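A numeric reading of the claimed model, x = μ + view component + ε: the sketch below generates multi-view vectors from that decomposition and recovers the population mean and view components with simple moment estimates, which stand in for whatever training procedure the patent actually uses; the offsetting pair of views is a contrivance that keeps the toy identifiable.

# Toy generative model and moment-based parameter recovery.
import numpy as np

rng = np.random.default_rng(0)
dim, n = 4, 200
mu = rng.normal(0.0, 1.0, dim)                 # population mean
view_a = rng.normal(0.0, 0.5, dim)
views = {"view_A": view_a, "view_B": -view_a}  # offsetting views keep the toy identifiable

def sample(view):
    return mu + views[view] + rng.normal(0.0, 0.1, dim)   # x = mean + view component + noise

data = {v: np.array([sample(v) for _ in range(n)]) for v in views}
mu_hat = np.mean([d.mean(axis=0) for d in data.values()], axis=0)
view_hat = {v: d.mean(axis=0) - mu_hat for v, d in data.items()}

print(np.allclose(mu_hat, mu, atol=0.05))                  # True: mean recovered
print(np.allclose(view_hat["view_A"], view_a, atol=0.05))  # True: view component recovered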

US Pat. No. 10,796,204

PLANNING SYSTEM AND METHOD FOR CONTROLLING OPERATION OF AN AUTONOMOUS VEHICLE TO NAVIGATE A PLANNED PATH

Huawei Technologies Co., ...

1. A planning system for a vehicle, the planning system comprising:a plurality of hierarchical software layers including:
a mission planning layer comprising one or more neural networks configured to determine an optimal route for the vehicle or mobile robot based on a start point, an end point and a digital map of an environment surrounding the vehicle;
a behaviour planning layer comprising one or more neural networks, the behaviour planning layer configured to receive the optimal route determined by the mission planning layer and sensor data sensed by a plurality of sensors of the vehicle, each neural network of the behaviour planning layer configured to predict a respective behavior task for the vehicle based on the sensor data and the optimal route; and
a motion planning layer comprising one or more neural networks, the motion planning layer configured to receive each behaviour task predicted by the behaviour planning layer and the sensor data, each neural network of the motion planning layer configured to predict a respective motion task for the vehicle based on the received behavior tasks and the sensor data, wherein the behaviour planning layer is feed-associated with the motion planning layer, and wherein the behaviour planning layer is configured to feed-forward information to the motion planning layer and receive feedback of information from the motion planning layer.

US Pat. No. 10,796,203

OUT-OF-SAMPLE GENERATING FEW-SHOT CLASSIFICATION NETWORKS

International Business Ma...

1. A method comprising:training a model using a plurality of pairs of feature vectors related to a first class;
providing a sample feature vector related to a second class as an input to the model;
receiving at least one synthesized feature vector as an output from the model;
training a classifier to recognize the second class using a training data set comprising the sample feature vector related to the second class and the at least one synthesized feature vector;
providing a query feature vector as an input to the classifier; and
receiving output from the classifier that identifies the query feature vector as being related to the second class, wherein the output is used to perform an action.
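
The claim above describes synthesizing feature vectors for a data-poor class from a model trained on pairs drawn from a data-rich class, then training a classifier on the real and synthesized vectors. The sketch below is a simplified analogue, not the patented model: it reuses within-class offsets observed in base-class pairs and trains a nearest-centroid classifier; all data and names are hypothetical.

```python
# Minimal sketch (simplified analogue): learn within-class offsets from pairs of
# base-class feature vectors, reuse them to synthesize extra vectors around a
# single novel-class sample, then train a nearest-centroid classifier.
import numpy as np

rng = np.random.default_rng(0)
base_class = rng.normal(0.0, 1.0, (50, 16))            # many samples of a first class
pairs = [(base_class[i], base_class[j]) for i in range(10) for j in range(10) if i != j]
offsets = np.array([b - a for a, b in pairs])           # "model" of within-class variation

novel_sample = rng.normal(3.0, 1.0, 16)                 # single sample of a second class
synthesized = novel_sample + offsets[rng.choice(len(offsets), 20)]

train_x = np.vstack([novel_sample[None, :], synthesized])
centroid_novel = train_x.mean(axis=0)
centroid_base = base_class.mean(axis=0)

def classify(query):
    # nearest-centroid classifier over {base class, novel class}
    d_base = np.linalg.norm(query - centroid_base)
    d_novel = np.linalg.norm(query - centroid_novel)
    return "novel" if d_novel < d_base else "base"

print(classify(rng.normal(3.0, 1.0, 16)))               # expected: "novel"
```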

US Pat. No. 10,796,202

SYSTEM AND METHOD FOR BUILDING AN EDGE CNN SYSTEM FOR THE INTERNET OF THINGS

VIMOC Technologies, Inc.,...

19. A method of training on-site processors to analyze image data from multiple cameras and identify transitory objects in near real time, the method including:using a trio of convolutional neural networks (abbreviated CNN) running on cloud-based and on-site hardware: a big cloud CNN, a small cloud CNN and an on-site CNN, wherein the small cloud CNN shares structure and coefficients with the on-site CNN, such that coefficients trained on the small cloud CNN can be transferred to the on-site CNN;
collecting at least five hundred site-specific images from cameras to be analyzed by the on-site CNN;
analyzing the site-specific images using the big cloud CNN to produce a machine generated training set for training the small cloud CNN, wherein the machine generated training set includes
an image that has regions,
for each region, coordinates of bounding boxes for any transitory objects, the bounding boxes anchored in the region, and
classification of contents of the bounding boxes as a transitory object;
using the machine generated training set to train the small cloud CNN; and
transferring coefficients from the trained small cloud CNN to the on-site CNN, thereby configuring the on-site CNN to recognize the transitory objects in images from the cameras.
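
The key mechanism in the claim above is that a large model labels site-specific images, a smaller cloud model is trained on those machine-generated labels, and its coefficients are copied to an on-site model that shares its structure. The PyTorch sketch below illustrates that flow under assumed toy architectures; it is not the patented system.

```python
# Minimal sketch (illustrative only): a big model produces machine-generated
# labels, a small cloud model is trained on them, and its coefficients are
# copied to an on-site model that shares its structure.
import torch
import torch.nn as nn

def make_small_cnn(num_classes=2):
    # shared architecture for the small cloud CNN and the on-site CNN
    return nn.Sequential(
        nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Linear(8, num_classes))

big_cloud_cnn = make_small_cnn()        # stand-in; in practice a larger pretrained model
small_cloud_cnn = make_small_cnn()
on_site_cnn = make_small_cnn()

images = torch.randn(16, 3, 64, 64)     # site-specific images (dummy data)
with torch.no_grad():
    pseudo_labels = big_cloud_cnn(images).argmax(dim=1)   # machine-generated training set

opt = torch.optim.Adam(small_cloud_cnn.parameters(), lr=1e-3)
for _ in range(10):                      # train the small cloud CNN on the pseudo-labels
    loss = nn.functional.cross_entropy(small_cloud_cnn(images), pseudo_labels)
    opt.zero_grad(); loss.backward(); opt.step()

# transfer coefficients from the trained small cloud CNN to the on-site CNN
on_site_cnn.load_state_dict(small_cloud_cnn.state_dict())
```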

US Pat. No. 10,796,201

FUSING PREDICTIONS FOR END-TO-END PANOPTIC SEGMENTATION

TOYOTA RESEARCH INSTITUTE...

1. A method for controlling a vehicle based on a panoptic map, comprising:receiving an input from at least one sensor of the vehicle;
generating an instance map and a semantic map from the input;
generating, based on the input, a context map identifying at least one of scene depth, an edge of the objects, surface normals of the objects, or an optical flow of the objects;
generating a binary mask based on the input, the instance map, and the semantic map;
generating the panoptic map by applying the binary mask to the instance map, the context map, and the semantic map; and
controlling the vehicle based on the panoptic map.
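
The fusion step in the claim above applies a binary mask to the instance and semantic maps to form a panoptic map. The sketch below shows a minimal version of that masking (it omits the context map and uses an assumed label-encoding convention); it is illustrative only.

```python
# Minimal sketch (illustrative fusion only): combine an instance map and a
# semantic map into a panoptic map using a binary mask that marks "thing" pixels.
import numpy as np

semantic_map = np.array([[0, 0, 1],
                         [0, 2, 2],
                         [2, 2, 2]])          # class id per pixel (0 = road, 1 = sky, 2 = car)
instance_map = np.array([[0, 0, 0],
                         [0, 7, 7],
                         [8, 8, 8]])          # instance id per pixel (0 = no instance)
binary_mask = instance_map > 0                # True where an instance ("thing") was detected

# encode panoptic labels as class_id * 1000 + instance_id for thing pixels,
# and keep the semantic ("stuff") label elsewhere
panoptic_map = np.where(binary_mask, semantic_map * 1000 + instance_map, semantic_map)
print(panoptic_map)
```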

US Pat. No. 10,796,200

TRAINING IMAGE SIGNAL PROCESSORS USING INTERMEDIATE LOSS FUNCTIONS

Intel Corporation, Santa...

1. An apparatus for training image signal processors, comprising:an image signal processor to be trained, the image signal processor to generate a reconstructed image based on a sensor image;
an intermediate loss function generator to generate an intermediate loss function based on a comparison of intermediate outputs of one or more corresponding intermediate layers of a computer vision network and a copy of the computer vision network, wherein the computer vision network generates an intermediate output based on the reconstructed image and the copy of the computer vision network generates an intermediate output at a corresponding intermediate layer based on an image from a dataset used to generate the sensor image; and
a parameter modifier to modify one or more parameters of the image signal processor based on the intermediate loss function.
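
The intermediate loss in the claim above compares activations at a corresponding intermediate layer of a vision network and its copy, one fed the ISP's reconstructed image and the other fed the clean source image. The PyTorch sketch below is illustrative only; the tiny networks, layer choice, and loss are assumptions.

```python
# Minimal sketch (illustrative only): an "intermediate loss" comparing the
# activations of a matching intermediate layer when a vision network sees the
# ISP's reconstructed image versus the original clean image.
import torch
import torch.nn as nn

class TinyVisionNet(nn.Module):                 # hypothetical stand-in for the computer-vision network
    def __init__(self):
        super().__init__()
        self.block1 = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU())
        self.block2 = nn.Sequential(nn.Conv2d(8, 16, 3, padding=1), nn.ReLU())

    def forward(self, x, return_intermediate=False):
        h = self.block1(x)                      # intermediate layer output
        out = self.block2(h)
        return (out, h) if return_intermediate else out

vision_net = TinyVisionNet()
vision_net_copy = TinyVisionNet()
vision_net_copy.load_state_dict(vision_net.state_dict())   # frozen copy of the vision network
for p in vision_net_copy.parameters():
    p.requires_grad_(False)

isp = nn.Conv2d(3, 3, 3, padding=1)             # toy "image signal processor" to be trained
clean_image = torch.rand(1, 3, 32, 32)          # image from the dataset
sensor_image = clean_image + 0.1 * torch.randn_like(clean_image)

reconstructed = isp(sensor_image)
_, feat_reconstructed = vision_net(reconstructed, return_intermediate=True)
_, feat_clean = vision_net_copy(clean_image, return_intermediate=True)

intermediate_loss = nn.functional.mse_loss(feat_reconstructed, feat_clean)
intermediate_loss.backward()                    # gradients flow back through the vision network into the ISP
```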

US Pat. No. 10,796,199

IMAGE RECOGNITION AND AUTHENTICATION

Alibaba Group Holding Lim...

1. A computer-implemented method, comprising:obtaining a target image of a target object, wherein the target image comprises a recognition feature formed from a recognition identifier mapped onto the target object, wherein the recognition identifier comprises an optical image projected onto the target object;
determining, from the target image, an attribute resulting from the projection of the optical image onto the target object, wherein the attribute is mapped to the recognition feature; and
authenticating the target object based on a determining result, wherein the determining result comprises the attribute and extracted information from the target object.

US Pat. No. 10,796,198

ADJUSTING ENHANCEMENT COEFFICIENTS FOR NEURAL NETWORK ENGINE

Western Digital Technolog...

1. A device for training a convolutional neural network comprising a plurality of layers, the device comprising:an array comprising a plurality of processing units including processing circuitry and memory, wherein the array is configured to transmit data systolically between particular processing units;
a computer-readable memory storing instructions for using the array to perform computations of the neural network during the training; and
a controller configured by the instructions to:
provide input data representing an image into the array, the image including an array of pixels;
perform a forward pass of the input data through the plurality of layers;
for a particular location in the array of pixels, generate a pixel vector representing values output by the plurality of layers for that particular location, wherein the pixel vector includes a first value generated by a first layer of the plurality of layers and a second value generated by a second layer of the plurality of layers, wherein the second layer is deeper along the plurality of layers of the convolutional neural network than the first layer; and
adjust an enhancement coefficient of the first value of the first layer based on the second value of the second layer.

US Pat. No. 10,796,197

AUTOMATIC METHOD AND SYSTEM FOR SIMILAR IMAGES AND IMAGE FRAGMENTS DETECTION BASING ON IMAGE CONTENT

1. Automatic method for similar images and image fragments detection based on image content, characterized in that, through a computer system whose control unit contains additional interconnected modules:module for input configuration, to set up data storage and input image;
module for image segmentation;
module for segments processing;
module for numeric vector development;
module for numeric vector storage;
user module for data representation, the following stages are carried out:
a) specifying at least one input image;
b) specifying options of the input image processing;
c) carrying out the input image processing, according to the options, selected at stage b);
d) choosing through the segmentation, at least one geometric shape;
e) normalizing the indicated geometric shapes and computing the ratio of area to outline length Sn/Ln of each normalized geometric shape for creation of the numerical vector Ven, and additionally the geometric parameters: area Sn, outline length Ln, shortest projection Psn and longest projection Pln on the X and Y axes, and the number of outline angles An of the geometric shape;
f) creating numeric vector Ven, which includes geometric parameters calculated at stage e), for each geometric shape selected at stage d), wherein the numeric vector Ven expressed by the following formula:
Ven=[Sn1/Ln1, Sn2/Ln2, Sn3/Ln3, . . . , Snk/Lnk], where k is the number of geometrical shapes (segments) allocated on the image;
g) saving the numeric vector built at stage f) in data storage and in the module for numeric vector storage of the control unit;
h) calculating the difference dV between the numerical vector Ven, built at stage f), and image numerical vector Vdb previously saved in a data storage for all comparative images;
i) identifying images as similar images, if the dV difference is less than the specified boundary value.
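
Stages e) through i) above compute a per-shape area-to-outline ratio, assemble the ratios into a vector, and compare that vector against stored vectors with a difference threshold. The sketch below illustrates those stages for polygonal segments; the polygons, vectors, and threshold are assumed examples, not data from the patent.

```python
# Minimal sketch (illustrative only): build a numeric vector of area-to-outline
# ratios (Sn/Ln) for a set of polygonal segments and compare it against a stored
# vector using a simple difference threshold.
import numpy as np

def shape_ratio(polygon):
    """Area / perimeter of a closed polygon given as a (k, 2) array of vertices."""
    x, y = polygon[:, 0], polygon[:, 1]
    area = 0.5 * abs(np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1)))   # shoelace formula
    perimeter = np.sum(np.linalg.norm(polygon - np.roll(polygon, 1, axis=0), axis=1))
    return area / perimeter

def numeric_vector(polygons):
    return np.array([shape_ratio(p) for p in polygons])      # Ven = [S1/L1, ..., Sk/Lk]

square = np.array([[0, 0], [0, 10], [10, 10], [10, 0]], dtype=float)
triangle = np.array([[0, 0], [10, 0], [5, 8]], dtype=float)

ven = numeric_vector([square, triangle])                      # query image vector
vdb = numeric_vector([square * 1.02, triangle * 1.02])        # previously stored vector

dV = np.linalg.norm(ven - vdb)
boundary_value = 0.5
print("similar" if dV < boundary_value else "different")
```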

US Pat. No. 10,796,196

LARGE SCALE IMAGE RECOGNITION USING GLOBAL SIGNATURES AND LOCAL FEATURE INFORMATION

Nant Holdings IP, LLC, C...

1. A computer-based method for conducting an image recognition search, comprising:obtaining, by a computing device, one or more global signatures for a query image, wherein a global signature is a full image descriptor that can represent an entire image, and wherein the one or more global signatures includes a machine learning signature;
determining, by the computing device, a ranking order for a plurality of document images based on nearest neighbor relations between document signatures corresponding to the plurality of document images and each one of the one or more global signatures for the query image;
selecting, by the computing device, a subset of the plurality of document images based on the determined ranking order;
obtaining, by the computing device, additional document data corresponding to the selected subset of the plurality of document images, wherein the obtained additional document data comprises, for each document image of the selected subset of the plurality of document images, an at least partially compressed data set that includes a global signature of the document image and, for each local feature of the document image, one or more of (1) an indication of at least one of a location, orientation and scale, and (2) an indication of at least one of a 3D location and a surface normal of the 3D location; and
generating, by the computing device, a search result of document images filtered by using a geometric verification between the additional document data corresponding to the selected subset of the plurality of document images and the query image, wherein the geometric verification, using a distance check threshold, compares at least a portion of the at least partially compressed data set for each document image of the selected subset of the plurality of document images with a feature descriptor from the query image.
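
The claim above ranks stored images by global-signature similarity, selects a shortlist, and then filters it using local feature data and a distance check. The sketch below shows that two-stage pattern with random vectors and a simple descriptor distance standing in for geometric verification; all data, dimensions, and thresholds are assumptions.

```python
# Minimal sketch (illustrative only): rank stored document images by the
# nearest-neighbour distance between their global signatures and the query's
# global signature, keep a shortlist, and filter it with a descriptor distance
# check standing in for geometric verification.
import numpy as np

rng = np.random.default_rng(0)
num_docs, dim = 1000, 128
doc_signatures = rng.normal(size=(num_docs, dim))
doc_signatures /= np.linalg.norm(doc_signatures, axis=1, keepdims=True)

query_signature = doc_signatures[42] + 0.05 * rng.normal(size=dim)   # query resembles doc 42
query_signature /= np.linalg.norm(query_signature)

# ranking order by global-signature distance (smaller = nearer neighbour)
distances = np.linalg.norm(doc_signatures - query_signature, axis=1)
shortlist = np.argsort(distances)[:10]                                # selected subset

# toy "local feature" descriptors; a real system would store compressed local data per image
doc_local = rng.normal(size=(num_docs, 32))
query_local = doc_local[42] + 0.05 * rng.normal(size=32)

distance_check_threshold = 1.0
verified = [int(d) for d in shortlist
            if np.linalg.norm(doc_local[d] - query_local) < distance_check_threshold]
print("top match after verification:", verified[:1])                  # expected: [42]
```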

US Pat. No. 10,796,195

PERCEPTUAL DATA ASSOCIATION

Disney Enterprises, Inc.,...

1. A method, comprising:receiving, from a first sensor disposed at a first position in an environment, a first series of local scene graphs comprising a first value of a first characteristic of an object in the environment, wherein the first series of local scene graphs is associated with first local timing information for the first sensor;
receiving, from a second sensor disposed at a second position in the environment, a second series of local scene graphs comprising a second value of the first characteristic of the object, wherein the second series of local scene graphs is associated with second local timing information for the second sensor and wherein the first series of local scene graphs are asynchronous with respect to the second series of local scene graphs; and
outputting a series of global scene graphs including global characteristics of the object derived from merging the first series of local scene graphs with the second series of local scene graphs using the first local timing information and the second local timing information.

US Pat. No. 10,796,194

MOTION-AWARE KEYPOINT SELECTION SYSTEM ADAPTABLE TO ITERATIVE CLOSEST POINT

NCKU Research and Develop...

1. A motion-aware keypoint selection system adaptable to iterative closest point (ICP), comprising:a pruning unit that receives an image and selects at least one region of interest (ROI) composed of a selected subset of points on the image;
a point quality estimation unit that receives the ROI and generates point quality; and
a suppression unit that receives the point quality and generates keypoints;
wherein a near edge region (NER) is selected as the ROI;
wherein the point quality estimation unit comprises:
a model selection unit that generates a plurality of mean distances associated with different point motions for an image pair; and
an estimation model unit that receives the mean distance for the image pair and the point depth according to the point motion, thereby generating the point quality.

US Pat. No. 10,796,193

DIGITAL IMAGE PRESENTATION

eBay Inc., San Jose, CA ...

15. A computer implemented method to present digital images, the method comprising:storing a digital image in a database;
receiving a request for information associated with the digital image from a digital device; and
in response to the request, providing instructions and the digital image for transmission to the digital device, the instructions directing the digital device to display only a portion of the digital image that is less than a whole of the digital image in a display area of the digital device while maintaining the digital image as stored in the database, the display area being one of a plurality of display areas that concurrently display images on a display of the digital device.

US Pat. No. 10,796,192

TRACK FEATURE DETECTION USING MACHINE VISION

HARSCO TECHNOLOGIES LLC, ...

1. A railroad track feature detection system comprising:a camera;
at least one light source; and
a computing apparatus comprising:
at least one memory comprising instructions; and
at least one processing device configured to execute the instructions, wherein the instructions cause the at least one processing device to perform the operations comprising:
capturing, using the camera, an image of a railroad track, wherein the at least one light source is used to capture the image, the image being composed of pixels;
determining, using a graphical processing unit (GPU) comprised in the at least one processing device, at least one color value of each pixel comprised in the image;
identifying, using a visual recognition unit comprised in the at least one processing device, an object in the image based on the determined color values, wherein the at least one processing device is configured to measure a shadow to identify a railroad track feature; and
assigning, using a tagging unit comprised in the at least one processing device, an identifier associated with the railroad track feature and a location to the identified object in a database.

US Pat. No. 10,796,191

DEVICE AND METHOD FOR PROCESSING A HISTOGRAM OF ARRIVAL TIMES IN AN OPTICAL SENSOR

STMicroelectronics SA, M...

1. A device, comprising:a plurality of optical emitters configured to emit incident radiation within a field of view of the device;
a plurality of optical detectors configured to receive reflected radiation and to generate a histogram based on the incident radiation and the reflected radiation, the histogram being indicative of a number of photon events detected by the plurality of optical detectors over a plurality of time bins, the plurality of time bins being indicative of a plurality of time differences between emission of the incident radiation and reception of the reflected radiation; and
a processor programmed to iteratively process the histogram by executing an expectation-maximization algorithm to detect a presence of objects located in the field of view of the device.
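
The processor in the claim above iteratively runs expectation-maximization over the photon-arrival histogram to find returns from objects. The sketch below fits a small mixture of Gaussian return peaks plus a uniform ambient term to a synthetic histogram; the model, initialization, and iteration count are assumptions, and this is one possible EM formulation rather than the patented one.

```python
# Minimal sketch (illustrative only): an expectation-maximization loop that fits
# a mixture of Gaussian return peaks plus a uniform ambient component to a
# photon-arrival histogram, as one way to detect objects in such data.
import numpy as np

bins = np.arange(64)                                  # time bins
true_peaks = [18.0, 41.0]
hist = (200 * np.exp(-0.5 * ((bins[:, None] - true_peaks) / 1.5) ** 2)).sum(axis=1)
hist += np.random.default_rng(0).poisson(5, size=bins.size)   # ambient/noise counts

mu = np.array([10.0, 50.0])                           # initial guesses for peak positions
sigma = np.array([2.0, 2.0])
weights = np.array([0.45, 0.45, 0.10])                # two peaks + uniform ambient term

for _ in range(50):
    # E-step: responsibility of each component for each bin (weighted by counts)
    gauss = np.exp(-0.5 * ((bins[:, None] - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    comp = np.column_stack([gauss, np.full(bins.size, 1.0 / bins.size)]) * weights
    resp = comp / comp.sum(axis=1, keepdims=True)
    # M-step: update peak positions, widths, and mixture weights from the histogram
    counts = resp * hist[:, None]
    weights = counts.sum(axis=0) / counts.sum()
    mu = (counts[:, :2] * bins[:, None]).sum(axis=0) / counts[:, :2].sum(axis=0)
    sigma = np.sqrt((counts[:, :2] * (bins[:, None] - mu) ** 2).sum(axis=0)
                    / counts[:, :2].sum(axis=0))

print("estimated peak bins:", mu)                     # should approach 18 and 41
```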

US Pat. No. 10,796,190

METHODS AND APPARATUS FOR IMAGING OF LAYERS

Massachusetts Institute o...

1. A method comprising:(a) illuminating with terahertz light an object that includes at least a first layer, a second layer and a third layer, the first, second and third layers being opaque in the visible spectrum and being occluded, in the visible spectrum, from direct view of a sensor;
(b) taking, at different times, measurements of terahertz light that has reflected from the first, second and third layers and is incident on the sensor, each measurement being a measurement of strength of an electric field at a pixel of the sensor at a particular time, the electric field being a function of intensity of incident terahertz light;
(c) identifying peaks in amplitude of a digital time-domain signal, which signal encodes the measurements; and
(d) for each particular identified peak
(i) selecting a time window that includes a time at which the particular identified peak occurs,
(ii) calculating a discrete Fourier transform (DFT) of the time-domain signal in the time window,
(iii) calculating a set of frequency frames, in such a way that each particular frequency frame in the set is calculated for a particular frequency in the amplitude spectrum of the DFT, which particular frequency is different than that of any other frequency frame in the set,
(iv) calculating kurtosis of each frequency frame in the set,
(v) selecting a subset of the frequency frames, in such a way that kurtosis of each frequency frame in the subset exceeds a specified threshold, and
(vi) averaging the subset of frequency frames, on a pixel-by-pixel basis, to produce a frequency image for that particular identified peak;wherein the method produces at least a first frequency image of the first layer, a second frequency image of the second layer, and a third frequency image of the third layer.
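
Step (d) of the claim above windows the time-domain signal around a peak, takes a DFT, keeps only the frequency frames whose kurtosis exceeds a threshold, and averages them into a frequency image. The sketch below reproduces that sequence on synthetic per-pixel data; the window, threshold, and kurtosis definition are assumptions for illustration.

```python
# Minimal sketch (illustrative only): for one identified peak, window the
# per-pixel time-domain signal, take a DFT, compute the kurtosis of each
# frequency frame, and average the frames whose kurtosis exceeds a threshold.
import numpy as np

rng = np.random.default_rng(0)
signal = rng.normal(0.0, 0.05, (32, 32, 256))          # per-pixel time-domain measurements
signal[14:18, 14:18, 100:140] += np.sin(np.linspace(0, 12 * np.pi, 40))  # reflection from one layer

window = signal[:, :, 90:150]                           # time window around an identified peak
spectrum = np.abs(np.fft.rfft(window, axis=-1))         # frequency frames: one image per frequency

def kurtosis(frame):
    centered = frame.ravel() - frame.mean()
    return (centered ** 4).mean() / (centered ** 2).mean() ** 2   # (non-excess) kurtosis

k = np.array([kurtosis(spectrum[:, :, f]) for f in range(spectrum.shape[-1])])
threshold = 8.0                                          # assumed kurtosis threshold
mask = k > threshold
selected = spectrum[:, :, mask] if mask.any() else spectrum

frequency_image = selected.mean(axis=-1)                 # pixel-by-pixel average of the kept frames
print("kept", int(mask.sum()), "of", spectrum.shape[-1], "frequency frames")
```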

US Pat. No. 10,796,189

AUTOMATED SYSTEM AND METHODOLOGY FOR FEATURE EXTRACTION

Pictometry International ...

1. An automated computerized system, comprising:a computer system executing image display and analysis software reading:
at least one image having corresponding location data indicative of position and orientation of an image capturing device used to capture the image, the image depicting an object of interest; and,
at least one database storing data points of a point cloud of the object of interest; and
wherein the image display and analysis software executed by the computer system determines and isolates the object of interest within the point cloud forming a modified point cloud having one or more data points with first location coordinates of the object of interest; generates a boundary outline having second location coordinates of the object of interest using spectral analysis of at least one section of the at least one image identified with the first location coordinates; identifies, catalogues, and stores characteristics of the object of interest within the at least one database; and generates an inventory report compiling stored characteristics of the object of interest.

US Pat. No. 10,796,188

IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND PROGRAM TO IDENTIFY OBJECTS USING IMAGE FEATURES

NEC CORPORATION, Tokyo (...

1. An image processing apparatus, comprising:a non-transitory storage device storing instructions; and
one or more processors configured by the instructions to:
generate, with respect to a plurality of feature points to be detected from a first image, a first local feature amount group including local feature amounts representing feature amounts of a plurality of local regions containing the respective feature points;
generate one or more first coordinate position information groups which include coordinate position information;
calculate a correspondence information group representing a correlation between the feature points of the first image and feature points of a second image based on an inter-feature amount distance between the first local feature amount group and second local feature amount groups which are formed from local feature amounts of feature points detected from the second image;
calculate a rotation amount of a subject of the first image by using the first coordinate position information groups, second coordinate position information groups, which are coordinate position information on the feature points detected from the second image, and the correspondence information groups;
cluster the feature points of the first image based on a coordinate position of a predefined reference point, in the first image, of the second image, the coordinate position being estimated based on a relative coordinate position of each of the feature points of the second image and the reference point, the correspondence information groups, and the first coordinate position information groups and the rotation amount;
divide the first image into regions in accordance with the result of the clustering;
collate the first local feature amount groups for the respective regions of the first image with the second local feature amount group from the second image; and
identify different subjects within the first and second images based on the collated first and second local feature amount groups.

US Pat. No. 10,796,187

DETECTION OF TEXTS

NEXTVPU (SHANGHAI) CO., L...

1. A reading assisting device, comprising:an image sensor for capturing a first image to be detected and a second image to be detected, of a text object to be detected;
a processor; and
a memory for storing a program, the program comprising instructions that, when executed by the processor, cause the processor to:
acquire the first image to be detected, captured by the image sensor, of the text object to be detected;
determine whether the first image to be detected contains a predetermined indicator;
determine, when the first image to be detected contains the predetermined indicator, a position of the predetermined indicator, and acquire the second image to be detected, captured by the image sensor, of the text object to be detected;
determine whether the second image to be detected contains the predetermined indicator; and
determine, when the second image to be detected does not contain the predetermined indicator, a text detecting region based on the determined position of the predetermined indicator,
wherein the program further comprises instructions that, when executed by the processor, cause the processor to:
output a second audio prompt and acquire a third image to be detected of the text object to be detected, before determining, when the second image to be detected does not contain the predetermined indicator, the text detecting region based on the determined position of the predetermined indicator,
wherein a resolution of the third image to be detected is higher than a resolution of the first image to be detected and a resolution of the second image to be detected.

US Pat. No. 10,796,186

PART RECOGNITION METHOD, INFORMATION PROCESSING APPARATUS, AND IMAGING CONTROL SYSTEM

FUJITSU LIMITED, Kawasak...

1. A part recognition method comprising:cutting out, by a computer, a plurality of partial images having different sizes using each of positions of an input image as a reference;
calculating a probability that each of the partial images is an image indicating a part;
calculating, for each of the positions, a score by integrating the probability for each of the partial images;
recognizing, based on the score for each of the positions, the part from the input image;
creating, for each of the positions, a heat map in which the score is stored in a pixel corresponding to the respective positions;
identifying, in the recognizing, a coordinate of a pixel having a maximum score in the heat map as a position coordinate of the part; and
correcting the score for each of the positions in the heat map based on a relative positional relationship between the adjacent positions.

US Pat. No. 10,796,185

DYNAMIC GRACEFUL DEGRADATION OF AUGMENTED-REALITY EFFECTS

Facebook, Inc., Menlo Pa...

1. A method, comprising, by a computing device:obtaining a first set of video frames associated with a scene;
generating, based on the first set of video frames, first tracking data using a first tracking algorithm;
generating, based on the first tracking data, a first confidence score associated with the first tracking algorithm, wherein the first confidence score is indicative of a confidence level of the first tracking algorithm in tracking objects in the first set of video frames;
displaying an augmented-reality effect based on the first tracking data;
generating a performance score based on a number of frames of the augmented-reality effect displayed based on the first tracking data;
selecting, in response to a determination that the first confidence score and the performance score fail to satisfy one or more criteria, a second tracking algorithm to be used for tracking objects within a second set of video frames subsequent to the first set of video frames;
switching from the first tracking algorithm to the second tracking algorithm based on the selecting;
obtaining the second set of video frames associated with the scene;
generating, based on the second set of video frames, second tracking data using the second tracking algorithm; and
displaying the augmented-reality effect based on the second tracking data.

US Pat. No. 10,796,184

METHOD FOR PROCESSING INFORMATION, INFORMATION PROCESSING APPARATUS, AND NON-TRANSITORY COMPUTER-READABLE RECORDING MEDIUM

PANASONIC INTELLECTUAL PR...

1. A method for processing information achieved by a computer using a neural network, the method comprising:inputting an image including one or more objects to the neural network;
causing a convolutional layer included in the neural network to perform convolution on a current frame included in the image to calculate a current feature map, which is a feature map at a present time;
causing a combiner for combining two or more feature maps into one feature map to combine a past feature map, which is a feature map obtained by causing the convolutional layer to perform convolution on a past frame included in the image and preceding the current frame, and the current feature map;
causing a region proposal network included in the neural network to estimate an object candidate area using the combined past feature map and current feature map, the region proposal network being used to estimate the object candidate area;
causing a region of interest pooling layer included in the neural network to estimate positional information and identification information regarding the one or more objects included in the current frame using the combined past feature map and current feature map and the estimated object candidate area, the region of interest pooling layer being used to perform class estimation; and
outputting the positional information and the identification information regarding the one or more objects included in the current frame of the image estimated in the causing as object detection results.

US Pat. No. 10,796,183

FIDUCIAL MARKER, METHOD FOR FORMING THE FIDUCIAL MARKER, AND SYSTEM FOR SENSING THEREOF

ANADOLU UNIVERSITESI, Es...

1. A fiducial marker suited to be sensed by an image sensor, comprising an external region, an inner region positioned inside said external region and which has a substantially contrast color with respect to the external region, and a plurality of pattern elements positioned inside said inner region and which have a substantially contrast color with respect to the inner region; wherein the external region has a polygonal periphery, and the inner region has a substantially circular or elliptical periphery, the pattern elements have a circular or elliptical form and pattern elements are arranged in a manner defining a pattern in the inner region.

US Pat. No. 10,796,182

INTERACTIVE OPTICAL CODES

Hewlett Packard Enterpris...

1. A non-transitory computer-readable medium storing instructions executable by a computer to:scan an optical code having an optical code portion and an interactive portion; and
modify the interactive portion of the optical code by adding content to the interactive portion associated with information represented in the optical code portion, wherein the content added to the interactive portion includes a plurality of symbols, alphanumeric characters, or symbols and alphanumeric characters,
wherein the interactive portion includes a plurality of cells or includes a space between the optical code portion and a line surrounding the optical code portion.

US Pat. No. 10,796,181

MACHINE LEARNING BASED METHOD AND SYSTEM FOR ANALYZING IMAGE ARTIFACTS AND IMAGING SYSTEM FAILURE

GE PRECISION HEALTHCARE L...

1. A method for addressing malfunction of a medical imaging device, the method comprising:classifying a type of an image artifact in a medical image acquired by the medical imaging device by using a trained machine learning model;
analyzing system data associated with acquisition of the medical image to identify one or more system parameters that might have contributed to the type of image artifact;
determining whether the malfunction is caused by a software fault or a hardware fault based on the identified one or more system parameters; and
providing an action for addressing the image artifact based on the identified one or more system parameters.

US Pat. No. 10,796,180

PARALLEL IMAGE PROCESSING FOR MULTIPLE BIOMETRICS CAPTURE

Securiport LLC, Washingt...

1. A biometric camera device for capturing multiple biometrics at least partially in parallel, the biometric camera device comprising:a plurality of image sensors including a first image sensor and a second image sensor;
a plurality of biometric processors including a first biometric processor connected to the first image sensor and a second biometric processor connected to the second image sensor,
the first biometric processor configured to receive and process image data from the first image sensor according to a first biometric algorithm to extract a first biometric template relating to a first biometric,
the second biometric processor configured to receive and process image data from the second image sensor according to a second biometric algorithm to extract a second biometric template relating to a second biometric, the second biometric being different from the first biometric, wherein image processing and extraction of the first biometric template are performed at least partially in parallel with image processing and extraction of the second biometric template; and
a controller connected to each of the plurality of biometric processors, the controller configured to receive biometric data from each of the plurality of biometric processors,
wherein the controller is configured to receive reprogramming information from a host computer and reprogram the first biometric processor based on the reprogramming information to update the first biometric algorithm or implement a new biometric algorithm at the first biometric processor.

US Pat. No. 10,796,179

LIVING FACE VERIFICATION METHOD AND DEVICE

TENCENT TECHNOLOGY (SHENZ...

1. A live human face verification method performed at a computing device having one or more processors and memory storing a plurality of programs to be executed by the one or more processors, the method comprising:acquiring, by the computing device, face images captured by at least two cameras without calibrating relative locations between the at least two cameras;
performing, by the computing device, feature point registration on the face images according to preset face feature points, to obtain corresponding feature point combinations between the face images, further including:
extracting feature points from two face images, each face image captured by a respective one of the at least two cameras;
performing similarity measurement on the extracted feature points to identify matched feature point pairs;
generating image spatial coordinate transformation parameters between the matched feature point pairs; and
performing the feature point registration between the two face images by means of the image spatial coordinate transformation parameters;
utilizing, by the computing device, preset algorithms to fit out a homography transformation matrix among the feature point combinations;
calculating, by the computing device, transformation errors of the feature point combinations using the homography transformation matrix to obtain an error calculation result; and
performing, by the computing device, live human face verification of the face images according to the error calculation result.
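
The verification in the claim above fits a homography to matched feature points from two cameras and uses the resulting transformation errors to decide liveness: a flat printed face is well explained by a single homography, while a live 3D face is not. The OpenCV sketch below illustrates that idea with synthetic point sets; the decision threshold and data are assumptions, not values from the patent.

```python
# Minimal sketch (illustrative only): fit a homography between matched feature
# points from two cameras and use the transfer error to separate a flat, printed
# face from a live 3D face.
import numpy as np
import cv2

rng = np.random.default_rng(0)
pts_cam1 = rng.uniform(100.0, 500.0, (40, 2)).astype(np.float32)   # matched feature point pairs
H_true = np.array([[1.02, 0.01, 5.0],
                   [0.00, 0.98, -3.0],
                   [0.00, 0.00, 1.0]])

def project(points, H):
    homog = np.hstack([points, np.ones((len(points), 1))]) @ H.T
    return (homog[:, :2] / homog[:, 2:]).astype(np.float32)

pts_cam2_flat = project(pts_cam1, H_true)                           # printed photo: purely planar
pts_cam2_live = pts_cam2_flat + rng.normal(0, 6.0, pts_cam1.shape).astype(np.float32)  # depth breaks planarity

def mean_transfer_error(src, dst):
    H, _ = cv2.findHomography(src, dst)                             # fit homography to the point combinations
    mapped = cv2.perspectiveTransform(src.reshape(-1, 1, 2), H).reshape(-1, 2)
    return float(np.linalg.norm(mapped - dst, axis=1).mean())

error_threshold = 3.0                                               # assumed decision threshold
for name, dst in [("flat photo", pts_cam2_flat), ("live face", pts_cam2_live)]:
    err = mean_transfer_error(pts_cam1, dst)
    verdict = "fails liveness" if err < error_threshold else "passes liveness"
    print(f"{name}: mean transfer error {err:.2f} -> {verdict}")
```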

US Pat. No. 10,796,178

METHOD AND DEVICE FOR FACE LIVENESS DETECTION

BEIJING KUANGSHI TECHNOLO...

1. A method for face liveness detection, comprising:performing an illumination liveness detection and obtaining an illumination liveness detection result; and
determining whether or not a face to be verified passes the face liveness detection at least according to the illumination liveness detection result;
wherein performing of the illumination liveness detection and obtaining of the illumination liveness detection result comprise:
acquiring a plurality of illumination images of the face to be verified, wherein the plurality of illumination images are captured in a process of dynamically changing mode of illumination light irradiated on the face to be verified and are respectively corresponding to various modes of the illumination light; and
obtaining the illumination liveness detection result according to a light reflection characteristic of the face to be verified in the plurality of illumination images,
wherein the method further comprises:
performing an action liveness detection before determining whether or not the face to be verified passes the face liveness detection; wherein
performing of the action liveness detection comprises:
outputting an action instruction used for notifying the face to be verified to execute an action corresponding to the action instruction;
acquiring an action image of the face to be verified;
detecting the action executed by the face to be verified on the basis of the action image, so as to obtain an action detection result; and
obtaining an action liveness detection result according to the action detection result and the action instruction;
and
determining of whether or not the face to be verified passes the face liveness detection at least according to the illumination liveness detection result comprises:determining whether or not the face to be verified passes the face liveness detection according to both of the illumination liveness detection result and the action liveness detection result.

US Pat. No. 10,796,177

SYSTEMS AND METHODS FOR CONTROLLING THE PLAYBACK OF VIDEO IN A VEHICLE USING TIMERS

1. A system for playing video in a vehicle comprising:one or more processors;
a memory communicably coupled to the one or more processors and storing:
a video module including instructions that when executed by the one or more processors cause the one or more processors to:
receive a first request to play a video on a display inside of the vehicle;
in response to the first request, play the video on the display inside of the vehicle;
start a first timer having a first duration;
determine that the first timer has expired;
in response to determining that the first timer has expired, stop the video from playing on the display inside of the vehicle;
receive a second request to play the video on the display inside of the vehicle;
in response to the second request, play the video on the display inside of the vehicle;
start a second timer having a second duration;
determine that the second timer has expired; and
in response to determining that the second timer has expired, stop the video from playing on the display inside of the vehicle.

US Pat. No. 10,796,175

DETECTION OF A DROWSY DRIVER BASED ON VEHICLE-TO-EVERYTHING COMMUNICATIONS

1. A method comprising:receiving, by a first connected vehicle, a Vehicle-to-Everything (V2X) message including digital data describing a path history of a second connected vehicle;
determining, by the first connected vehicle, that a second driver of the second connected vehicle is drowsy based on the path history described by the digital data included in the V2X message;
determining whether the second connected vehicle is in an automated driving mode;
responsive to the second connected vehicle not being in automated driving mode, providing a notification to a first driver of the first connected vehicle; and
responsive to the second connected vehicle being in automated driving mode, the first connected vehicle automatically taking an evasive maneuver to avoid the second connected vehicle so that a risk created by the second driver is reduced.

US Pat. No. 10,796,174

DISTANCE AND OBJECT BASED EXTERNAL NOTIFICATION SYSTEM FOR AUTOMATED HAILING SERVICE

Nissan North America, Inc...

1. An autonomous vehicle (AV), the AV comprising:a processor configured to execute instructions stored on a non-transitory computer readable medium to:
detect, based on sensor information, an object within the AV;
determine that the object belongs to a recent occupant of the AV; and
in response to determining that the object belongs to the recent occupant of the AV:
select, based on a proximity of the recent occupant to the AV, a notification modality for sending a message to the recent occupant regarding the object, wherein to select the notification modality comprises to:
in a first case that the recent occupant is outside of the AV and the recent occupant is not beyond a threshold distance from the AV, select a first notification modality for sending the message to the recent occupant; and
in a second case that the recent occupant is outside of the AV and the recent occupant is beyond the threshold distance from the AV, select a second notification modality for sending the message to the recent occupant, wherein the second notification modality is different from the first notification modality and wherein the second notification modality comprises sending an electronic notification to the recent occupant; and send the message using the notification modality.

US Pat. No. 10,796,173

VEHICLE CONTROL DEVICE

Honda Motor Co., Ltd., T...

1. A vehicle control device comprising:an external environment recognition unit configured to recognize a peripheral state of a host vehicle;
an action plan unit configured to determine an action to be performed by the host vehicle on a basis of a recognition result from the external environment recognition unit; and
a vehicle control unit configured to perform travel control of the host vehicle on a basis of a determination result from the action plan unit,
wherein:
the external environment recognition unit is configured to recognize a construction section ahead of the host vehicle and recognize that one or more recognition objects express entry possible/impossible information as to whether the host vehicle can enter the construction section; and
if the external environment recognition unit recognizes a traffic control person who directs traffic in the construction section as the recognition object and recognizes the entry possible/impossible information that is expressed by the traffic control person, the action plan unit is configured to decide whether to cause the host vehicle to enter the construction section or to stop before the construction section by using preferentially the entry possible/impossible information that is expressed by the traffic control person.

US Pat. No. 10,796,172

IMAGE PROCESSING DEVICE AND IMAGE PROCESSING METHOD

DENSO TEN Limited, Kobe ...

1. An image processing device comprising:a delimiting line detection unit configured to detect a delimiting line candidate based on image data obtained by capturing a surrounding of a vehicle, the delimiting line candidate being a candidate of a delimiting line for delimiting a parking space; and
an exclusion determination unit configured to determine whether or not to exclude the delimiting line candidate detected by the delimiting line detection unit from the candidate of the delimiting line,
wherein, in a case where
at least one white delimiting line candidate which is detected from a delimiting line having luminance higher than luminance of a road surface, and at least one black delimiting line candidate which is detected from a delimiting line having luminance lower than the luminance of the road surface are detected, and
one of the at least one white delimiting line candidate and the at least one black delimiting line candidate is set as at least one first delimiting line candidate, and other of the at least one white delimiting line candidate and the at least one black delimiting line candidate is set as at least one second delimiting line candidate while the at least one first delimiting line candidate includes a pair of first delimiting line candidates between which the at least one second delimiting line candidate is arranged and a distance between the first delimiting line candidates of the pair of first delimiting candidates is equal to or smaller than a predetermined threshold value,
the exclusion determination unit excludes the at least one second delimiting line candidate from the candidate of the delimiting line.

US Pat. No. 10,796,171

OBJECT RECOGNITION APPARATUS, OBJECT RECOGNITION METHOD, AND OBJECT RECOGNITION PROGRAM

JVCKENWOOD Corporation, ...

1. An object recognition apparatus comprising:an image acquisition unit configured to acquire a captured image of a photographic subject; and
a recognition processing unit configured to recognize the photographic subject in the acquired image using a recognition dictionary, wherein the recognition processing unit
detects a target in the acquired image using a target recognition dictionary,
detects an orientation of the target using the acquired image, wherein the orientation of the target is detected from an estimated direction in which the target travels,
selects a wheel recognition dictionary corresponding to the detected orientation,
detects a wheel at a lower part of the detected target using the selected wheel recognition dictionary, and
reflects a result of the detection of the wheel in a result of the detection of the target.

US Pat. No. 10,796,170

IMAGE INFORMATION COMPARISON SYSTEM

MITSUI KINZOKU ACT CORPOR...

1. An image information comparison system comprising:a comparison data storage that stores comparison data;
an on-board camera that is mounted on a vehicle, the camera continuously capturing an image that is external to the vehicle while the vehicle is running and while it is stopped;
a data comparator that compares image information that is captured by the on-board camera with the comparison data that are stored in the comparison data storage based on biometric authentication technology according to generation of a characteristic part or image recognition technology;
reporting means that report a result of comparison that is made by the data comparator; and
related information storage that stores a reporting level that stipulates whether or not reporting of the result of comparison is necessary,
wherein the reporting means report the result of comparison when the reporting level corresponds to a level that requires the reporting, and
the related information storage continuously records the result of comparison when the reporting level corresponds to a level that does not require the reporting.

US Pat. No. 10,796,169

PRUNING FILTERS FOR EFFICIENT CONVOLUTIONAL NEURAL NETWORKS FOR IMAGE RECOGNITION OF ENVIRONMENTAL HAZARDS

NEC Corporation, (JP)

1. A system for predicting changes to an environment, the system comprising:a plurality of remote sensors, each remote sensor being configured to capture images of an environment, the remote sensors including an image capture device and being configured to be mounted to an autonomous vehicle;
a storage device in communication with a processing device included on each of the remote sensors, the storage device including a pruned convolutional neural network (CNN) being trained to recognize obstacles according to images captured by the image capture device by training a CNN with a dataset, identifying filters from layers of the CNN that have kernel weight sums that are below a significance threshold for image recognition, removing the identified filters to produce the pruned CNN, and applying remaining filters to generate final feature maps for the pruned CNN, wherein the significance threshold is a number of smallest filters of a convolutional layer of the CNN according to corresponding absolute kernel weight sums,
wherein the processing device is configured to recognize the obstacles by analyzing the images captured by the image capture device, and to predict movement of the obstacles and changes to the environment using the pruned CNN such that the autonomous vehicle automatically avoids the obstacles based on the predicted movement of the obstacles; and
a transmitter configured to transmit the predicted movement of the obstacles and changes to the environment to a notification device such that an operator is alerted to the change.
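
The pruning step in the claim above removes the filters of a convolutional layer whose absolute kernel weight sums fall below a significance threshold, where that threshold is expressed as a count of the smallest filters. The PyTorch sketch below shows that ranking-and-removal step on a single layer; the layer sizes and the number of filters to prune are assumptions.

```python
# Minimal sketch (illustrative only): rank the filters of a convolutional layer
# by the sum of absolute kernel weights and drop the smallest ones.
import torch
import torch.nn as nn

conv = nn.Conv2d(in_channels=16, out_channels=64, kernel_size=3, padding=1)
next_conv = nn.Conv2d(in_channels=64, out_channels=32, kernel_size=3, padding=1)

num_to_prune = 16                                               # "significance threshold": n smallest filters
l1_per_filter = conv.weight.detach().abs().sum(dim=(1, 2, 3))   # absolute kernel weight sum per filter
keep = torch.argsort(l1_per_filter, descending=True)[: conv.out_channels - num_to_prune]
keep, _ = torch.sort(keep)

pruned_conv = nn.Conv2d(16, len(keep), kernel_size=3, padding=1)
pruned_conv.weight.data = conv.weight.data[keep].clone()
pruned_conv.bias.data = conv.bias.data[keep].clone()

# the following layer must drop the matching input channels
pruned_next = nn.Conv2d(len(keep), 32, kernel_size=3, padding=1)
pruned_next.weight.data = next_conv.weight.data[:, keep].clone()
pruned_next.bias.data = next_conv.bias.data.clone()

x = torch.randn(1, 16, 32, 32)
features = pruned_next(pruned_conv(x))                          # final feature maps from remaining filters
print(features.shape)                                           # torch.Size([1, 32, 32, 32])
```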

US Pat. No. 10,796,168

DEVICE AND METHOD FOR THE CHARACTERIZATION OF OBJECTS

VOLKSWAGEN AKTIENGESELLSC...

1. A method for characterizing objects to be identified, the method comprising:acquiring, by a transportation vehicle, sensor data constituting sensor raw data from a first source of sensor information and a second source of sensor information, wherein the first source is a local sensor of the transportation vehicle and the second source is a remote sensor outside the transportation vehicle, and wherein the second source sensor data is acquired via a message including a data field, the data field comprising a description field, a dynamic object container including a description of at least one dynamic object detected by the second source, and a static object container including a description of at least one static object detected by the second source;
determining at least one object to be identified based on the sensor data;
selecting that sensor information from the first source that is associated with the object to be identified and is representative of sensor raw data modified in a course of object recognition to convert the sensor data to a common data format and/or localize the sensor data in a coordinate system; and
characterizing the object to be identified by combining the acquired sensor data and the selected sensor information.

US Pat. No. 10,796,167

PERIPHERY RECOGNITION DEVICE

HITACHI AUTOMOTIVE SYSTEM...

1. A periphery recognition device comprising:a first sensor that is configured to acquire situation data of a long-distance area;
a second sensor that has a detection region having a wider angle than the first sensor and is configured to acquire situation data of a short-distance area in the detection region;
a long-distance object recognition unit configured to recognize an object present in the long-distance area based on three-dimensional long-distance data calculated based on the situation data acquired by the first sensor, wherein recognizing the object includes determining a type of the object;
a short-distance object recognition unit configured to recognize the object present in the short-distance area based on three-dimensional wide-angle short-distance data calculated based on the situation data acquired by the second sensor, wherein recognizing the object includes determining a type of the object; and
an information linkage unit configured to transfer information indicating the type of the object between the long-distance object recognition unit and the short-distance object recognition unit,
wherein at least one of the long-distance object recognition unit and the short-distance object recognition unit is configured to use the transferred information to estimate an object type of an object detected in an area outside of the long-distance area and outside of the short-distance area.

US Pat. No. 10,796,166

INFORMATION PROCESSING FOR AGGREGATING SENSOR INFORMATION ABOUT PERSONS ENTERING AND EXITING A VENUE

NEC CORPORATION, Minato-...

1. An information processing system comprising:a first sensor that acquires information about a first domain;
a plurality of second sensors, each of which acquires information about a domain included in the first domain;
a memory storing instructions; and
a processor configured to execute the instructions to:
select at least one second sensor from the plurality of second sensors based on a state of a first target obtained from the acquired information about the first domain;
aggregate the information about the domain acquired by the selected at least one second sensor;
authenticate the first target based on the aggregated information;
determine whether the authentication of the first target is complete;
exclude the first target from targets of a next authentication when it is determined that the authentication of the first target is complete; and
select at least one second sensor from the plurality of second sensors based on a state of a target whose authentication is not complete.

US Pat. No. 10,796,165

INFORMATION PROCESSING APPARATUS, METHOD FOR CONTROLLING THE SAME, AND NON-TRANSITORY COMPUTER-READABLE STORAGE MEDIUM

Canon Kabushiki Kaisha, ...

1. An information processing apparatus comprising:one or more processors; and
a memory which stores instructions executable by the one or more processors to cause the information processing apparatus to perform:
detecting an object that enters or exits a predetermined region;
managing, in a queue, data based on the detection performed in the detecting; and
counting a number of predetermined objects based on an image obtained by capturing the predetermined region,
wherein the instructions further cause the information processing apparatus to perform, in a case where the predetermined object is detected, correcting the queue based on the number counted by the counting and a number of data managed in the queue.

US Pat. No. 10,796,164

SCENE PRESET IDENTIFICATION USING QUADTREE DECOMPOSITION ANALYSIS

Intellective Ai, Inc., D...

1. A computer-implemented method, comprising:receiving a background scene;
generating a quadtree decomposition of the background scene;
determining whether a window portion of the quadtree decomposition is invalid;
discarding the window portion if the window portion is determined to be invalid;
determining whether the background scene matches a previously captured background scene, based on the quadtree decomposition;
updating the previously captured background scene when the background scene matches the previously captured background scene; and
creating a new background scene when the background scene does not match the previously captured background scene.
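
The method above decomposes the background scene into a quadtree, discards invalid windows, and compares the decomposition against previously captured scenes. The sketch below implements a variance-driven quadtree and a crude window-mean match test; the variance threshold, validity rule, and tolerance are assumptions, not the patent's criteria.

```python
# Minimal sketch (illustrative only): a quadtree decomposition of a grayscale
# background image driven by intensity variance, plus a crude match test that
# compares the mean intensity of corresponding valid windows.
import numpy as np

def quadtree(image, x=0, y=0, size=None, var_threshold=150.0, min_size=8):
    """Return a list of (x, y, size, mean) leaf windows of the decomposition."""
    if size is None:
        size = image.shape[0]
    block = image[y:y + size, x:x + size]
    if size <= min_size or block.var() <= var_threshold:
        return [(x, y, size, float(block.mean()))]
    half = size // 2
    leaves = []
    for dx, dy in [(0, 0), (half, 0), (0, half), (half, half)]:
        leaves += quadtree(image, x + dx, y + dy, half, var_threshold, min_size)
    return leaves

def is_valid(image, x, y, size, dark_limit=10.0):
    # discard windows that carry almost no information (e.g. nearly black regions)
    return image[y:y + size, x:x + size].mean() > dark_limit

def scenes_match(scene_a, scene_b, tolerance=15.0):
    leaves = [w for w in quadtree(scene_a) if is_valid(scene_a, *w[:3])]
    diffs = [abs(m - scene_b[y:y + s, x:x + s].mean()) for x, y, s, m in leaves]
    return np.mean(diffs) < tolerance

rng = np.random.default_rng(0)
background = rng.integers(80, 120, (128, 128)).astype(float)
background[:32, :32] = 200.0                               # a bright structure in one corner
print(scenes_match(background, background + rng.normal(0, 2, background.shape)))  # True
print(scenes_match(background, np.flipud(background)))     # likely False
```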

US Pat. No. 10,796,163

SURVEILLANCE VIDEO ACTIVITY SUMMARY SYSTEM AND ACCESS METHOD OF OPERATION (VASSAM)

EAGLE EYE NETWORKS, INC.,...

1. A method performed by an apparatus to transform video surveillance files into a visual summary of security events associated with motion, the method comprising steps as follows:receiving at least one stream of encoded video frames from computer-readable non-transient media;
decoding said at least one stream of encoded video frames;
masking-in frames captured during a date-time range of said decoded video frames;
decoding motion indicia of pixel blocks within said masked-in frames;
triggering event glimpses by said motion indicia of pixel blocks;
re-encoding said triggered event glimpses into a succinct surveillance summary, whereby a stream of encoded video frames is transformed by an apparatus into a visual summary of security events associated with motion captured within a date-time range; wherein triggering event glimpses comprises:
designating at least one anchor frame by motion indicia within pixel blocks above a threshold value;
determining a quota for video frames desired for said visual summary of security event; and
incorporating video frames before and after each anchor frame until said quota is fulfilled, wherein said quota is a value within a range of values.

US Pat. No. 10,796,162

INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND INFORMATION PROCESSING SYSTEM

TOSHIBA TEC KABUSHIKI KAI...

1. An information processing apparatus, comprising:a camera interface configured to connect to a camera; and
a processor configured to:
acquire shelf allocation plan information that indicates a shelf ID of each of a plurality of shelves in association with: a plurality of article IDs of articles displayed on the shelf, and a display location of each of the articles displayed on the shelf;
control the camera via the camera interface to capture an image of a plurality of articles and a shelf displaying the articles;
determine a plurality of article IDs for the articles shown in the captured image using a feature value of each of the articles;
determine a positional relationship of the articles shown in the captured image;
compare the determined positional relationship with a positional relationship of articles displayed on each of the shelves determined from the shelf allocation plan information; and
determine, as a shelf ID of the shelf shown in the captured image, a shelf ID of one of the shelves displaying articles having a positional relationship closest to the determined positional relationship, wherein
the determined shelf ID is associated with one or more of the determined article IDs.

US Pat. No. 10,796,161

SYSTEM AND METHOD FOR IDENTIFYING A NUMBER OF INSECTS IN A HORTICULTURAL AREA

ILLUMITEX, INC., Austin,...

1. A system comprising:a digital camera;
a device processor;
a data store storing trap detection parameters selected to identify pixels corresponding to insect traps and insect detection parameters selected to detect pixels corresponding to insects, the insect detection parameters including an insect recognition color and filter criteria; and
a non-transitory computer readable medium storing instructions executable by the device processor to:
capture, using the digital camera, a first digital image of an insect trap against a background of a horticultural area containing the insect trap;
isolate a portion of the first digital image using the trap detection parameters to isolate the insect trap from the background in the first digital image, the portion of the first digital image being the portion of the first digital image representing the insect trap, wherein isolating the portion of the first digital image comprises:
converting the first digital image to a specified color space to create a converted digital image;
performing a conditioning on the converted digital image to produce a conditioned digital image in which pixels representing the insect trap are further separated from pixels representing the background than in the first digital image, wherein performing the conditioning on the converted digital image to produce the conditioned digital image comprises:
performing a first overlay operation on the converted digital image to produce a first overlaid image;
performing an analysis on the first overlaid image to determine a maximal difference among pixel color values for the first overlaid image;
performing a first threshold pass on the first overlaid image to produce a first thresholding result image, wherein the first threshold pass applies a first threshold that is dependent on the maximal difference among pixel color values for the first overlaid image;
performing a blur operation to blur clustered regions in the first thresholding result image to produce a blurring result image; and
performing a second threshold pass on the blurring result image to produce a second thresholding result image, wherein the second threshold pass applies a second threshold that is dependent on the maximal difference among pixel color values for the first overlaid image;
performing an object detection to detect the insect trap in the conditioned digital image;
performing a first isolating operation on the conditioned digital image to produce an image mask; and
performing a second isolating operation on the first digital image using the image mask to isolate the portion of the first digital image as an isolated portion;
perform automated particle detection on the isolated portion of the first digital image according to the insect detection parameters to identify regions of pixels in the isolated portion of the first digital image that have the insect recognition color and that pass the filter criteria;
determine a cardinality of insects on the insect trap based on a number of identified regions of pixels;
store the cardinality of insects in association with the first digital image; and
provide the cardinality of insects for display in a graphical user interface.
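
The overall flow in the claim above is: isolate the trap from the background, detect insect-coloured pixel regions on the isolated trap, filter them, and count the surviving regions. The sketch below compresses that flow into a brightness threshold plus connected-component labelling on a synthetic image; the thresholds, colours, and minimum blob size are assumptions, and the real system's multi-pass conditioning is not reproduced.

```python
# Minimal sketch (illustrative only): isolate a bright sticky trap from a
# synthetic image by thresholding, then count dark insect-coloured blobs on the
# trap with connected-component labelling.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)
image = rng.uniform(0.2, 0.4, (120, 160))            # grayscale stand-in for the horticultural background
image[30:90, 40:120] = 0.9                           # bright trap region
for cx, cy in [(50, 60), (70, 80), (60, 100)]:       # three dark "insects" on the trap
    image[cx - 2:cx + 2, cy - 2:cy + 2] = 0.05

trap_mask = image > 0.7                              # trap detection parameters: brightness threshold
trap_mask = ndimage.binary_fill_holes(trap_mask)     # include insect pixels inside the trap outline

insect_mask = (image < 0.15) & trap_mask             # insect detection parameters: "insect colour"
labels, num_regions = ndimage.label(insect_mask)
sizes = ndimage.sum(insect_mask, labels, range(1, num_regions + 1))
insect_count = int(np.sum(sizes >= 4))               # filter criteria: minimum blob size

print("insects on trap:", insect_count)              # expected: 3
```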

US Pat. No. 10,796,160

INPUT AT INDOOR CAMERA TO DETERMINE PRIVACY

Vivint, Inc., Provo, UT ...

1. A method for security or automation systems, comprising:operating a plurality of video monitoring components and audio monitoring components of the security or automation system;
detecting a first occupant in a first location associated with the security or automation system and a second occupant in a second location associated with the security or automation system;
identifying the detected first occupant and the detected second occupant, the identifying comprising identifying an inputted code associated with the first occupant, an inputted code associated with the second occupant, or both;
comparing a first audio privacy preference and a first video privacy preference associated with the first occupant, the first audio privacy preference and the first video privacy preference inputted by the first occupant, and a second audio privacy preference and a second video privacy preference associated with the second occupant, the second audio privacy preference and the second video privacy preference inputted by the second occupant;
comparing a priority of the first audio privacy preference and the first video privacy preference associated with the first occupant with a priority of the second audio privacy preference and the second video privacy preference associated with the second occupant; and
updating an operation status of at least one of the plurality of video monitoring components and at least one of the plurality of audio monitoring components in accordance with the first audio privacy preference and the first video privacy preference of the first occupant or the second audio privacy preference and the second video privacy preference of the second occupant based at least in part on the identifying the detected first occupant and the detected second occupant, and based at least in part on the comparing the priority of the first audio privacy preference and the first video privacy preferences of the first occupant and the second audio privacy preference and the second video privacy preference of the second occupant.

US Pat. No. 10,796,159

CONTENT-MODIFICATION SYSTEM WITH USE OF MULTIPLE FINGERPRINT DATA TYPES FEATURE

The Nielsen Company (US),...

1. A method comprising:receiving first query fingerprint data representing first content channeled through a portion of a content-distribution system;
detecting a first match between the received first query fingerprint data and first reference fingerprint data representing a modifiable content-segment;
responsive to detecting the first match, performing a first action;
receiving second query fingerprint data representing content received by a content-presentation device;
detecting a second match between the received second query fingerprint data and second reference fingerprint data representing second content transmitted by the content-distribution system, wherein the second content is a modified version of the first content and includes at least a portion of the first content; and
responsive to detecting the second match, performing a second action that is different from the first action.

US Pat. No. 10,796,158

GENERATION OF VIDEO HASH

GRASS VALLEY LIMITED, Ne...

1. An apparatus for generating a hash for an image in a video sequence of images, comprising:a first filter configured to receive a sample series of temporal difference samples in an image order of the sequence of images, with each temporal difference sample representing a difference in respective pixel values between said image and an adjoining image in the image order, with the first filter further configured to perform a temporal averaging of the sample series;
a second filter configured to determine a rate of change based on a magnitude corresponding to a difference between the temporally averaged sample series of said image with the temporally averaged sample series of another image in the sequence of images, and further configured to detect distinctive events based on the respective magnitudes indicating the rate of change for the temporal difference samples;
a buffer arrangement configured to store said distinctive events with a corresponding temporal location in the sequence of images of the respective image; and
a hash generator configured to derive a hash for the respective image based on a set of temporal spacing in images between said image and each of a plurality of images in a temporal neighbourhood of said image having associated therewith a respective distinctive event.
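
Illustrative note: the idea of deriving a per-image hash from temporal-difference events and their spacings can be sketched as follows. The smoothing window, event threshold, and neighbourhood size are assumptions; the claimed filters and hash derivation are not reproduced.

# Sketch: per-frame temporal-difference energy, temporal averaging, event
# detection by rate of change, and a hash built from event spacings.
import hashlib
import numpy as np

def frame_hashes(frames, smooth=5, neighbourhood=32, event_thresh=10.0):
    # frames: list of equally sized grayscale arrays (uint8)
    diffs = [np.mean(np.abs(frames[i].astype(int) - frames[i - 1].astype(int)))
             for i in range(1, len(frames))]
    kernel = np.ones(smooth) / smooth
    smoothed = np.convolve(diffs, kernel, mode="same")          # temporal averaging
    rate = np.abs(np.diff(smoothed, prepend=smoothed[0]))       # rate of change
    events = np.flatnonzero(rate > event_thresh)                # distinctive events
    # (event indices are frame-difference indices; the off-by-one versus frame
    # indices is ignored in this sketch)
    hashes = []
    for i in range(len(frames)):
        near = events[np.abs(events - i) <= neighbourhood]      # temporal neighbourhood
        spacings = tuple(int(e - i) for e in near)              # temporal spacings
        hashes.append(hashlib.sha1(repr(spacings).encode()).hexdigest())
    return hashes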

US Pat. No. 10,796,157

HIERARCHICAL OBJECT DETECTION AND SELECTION

MediaTek Inc., Hsinchu (...

1. A method, comprising:playing, by a processor of an apparatus, a video on a display device;
receiving, by the processor, a first command one or more times;
performing, by the processor, object detection in a hierarchical manner with respect to a plurality of objects in the video responsive to receiving the first command one or more times;
receiving, by the processor, a second command indicating selection of an object of a first set of one or more objects or a second set of one or more objects in the video; and
performing, by the processor, an operation with respect to the selected object,
wherein the performing of the object detection in the hierarchical manner with respect to the plurality of objects in the video comprises:
displaying, on the display device, a video image from the video;
detecting, upon receiving the first command for an Nth time, in the video image the first set of one or more objects of the plurality of objects at a first hierarchical level, N being a positive integer equal to or greater than 1;
highlighting, on the display device, the first set of one or more objects;
detecting, upon receiving the first command for an (N+1)th time, in the video image the second set of one or more objects of the plurality of objects at a second hierarchical level below the first hierarchical level; and
highlighting, on the display device, the second set of one or more objects,
wherein each object of the second set of one or more objects is a part of a respective object of the first set of one or more objects.

US Pat. No. 10,796,156

ANALYZING VIDEO STREAMS IN AN INDUSTRIAL ENVIRONMENT TO IDENTIFY POTENTIAL PROBLEMS AND SELECT RECIPIENTS FOR A DISPLAY OF VIDEO STREAMS RELATED TO THE POTENTIAL PROBLEMS

Rockwell Automation Techn...

1. A system, comprising:a processor; and
a memory communicatively coupled to the processor, the memory having stored therein computer-executable instructions, comprising:
a video historian component configured to:
store video streams captured by cameras in an industrial environment, and
learn a normal operating procedure for an industrial process in the industrial environment based on a first analysis of the video streams; and
a data identification component configured to identify a deviation from the normal operating procedure based on a second analysis of a new video stream captured by a camera of the cameras in the industrial environment, the new video stream depicting execution of a current operating procedure being performed for the industrial process.

US Pat. No. 10,796,155

IRREGULAR EVENT DETECTION IN PUSH NOTIFICATIONS

VERINT SYSTEMS LTD., Her...

1. A method of detecting irregular events from acquired data, the method comprising:acquiring data from a data acquisition system, said data comprising video data;
identifying objects in the video data;
automatedly producing, with the acquisition system, analytics data from the identified objects;
formatting a report template that is customized upon user input;
formatting the analytics data into a report notification pursuant to the report template;
extracting values of report measures from the report notification;
transmitting the extracted values across a communications interface to a measures database;
storing the extracted values in the measures database;
determining if an irregular event has occurred based upon the extracted values; and
producing an irregularity alert if an irregular event is determined.
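
Illustrative note: the claim does not specify how irregularity is decided from the stored measure values; a minimal sketch assuming a simple z-score rule over each measure's history (MeasuresDatabase and the 3-sigma limit are assumptions):

# Sketch: store extracted report-measure values and flag an irregular event
# when a new value deviates strongly from its history.
import statistics

class MeasuresDatabase:
    def __init__(self):
        self.history = {}                  # measure name -> list of stored values

    def store(self, measure, value):
        self.history.setdefault(measure, []).append(value)

    def is_irregular(self, measure, value, z_limit=3.0):
        past = self.history.get(measure, [])
        if len(past) < 5:
            return False                   # not enough history to judge
        mean = statistics.mean(past)
        stdev = statistics.pstdev(past) or 1e-9
        return abs(value - mean) / stdev > z_limit

db = MeasuresDatabase()
for v in [102, 98, 101, 99, 100, 103]:
    db.store("people_count", v)
if db.is_irregular("people_count", 250):
    print("irregularity alert: people_count=250")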

US Pat. No. 10,796,154

METHOD OF IMAGE-BASED RELATIONSHIP ANALYSIS AND SYSTEM THEREOF

BIONIC 8 ANALYTICS LTD., ...

1. A computerized method of image-based relationship analysis, comprising:obtaining a set of target images each including one or more image representations of one or more individuals;
obtaining, for each image representation in each target image, a corresponding vector representation generated by a Facial Recognition Model (FRM), thereby providing multiple vector representations corresponding to multiple image representations of individuals included in the set of target images;
clustering the multiple vector representations to a plurality of clusters of vector representations corresponding to a plurality of unique individuals included in the set of target images using a similarity measure, and obtaining, for each target image, one or more unique individuals associated therewith;
for each given target image of at least one subset of the set,
obtaining a set of image parameters characterizing the given target image, wherein the set of image parameters includes at least one computed parameter indicative of a relationship measurement between the one or more unique individuals associated with the given target image;
generating a local relationship matrix using the set of image parameters, wherein the local relationship matrix is representative of local mutual relationships between the one or more unique individuals;
thereby obtaining a set of local relationship matrices corresponding to the at least one subset of target images; and
generating a global relationship matrix by combining the set of local relationship matrices, the global relationship matrix being representative of relationships between the plurality of unique individuals;
wherein the FRM is trained using a training set of images targeted for a specific group of individuals, each image in the training set is pre-tagged with one or more unique individuals included therein, and the training set of images comprises a plurality of subsets of images, each subset pre-tagged with a respective individual from the specific group, and wherein the plurality of subsets of images are filtered prior to being used for training the FRM so as to increase accuracy of the FRM, and wherein the FRM is trained by:
for each given subset of images pre-tagged with a respective individual:
feeding the FRM with the given subset of images to obtain a cluster of vector representations representing the respective individual in the given subset of images;
applying a similarity measure to the cluster giving rise to a reduced cluster of vector representations corresponding to a filtered subset of images;
thereby obtaining a plurality of filtered subsets corresponding to respective individuals; and
feeding the FRM with the plurality of filtered subsets so as to train the FRM.
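
Illustrative note: the local/global relationship matrices recited above can be sketched with plain co-occurrence counts standing in for the claimed relationship measurement; the FRM, clustering, and richer image parameters are not reproduced here.

# Sketch: per-image local relationship matrices over clustered identities,
# combined by summation into a global relationship matrix.
import numpy as np

def local_matrix(identities_in_image, n_identities):
    m = np.zeros((n_identities, n_identities))
    for a in identities_in_image:
        for b in identities_in_image:
            if a != b:
                m[a, b] = 1.0              # the two individuals appear together
    return m

def global_matrix(images_to_identities, n_identities):
    locals_ = [local_matrix(ids, n_identities) for ids in images_to_identities]
    return np.sum(locals_, axis=0)         # combine the set of local matrices

# images_to_identities: cluster labels per target image, e.g. from face embeddings
print(global_matrix([[0, 1], [0, 1, 2], [2]], n_identities=3))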

US Pat. No. 10,796,153

SYSTEM FOR MAINTENANCE AND REPAIR USING AUGMENTED REALITY

INTERNATIONAL BUSINESS MA...

1. An augmented reality system comprising:a device comprising a user interface, a camera, and a controller;
the controller operable to:
receive data associated with a repair item, the data comprising a description of an issue associated with the repair item and diagnostic data, received from a diagnostic tool, associated with the repair item;
capture, by the camera, media associated with the repair item, wherein the media comprises one or more images of the repair item;
access an outside resource to determine general data associated with the repair item based on the data, wherein the outside resource comprises at least one of a manufacturer website for the repair item and a mechanic message board for the repair item;
analyze the data, the media, and the general data to determine a candidate repair component of the repair item, wherein the candidate repair component is located at a target location; and
wherein determining the candidate repair component of the repair item comprises:
directing a user to the target location for the candidate repair component;
acquiring reference repair item data, wherein the reference repair item data comprises one or more images of a working component that is of a same type component as the candidate repair component; and
comparing the media associated with the candidate repair component with the one or more images of the working component to determine a fault in the candidate repair component, wherein determining the fault comprises:
determining a comparison score between the one or more images of the repair item and the one or more images of the working component, wherein the comparison score is based on an absolute value of all pixel differences between the one or more images of the repair item and the one or more images of the working component;
provide, via the user interface, a repair method for repairing the candidate repair component at the target location, wherein providing the repair method comprises:
generating an augmented reality view of the candidate repair component, wherein the augmented reality view of the candidate repair component comprises a simulated view of the candidate repair component;
overlaying the simulated view on the target location; and
generating one or more tasks for a user to complete to repair the candidate repair component.
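
Illustrative note: the comparison score recited above (an absolute value of all pixel differences) can be computed directly. A minimal numpy sketch, assuming aligned, equal-sized grayscale images; the fault threshold is a hypothetical value:

# Comparison score based on the absolute value of all pixel differences
# between the captured repair-item image and the reference working-component image.
import numpy as np

def comparison_score(repair_img, working_img):
    diff = np.abs(repair_img.astype(np.int64) - working_img.astype(np.int64))
    return int(diff.sum())

def likely_faulty(repair_img, working_img, threshold=1_000_000):
    # threshold is an illustrative assumption, tuned per component type
    return comparison_score(repair_img, working_img) > threshold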

US Pat. No. 10,796,152

VENTRAL-DORSAL NEURAL NETWORKS: OBJECT DETECTION VIA SELECTIVE ATTENTION

ANCESTRY.COM OPERATIONS I...

1. A computer-implemented method for object detection within a visual medium, comprising:receiving a visual medium comprising a plurality of objects;
identifying, via a first neural network, one or more relevant visual regions and one or more irrelevant visual regions within the visual medium, comprising:
identifying, via a sensitivity analysis, pixels within the visual medium that are above a predetermined threshold, wherein the pixels above the predetermined threshold define the one or more relevant visual regions;
generating, based at least on the one or more irrelevant visual regions, a visual mask comprising a data structure containing pixel values;
applying the visual mask to modify pixel intensity values of the one or more irrelevant visual regions to generate a masked visual medium;
identifying, via a second neural network, one or more objects of interest within the masked visual medium; and
outputting an identification of the one or more objects of interest.
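
Illustrative note: the mask-and-detect flow can be sketched with the sensitivity (saliency) map treated as a given input; the two neural networks are not reproduced, and detect_objects is a placeholder.

# Sketch: threshold a sensitivity map to find relevant regions, suppress the
# irrelevant pixels, and pass the masked image to a second-stage detector.
import numpy as np

def mask_irrelevant(image, saliency, threshold=0.5, fill_value=0):
    relevant = saliency > threshold            # pixels above the predetermined threshold
    masked = image.copy()
    masked[~relevant] = fill_value             # modify intensity of irrelevant regions
    return masked

def detect_with_attention(image, saliency, detect_objects):
    masked = mask_irrelevant(image, saliency)
    return detect_objects(masked)              # the second network's job

rng = np.random.default_rng(0)
img = rng.integers(0, 255, (64, 64), dtype=np.uint8)
sal = rng.random((64, 64))
objects = detect_with_attention(img, sal, detect_objects=lambda m: m.nonzero())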

US Pat. No. 10,796,151

MAPPING A SPACE USING A MULTI-DIRECTIONAL CAMERA

Imperial College of Scien...

1. A robotic device comprising:a monocular multi-directional camera device to capture an image from a plurality of angular positions;
at least one movement actuator to move the robotic device within a space;
a navigation engine to control movement of the robotic device within the space;
an occupancy map accessible by the navigation engine to determine navigable portions of the space,
wherein the navigation engine is configured to:
instruct a movement of the robotic device around a point in a plane of movement using the at least one movement actuator;
obtain, using the monocular multi-directional camera device, a sequence of images at a plurality of different angular positions during the instructed movement of the robotic device;
determine pose data from the sequence of images, the pose data indicating the location and orientation of the monocular multi-directional camera device at a plurality of positions during the instructed movement, the pose data being determined using a set of features detected within the sequence of images;
estimate depth values by evaluating a volumetric function of the sequence of images and the pose data, each depth value representing a distance from the multi-directional camera device to an object in the space; and
process the depth values to populate the occupancy map for the space.

US Pat. No. 10,796,150

AUTOMATED DIAGNOSIS AND TREATMENT OF CROP INFESTATIONS

FARMWAVE, LLC, Alpharett...

1. A system, comprising:a first computing device comprising a processor and a memory; and
machine readable instructions stored in the memory that, when executed by the processor, cause the first computing device to at least:
receive a field report from a second computing device, the field report comprising a plurality of images of a corresponding plurality of plants in a crop and an identifier of a respective field;
apply a first object-recognition technique to each image in the plurality of images to determine a type of the crop;
select a second object-recognition technique based on the type of the crop;
apply the second object-recognition technique to each image in the plurality of images to determine the individual yield for each of the corresponding plants in the crop;
determine a size of a corresponding field based on the identifier of the respective field; and
calculate an estimated crop yield based at least in part on the individual yield for each of the corresponding plants in the crop and the size of the respective field.
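
Illustrative note: the two-stage flow can be sketched with the recognizers and field-size lookup as placeholders. crop_classifier, yield_models, field_sizes, and the sampled-area scaling are assumptions for the sketch, not the claimed techniques.

# Sketch: recognize the crop type, pick a crop-specific per-plant yield model,
# sum the individual yields and scale by the size of the respective field.
def estimate_crop_yield(images, field_id, crop_classifier, yield_models,
                        field_sizes, sampled_area_acres=1.0):
    crop_type = crop_classifier(images[0])           # first object-recognition pass
    per_plant = yield_models[crop_type]              # second, crop-specific pass
    individual_yields = [per_plant(img) for img in images]
    field_acres = field_sizes[field_id]              # size from the field identifier
    yield_per_acre = sum(individual_yields) / sampled_area_acres
    return yield_per_acre * field_acres              # estimated crop yield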

US Pat. No. 10,796,149

SYSTEM AND METHOD FOR PERFORMING VIDEO OR STILL IMAGE ANALYSIS ON BUILDING STRUCTURES

Accurence, Inc., Louisvi...

1. A system, comprising:an image analysis system comprising:
an image receiver that receives one or more original roof images;
a feature extractor that is used to identify features and corresponding locations of the identified features from the one or more original roof images or a processed version thereof;
a feature analyzer that generates a feature list describing the identified features and the corresponding locations of the identified features in a format that is deliverable to an automated settlement engine;
the automated settlement engine comprising:
an image analysis system Application Programming Interface (API) that is configured to receive the feature list from the image analysis system; and
a set of image analysis rules that are configured to analyze the identified features from the feature list along with the corresponding locations of the identified features to determine whether or not hail damage occurred to a roof system with respect to a predetermined likelihood.

US Pat. No. 10,796,148

AIRCRAFT LANDING PROTECTION METHOD AND APPARATUS, AND AIRCRAFT

AUTEL ROBOTICS CO., LTD.,...

1. An aircraft landing protection method, comprising:obtaining an image of a landing area;
determining a feature point in the image;
determining, according to the feature point, whether the landing area is a dangerous landing area; and
controlling the aircraft to suspend landing or controlling the aircraft to fly away from the dangerous landing area;
wherein the feature point refers to a point whose image grayscale value changes sharply or a point with relatively large curvature at an edge of the image; the feature point represents an intrinsic feature of the image, and is used to identify a target object in the image; the dangerous landing area refers to any area that is not suitable for landing of the aircraft,
wherein the determining, according to the feature point, whether the landing area is the dangerous landing area comprises:
determining whether a quantity of the feature points in the image is less than or equal to a first preset threshold of the quantity of the feature points.
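
Illustrative note: the feature-point test can be sketched with ORB corners standing in for the claim's generic feature points; the preset threshold is an assumed value.

# Sketch: count feature points in the landing-area image and treat the area as
# dangerous when the count is at or below the first preset threshold (few
# features typically indicates a uniform surface such as water).
import cv2

def is_dangerous_landing_area(image_gray, first_preset_threshold=50):
    orb = cv2.ORB_create()
    keypoints = orb.detect(image_gray, None)
    return len(keypoints) <= first_preset_threshold

# usage: if is_dangerous_landing_area(frame): suspend landing or fly away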

US Pat. No. 10,796,147

METHOD AND APPARATUS FOR IMPROVING THE MATCH PERFORMANCE AND USER CONVENIENCE OF BIOMETRIC SYSTEMS THAT USE IMAGES OF THE HUMAN EYE

1. A method for biometric enrollment, comprising:acquiring, by a camera module connected to a processor, a first image of an eye;
generating a plurality of augmented images based on the first image of the eye;
training a classifier on at least one characteristic of the first image of the eye based on at least the plurality of augmented images to recover a first set of classifier parameters; and
determining, using the classifier and the first set of classifier parameters, a biometric match score between a second image and one of the plurality of augmented images to determine whether the second image is acquired from the eye.

US Pat. No. 10,796,146

IR ILLUMINATION MODULE FOR MEMS-BASED EYE TRACKING

MICROSOFT TECHNOLOGY LICE...

1. An iris recognition illumination system that includes a red, green, blue (RGB) visible light display, the iris recognition illumination system comprising:a RGB laser device that is associated with at least a first collimating optic and that generates RGB laser light;
an infrared (IR) illumination device that is associated with a second collimating optic and that generates IR light, the IR illumination device being positioned at a fixed position relative to the RGB laser device within the iris recognition illumination system;
a display module assembly (DMA) that includes a microelectromechanical scanning (MEMS) mirror system, wherein the DMA optically combines the IR light generated by the IR illumination device with the RGB laser light generated by the RGB laser device to generate combined light, and wherein the combined light is directed towards an iris of a user's eye via a transport medium; and
one or more photodetector(s) that are configured to detect reflected light that is reflected off of the user's iris as a result of the combined light being directed towards the user's iris via the transport medium, wherein the one or more photodetector(s) include at least an IR detector configured to detect reflected IR light included in the reflected light, the reflected IR light being usable by the iris recognition illumination system to perform iris recognition,
wherein the one or more photodetector(s) detect the reflected IR light within a range of at least two line pairs per millimeter for said iris recognition.

US Pat. No. 10,796,145

METHOD AND APPARATUS FOR SEPARATING TEXT AND FIGURES IN DOCUMENT IMAGES

Samsung Electronics Co., ...

1. A method of separating text and a figure of a document image, the method comprising:acquiring the document image;
dividing the document image into a plurality of regions of interest;
acquiring a feature vector by using a two-dimensional (2D) histogram, the 2D histogram being obtained by resizing one of the regions of interest among the plurality of the regions of interest, and performing connection component extraction on the resized region of interest;
acquiring a transformation vector of the feature vector by using a kernel;
acquiring a cluster center of the transformation vector;
acquiring a supercluster by performing clustering on the cluster center; and
classifying the supercluster into one of a text class and a figure class, based on the number of superclusters.

US Pat. No. 10,796,144

METHOD AND DEVICE FOR CLASSIFYING SCANNED DOCUMENTS

Hewlett-Packard Developme...

1. A method of classifying document hardcopy images, the method comprising:providing a document hardcopy image, the document hardcopy image having image features;
extracting image descriptors by a first set of image descriptor extractors, each image descriptor of the image descriptors being descriptive of the image features of the document hardcopy image;
estimating class probabilities of the document hardcopy image by multiple trained classifiers based on the image descriptors;
determining a most probable class of the document hardcopy image by a trained meta-classifier based on the class probabilities estimated by the multiple trained classifiers;
inputting the document hardcopy image and the most probable class of the document hardcopy image to an assigner; and
assigning, by the assigner, the most probable class determined by the trained meta-classifier to the document hardcopy image to obtain a classified document hardcopy image,
wherein the first set of image descriptor extractors comprise a spatial local binary pattern (SLBP) extractor, a grayscale runlength histogram (GRLH) extractor, and a Bernoulli Mixture Model Fisher vectors (BMMFV) extractor.
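
Illustrative note: the estimate-then-meta-classify flow resembles stacked generalization; a sketch with scikit-learn's StackingClassifier, using raw pixel descriptors and generic base classifiers in place of the SLBP/GRLH/BMMFV extractors.

# Sketch: multiple base classifiers estimate class probabilities from image
# descriptors and a trained meta-classifier assigns the most probable class.
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

stack = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(random_state=0)),
                ("svm", SVC(probability=True, random_state=0))],
    final_estimator=LogisticRegression(max_iter=1000),    # meta-classifier
    stack_method="predict_proba")                          # use class probabilities
stack.fit(X_tr, y_tr)
print("assigned classes:", stack.predict(X_te[:5]))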

US Pat. No. 10,796,142

SYSTEMS FOR TRACKING INDIVIDUAL ANIMALS IN A GROUP-HOUSED ENVIRONMENT

NUtech Ventures, Lincoln...

1. A computer-implemented method comprising:receiving, from a motion sensing device, a plurality of image frames that includes information regarding a plurality of animals housed in a group-housed environment;
determining a coordinate space of the group-housed environment based on an analysis of a first image frame of the image frames;
generating, based on the analysis of the first image frame, an ellipsoid model for each animal based on defined surface points for each animal weighted according to a likely proximity to a crest of a spine of the respective animal, in which a first surface point that is more likely to be near the crest of the spine of the respective animal is given a higher weight than a second surface point that is less likely to be near the crest of the spine of the respective animal; and
tracking a position and an orientation of each animal within the image frames by:
enforcing shape consistency of the ellipsoid models; and
adjusting the position of each of the ellipsoid models based on the defined surface points for each animal and a maximum likelihood formulation for each animal.

US Pat. No. 10,796,141

SYSTEMS AND METHODS FOR CAPTURING AND PROCESSING IMAGES OF ANIMALS FOR SPECIES IDENTIFICATION

SPECTERRAS SBF, LLC, Bir...

1. A system for capturing images of animals for identification comprising:a camera having a field of view and configured to capture images of objects within the field of view;
a feeding station positioned adjacent to the camera, wherein the feeding station is located within the field of view of the camera;
a member positioned adjacent to the feeding station and opposed to the camera, wherein at least a portion of the member is within the field of view of the camera; and
a computing device coupled to the camera to receive the images captured by the camera, the computing device comprising:
a processing unit configured to execute instructions; and
a memory having the instructions stored thereon, the memory coupled to the processing unit to provide the instructions to the processing unit, wherein the instructions cause the processing unit to:
receive an image from the camera;
determine whether an animal is present in the received image based on a comparison of the received image and a predefined background image of at least the member stored in memory;
identify a first set of pixels and a second set of pixels in the received image in response to a determination that an animal is present in the received image, wherein the first set of pixels correspond to pixels associated with the animal and the second set of pixels correspond to pixels associated with the member and the feeding station;
generate an evaluation image by removing the second set of pixels from the received image; and
provide the evaluation image to animal recognition software, the animal recognition software configured to provide an identification of the animal based on the evaluation image.
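
Illustrative note: the presence check and evaluation-image generation can be sketched with simple background differencing; the difference threshold and presence fraction are assumptions.

# Sketch: decide whether an animal is present by differencing against the
# stored background image of the member/feeding station, then blank out
# background pixels to produce the evaluation image.
import cv2
import numpy as np

def evaluation_image(received_bgr, background_bgr, diff_thresh=30,
                     presence_fraction=0.02):
    diff = cv2.absdiff(received_bgr, background_bgr)
    diff_gray = cv2.cvtColor(diff, cv2.COLOR_BGR2GRAY)
    animal_mask = (diff_gray > diff_thresh).astype(np.uint8) * 255   # first set of pixels
    present = np.count_nonzero(animal_mask) > presence_fraction * animal_mask.size
    if not present:
        return None                        # no animal detected in the received image
    # remove the second set of pixels (member and feeding station)
    return cv2.bitwise_and(received_bgr, received_bgr, mask=animal_mask)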

US Pat. No. 10,796,140

METHOD AND APPARATUS FOR HEALTH AND SAFETY MONITORING OF A SUBJECT IN A ROOM

OXEHEALTH LIMITED, Londo...

1. A method of monitoring a subject in a room to provide status or alerting of a subject's condition, the method comprising the steps of:capturing a video image sequence of the room using a video camera;
processing the video image sequence using a data processor to automatically:
measure the movement of different parts of the scene to detect areas of gross movement and fine movement;
estimating one or more vital signs of the subject; and
outputting an indication of the status of the subject in the room based upon both the classification of movement and the presence or absence of vital signs;
wherein said step of estimating one or more vital signs of the subject is conducted by analysing areas of the video image sequence not containing gross movement;
if said step of estimating one or more vital signs is not providing a valid heart rate or breathing rate, conducting a further step of determining whether fine movement is present in the video image sequence;
if said further step determines that fine movement is present then outputting an indication that no vital signs are detected and the length of time for which no vital signs have been detected; and
if said further step determines that fine movement is not present then outputting an alert indicating that no vital signs and no movement are detected.
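
Illustrative note: the status/alerting decision recited above can be sketched with the movement analysis and vital-sign estimation treated as given inputs; the function name and arguments are assumptions.

# Sketch of the decision logic: valid vital signs -> normal status; no valid
# vital signs but fine movement -> report duration without vital signs; no
# valid vital signs and no fine movement -> alert.
import time

def subject_status(valid_vital_signs, fine_movement_present, no_vitals_since):
    # no_vitals_since: timestamp when valid vital signs were last available
    if valid_vital_signs:
        return "status: vital signs detected"
    elapsed = time.time() - no_vitals_since
    if fine_movement_present:
        return f"no vital signs detected (for {elapsed:.0f} s); fine movement present"
    return "ALERT: no vital signs and no movement detected"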

US Pat. No. 10,796,139

GESTURE RECOGNITION METHOD AND SYSTEM USING SIAMESE NEURAL NETWORK

KAIKUTEK INC., Taipei (T...

1. A gesture recognition method using siamese neural network, comprising steps of:controlling weight of a first neural network unit and weight of a second neural network unit to be the same by a weight sharing unit;
receiving a first training signal from a sensor to calculate a first feature by the first neural network unit;
receiving a second training signal from the sensor to calculate a second feature by the second neural network unit;
determining a distance between the first feature and the second feature in a feature space by a similarity analysis unit;
controlling the weight of the first neural network unit and the weight of the second neural network unit through the weight sharing unit to adjust the distance between the first feature and the second feature in the feature space according to a predetermined parameter by a weight controlling unit;
receiving a sensing signal to calculate a sensing feature by the first neural network unit;
receiving an anchor signal to calculate a reference feature by the second neural network unit;
generating a distance between the sensing feature and the reference feature in the feature space by the similarity analysis unit;
when the distance between the sensing feature and the reference feature is smaller than a threshold value, the similarity analysis unit classifies a gesture event.
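
Illustrative note: the weight-sharing arrangement can be sketched in PyTorch by running both inputs through one module, so the two branches necessarily share weights; the branch architecture, feature size, and threshold are assumptions, and the training step is omitted.

# Sketch: a siamese comparison in feature space between a sensing signal and
# an anchor signal, classifying a gesture event when the distance is small.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Branch(nn.Module):
    def __init__(self, in_dim=128, feat_dim=32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(),
                                 nn.Linear(64, feat_dim))

    def forward(self, x):
        return self.net(x)

branch = Branch()                          # one module = shared weights for both inputs

def classifies_gesture_event(sensing_signal, anchor_signal, threshold=1.0):
    with torch.no_grad():
        sensing_feature = branch(sensing_signal)     # first branch
        reference_feature = branch(anchor_signal)    # second branch (same weights)
        distance = F.pairwise_distance(sensing_feature, reference_feature)
    return bool((distance < threshold).item())

x = torch.randn(1, 128)
print(classifies_gesture_event(x, x + 0.01 * torch.randn(1, 128)))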

US Pat. No. 10,796,138

INTRA-FACILITY ACTIVITY ANALYSIS DEVICE, INTRA-FACILITY ACTIVITY ANALYSIS SYSTEM, AND INTRA-FACILITY ACTIVITY ANALYSIS METHOD

PANASONIC INTELLECTUAL PR...

1. An intra-facility activity analysis device, which performs analysis relevant to an activity state of a moving object on the basis of activity information generated from a captured image acquired by imaging an inside of a facility, and generates output information acquired by visualizing the activity state of the moving object, the intra-facility activity analysis device comprising:a processor; and
a memory that stores an instruction,
wherein the processor, when executing the instruction stored in the memory, performs operations including:
acquiring the activity information representing an activity level of the moving object for each of a plurality of predetermined detection elements acquired through division performed on the captured image;
setting a target area on a facility map image acquired by drawing a layout on the inside of the facility;
generating indexed information of the target area, acquired by integrating the activity state of the moving object in the target area on the basis of the activity information for the plurality of predetermined detecting elements in the target area;
generating an activity state display image representing overall activity state of the moving object in the target area on the basis of the indexed information;
generating the output information which includes first display information acquired by superimposing the activity state display image on the facility map image;
displaying the generated first display information, on a display;
receiving an operation for designating densification of the activity state display image by a user, while the first display information is displayed on the display;
in response to receiving the operation for designating densification while the first display information is displayed on the display, generating a densified activity state display image representing the activity state of the moving object in each of the plurality of predetermined detecting elements on the inside of the target area on the basis of the activity information for each of the plurality of predetermined detecting elements in the target area;
generating the output information including second display information acquired by superimposing the densified activity state display image on the facility map image; and
switching an image displayed on the display from the first display information to the generated second display information.

US Pat. No. 10,796,137

TECHNIQUE FOR PROVIDING SECURITY

Intelligence Based Integr...

1. A system for providing security, the system comprising:at least one camera configured to capture a photographic image of a person in view of the at least one camera; and
a computer circuit configured to:
receive the captured photographic image of the person in view of at least one camera,
transmit the captured photographic image of the person in view of the at least one camera for a process to be performed for comparing the captured photographic image of the person in view of the at least one camera to facial images of persons of interest included in a database to detect a likely match,
receive an alert that the person in view of the at least one camera is a person of interest in response to a detection of the likely match between the captured photographic image of the person in view of the at least one camera and a facial image of the person of interest, and
control to alert at least one of one or more law enforcement officers or one or more security personnel of the detection of the likely match between the captured photographic image of the person in view of the at least one camera and the facial image of the person of interest,
wherein the system is controlled by a first party and the database is controlled by a second party.

US Pat. No. 10,796,136

SECONDARY SOURCE AUTHENTICATION OF FACIAL BIOMETRIC

American Express Travel R...

1. A method comprising:receiving, by a processor, a primary image of a user containing a first set of facial feature data;
comparing, by the processor, the first set of facial feature data to a second set of facial feature data associated with a secondary image of the user, wherein the secondary image is from a social media source; and
authenticating, by the processor, that the primary image depicts the user associated with a user financial account based at least in part on the comparison indicating the first set of facial feature data is representative of the user identified by the second set of facial feature data.

US Pat. No. 10,796,135

LONG-TAIL LARGE SCALE FACE RECOGNITION BY NON-LINEAR FEATURE LEVEL DOMAIN ADAPTATION

NEC Corporation, (JP)

1. A point of sale system with facial recognition, the point of sale system comprising:one or more cameras;
a processor device and memory coupled to the processor device, the processing system programmed to:
receive a plurality of images from the one or more cameras;
extract, with a feature extractor utilizing a convolutional neural network (CNN) with an enlarged intra-class variance of long-tail classes, feature vectors from each of the plurality of images;
generate, with a feature generator, discriminative feature vectors for each of the feature vectors;
classify, with a fully connected classifier, an identity from the discriminative feature vectors; and
control an operation of the point of sale system to react in accordance with the identity.

US Pat. No. 10,796,134

LONG-TAIL LARGE SCALE FACE RECOGNITION BY NON-LINEAR FEATURE LEVEL DOMAIN ADAPTATION

NEC Corporation, (JP)

1. A computer-implemented method for facial recognition, the method comprising:receiving, by a processor device, a plurality of images;
extracting, by the processor device with a feature extractor utilizing a convolutional neural network (CNN) with an enlarged intra-class variance of long-tail classes, feature vectors for each of the plurality of images;
generating, by the processor device with a feature generator, discriminative feature vectors for each of the feature vectors;
classifying, by the processor device utilizing a fully connected classifier, an identity from the discriminative feature vector; and
controlling an operation of a processor-based machine to react in accordance with the identity.

US Pat. No. 10,796,133

IMAGE PROCESSING METHOD AND APPARATUS, AND ELECTRONIC DEVICE

GUANGDONG OPPO MOBILE TEL...

1. An image processing method, comprising:acquiring a photo album obtained from face clustering;
collecting face information of respective images in the photo album, and acquiring a face parameter of each image according to the face information;
selecting a cover image according to the face parameter of each image; and
taking a face-region image from the cover image, and setting the face-region image as a cover of the photo album;
wherein selecting the cover image according to the face parameter of each image comprises:
performing calculation on the face parameter of each image in a preset way, to obtain a cover score of each image; selecting the image with a highest cover score as the cover image;
wherein selecting the image with the highest cover score as the cover image comprises:
acquiring a source of each image; and selecting the image with the highest cover score in images coming from a preset source as the cover image.

US Pat. No. 10,796,132

PUBLIC SERVICE SYSTEM AND METHOD USING AUTONOMOUS SMART CAR

HANCOM, INC., Seongnam-s...

1. A system for providing a public service using an autonomous smart car, the system comprising:autonomous smart cars
automatically driving to a destination when the destination is set,
storing images that are input through cameras on the autonomous smart cars together with location information and time information, and
wirelessly transmitting facial information of a matched person to a predetermined outside by recognizing faces from the input or stored images on the basis of pre-stored facial information or wirelessly transmitting image data corresponding to time and location requested from the outside of input or stored data to a predetermined server; and
an autonomous smart car managing server
managing the autonomous smart cars,
transmitting faces of missing persons or facial information of suspects or wanted persons to the autonomous smart cars,
transmitting matched facial information to one or more of a public agency server and a missing person information server, which needs the facial information, depending on whether the facial information is matched with a missing person, a suspect, or a wanted person, when receiving the facial information from the autonomous smart cars, and
requesting image information of a crime scene including time and location to the autonomous smart cars when the public agency server requests the image information,
wherein, upon image information about a crime scene being requested, the autonomous smart car managing server selectively requests image information, by referring to the corresponding location information and time information, from autonomous smart cars that have agreed to collection of time information and location information,
wherein each of the autonomous smart cars includes:
a black box unit taking and storing images of a surrounding of the each of the autonomous smart cars;
a facial recognizing-processing unit recognizing faces of persons from the images taken by a camera;
a communication unit communicating with the autonomous smart car managing server;
an image output unit outputting various images and displaying corresponding information when a missing person, a suspect, or a wanted person is found;
an autonomous driving unit autonomously driving the each of the autonomous smart cars; and
a control unit controlling the each of the autonomous smart cars,
wherein the black box unit includes:
a first camera taking images of inside and outside of the each of the autonomous smart cars;
a first memory unit storing image data taken by the first camera and sound data; and
a person extraction unit extracting person information from the images of the outside taken by the first camera,
wherein the facial recognizing-processing unit includes:
a second camera taking images of the outside of the each of the autonomous smart cars;
a facial recognition unit making data of facial information by recognizing faces of persons extracted from the black box unit or persons in the images taken by the second camera;
a second memory unit storing facial information of one of a missing person, a suspect, and a wanted person from one of the autonomous smart car managing server, the public agency server, and the missing person information server, and updated facial information that is updated at a predetermined period; and
a facial comparison unit detecting facial information of one or more of a missing person, a suspect, and a wanted person by comparing facial recognition data recognized by the facial recognition unit with the facial information stored in the second memory unit.

US Pat. No. 10,796,131

CONFIRMING COMPLIANCE WITH A CONFIGURATION

One Door, Inc., Boston, ...

1. A method comprising:sending by a server computing system (server) to a mobile computing device, information representing a floorplan, the information includes information about a fixture corresponding to a specified configuration of items on the fixture;
receiving by the server from the mobile computing device, electronic data corresponding to an image of an actual configuration of the fixture, the image of the actual configuration is associated with metadata that distinguishes the fixture in the image of the actual configuration from at least some other fixtures of a like fixture type;
executing by the server image recognition code that compares the image of the actual configuration with information derived from the specified configuration to produce a result determination that indicates whether the actual configuration of the fixture is substantially similar to the specified configuration of the fixture, and compares the metadata of the actual configuration and the determined configuration to determine whether the actual configuration matches the specified configuration; and
storing by the server electronic data corresponding to the image of the actual configuration, the metadata, and the result determination from executing the image recognition code.

US Pat. No. 10,796,130

IMAGE PROCESSING APPARATUS

NIKON CORPORATION, Tokyo...

1. An image processing apparatus comprising:a processor programmed to
acquire a cell image captured with cells;
calculate a plurality of types of characteristic amounts on the acquired cell image; and
extract specific correlations from among a plurality of correlations among the calculated characteristic amounts, based on a likelihood of the characteristic amounts; and
a memory configured to store types of the characteristic amounts and either or both of types of the cells and types of the constituent elements of the cells that are captured in the cell image, the types of the characteristic amounts and either or both of the types of the cells and the types of the constituent elements of the cells being associated with each other,
wherein the processor calculates, from among the plurality of types of the characteristic amounts, types of the characteristic amounts corresponding to types of the cells and types of the constituent elements of the cells that are captured in the cell image.

US Pat. No. 10,796,129

DISPLAY PANEL WITH FINGERPRINT IDENTIFICATION AND DISPLAY DEVICE

Wuhan Tianma Micro-Electr...

1. A display panel with fingerprint identification function, comprising:a base substrate;
a control circuit layer formed on the base substrate, wherein the control circuit layer comprises a plurality of pixel circuits and a plurality of fingerprint identification circuits disposed at intervals;
a planarization layer formed on the control circuit layer;
a plurality of fingerprint signal acquisition modules, formed between the control circuit layer and the planarization layer, wherein each of the plurality of fingerprint signal acquisition modules is electrically connected to a respective one of the plurality of fingerprint identification circuits;
a plurality of light-emitting units, formed on the planarization layer, wherein each of the plurality of light-emitting units is electrically connected to a respective one of the plurality of pixel circuits; and
a first insulating layer, wherein the first insulating layer is disposed between the control circuit layer and the plurality of fingerprint signal acquisition modules, and a material of the first insulating layer is an inorganic material;
wherein the each of the plurality of fingerprint signal acquisition modules comprises a photo-diode, and the photo-diode comprises a first electrode and a second electrode; and wherein the first electrode is disposed between the first insulating layer and the second electrode, and the first electrode is electrically connected to the respective one of the plurality of fingerprint identification circuits;
wherein the control circuit layer comprises a first metal layer and a second metal layer, and the first metal layer and the second metal layer are disposed in laminated manner and insulated from each other;
the first metal layer is disposed between the second metal layer and the base substrate;
each of the plurality of pixel circuits comprises a first control switch, and the each of the plurality of fingerprint identification circuits comprises a second control switch;
the first control switch comprises a first control end, a first signal input end and a first signal output end;
the second control switch comprises a second control end, a second signal input end and a second signal output end;
the first electrode is electrically connected to the second signal output end;
the each of the plurality of light-emitting units is electrically connected to the first signal output end in the respective one of the plurality of pixel circuits;
the first control end and the second control end are disposed at the first metal layer;
the first signal input end, the first signal output end and the second signal input end are disposed at the second metal layer;
wherein the each of the plurality of fingerprint signal acquisition modules further comprises a first capacitor, the first capacitor comprises a third electrode and a fourth electrode;
the second electrode is electrically connected to the third electrode, and the first electrode is electrically connected to the fourth electrode and the second signal output end;
wherein, the second signal output end is disposed at the second metal layer;
a vertical projection of the third electrode on the base substrate is overlapped at least in part with a vertical projection of the first electrode on the base substrate; and
the first electrode is reused as the fourth electrode, and the third electrode and the first electrode constitute the first capacitor.

US Pat. No. 10,796,128

OPTICAL SENSOR WITH AMBIENT LIGHT FILTER

FINGERPRINT CARDS AB, Go...

1. An optical sensor device, comprising:a display layer, comprising a light source configured to generate light incident on an input surface of the optical sensor device;
an image sensor layer, disposed below the display layer, comprising an optical image sensor having a plurality of image sensor pixels; and
a first ambient light filter layer, disposed between the display layer and the image sensor layer, configured to block one or more wavelengths of light,
wherein the first ambient light filter layer is a hybrid optical and ambient filter layer, configured to block the one or more wavelengths of light and collimate light incident on the hybrid optical and ambient filter layer.

US Pat. No. 10,796,127

ULTRASONIC TRANSDUCERS EMBEDDED IN ORGANIC LIGHT EMITTING DIODE PANEL AND DISPLAY DEVICES INCLUDING THE SAME

Samsung Electronics Co., ...

1. An ultrasonic transducer-embedded in-cell type of organic light emitting diode (OLED) panel, comprising:a substrate;
an OLED light emitting part on the substrate, the OLED light emitting part configured to emit visible light; and
an ultrasonic output part between the substrate and the OLED light emitting part, the ultrasonic output part including an ultrasonic transducer configured to generate ultrasonic waves according to an excitation voltage,
wherein the ultrasonic transducer is between a sub-pixel of the OLED light emitting part and the substrate.

US Pat. No. 10,796,126

FINGERPRINT SENSING DEVICE

AU OPTRONICS CORPORATION,...

1. A fingerprint sensing device comprising:a plurality of sensing pads arranged in an array, wherein the sensing pads comprise a first sensing pad and a second sensing pad adjacent to each other;
a plurality of data lines respectively and electrically connected to the sensing pads, and configured to provide a sensing voltage to the sensing pads;
a shielding layer disposed between the sensing pads and the data lines;
a plurality of auxiliary voltage lines respectively and electrically connected to the sensing pads and configured to provide an auxiliary voltage to the sensing pads, wherein the auxiliary voltage and the sensing voltage are different from each other;
wherein under a condition that the first sensing pad receives the sensing voltage, the second sensing pad receives the auxiliary voltage; and
a plurality of control circuits arranged in an array, electrically and respectively connected to the sensing pads, configured for providing the sensing voltage to the sensing pads according to a plurality of first scan signals, and providing the auxiliary voltage to the sensing pads according to a plurality of second scan signals,
wherein when the control circuits provide the sensing voltage to the first sensing pad, the control circuits refrain from providing the auxiliary voltage to the first sensing pad;
wherein the control circuits comprise:
a first switch configured for turning on corresponding to one of the first scan signals to provide the sensing voltage to the first sensing pad;
a second switch configured for turning on corresponding to the one of the first scan signals, wherein the first switch and the second switch are alternately turned on;
a third switch configured for turning on corresponding to one of the second scan signals to provide the auxiliary voltage to the first sensing pad through the second switch; and
a fourth switch configured for turning on corresponding to the one of the second scan signals to provide an operating voltage to the first sensing pad through the second switch, wherein the third switch and the fourth switch are alternately turned on.

US Pat. No. 10,796,125

FINGERPRINT SENSING DISPLAY APPARATUS

LG Display Co., Ltd., Se...

1. A display device comprising:a substrate including a first surface and a second surface that is under the first surface;
a transistor on the first surface of the substrate;
an electroluminescence element on the transistor;
an encapsulation unit on the electroluminescence element; and
a fingerprint sensor under the second surface of the substrate;
wherein a portion of the substrate, a portion of the transistor, a portion of the electroluminescence element, and a portion of the encapsulation unit that overlap the fingerprint sensor are an ultrasonic transmission and reception channel on the fingerprint sensor,
wherein the encapsulation unit within the ultrasonic transmission and reception channel comprises a first inorganic encapsulation layer, an organic encapsulation layer on the first inorganic encapsulation layer, and a second inorganic encapsulation layer on the organic encapsulation layer,
wherein a Young's modulus of the first inorganic encapsulation layer and a Young's modulus of the second inorganic encapsulation layer are greater than a Young's modulus of the organic encapsulation layer.

US Pat. No. 10,796,124

ACOUSTIC BIOMETRIC TOUCH SCANNER

The Board of Trustees of ...

1. An ultrasonic fingerprint sensing device comprising:a surface configured to receive a finger;
an array of ultrasonic transducers comprising a piezoelectric layer and electrodes, the electrodes configured to address the ultrasonic transducers of the array, the ultrasonic transducers configured to transmit an ultrasound signal through the surface to the finger, the ultrasound signal having a frequency that is sufficiently high to achieve at least a 500 pixels per inch resolution, and the frequency of the ultrasound signal being no greater than 500 megahertz; and
a processor in communication with the ultrasound transducers of the array, the processor configured to generate an image of at least a portion of the finger based on a reflection of the ultrasound signal from the finger and to authenticate the finger based on the image, wherein the image has a resolution of at least 500 pixels per inch.

US Pat. No. 10,796,123

SYSTEMS AND METHODS FOR OPTICAL SENSING USING POINT-BASED ILLUMINATION

Will Semiconductor (Shang...

1. An optical sensing system, comprising:a display substrate;
a plurality of display elements;
a sensor light source for illuminating a sensing region, wherein the sensor light source is separate from the plurality of display elements, and wherein the sensor light source is disposed under the display substrate; and
a detector for detecting light from the sensing region;
wherein the plurality of display elements comprises a color filter, a liquid crystal material disposed between the display substrate and the color filter, and a backlight disposed under the display substrate, and
wherein the sensor light source comprises a micro LED arranged in a cluster of multiple micro LEDs.

US Pat. No. 10,796,122

OPTIMIZING DETECTION OF IMAGES IN RELATION TO TARGETS BASED ON COLORSPACE TRANSFORMATION TECHNIQUES

Capital One Services, LLC...

1. A method, comprising:detecting a matrix on display via a physical medium and associated with an environment, wherein the matrix includes a plurality of non-black and non-white colors, wherein each one of the plurality of non-black and non-white colors is a least prevalent color of a plurality of prevalent colors in the environment, wherein the matrix is further based on a first plurality of colors derived from a plurality of least prevalent colors in the environment.

US Pat. No. 10,796,121

DECODING PARTS WITH ENCODED GEOMETRY FOR RETRIEVING PASSIVELY REPRESENTED INFORMATION

Dell Products L.P., Roun...

1. A computer-implementable method for decoding an encoded geometry, comprising:scanning the encoded geometry, the scanning comprising scanning a plurality of multi-dimensional symbols of the encoded geometry, each of the plurality of multi-dimensional symbols representing a plurality of constrained values, each of the plurality of constrained values including a two-dimensional value, the two-dimensional value being represented by a polygon, the two-dimensional value comprising a representation of vertices of the polygon;
identifying each of the plurality of multi-dimensional symbols;
decoding each identified multi-dimensional symbol to provide encoded geometry information;
accessing an encoded geometry repository using the encoded geometry information, the encoded geometry repository comprising an entry associating the encoded geometry with a unique identifier of an information handling system; and,
retrieving the unique identifier of the information handling system associated with the encoded geometry information.

US Pat. No. 10,796,120

PHOTOLUMINESCENT AUTHENTICATION DEVICES, SYSTEMS, AND METHODS

Spectra Systems Corporati...

1. A system for authentication comprising:a photoluminescent material disposed on or in a substrate and capable of absorbing an incident radiation from a radiation source and emitting an emitted radiation having a spectral signature with a decay time after removal of the radiation source; and
a photoauthentication device capable of being disposed in contact with the substrate, the photoauthentication device comprising:
the radiation source configured to provide the incident radiation to the photoluminescent material; and
a sensor configured to measure the emitted radiation from the photoluminescent material during the decay time;
wherein, in connection with providing the incident radiation and measuring the emitted radiation, the photoauthentication device is disposed in contact with the substrate.

US Pat. No. 10,796,119

DECODING COLOR BARCODES

HAND HELD PRODUCTS, INC.,...

1. A method of decoding a color barcode, comprising:simultaneously illuminating the color barcode with at least two spatially separate light zones in a manner that illuminates each bar of the color barcode with each light zone of the at least two spatially separate light zones, wherein the at least two spatially separate light zones are each illuminated by a different color;
capturing a monochrome image of light reflected from the color barcode;
for a bar in the color barcode,
computing a difference between a first relative intensity of a first color reflected from the bar and a second relative intensity of a second color reflected from the bar; and
determining a color of the bar based on a comparison of the difference with a predetermined threshold.
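
The per-bar decision recited above reduces to comparing the difference of two relative intensities against a threshold. Below is a minimal sketch, assuming the two zones are red- and green-lit and that each bar yields one monochrome intensity per zone; the candidate color labels and the threshold value are illustrative.

```python
# Minimal sketch of the per-bar decision: with the barcode lit by two
# differently colored zones, each bar yields one monochrome intensity per
# zone; the sign and magnitude of their difference, against a threshold,
# picks the bar color.

def classify_bar(intensity_zone1, intensity_zone2, threshold=0.2):
    """Determine a bar's color from its relative reflected intensities under
    the two illumination zones (e.g. zone 1 red, zone 2 green)."""
    difference = intensity_zone1 - intensity_zone2
    if difference > threshold:
        return "red"    # reflects zone-1 light much more strongly
    if difference < -threshold:
        return "green"  # reflects zone-2 light much more strongly
    return "dark"       # reflects both zones weakly or similarly

def decode_bars(zone1_intensities, zone2_intensities, threshold=0.2):
    """Classify every bar from the two captured monochrome intensity profiles."""
    return [classify_bar(a, b, threshold)
            for a, b in zip(zone1_intensities, zone2_intensities)]

# Example: three bars measured under the red-lit and green-lit zones.
print(decode_bars([0.9, 0.1, 0.15], [0.2, 0.85, 0.1]))
# -> ['red', 'green', 'dark']
```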

US Pat. No. 10,796,118

MULTIPURPOSE ENCLOSURE BOSS ASSEMBLY

Datalogic IP Tech S.R.L.,...

1. A boss assembly comprising:a first boss formed integrally with a first portion of an enclosure, wherein:
the first portion of the enclosure cooperates with at least a second portion of the enclosure to at least partly enclose an interior volume;
the first boss defines a first passage that extends through the first boss; and
the first passage defines an external aperture where the first passage opens into an environment external to the enclosure through an external surface of the first portion of the enclosure, and an internal aperture where the first passage opens into the interior volume;
an enclosure screw having an elongate threaded shaft portion and a head formed at a head end of the threaded shaft portion, wherein:
the enclosure screw is inserted into the first passage in an orientation that causes the head to extend toward the external aperture, and that causes a threaded end of the threaded shaft portion that is opposite the head end to extend through the internal aperture, into the interior volume and toward a second passage defined by a second boss of the second portion of the enclosure; and
the head is enlarged relative to the head end of the threaded shaft portion to provide an annular screw shoulder to engage an annular aperture shoulder within the first passage that surrounds the internal aperture to retain the head end of the enclosure screw within the first passage; and
a first threaded insert that is sized to engage interior surfaces of the first passage with a tight fit when inserted into the first passage,
wherein:
the first threaded insert is secured within the first passage at a location closer to the external aperture than the internal aperture, and with the head of the enclosure screw disposed between the first threaded insert and the aperture shoulder surrounding the internal aperture; and
the first threaded insert defines a threaded third passage that extends through the first threaded insert, and
wherein:
the third passage is configured to enable a tip of a tool to extend therethrough to engage the head of the enclosure screw to rotate the enclosure screw within the first passage; and
the third passage is configured to receive and engage an elongate threaded portion of a mounting screw inserted into the third passage from the environment external to the enclosure.

US Pat. No. 10,796,117

FIXED POSITION READER OF CODED INFORMATION AND CAMERA BASED CHECKOUT SYSTEM USING THE SAME

DATALOGIC IP TECH S.R.L.,...

1. A fixed position reader of coded information comprising:a housing provided with at least one of a horizontal reading window or a vertical reading window having a peripheral rim;
an optical code reading device disposed inside the housing and configured for reading coded information which generates a reading field projecting through the reading window toward the outside of the housing; and
a visual indication device disposed in the housing and configured for visually indicating a reading result to a user by projecting a light beam across an interior of the housing and configured for generating a visual indication causing an entire surface of the horizontal reading window or an entire surface of the vertical reading window to glow,
wherein the fixed position reader fixedly rests in a predetermined position and is an on-counter reader that rests on top of a surface or an in-counter reader that is integrated in a checkout counter and oriented so that its reading field projects from a checkout counter surface toward a user or operator.

US Pat. No. 10,796,116

SYSTEMS AND METHODS FOR PROCESSING OBJECTS INCLUDING SPACE EFFICIENT DISTRIBUTION STATIONS AND AUTOMATED OUTPUT PROCESSING

Berkshire Grey, Inc., Le...

1. A space efficient automated processing system for processing objects, said processing system comprising:an input conveyance system for moving objects from an input area in at least an input conveyance vector that includes an input conveyance horizontal direction component and an input conveyance vertically upward direction component;
a perception system for receiving objects from the input conveyance system and for providing perception data regarding an object in a perception vector that includes a vertically downward direction component;
a primary transport system for receiving the object from the perception system and for providing transport of the object along at least a primary transport vector including a primary transport horizontal direction component and a primary transport vertically upward direction component; and
at least two secondary transport systems, each of which is adapted to receive the object from the primary transport system along a path that includes a vertically downward direction component.

US Pat. No. 10,796,115

ACTIVITY TIMING SYSTEM

The Houston Wellness Proj...

1. An activity timing system comprising a radio frequency identification reader;
radio frequency tags;
a circuit board;
a first antenna at a beginning location of an activity and a second antenna at an end location of the activity;
a battery; and
a software on a flash memory device, wherein the software will run the activity timing system when a flash memory device is connected to the radio frequency identification reader;
wherein the radio frequency identification reader is queried to report its maximum return power and time stamps of the radio frequency tags are recorded with the maximum return power when the radio frequency tags reach the first antenna and the second antenna, wherein the time stamps are used to determine a start time of the activity and to determine a finish time of the activity respectively.
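
The timing computation the claim describes, a start time from the first-antenna read and a finish time from the second-antenna read, can be sketched as follows; the record layout, antenna names, and timestamp format are assumptions for illustration.

```python
# Minimal sketch: each tag read is a (tag_id, antenna, timestamp) record taken
# at the reader's maximum return power; the first-antenna read gives the start
# time, the second-antenna read the finish time, and their difference is the
# activity time.
from datetime import datetime

reads = [
    ("tag-17", "antenna-1", "2020-10-06 09:00:03"),  # beginning location
    ("tag-17", "antenna-2", "2020-10-06 09:42:51"),  # end location
]

def activity_times(reads):
    """Return the elapsed activity time per tag from start/finish reads."""
    starts, finishes = {}, {}
    for tag, antenna, stamp in reads:
        t = datetime.strptime(stamp, "%Y-%m-%d %H:%M:%S")
        if antenna == "antenna-1":
            starts[tag] = min(t, starts.get(tag, t))    # earliest start read
        else:
            finishes[tag] = max(t, finishes.get(tag, t))  # latest finish read
    return {tag: finishes[tag] - starts[tag]
            for tag in starts if tag in finishes}

print(activity_times(reads))
# -> {'tag-17': datetime.timedelta(seconds=2568)}
```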

US Pat. No. 10,796,114

SELF-IDENTIFYING PERSONAL PROTECTIVE DEVICE AND METHODS OF MONITORING THE SAME

Honeywell International I...

1. A system for monitoring a wireless communication enabled personnel protective equipment (PPE), the system comprising:a face piece having a wireless tag configured to store unique identification information,
wherein the unique identification information identifies a user associated with the wireless communication enabled PPE; and
a breathing apparatus corresponding to the wireless communication enabled PPE and configured to be coupled to the face piece, the breathing apparatus comprising:
a wireless tag reader, wherein the wireless tag reader is configured to:
transmit an interrogation signal on actuation of the breathing apparatus, after the breathing apparatus is coupled to the face piece, and
receive a response signal comprising the unique identification information from the wireless tag,
wherein the breathing apparatus is configured to transmit the unique identification information to a central monitoring station, and
wherein the breathing apparatus is configured to supply air to the face piece.
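
A minimal sketch of the interrogate-and-report sequence described above: on actuation the breathing apparatus interrogates the coupled face piece's tag and forwards the returned identification to the central monitoring station. The class and method names below are illustrative, not taken from the patent.

```python
# Minimal sketch of the PPE interrogation-and-report flow. All names are
# hypothetical stand-ins for the claimed components.

class FacePieceTag:
    """Wireless tag on the face piece storing the user's identification."""
    def __init__(self, unique_id: str):
        self.unique_id = unique_id

    def respond(self) -> str:
        """Answer an interrogation signal with the stored identification."""
        return self.unique_id

class CentralMonitoringStation:
    """Collects identifications reported by actuated breathing apparatuses."""
    def __init__(self):
        self.active_users = set()

    def report(self, user_id: str):
        self.active_users.add(user_id)

class BreathingApparatus:
    def __init__(self, face_piece_tag: FacePieceTag, station: CentralMonitoringStation):
        self.face_piece_tag = face_piece_tag  # coupled face piece
        self.station = station

    def actuate(self) -> str:
        """On actuation: interrogate the coupled face piece, then report."""
        user_id = self.face_piece_tag.respond()  # interrogation + response
        self.station.report(user_id)             # forward to central monitoring
        return user_id

station = CentralMonitoringStation()
apparatus = BreathingApparatus(FacePieceTag("PPE-USER-0042"), station)
apparatus.actuate()
print(station.active_users)  # {'PPE-USER-0042'}
```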

US Pat. No. 10,796,113

READER DEVICE AND TABLE WITH READER DEVICE

MURATA MANUFACTURING CO.,...

1. A reader device comprising:an antenna element configured to communicate with an RFID tag attached to an article, the antenna element including:
a first dipole antenna having a first element axis extending in a first direction; and
a second dipole antenna having a second element axis extending in a second direction that intersects the first direction;
a reader module electrically connected to the antenna element and configured to read information of the RFID tag via the antenna element; and
a case housing the antenna element and the reader module, with the case having a main surface having a longitudinal dimension and a lateral dimension that are greater than a thickness of the case, with the main surface being rectangular as viewed from a thickness direction of the case,
wherein the first element axis is disposed adjacent to a first side of the main surface of the case and extends along the first side, and the second element axis is disposed adjacent to a second side orthogonal to the first side of the main surface of the case and extends along the second side.

US Pat. No. 10,796,112

PROTOCOL LAYER COORDINATION OF WIRELESS ENERGY TRANSFER SYSTEMS

Teslonix Inc., Ottawa (C...

1. A method for protocol layer coordination of wireless energy transfer systems comprising:defining, by a master Internet of Things Access Point (IoTA), a set of configuration parameters, the master IoTA being one of a plurality of IoTAs, each IoTA comprising a controller in communication with a Power Access Point (PAP), an intercommunication radio and a Radio Frequency Identification (RFID) transceiver, the PAP configured to energize an RFID tag, the intercommunication radio configured to communicate between the master IoTA and a slave IoTA, and the RFID transceiver configured to communicate with the RFID tag;
transmitting, by the master IoTA, the set of configuration parameters;
configuring with the set of configuration parameters, in both the master IoTA and the slave IoTA, the respective PAP and the respective RFID transceiver;
transmitting with the intercommunication radio of the master IoTA, an RFID request; and
transmitting with the RFID transceiver of the slave IoTA, an RFID command in response to the slave IoTA receiving the RFID request.
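
A minimal sketch of the coordination sequence recited above: the master defines one parameter set, every IoTA applies it to its PAP and RFID transceiver, and an RFID request from the master is answered by an RFID command from the slave's transceiver. The parameter names and message shapes are assumptions for illustration.

```python
# Minimal sketch of master/slave IoTA coordination. Field names and the
# request/command strings are illustrative, not the patent's protocol.
from dataclasses import dataclass

@dataclass
class ConfigParameters:
    pap_power_dbm: float  # energizing power for the Power Access Point
    rfid_channel: int     # channel for the RFID transceiver
    session: int          # RFID inventory session

class IoTA:
    def __init__(self, name: str):
        self.name = name
        self.config = None

    def apply(self, config: ConfigParameters):
        """Configure the local PAP and RFID transceiver with the shared set."""
        self.config = config

    def handle_rfid_request(self, request: str) -> str:
        """Slave side: translate an RFID request received over the
        intercommunication radio into an RFID command to the tag population."""
        return (f"{self.name}: RFID command '{request}' "
                f"on channel {self.config.rfid_channel}")

# Master defines and distributes the configuration, then issues a request.
master, slave = IoTA("master"), IoTA("slave-1")
params = ConfigParameters(pap_power_dbm=30.0, rfid_channel=7, session=1)
for iota in (master, slave):
    iota.apply(params)
print(slave.handle_rfid_request("INVENTORY"))
```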

US Pat. No. 10,796,111

DETERMINING AN ENVIRONMENTAL CONDITION OF AN RFID TAG

RFMicron, Inc., Austin, ...

1. A method comprises:transmitting, by a radio frequency identification (RFID) reader, a first radio frequency (RF) signal of a plurality of RF signals, wherein each RF signal of the plurality of RF signals includes a unique carrier frequency and further includes an instruction to an RFID tag to respond with a received power level indication;
receiving, by the RFID reader, a first response from the RFID tag in response to a first RF signal of the plurality of RF signals, wherein the first response includes a first received power level indication, and wherein the first RF signal has a first carrier frequency;
transmitting, by the RFID reader, a second RF signal of the plurality of RF signals;
receiving, by the RFID reader, a second response from the RFID tag in response to the second RF signal of the plurality of RF signals, wherein the second response includes a second received power level indication, and wherein the second RF signal has a second carrier frequency;
determining, by the RFID reader, an estimated resonant frequency of the RFID tag based on the first and second received power level indications and the first and second carrier frequencies; and
determining, by the RFID reader, an environmental condition based on the estimated resonant frequency.
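
The estimate in this claim comes from pairing each carrier frequency with the tag's reported received power and locating where that power peaks. A minimal sketch follows, shown with a multi-point sweep for clarity, using a parabolic fit around the best sweep point and an illustrative resonance-shift-to-moisture mapping; both are assumptions, not the patent's method.

```python
# Minimal sketch: estimate the tag's resonant frequency from
# (carrier frequency, received-power indication) pairs, then map the
# resonance shift to an environmental condition (here, moisture).

def estimate_resonant_frequency(sweep):
    """Estimate resonance from (carrier_hz, power_indication) pairs by fitting
    a parabola through the strongest point and its two neighbors."""
    sweep = sorted(sweep)  # order by carrier frequency
    i = max(range(len(sweep)), key=lambda k: sweep[k][1])
    if i == 0 or i == len(sweep) - 1:
        return sweep[i][0]  # peak sits at the sweep edge
    (f0, p0), (f1, p1), (f2, p2) = sweep[i - 1], sweep[i], sweep[i + 1]
    denom = p0 - 2 * p1 + p2
    if denom == 0:
        return f1
    offset = 0.5 * (p0 - p2) / denom     # vertex of the fitted parabola
    return f1 + offset * (f2 - f0) / 2

def environmental_condition(resonant_hz, dry_resonance_hz=915e6,
                            hz_per_percent_moisture=-2e6):
    """Illustrative mapping: resonance shift (Hz) -> moisture (%)."""
    return (resonant_hz - dry_resonance_hz) / hz_per_percent_moisture

sweep = [(905e6, 10), (910e6, 18), (915e6, 22), (920e6, 17), (925e6, 9)]
f_res = estimate_resonant_frequency(sweep)
print(round(f_res / 1e6, 2), "MHz,",
      round(environmental_condition(f_res), 2), "% moisture")
# -> 914.72 MHz, 0.14 % moisture
```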