US Pat. No. 10,395,241

SYSTEM AND METHOD TO GENERATE AN ONBOARDING FLOW FOR AN APPLICATION

STRIPE, INC., San Franci...

1. A method for onboarding an application enabling a user to access services and/or service providers associated with a third party application server using the application, the method comprising:
receiving, by an on-boarding server, a request from an electronic device to activate the application, wherein the electronic device is associated with the user;
determining, by the on-boarding server, whether the request is an initial request, wherein the request is determined to be the initial request when a minimal set of information associated with the user is not stored in the on-boarding server; and
when the request is determined to be the initial request,
transmitting a signal, by a communications interface of the on-boarding server, to the electronic device causing the electronic device to display a graphical user interface for a request for the minimal set of information associated with the user, wherein the minimal set of information includes at least one of: a user identification, a device identification, a legal name, a phone number, or an email address,
receiving, by the communications interface of the on-boarding server, the minimal set of information associated with the user from data entered by the user in the graphical user interface,
storing, in a memory of the on-boarding server, the minimal set of information associated with the user,
transmitting a signal, by the communications interface of the on-boarding server, to the third party application server to allow the user initial access to the application, wherein signaling to allow the user initial access causes the third party application server to transmit a signal to at least one application provider device that the user requires access to payment processing hardware to process a user's physical payment instrument,
receiving, by the communications interface of the on-boarding server, a default payment information entered using the payment processing hardware to process a user's first physical payment instrument, and
storing in the memory of the on-boarding server, the default payment information in the on-boarding server.
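
A minimal Python sketch of the branching logic this claim describes: the on-boarding server treats a request as the initial request whenever no minimal information set is stored, asks for that set, and only then grants initial access. The class, field names, and in-memory dictionaries are illustrative assumptions, not Stripe's implementation.

    # Illustrative only: an in-memory stand-in for the on-boarding server's store.
    MINIMAL_FIELDS = {"user_id", "device_id", "legal_name", "phone", "email"}

    class OnboardingServer:
        def __init__(self):
            self.users = {}             # user identifier -> stored minimal information set
            self.default_payments = {}  # user identifier -> default payment information

        def handle_activation(self, user_id, submitted=None):
            # Decide the next step for an activation request from a device.
            if user_id not in self.users:                    # no minimal set stored: initial request
                if not submitted or not (MINIMAL_FIELDS & set(submitted)):
                    return {"action": "show_onboarding_form", "fields": sorted(MINIMAL_FIELDS)}
                self.users[user_id] = dict(submitted)        # store the minimal set
                return {"action": "grant_initial_access",
                        "notify_provider": "user needs payment processing hardware"}
            return {"action": "resume_session"}              # not an initial request

        def store_default_payment(self, user_id, payment_info):
            self.default_payments[user_id] = payment_info    # from the first physical card swipe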

US Pat. No. 10,395,240

COMPONENTS FOR ENHANCING OR AUGMENTING WEARABLE ACCESSORIES BY ADDING ELECTRONICS THERETO

NXT-ID, INC., Shelton, C...

1. A device comprising:
electronics components;
an enclosure for supporting the electronics components;
an accessory mount affixed to the enclosure and defining a gap between a surface of the accessory mount and a surface of the enclosure;
a transaction card disposed within the gap for interacting with the electronics components;
the enclosure further defining an opening, an interior-facing surface of the opening bounded by an upstanding wall defining grooves therein;
a button disposed in the opening and further comprising tabs each for receiving within one of the grooves; and
an electrical switch supported by the enclosure and proximate a rear surface of the button, wherein application of a force to a front surface of the button activates the electrical switch for controlling operation of the electronics components.

US Pat. No. 10,395,238

TWO STEP NEAR FIELD COMMUNICATION TRANSACTIONS

PAYPAL, INC., San Jose, ...

1. A method comprising:
detecting, by a first device, that a second device is within a proximity of the first device at a first time period through a first near field communication (NFC) link between the first device and the second device;
in response to the detecting during the first time period, activating an application on the first device, wherein the application displays a selectable option to process a monetary transfer to the second device;
in response to a selection of the selectable option, establishing a data connection between the first device and the second device through the first NFC link;
retrieving, by the first device, payment information corresponding to the second device through the data connection during the first time period;
further in response to the selection of the selectable option, generating, by the first device, a monetary transfer request for the monetary transfer from a first account associated with the first device to a second account associated with the second device based on the selectable option and the payment information;
detecting, by the first device, that the second device is again within the proximity of the first device at a second time period through a second NFC link, wherein the second time period occurs after completion of the first time period; and
in response to the first device detecting the second device through the second NFC link, transmitting the monetary transfer request by the first device to a payment provider to cause the payment provider to process the monetary transfer request.
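
A short sketch of the claimed two-tap sequence: the first NFC detection stages a monetary transfer request from the retrieved payment information, and the second detection releases it to the payment provider. Class, field names, and the provider callback are assumptions for illustration, not PayPal's API.

    class TwoTapTransfer:
        # Stage a transfer on the first tap, submit it on the second.
        def __init__(self, first_account):
            self.first_account = first_account
            self.pending = None

        def on_first_tap(self, peer_payment_info, amount):
            # First detection: payment info is pulled over the NFC data connection
            # and a transfer request is generated but not yet sent.
            self.pending = {"from": self.first_account,
                            "to": peer_payment_info["account"],
                            "amount": amount}
            return self.pending

        def on_second_tap(self, send_to_provider):
            # Second, later detection: hand the staged request to the payment provider.
            if self.pending is None:
                raise RuntimeError("no staged transfer; the first tap has not occurred")
            return send_to_provider(self.pending)

    transfer = TwoTapTransfer("acct-first-device")
    transfer.on_first_tap({"account": "acct-second-device"}, amount=12.50)
    print(transfer.on_second_tap(lambda req: f"submitted {req['amount']} to provider"))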

US Pat. No. 10,395,237

SYSTEMS AND METHODS FOR DYNAMIC PROXIMITY BASED E-COMMERCE TRANSACTIONS

AMERICAN EXPRESS TRAVEL R...

1. A method comprising:
uploading, by a merchant web-client, merchant content for a plurality of items offered for sale by a merchant,
wherein a transaction account of a customer is synched with a transaction account holder web-client to create a synched transaction account;
receiving, by the merchant web-client and from the transaction account holder web-client, a first signal using a low energy consuming device,
wherein the receiving is in response to the customer logging into an app on the transaction account holder web-client, and
wherein the first signal includes personal information associated with the customer and a micro-location of the transaction account holder web-client;
determining, by the merchant web-client, merchant content based upon the personal information associated with the customer;
updating, by the merchant web-client, the merchant content to create updated content while the transaction account holder web-client is located within the micro-location and based upon the micro-location of the transaction account holder web-client, new customer status, loyal customer status and time of day that the transaction account holder web-client is located within the micro-location;
transmitting, by the merchant web-client and to the transaction account holder web-client, an interactive item catalog of the plurality of items based on the updated content and offered for sale by the merchant while the transaction account holder web-client is located within the micro-location;
transmitting, by the merchant web-client, a second signal using the low energy consuming device,
wherein the second signal is received by the transaction account holder web-client associated with the customer while the transaction account holder web-client is located within the micro-location,
wherein the second signal carries the updated content associated with the merchant,
wherein the updated content comprises an advertisement for an item of the plurality of items offered for sale by the merchant,
wherein the advertisement is based on the updated content, and
wherein the merchant is associated with the merchant web-client;
receiving, by the merchant web-client and from the transaction account holder web-client, a response including a bid to purchase the item from the plurality of items,
wherein the response is transmitted by the transaction account holder web-client to the merchant web-client using the low energy consuming device;
selecting, by the merchant web-client, the bid from a plurality of bids based upon at least one of: a highest bid, a loyalty associated with the customer to the merchant, or a new customer status of the customer with the merchant;
notifying, by the merchant web-client, the transaction account holder web-client of winning the bid,
wherein the transaction account holder web-client authorizes a payment processor to pay for the item using the synched transaction account;
receiving, by the merchant web-client and from the payment processor, payment information and authentication details associated with the item,
wherein the payment processor charged an amount of the item to the synched transaction account;
providing, by the merchant web-client, the item to the customer in response to receiving the authentication details from the transaction account holder web-client; and
receiving, by the merchant web-client, feedback from the transaction account holder web-client using the low energy consuming device.
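
The bid-selection step lends itself to a compact sketch: the merchant picks a winner by highest bid, customer loyalty, or new-customer status. The dictionary fields and tie-breaking order below are assumptions, not the patented rules.

    def select_winning_bid(bids):
        # Prefer the highest amount, then loyal customers, then new customers.
        return max(bids, key=lambda b: (b["amount"], b.get("loyal", False), b.get("new_customer", False)))

    bids = [
        {"customer": "a", "amount": 19.00, "loyal": True, "new_customer": False},
        {"customer": "b", "amount": 21.50, "loyal": False, "new_customer": True},
    ]
    print(select_winning_bid(bids)["customer"])   # -> "b"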

US Pat. No. 10,395,236

MOBILE TERMINAL AND METHOD FOR CONTROLLING THE SAME

LG ELECTRONICS INC., Seo...

1. A mobile terminal comprising:
a display; and
a controller configured to:
execute a specific application related to a payment, wherein a plurality of payment cards are associated with the specific application;
change the terminal to a payment ready state and cause the display to display a selected payment card of the plurality of payment cards; and
receive a specific input in the payment ready state;
determine whether the received specific input is a first input or a second input;
generate one-time payment information and a token value and perform the payment when the specific input is determined to be the first input; and
change the terminal to a payment waiting state and cause the display to change the displayed payment card to a specific indicator when the specific input is determined to be the second input.
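
A brief sketch of the controller's dispatch between the two inputs: a first input produces one-time payment information and a token and performs the payment, while a second input moves the terminal to a payment waiting state. The token generation and state names are placeholders, not LG's implementation.

    import secrets

    def handle_payment_ready_input(specific_input):
        if specific_input == "first":
            return {"state": "payment_performed",
                    "one_time_payment_info": secrets.token_hex(8),   # stand-in for one-time data
                    "token": secrets.token_urlsafe(16)}              # stand-in for a token value
        if specific_input == "second":
            return {"state": "payment_waiting", "display": "specific_indicator"}
        return {"state": "payment_ready"}   # any other input leaves the ready state unchanged

    print(handle_payment_ready_input("second")["state"])   # payment_waiting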

US Pat. No. 10,395,235

SMART MOBILE APPLICATION FOR E-COMMERCE APPLICATIONS

International Business Ma...

1. A method comprising:
requesting, by one or more computer processors, monitoring one or more operating systems of one or more mobile computing devices of a user;
sending, by the one or more computer processors, a request from the user for a mobile payment to a payment gateway;
determining automatically, by the one or more computer processors, an event indicating a disruption has occurred on the one or more mobile computing devices of the user based on monitoring the one or more mobile computing devices of the user, wherein the event is a notification causing an interruption to processing of the request for the mobile payment;
responsive to determining automatically the event indicating the disruption has occurred on the one or more mobile computing devices of the user based on monitoring the one or more operating systems of the one or more mobile computing devices of the user, sending, by the one or more computer processors, a request for additional transaction time to input information for the mobile payment;
responsive to receiving an approval of the request for the additional transaction time to input information, creating, by the one or more computer processors, an alert to the user to complete the mobile payment within the approved additional transaction time;
transmitting, by the one or more computer processors, the alert to the user;
responsive to receiving a response to the transmitted alert that includes information to complete the mobile payment, inputting, by the one or more processors, the information to complete the mobile payment; and
transmitting, by the one or more processors, the mobile payment.
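
A small sketch of the disruption-handling loop the claim walks through: a notification that interrupts payment entry triggers a request for additional transaction time, and an approval produces an alert to finish within the extended window. The gateway callback and the 90-second extension are assumptions.

    def handle_payment_session(events, request_extra_time, base_timeout=60):
        deadline = base_timeout
        for event in events:
            if event.get("type") == "notification" and event.get("interrupts_payment"):
                if request_extra_time(seconds=90):          # hypothetical payment-gateway call
                    deadline += 90
                    print(f"Alert: complete the mobile payment within {deadline} seconds")
        return deadline

    # Stubbed gateway that always approves the extension.
    handle_payment_session([{"type": "notification", "interrupts_payment": True}],
                           request_extra_time=lambda seconds: True)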

US Pat. No. 10,395,233

MOBILE TERMINAL AND METHOD FOR CONTROLLING THE SAME

LG ELECTRONICS INC., Seo...

1. A mobile terminal optimized for reducing power consumption, comprising:
a body having a front side, a lateral side, and a rear side;
a wireless communication unit located within the body;
a display having a first region located at the front side and a second region adjacent to the first region and extending to the lateral side; and
a controller configured to:
deactivate the first region and the second region;
activate the second region and display an object corresponding to a preset payment method in the second region based on data received from an external payment server via the wireless communication unit;
execute payment using the preset payment method in response to the mobile terminal being in proximity to an external payment terminal in a state where the object is displayed on the activated second region and the first region is in an inactive state;
based on completion of the payment, display a message indicating that the payment has been completed in the activated second region and maintain the first region in the inactive state;
when the payment has failed, activate the deactivated first region and display a message indicating the failure of the payment in the activated first region;
identify a consumption type and a plurality of payment methods corresponding to the consumption type based on a current location of the mobile terminal, wherein the consumption type is a type of product or service that can be purchased; and
cause the display to display in the first region, which has been switched to an active state, a plurality of objects corresponding to the plurality of payment methods associated with the identified consumption type, wherein the plurality of objects is displayed sequentially according to criteria based on payment history information;
in response to the sensing of a gesture for shaking the mobile terminal in a state where the object is displayed on the activated second region and the first region is in an inactive state:
activate the deactivated first region; and
display an execution screen of a payment application in the activated first region.

US Pat. No. 10,395,232

METHODS FOR ENABLING MOBILE PAYMENTS

CA, Inc., New York, NY (...

1. A method using a mobile computing device that includes a hardware memory, a hardware processor, and an image sensor, the method comprising:
acquiring, by the mobile computing device, a webpage associated with an online transaction from a server in communication with the mobile computing device, wherein the webpage comprises a set of data entry fields;
capturing, by the image sensor of the mobile computing device, a graphical image;
storing, in the hardware memory, the captured graphical image;
extracting, by the hardware processor, from the stored graphical image an encrypted set of data and a software key container;
acquiring, by the hardware processor, a personal code associated with an end user of the mobile computing device;
generating, by the hardware processor, a decryption key using the extracted software key container and the acquired personal code;
decrypting, by the hardware processor, the encrypted set of data using the decryption key;
generating, by the hardware processor, a second set of data from the decrypted set of data;
storing, in the hardware memory, the second set of data and the personal code;
populating, by the hardware processor, the set of data entry fields with the second set of data;
transmitting, by the hardware processor, the set of data entry fields populated with the second set of data from the mobile computing device to the server serving the webpage; and
deleting, by the hardware processor, the second set of data and the personal code from the hardware memory subsequent to transmission of the second set of data from the mobile computing device to the server and prior to completion of the online transaction.
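
A sketch of the claimed key-derivation and form-filling flow: the scanned image yields an encrypted blob and a software key container, the user's personal code plus the container derive a decryption key, and the decrypted values populate the webpage's entry fields before being wiped. It assumes the third-party cryptography package for Fernet encryption; the payload layout and field names are invented for the example and are not CA's implementation.

    import base64, hashlib, json
    from cryptography.fernet import Fernet   # assumed dependency, not part of the claim

    def derive_key(key_container, personal_code):
        raw = hashlib.pbkdf2_hmac("sha256", personal_code.encode(), key_container, 100_000)
        return base64.urlsafe_b64encode(raw)              # Fernet expects a base64 32-byte key

    def fill_form(image_payload, personal_code, entry_fields):
        key = derive_key(image_payload["key_container"], personal_code)
        decrypted = json.loads(Fernet(key).decrypt(image_payload["ciphertext"]))
        entry_fields.update({k: decrypted.get(k, "") for k in entry_fields})
        del key, decrypted                                # discard sensitive material after use
        return entry_fields

    # Round-trip demo with a made-up payload standing in for the captured graphical image.
    container = b"salt-from-scanned-image"
    payload = {"key_container": container,
               "ciphertext": Fernet(derive_key(container, "1234")).encrypt(
                   json.dumps({"card": "4111 1111 1111 1111"}).encode())}
    print(fill_form(payload, "1234", {"card": "", "name": ""}))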

US Pat. No. 10,395,231

METHODS, SYSTEMS, APPARATUSES, AND NON-TRANSITORY COMPUTER READABLE MEDIA FOR VALIDATING ENCODED INFORMATION

Altria Client Services LL...

1. A formatting device for validating encoded information, the device comprising:
an input-output (I/O) interface configured to receive encoded information from a connected scanning device;
a memory having stored thereon computer readable instructions; and
at least one processor configured to execute the computer readable instructions to,
format the received encoded information into formatted data compatible with a point-of-sale (POS) terminal,
classify the formatted data into at least one classification layer of a plurality of classification layers in accordance with attributes associated with the received encoded information and a plurality of matching rules stored in the memory, each of the plurality of classification layers associated with an encoded information type of a plurality of encoded information types, respectively, and the plurality of matching rules associated with a plurality of destinations to which to transmit the formatted data,
determine a destination from the plurality of destinations to which to transmit the formatted data for processing of the formatted data based on the classification layer, the formatted data including metadata associated with the received encoded information and token information, and
transmit the formatted data to the determined destination; and
a housing including the I/O interface, the memory, and the at least one processor, the I/O interface being a USB interface, and the housing configured to physically connect to the connected scanning device and the POS terminal using the USB interface.
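
A compact sketch of the classify-and-route step: attributes of the received encoded information are matched against stored rules, which assign a classification layer and a destination. The rule contents, layer names, and destinations are made up for the example; only the matching pattern follows the claim.

    MATCHING_RULES = [
        {"when": {"symbology": "PDF417"}, "layer": "identity_document", "destination": "validation_service"},
        {"when": {"symbology": "UPC-A"},  "layer": "product_code",      "destination": "pos_terminal"},
    ]

    def classify_and_route(encoded):
        for rule in MATCHING_RULES:
            if all(encoded.get(k) == v for k, v in rule["when"].items()):
                return rule["layer"], rule["destination"]
        return "unclassified", "pos_terminal"        # default: pass the formatted data to the POS

    print(classify_and_route({"symbology": "PDF417", "data": "..."}))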

US Pat. No. 10,395,230

SYSTEMS AND METHODS FOR THE SECURE ENTRY AND AUTHENTICATION OF CONFIDENTIAL ACCESS CODES FOR ACCESS TO A USER DEVICE

Capital One Services, LLC...

1. A user device for providing secure entry of a confidential access code, comprising:
a user interface;
one or more memories storing instructions; and
one or more processors configured to execute the instructions to perform operations comprising:
receiving, from a user through the user interface, a request for confidential access;
prompting the user, via the user interface, to enter a group of inputs into a single-entry field;
receiving a group of inputs from the user device, the received group comprising first, second, and third sequences of inputs, wherein there is no predefined number of inputs in the first sequence of inputs;
parsing the received group of inputs to identify the second sequence of inputs as an indicator sequence of inputs, the indicator sequence of inputs being a specific sequence of inputs associated with the user;
identifying the access sequence of inputs, based on the indicator sequence of inputs;
comparing the access sequence of inputs with a confidential access code associated with the user;
when the compared access sequence of inputs matches the confidential access code, granting access to the user device; and
when the compared access sequence of inputs does not match the confidential access code, denying access to the user device.
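
The parsing step can be sketched in a few lines: an indicator sequence known to the user marks where the real access code begins, so any arbitrary-length decoy prefix is ignored before the comparison. The fixed-length code and the string representation are simplifying assumptions.

    def check_access(group, indicator, confidential_code):
        pos = group.find(indicator)
        if pos == -1:
            return False                              # indicator sequence not present
        start = pos + len(indicator)
        access_sequence = group[start:start + len(confidential_code)]
        return access_sequence == confidential_code

    # Decoy prefix "8724" is ignored; "99" is the user's indicator; "1234" is the code.
    print(check_access("87249912345", indicator="99", confidential_code="1234"))   # True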

US Pat. No. 10,395,229

SYSTEM FOR TRANSMITTING ELECTRONIC RECEIPT

Toshiba Tec Kabushiki Kai...

7. A method for transmitting an electronic receipt, the method comprising:
recording with an electronic receipt server the electronic receipt including transaction information regarding a sale of goods and settlement data;
performing communication with the electronic receipt server by way of the Internet with a portable terminal used by a purchaser;
processing the sale of goods with a point of sale terminal included with a settlement processing apparatus;
executing instructions stored in a memory of the settlement processing apparatus with a processor of the settlement processing apparatus to perform the following operations:
generating electronic-receipt data based on a result of processing a merchandise sale;
generating simplified settlement data based on the electronic receipt data, the simplified settlement data is derived from the settlement data, the simplified settlement data including a shop name, a transaction date, and a total transaction price;
generating ID data for downloading the electronic receipt data, the ID data is for generating an address indicating a region of the electronic receipt server in which the electronic receipt is recorded;
transmitting the electronic receipt data and the ID data to the electronic receipt server; and
transmitting the simplified settlement data and the ID data to the portable terminal;
executing instructions stored in a memory of the portable terminal with a processor of the portable terminal to perform the following operations:
receiving simplified settlement data and ID data from the settlement processing apparatus;
recording the simplified settlement data in association with the ID data received from the settlement processing apparatus;
displaying on a display unit the simplified data recorded;
generating a download command for downloading the electronic receipt related to the simplified settlement data displayed by the display unit;
generating the address indicating the region of the electronic receipt server in which the electronic receipt is recorded from the ID data recorded in association with the simplified settlement data;
recording the electronic receipt downloaded from the electronic receipt server; and
displaying the electronic receipt based on the electronic receipt data recorded.
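
A short sketch of the split the claim describes between full electronic-receipt data and the simplified settlement data sent to the portable terminal, with ID data that later resolves to the server region holding the full receipt. The URL scheme and field names are assumptions.

    import uuid

    def make_receipt_records(receipt, server_base="https://receipts.example/"):
        receipt_id = uuid.uuid4().hex                 # ID data used to locate the stored receipt
        simplified = {"shop": receipt["shop"],        # simplified settlement data for the terminal
                      "date": receipt["date"],
                      "total": receipt["total"],
                      "id": receipt_id}
        download_url = server_base + receipt_id       # address derived from the ID data
        return simplified, download_url

    simplified, url = make_receipt_records({"shop": "Store 12", "date": "2019-08-27", "total": 1480})
    print(simplified["total"], url)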

US Pat. No. 10,395,226

MAINTAINING SECURE ACCESS TO A SELF-SERVICE TERMINAL (SST)

NCR Corporation, Atlanta...

1. A method of maintaining secure access to a Self-Service Terminal (SST), comprising:
detecting, by a SST, a secure device presented thereto, wherein detecting further includes recognizing, by the SST, the secure device connected to the SST through a Universal Serial Bus (USB) port and recognizing the secure device as a USB key dongle that is a portable memory device, and wherein detecting further includes performing a cryptographic authentication on the USB key dongle before granting the USB key dongle access to the SST;
obtaining, by the SST, a list from the secure device relating to additional secure devices that are to be denied access to the SST, deactivated on the SST, and associated with invalid secure devices that are not allowed access to the SST, wherein obtaining the list further includes obtaining from the list, device identifiers associated with the additional secure devices, wherein each device identifier is a device serial number for a particular one of the additional secure devices, and wherein each device identifier in the list includes a modifiable attribute representing an expiration date, and wherein the additional secure devices are additional USB key dongles;
determining, by the SST, whether existing secure device information at the SST that represents invalid secure device identifiers is to be updated with the list having the device serial numbers and the corresponding expiration dates, and updating the existing secure device information at the SST with the list when the list is more recent than the existing secure device information, wherein determining further includes calculating each expiration date when processing the updating for each device identifier based on an issuance date and a time-to-live attribute; and
processing the method, by the SST, without the SST having a network connection.
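
A minimal sketch of the expiration handling described for the dongle list: each blocked serial number carries an issuance date and a time-to-live attribute, from which the SST computes the expiration date while applying the update, all without a network connection. The tuple layout is an assumption.

    from datetime import date, timedelta

    def expand_blocklist(entries, today=None):
        today = today or date.today()
        expanded = []
        for serial, issued, ttl_days in entries:
            expires = issued + timedelta(days=ttl_days)     # expiration from issuance date + TTL
            expanded.append({"serial": serial, "expires": expires, "blocked": expires >= today})
        return expanded

    print(expand_blocklist([("SN-0042", date(2019, 1, 1), 365)], today=date(2019, 6, 1)))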

US Pat. No. 10,395,222

INFORMATION DISPLAY METHOD, INFORMATION DISPLAY APPARATUS, INFORMATION DISPLAY SYSTEM, AND NON-TRANSITORY COMPUTER READABLE STORAGE MEDIUM

Yokogawa Electric Corpora...

1. An information display method comprising:
inputting, using an input device, a work information for identifying a maintenance work to be conducted in a plant;
identifying, by a processor using master data, a maintenance target device which is a target of the maintenance work and a peripheral device which relates to the maintenance target device based on the work information which has been input by the input device; and
displaying, by a display, a set of device-state-related information generated by a field device disposed in the maintenance target device identified by the processor, and a set of device-state-related information generated by a field device disposed in the peripheral device identified by the processor.

US Pat. No. 10,395,221

PROVIDING REWARDS TO ENCOURAGE DEVICE CARE

Amazon Technologies, Inc....

4. An electronic device comprising:
a display;
one or more sensors;
one or more processors able to receive sensor information from the one or more sensors;
one or more computer-readable media; and
processor-executable instructions maintained on the one or more computer-readable media which, when executed by the one or more processors, program the one or more processors to:
receive the sensor information from the one or more sensors, the sensor information representing an amount of at least one of: acceleration or moisture;
determine that a portion of the sensor information indicates an occurrence of a physical event involving the electronic device, the physical event comprising at least one of a fall event or a moisture event;
in response to determining that the first portion of the sensor information indicates the occurrence of the physical event, cause a sampling rate of the one or more sensors to increase from a first sampling rate to a second sampling rate;
receive, from the one or more sensors, additional sensor information collected at the second sampling rate;
determine device information, based at least in part on the additional sensor information, indicating that the amount has not exceeded at least one of an acceleration threshold or a moisture threshold for a period of time;
send, to a remote computing device, the device information; and
present, on the display, an indication of a reward.
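
A small sketch of the sampling-rate escalation in the claim: a reading above the acceleration threshold looks like a fall, so the device samples faster until the readings stay below the threshold for a period, after which the device information could be reported. Thresholds, rates, and the calm-period length are invented values.

    def monitor(samples, accel_limit=3.0, calm_needed=5):
        rate, calm = 1, 0                         # start at the low sampling rate
        for accel in samples:
            if accel > accel_limit and rate == 1:
                rate, calm = 10, 0                # suspected fall: raise the sampling rate
            elif rate == 10:
                calm = calm + 1 if accel <= accel_limit else 0
                if calm >= calm_needed:
                    return {"event": "fall_suspected", "settled": True, "rate": rate}
        return {"event": None, "settled": False, "rate": rate}

    print(monitor([0.5, 4.2, 0.3, 0.2, 0.4, 0.1, 0.2]))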

US Pat. No. 10,395,220

AUTO-GENERATION OF ACTIONS OF A COLLABORATIVE MEETING

International Business Ma...

1. A method for identifying and initiating actions of a meeting, the method comprising:
monitoring, by one or more computer processors, a meeting, wherein monitoring the meeting includes receiving input from at least a first computing device;
identifying, by one or more computer processors, a plurality of metadata triggers associated with the received input of meeting;
identifying, by one or more computer processors, an occurrence of a first metadata trigger of the plurality of metadata triggers associated with the received input of the meeting;
analyzing, by one or more computer processors, a first portion of the received input of the meeting that includes an occurrence of the first metadata trigger, wherein analyzing the first portion of the received input includes identifying a first action;
determining, by one or more computer processors, a response criterion of the first metadata trigger;
responsive to determining that the first action includes a response criterion of delayed post-meeting action, including, by one or more processors, the first action in a queue of post-meeting actions that initiate in response to determining that the meeting ends; and
responsive to determining that the first metadata trigger includes a response criterion indicating immediate action, initiating, by one or more computer processors, the first action.
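
The two response criteria at the end of the claim map naturally onto a tiny dispatcher: immediate actions run as soon as the trigger is analyzed, delayed actions wait in a queue until the meeting ends. The criterion strings are placeholders.

    post_meeting_queue = []

    def handle_action(action, response_criterion):
        if response_criterion == "immediate":
            return f"initiated now: {action}"
        post_meeting_queue.append(action)         # deferred until the meeting is determined to end
        return f"queued: {action}"

    def on_meeting_end():
        while post_meeting_queue:
            print("initiating", post_meeting_queue.pop(0))

    print(handle_action("send follow-up survey", "delayed_post_meeting"))
    print(handle_action("share the document now", "immediate"))
    on_meeting_end()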

US Pat. No. 10,395,219

LOCATION POLICIES FOR RESERVED VIRTUAL MACHINE INSTANCES

Amazon Technologies, Inc....

1. A system, comprising:
one or more first computing devices configured to implement a user interface, a capacity management service, and a placement service in a provider network;
wherein the user interface is configured to:
receive a customer-specified reservation for a reserved unlaunched virtual machine instance, the reservation being for a predetermined period of time during which the reserved unlaunched virtual machine instance can be launched and terminated as requested by the customer; and
receive a customer-specified location policy for the reservation, the location policy including a customer-provided placement requirement as to which of a second plurality of computing devices is to be used to host the reserved unlaunched virtual machine instance, the location policy including at least one of an instance proximity requirement which indicates a closeness variable that indicates which of the second plurality of computing devices are to be used to launch the reserved unlaunched virtual machine instance and a cotenant requirement which indicates a characteristic of another customer;
wherein the capacity management service is configured prior to launching the reserved unlaunched virtual machine instance to determine that sufficient capacity does not exist on the second plurality of computing devices to execute the reserved unlaunched virtual machine instance of the reservation in compliance with the location policy and to reconfigure the provider network to make sufficient capacity available in compliance with the location policy;
wherein the placement service is configured to determine on which of the second plurality of computing devices to launch the reserved unlaunched virtual machine instance in compliance with the location policy and to launch the reserved unlaunched virtual machine instance on the determined computing device in response to the reconfiguration; and
wherein the one or more first computing devices is different than the second plurality of computing devices.

US Pat. No. 10,395,218

BENEFIT PLAN DESIGNER

Oracle International Corp...

1. A non-transitory computer-readable storage medium storing instructions which, when executed by one or more processors, causes the one or more processors to perform operations comprising:
displaying, in a canvas area of a graphical user interface, a first plurality of first related plan objects, the first related plan objects being related to each other through a first multi-level hierarchical relationship visually represented in the canvas area;
receiving a first user input selecting a first eligibility object from a palette area of the graphical user interface;
receiving a second user input positioning the selected first eligibility object in the canvas area;
determining user positioning of the selected first eligibility object on the graphical user interface defining a screen position of the selected first eligibility object based on the second user input;
determining which of the first plurality of first related plan objects will inherit and define first requirements of the selected first eligibility object by comparing the screen position of the selected first eligibility object to a screen location of the first multi-level hierarchical relationship of the first plurality of first related plan objects;
in response to the second user input positioning the selected first eligibility object to the screen position adjacent to a first particular plan object among the first plurality of first related plan objects in the canvas area, selecting the first particular plan object and creating a first association of the selected first particular plan object with the selected first eligibility object; and
in response to the first association of the selected first eligibility object with the selected first particular plan object being created, visually representing the first association in the canvas area and causing child plan objects of the selected first particular plan object among the first plurality of first related plan objects to inherit the defined first requirements based on the first multi-level hierarchical relationship between the selected first particular plan object and the child plan objects.

US Pat. No. 10,395,216

COMPUTER-BASED METHOD AND SYSTEM OF ANALYZING, EDITING AND IMPROVING CONTENT

1. A method for providing proposition-based content for review within a collaborative on-line environment, the method comprising:
retrieving base vocabulary elements maintained in an ontology data store having a tree structure that imposes one or more restrictions on representations of argument components and any relations there between, the base vocabulary elements in the ontology data store including a plurality of claims represented as a root in the tree structure and encompassing a statement of conclusions for which other statements are provided as support to indicate truth thereof, premises representative of the truth of a claim and represented as nodes in the tree structure, and warrants setting forth logical rules and represented as edges in the tree structure, wherein the edges connect the claims and the premises;
creating a structural representation of the retrieved base vocabulary elements whereby the structural representation can be delivered over communications to one or more users such that a set of propositional content available for a first argument may be visually displayed to the one or more users in an on-line collaborative environment as a plurality of statement elements within a user interface at one or more client systems utilized by the one or more users, each of the statement elements being one of a plurality of statement types including premise, warrant, and claim, each of the statement elements having a respective associated state;
constructing a logical argument object for the first argument responsive to a first set of input received from a user from the one or more users, the first set of input defining the first argument according to a specified argument type to include one or more premises and one or more warrants of the statement elements, a first claim of the statement elements, and a plurality of interconnections defining logical relations between the one or more premises, the one or more warrants, and the first claim according to respective logical rules for the one or more warrants such that the respective associated state of the first claim is dependent upon the respective associated states of the one or more premises and the one or more warrants and the logical relations defined by the interconnections of the first argument;
executing instructions stored in memory by way of a processing device whereby the logical argument object for the first argument is analyzed, thus determining the respective associated state of the first claim based on the respective associated states of the one or more premises and the one or more warrants and the logical relations defined by the interconnections in the first argument; and
generating a structured argument model representation of the logical argument object for the first argument;
transmitting the structured argument model representation to the one or more users in the on-line collaborative environment whereby the structured argument model representation is visually displayed to the one or more users in the on-line collaborative environment to provide an indication of each of the one or more premises, the one or more warrants, the first claim, the interconnections between the one or more premises, the one or more warrants, and the first claim, and the respective associated state of each of the one or more premises, the one or more warrants, and the first claim; and
updating the tree structure of the ontology data store and any restrictions on the representations of argument components and any relations there between as a result of interaction with the logical argument object and structured argument model by the one or more users.
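
The propagation of states up the argument tree can be sketched as a short recursive evaluation: a claim's state follows from its premises' states combined under the warrant's logical rule. Only "all"/"any" warrants are modeled here, which is a simplification of the ontology the claim describes.

    def evaluate_claim(argument):
        # An argument is {"warrant": "all" | "any", "premises": [...]}; a premise is either
        # a plain boolean state or a nested argument evaluated the same way.
        rule = all if argument.get("warrant", "all") == "all" else any
        states = [evaluate_claim(p) if isinstance(p, dict) else bool(p)
                  for p in argument["premises"]]
        return rule(states)

    claim = {"warrant": "all",
             "premises": [True, {"warrant": "any", "premises": [False, True]}]}
    print(evaluate_claim(claim))   # True: both top-level supports hold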

US Pat. No. 10,395,215

INTERPRETATION OF STATISTICAL RESULTS

International Business Ma...

1. A method, comprising:
generating, with a processor of a computer, an interestingness index for each field of fields in a dataset, wherein the interestingness index provides a summary and a ranking of the field;
receiving, with the processor of the computer, multiple sets of statistical results generated for the dataset, wherein the multiple sets of statistical results comprise univariate statistics ordered according to a decreasing order of a first interestingness index and bivariate statistics for each pair of the fields ordered according to a decreasing order of a second interestingness index;
generating, with the processor of the computer, a hierarchy of first insights based on a template for each type of statistical result of the multiple sets of statistical results, wherein the first insights provide relationships between the fields in plain language, and wherein a top level of the hierarchy provides a general insight and is associated with a first visualization, wherein a lower level of the hierarchy provides technical information and is associated with a second visualization to enable confirmation of the general insight, and wherein the type of statistical result comprises one of the univariate statistics and the bivariate statistics;
identifying, with the processor of the computer, relationships between the first insights in the hierarchy to generate second insights comprising key findings;
displaying, with the processor of the computer, an executive summary that highlights the key findings across multiple analytic techniques based on the identified relationships, wherein the executive summary includes 1) dataset characteristics for the fields in the data set displayed in a first portion of the executive summary, 2) analytic techniques used to generate the executive summary displayed in a second portion of the executive summary, 3) a subset of the first insights displayed in a third portion of the executive summary, 4) the key findings displayed in a fourth portion of the executive summary, and 5) the first visualization displayed in a fifth portion of the executive summary;
displaying, with the processor of the computer, a first interactive visualization based on the executive summary, wherein the first interactive visualization includes 1) a list of fields with one or more selected fields displayed in a first portion of the first interactive visualization, 2) a visualization for one or more of the selected fields displayed in a second portion of the first interactive visualization, and 3) a plain language insight selected from a plurality of plain language insights associated with the visualization displayed in a third portion of the first interactive visualization;
in response to selection of a different plain language insight of the plurality of plain language insights, dynamically changing, with the processor of the computer, the visualization to include graphical annotations that depict the different plain language insight; and
in response to selection of one or more different fields from the list of fields, displaying, with the processor of the computer, a second interactive visualization with another visualization for the selected one or more different fields and another plain language insight associated with the another visualization.

US Pat. No. 10,395,214

METHOD FOR AUTOMATICALLY CREATING A CUSTOMIZED LIFE STORY FOR ANOTHER

1. A method of manufacturing a book encompassing a customized life story comprising the steps of:
presenting to a subject specific pre-determined interview questions;
electronically recording, on a recording device, oral responses of the subject to said specific interview questions;
a computer converting said electronically recorded oral responses of the subject into a transcription;
the computer capturing one or more physical items into one or more electronic images;
automatically organizing, using a computer, said transcription and said electronic images into a draft manuscript;
providing the draft manuscript to the subject for review by the subject;
receiving editorial changes to said draft manuscript from the subject for use in creating a final manuscript;
choosing one of said electronic images for use on a cover or dust jacket; and
printing at least one physical copy of the final manuscript as a physical book.

US Pat. No. 10,395,213

SYSTEM AND METHOD FOR A COLLABORATIVE INFORMATION TECHNOLOGY GOVERNANCE

INTERNATIONAL BUSINESS MA...

1. A system comprising:
a computer infrastructure which comprises a computing device including a processor and a memory which includes a situational environment technology governance (SEIG) tool, the computer infrastructure being configured to:
provide a field for entry of one or more questions in an entry screen which is provided by the SEIG tool in order to facilitate communications with one or more of a user, a subject matter expert, a stakeholder, and a decision maker;
receive a selection of the one or more of the user, the subject matter expert, the stakeholder, and the decision maker using the SEIG tool;
initiate an invitation to the selected one or more of the user, the subject matter expert, the stakeholder, and the decision maker using the SEIG tool; and
allow collaboration between the selected one or more of the user, the subject matter expert, the stakeholder, and the decision maker using a collaborative technology of the SEIG tool,
wherein the receiving the selection of the one or more of the user, the subject matter expert, the stakeholder, and the decision maker using the SEIG tool includes receiving a selection of one or more teams from a plurality of teams via the SEIG tool,
wherein a landing page interface of the SEIG tool comprises a virtual representation which includes a graphical user interface (GUI) comprising graphical elements of the selected one or more teams using the SEIG tool, a design order identifier associated with each of the graphical elements of the selected one or more teams of the plurality of teams, a graphical link which allows the user to be taken to one of a social networking site, blog, and java applet to input situations and collaborate with key stakeholders when selected, and a status indicator which is a circular graphical element that is filled when at least one team member of the one or more teams is online and available for communication and is unfilled when no team member of the one or more teams is online and is available for communication,
wherein the design order identifier associated with each of the selected one or more teams indicates an order in which a design flow occurs for each of the selected one or more teams,
wherein the SEIG tool collaborating between the one or more of the user, the subject matter expert, the stakeholder, and the decision maker utilizes a plurality of collaboration tools which include instant messaging, teleconferencing, video conferencing, white board, and wikis, and
wherein the SEIG tool comprises a social tagging tool which categorizes content that is used in the collaborating between the one or more of user, the subject matter expert, the stakeholder, and the decision maker,
wherein the SEIG tool is a web client application that provides the GUI which includes the field of entry, links, and interfaces to one or more of the plurality of collaboration tools,
wherein the graphical elements of the selected one or more teams in the landing page interface comprise a link to a separate subject matter experts (SME) page which includes a list of questions and answers to the list of questions to show whether a situation has been previously addressed,
wherein the collaborative technology of the SEIG tool includes a chat session between the selected one or more of the user, the subject matter expert, the stakeholder, and the decision maker, graphical elements of the selected one or more teams using the SEIG tool, the status indicator which is the circular graphical element that is filled when at least one team member of the one or more teams is online and available for communication and is unfilled when no team member of the one or more teams is online and is available for communication, a section including additional information such as at least one of links to pages, links to tools, common questions and answers, and a collaborate now feature to schedule meetings, track participation, and record participation.

US Pat. No. 10,395,212

HEADS UP DISPLAY FOR MATERIAL HANDLING SYSTEMS

Dematic Corp., Grand Rap...

1. A method for more efficiently managing, with a portable computing device, containers and associated container information in a warehouse system, the method comprising:
identifying, with a scanner, a container identification (ID) of a container in a warehouse, wherein the scanner is communicatively coupled to a portable computing device;
sending, with the portable computing device, the container ID to a warehouse server via a network;
receiving container information at the portable computing device from the warehouse server in response to the container ID and communicating the container information to a heads up display communicatively coupled to the portable computing device;
displaying, with the heads up display, informational content received from the portable computing device;
delivering the container to a target destination in the warehouse, wherein the target destination is included in the informational content and is based in part on the container information for the container; and
initiating, with the heads up display, a container delivery confirmation for the warehouse server when the container is delivered to the target destination, wherein, in response to the heads up display, the portable computing device sends the container delivery confirmation to the warehouse server at the time of delivery to the target destination, and wherein the warehouse server updates the container information based upon the container delivery confirmation.

US Pat. No. 10,395,211

APPARATUS FOR AUTOMATED MONITORING AND MANAGING OF INVENTORY

Frito-Lay North America, ...

1. An apparatus for storing product packages and monitoring inventory comprising:
a shelf comprising a product support, wherein the product support is configured to support a plurality of product packages;
a detector associated with the shelf, the detector configured for detecting automatically and in real time a lateral displacement of one of the plurality of product packages on the product support;
a transmitter configured to electronically communicate detected data about product packages on the shelf in real time, the data including the lateral displacement of product packages on the product support and an identity of the product packages; and
a harvesting device in real time data communication with the transmitter and with downstream vending devices; wherein the harvesting device calculates a number of product packages on the product support;
wherein the apparatus is configured to automatically distinguish between a product which has a first associated package thickness and another product having a second associated package thickness that is different from the first associated package thickness, based on identifying average package thickness data for each product stored in the harvesting device, and is configured to use average package thickness data associated with a particular product to calculate a number of packages of said particular product on a product support.
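
The package-counting rule at the end of the claim reduces to a one-line calculation: divide the laterally occupied depth reported by the shelf detector by the product's average package thickness. The thickness values are invented for the example.

    AVERAGE_THICKNESS_CM = {"chips_large": 8.0, "chips_snack": 4.5}   # assumed per-product averages

    def count_packages(product, occupied_depth_cm):
        return round(occupied_depth_cm / AVERAGE_THICKNESS_CM[product])

    print(count_packages("chips_large", 63.5))   # about 8 packages on the product support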

US Pat. No. 10,395,210

SYSTEM AND METHOD FOR USING STORES AS RECEIVING POINTS FOR THIRD PARTY, E-COMMERCE SUPPLIERS

WALMART APOLLO, LLC, Ben...

1. A method for providing third party suppliers multiple price costs for distributing a product from distinct points of distribution, the method comprising:
receiving, at a server, historical sales data associated with a third party e-commerce product;
applying a machine learning algorithm to the historical sales data, to yield a predicted demand quantity for the third party e-commerce product at a plurality of retail locations, wherein the machine learning algorithm is updated on a periodic basis;
calculating, using a processor of the server, a first shipping cost for:
(1) receiving the predicted demand quantity from the third party supplier at a single retail location in the plurality of retail locations; and
(2) subsequently redistributing the predicted demand quantity from the single retail location to remaining retail locations in the plurality of retail locations;
calculating, using the processor, a second shipping cost for:
(1) receiving the predicted demand quantity from the third party supplier at a distribution center; and
(2) redistributing the predicted demand quantity to remaining retail locations in the plurality of retail locations;
determining, via the processor and based on the first shipping cost and the second shipping cost, that distribution from the single retail location results in cost savings, resulting in a determination; and
based on the determination:
receiving the third party e-commerce product from the third party supplier at the single retail location; and
redistributing, using the processor and based on the cost savings, the third party e-commerce product from the single retail location to the plurality of retail locations according to the predicted demand quantity for each respective retail location.
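
The decision step compares two fully loaded shipping costs, so a sketch needs little more than a comparison; the figures below are hypothetical.

    def pick_receiving_point(cost_via_store, cost_via_dc):
        use_store = cost_via_store < cost_via_dc
        return {"receive_at": "single_retail_location" if use_store else "distribution_center",
                "savings": round(abs(cost_via_dc - cost_via_store), 2)}

    # Receiving at one store and redistributing vs. routing through a distribution center.
    print(pick_receiving_point(cost_via_store=1180.00, cost_via_dc=1350.00))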

US Pat. No. 10,395,208

BEACON TRACKING

CFPH, LLC, New York, NY ...

1. A method comprising:
receiving, by at least one processor, an order for at least one of goods or services from a customer device;
transmitting, by the at least one processor, the order to a merchant device;
receiving, by the at least one processor, a first indication from the merchant device that a first signal from a wireless beacon of a delivery agent has been detected by the merchant device;
in response to receiving the first indication,
(i) associating, by the at least one processor, the wireless beacon of the delivery agent with the received order, and
(ii) transmitting, by the at least one processor to the customer device, a confirmation that the order was retrieved, wherein the confirmation controls activating of a wireless receiver of the customer device to detect a given signal from the wireless beacon of the delivery agent;
after associating the wireless beacon of the delivery agent with the received order, receiving, by the at least one processor, a second indication from the customer device that a second signal from the wireless beacon of the delivery agent has been detected by the wireless receiver of the customer device; and
in response to receiving the second indication from the customer device, determining, by the at least one processor, that the order has been delivered.
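
A minimal sketch of the delivery inference: the merchant device reporting the courier's beacon associates that beacon with the order and cues the customer device to listen, and the customer device later reporting the same beacon marks the order delivered. Class and status names are illustrative.

    class DeliveryTracker:
        def __init__(self, order_id):
            self.order_id = order_id
            self.status = "placed"
            self.beacon = None

        def merchant_detected_beacon(self, beacon_id):
            self.beacon = beacon_id        # courier's beacon seen at the merchant: order retrieved
            self.status = "retrieved"      # also the cue for the customer device to start listening

        def customer_detected_beacon(self, beacon_id):
            if self.status == "retrieved" and beacon_id == self.beacon:
                self.status = "delivered"  # same beacon seen at the customer: delivery inferred

    tracker = DeliveryTracker("order-17")
    tracker.merchant_detected_beacon("beacon-9")
    tracker.customer_detected_beacon("beacon-9")
    print(tracker.status)   # delivered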

US Pat. No. 10,395,207

FOOD SUPPLY CHAIN AUTOMATION GROCERY INFORMATION SYSTEM AND METHOD

Elwha LLC, Bellevue, WA ...

1. A system for prevention of unsafe foods from advancing through a supply chain, comprising:
circuitry configured for receiving one or more indications of one or more remote sensor measurements corresponding to one or more shipments of one or more foods to one or more destinations;
circuitry configured for maintaining a food safety database including at least (a) one or more food safety criteria relating to one or more foods, (b) one or more tracers corresponding to the one or more shipments of one or more foods to one or more destinations, and (c) at least one received indication of the one or more remote sensor measurements in association with at least one of the one or more shipments of one or more foods to one or more destinations;
circuitry configured for comparing at least one food safety criteria associated with at least one food and at least one remote sensor measurement corresponding to at least one shipment including the at least one food;
circuitry configured for generating at least one alert responsive to at least one indication of at least one unsafe food shipment at least partially based on comparing the at least one food safety criteria associated with the at least one food and the at least one remote sensor measurement corresponding to the at least one shipment including the at least one food, the at least one alert including at least one tracer of the one or more tracers that corresponds to the at least one unsafe food shipment; and
circuitry configured for controlling at least one remote emitter to mark at least one container of the at least one food with at least one indication that the at least one container of the at least one food is not in compliance with the at least one food safety criteria associated with the at least one food.
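
The comparison-and-alert circuitry can be sketched as a simple check of logged sensor readings against a per-food criterion, with the shipment's tracer attached to any alert. The criterion and data layout are assumptions.

    def check_shipment(shipment, criteria):
        limit = criteria[shipment["food"]]["max_temp_c"]
        worst = max(shipment["temps_c"])
        if worst > limit:
            return {"alert": "unsafe_shipment", "tracer": shipment["tracer"],
                    "reading_c": worst, "limit_c": limit}
        return None

    criteria = {"poultry": {"max_temp_c": 4.0}}
    print(check_shipment({"food": "poultry", "tracer": "LOT-2209", "temps_c": [3.1, 5.6, 3.8]},
                         criteria))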

US Pat. No. 10,395,206

REFRIGERATING HOME DELIVERIES

Walmart Apollo, LLC, Ben...

1. A system for evaluating consumer behavior, the system comprising:
a customer knowledge database storing a customer profile for each customer of a plurality of customers, the customer profile for each customer including a purchase history of items purchased by each customer;
a plurality of electronic crates each comprising a volume configured to store meal ingredients during deliveries and a processor programmed to detect retrieval of deliveries made with the electronic crate;
a server system comprising one or more processors and one or more memory devices operably coupled to the one or more processors, the one or more memory devices storing executable and operational code effective to execute a supply chain engine comprising
a meal plan module effective to generate, for each customer of the plurality of customers, a meal plan including meals including styles of food and ingredients corresponding to the customer profile of each customer;
a monitoring module effective to monitor times of retrieval of a plurality of completed deliveries to each customer of the plurality of customers via the plurality of electronic crates, each completed delivery including ingredients for a meal of the meal plan;
a characterization module effective to generate a retrieval model for each customer of the plurality of customers according to the times of retrieval for the plurality of completed deliveries for each customer based on at least one retrieval time of a completed delivery as detected by one or more of the plurality of electronic crates; and
a fulfillment module effective to, for a current delivery:
determine an expected delivery time for the current delivery corresponding to a time that one of the electronic crates is expected to leave a delivery vehicle;
determine an expected retrieval time for the current delivery according to the retrieval model of each customer;
determine an expected ambient temperature between the expected delivery time and the expected retrieval time;
calculate an amount of refrigerating material required to maintain the current delivery at an appropriate temperature between the expected delivery time and the expected retrieval time according to the expected ambient temperature;
generate a pick list including the ingredients for a meal included in the current delivery and the amount of refrigerating material; and
output the pick list to a representative for retrieval.
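
The refrigerating-material calculation lends itself to a short worked example: the amount scales with how long the crate is expected to sit between delivery and retrieval and how far the expected ambient temperature exceeds a safe storage temperature. The coefficient is a made-up placeholder, not a real thermal model.

    from datetime import datetime

    def refrigerant_needed(expected_delivery, expected_retrieval, ambient_c,
                           kg_per_degree_hour=0.02, safe_c=4.0):
        hours = max((expected_retrieval - expected_delivery).total_seconds() / 3600, 0)
        excess = max(ambient_c - safe_c, 0)
        return round(hours * excess * kg_per_degree_hour, 2)

    # Four hours outside at 29 C against a 4 C target -> 2.0 kg of refrigerating material.
    print(refrigerant_needed(datetime(2019, 7, 1, 14), datetime(2019, 7, 1, 18), ambient_c=29.0))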

US Pat. No. 10,395,205

COST OF CHANGE FOR ADJUSTING LONG RUNNING ORDER MANAGEMENT FULFILLMENT PROCESSES FOR A DISTRIBUTED ORDER ORCHESTRATION SYSTEM

ORACLE INTERNATIONAL CORP...

1. A non-transitory computer-readable medium having instructions stored thereon, when executed by a processor, cause the processor to provide a distributed order orchestration system, the providing comprising:
creating a business rule that controls an operation of an executable orchestration process based on runtime data, the executable orchestration process comprising steps that orchestrate an order;
when a rule set does not already exist, creating a rule set that includes one or more business rules;
adding the business rule to the rule set;
adding the rule set to a rule dictionary associated with the executable orchestration process, the rule dictionary comprising a library of one or more rule sets;
storing the rule dictionary in a process definition table of a database;
receiving an order;
decomposing the order into a plurality of services for fulfilling the order;
receiving, at an orchestration system, metadata encapsulating one or more instructions for creating a business process, the business process comprising a plurality of steps, and each step is associated with one of the services;
defining a cost of change value for each of the steps of the business process, wherein the cost of change value represents a cost required to adjust the associated step of the business process;
executing an executable orchestration process that is generated from the business process, wherein the executable orchestration process orchestrates the order by dynamically invoking one or more services stored within a service library configured to control task execution of an external fulfillment system, wherein each of the steps is associated with at least one of the services;
receiving, at the orchestration system, a change request from a client device, wherein the change request comprises an adjustment of at least one step of the business process;
applying a rule set of the rule dictionary to the change request of the executable orchestration process by invoking one or more business rules in the rule set to determine whether the cost of change value is greater than an upper threshold value;
when the cost of change value is not greater than the upper threshold value, initiating the change request and automatically adjusting the steps of the executable orchestration process that have already been executed; and
when the cost of change value is greater than the upper threshold value, not initiating the change request.

US Pat. No. 10,395,203

SYSTEM AND METHOD TO SIMULATE THE IMPACT OF LEADERSHIP ACTIVITY

1. A system, comprising:
a memory that stores instructions; and
a processor that executes the instructions to perform operations, the operations comprising:
extracting, from computer or network usage data obtained by utilizing an electronic surveillance technique or from data obtained from an electronic survey instrument, data on a leadership activities variable so as to establish an initial value of the leadership activities variable, wherein the computer or network usage data is obtained utilizing the electronic surveillance technique by utilizing electronic surveillance equipment including video equipment, wherein the leadership activities variable is simulated based on a network structure associated with an organization, wherein the leadership activities variable is a multi-dimensional leadership activities variable;
determining, after the extracting, representations of levels of different types of leadership activities within the organization for the leadership activities variable based on aggregating the computer or network usage data and additional data obtained on the leadership activities variable;
calculating, by utilizing a computer simulation program of the system that executes within a hardware-based simulation module component, a predicted performance of the organization based on an organization state variable, the leadership activities variable, and a changing level of leadership activity of the organization, wherein the organization state variable is a multi-dimensional organization state variable;
determining, by utilizing the computer simulation program of the system and based on the calculated predicted performance, an action that is predicted to change the leadership activities variable if it is executed by the processor and thus would also be expected to adjust the calculated predicted performance;
providing, to a browser program of a computer communicatively linked to the system, an output report and a recommendation indicating specific leadership activities and protocols to be increased or decreased for the organization and a forecasted outcome expected from performing the action based on the recommendation;
adjusting, by utilizing the computer simulation program and by utilizing the output report and the recommendation, the action to be executed to adjust the performance of the organization as the computer or network usage data and additional data on the leadership activities variable and data on the organization state variable change over time; and
simulating, in the computer simulation program and based on an input received from the computer, the action to be executed to adjust the performance of the organization so as to simulate an impact of the action on the organization, wherein the simulating is performed by utilizing a time series matrix including the multi-dimensional leadership activities variable and the multi-dimensional organization state variable.

US Pat. No. 10,395,202

METHOD AND SYSTEM FOR DETERMINING PATIENT STATUS

Koninklijke Philips N.V.,...

1. A clinical decision support (CDS) system, comprising:a repository including a plurality of core computer-implemented clinical guidelines (CIGs), wherein each core CIG comprises a plurality of device-independent computer-implemented nodes corresponding to steps of a care process predetermined by a clinical guideline (GL);
an engine configured to execute by a processor a selected one of the plurality of core CIGs across a plurality of hardware devices, wherein each hardware device utilizes one or more hardware-specific features corresponding to at least one node, the selected core CIG being mapped to the device; and
a plurality of hardware-specific feature managers which are processor-executable, each feature manager corresponds to one of the plurality of hardware devices and is configured to: receive an indication of a current state of execution of the selected core CIG, retrieve localization data specific to the corresponding hardware device, wherein the localization data includes capabilities of the corresponding hardware device, and instantiate a hardware-specific feature configured to map at least one node of the selected core CIG to the corresponding hardware device based on the current state and the localization data,
wherein the plurality of hardware-specific feature managers comprises a first feature manager configured to retrieve first localization data of a first one of the plurality of hardware devices, and a second feature manager configured to retrieve second localization data of a second one of the plurality of hardware devices, the first localization data being different from the second localization data, and
wherein a first feature instantiated by the first feature manager is different from a second feature instantiated by the second feature manager.

US Pat. No. 10,395,200

METHOD AND APPARATUS FOR REPAIRING POLICIES

CA, Inc., New York, NY (...

1. A computer-implemented method comprising:validating, by a control application executing on a computer system, a plurality of stored policies for a computer network, each policy including information associated with operating one or more computing devices within the computer network; and
for each policy that fails validation:
generating, by the control application, a list of one or more errors that caused the policy to fail validation;
sending, by the control application, to a pool of repair modules, the list of one or more errors, wherein each repair module of the pool is executable by the computer system to:
identify a respective error that the repair module is preconfigured to correct; and provide information for correcting the respective error;
receiving, by the control application from one or more repair modules of the pool:
an indication that the one or more repair modules are preconfigured to correct the one or more errors on the list; and
information for correcting the one or more errors on the list;
generating, by the control application, a set of commands for correcting the one or more errors on the list based on the information received from the one or more repair modules; and
initiating, by the control application, repairs to the policy, wherein repairing the policy includes executing the set of commands to modify the information in the policy.
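
A minimal Python sketch, not CA's code, of the repair-module pool described in the claim above: each repair module advertises the error it is preconfigured to correct and returns correction information, and the control application turns that information into commands and applies them to the failing policy. Error names, module contents, and the policy structure are invented for illustration.

class RepairModule:
    def __init__(self, handles, fix):
        self.handles = handles  # the error this module is preconfigured to correct
        self.fix = fix          # information for correcting that error

def repair_policy(policy, errors, pool):
    # collect correction info from modules that recognize an error on the list
    commands = [(error, module.fix)
                for error in errors
                for module in pool
                if module.handles == error]
    for error, fix in commands:  # execute the generated commands
        policy[error] = fix
    return policy

pool = [RepairModule('missing_port', 443), RepairModule('bad_cidr', '10.0.0.0/8')]
policy = {'missing_port': None, 'bad_cidr': '999.0.0.0/8'}
print(repair_policy(policy, ['missing_port', 'bad_cidr'], pool))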

US Pat. No. 10,395,199

METHOD AND SYSTEM FOR ATM CASH SERVICING AND OPTIMIZATION

JPMorgan Chase Bank, N.A....

1. An automated computer implemented method for determining and implementing an optimized schedule for deposit pickup, cash replenishment, and service timing for one or more ATM devices, wherein the method is executed by a programmed computer processor which communicates with a user via a network, the method comprising the steps of:executing, via the computer processor, a volume forecast determination for at least one ATM device to generate forecast data, wherein the volume forecast comprises a withdrawal forecast and a deposit forecast, and where the withdrawal forecast and deposit forecast utilize distinct methodologies;
executing, via the computer processor, a simulation based on the forecast data to develop a plurality of possible ATM schedules for the at least one ATM, each of the plurality of possible ATM schedules comprises a replenishment schedule, a deposit schedule and a total cost associated with servicing each of the plurality of possible ATM schedules, wherein the simulation considers one or more identified uncertainties and wherein the simulation comprises a withdrawal simulation and a deposit simulation, the withdrawal simulation is based on forecast uncertainty and vendor arrival time uncertainty and the deposit simulation is based on deposit bin capacity uncertainty, forecast uncertainty and vendor arrival time uncertainty;
automatically, via the computer processor, generating one or more fault risks for each of the plurality of possible ATM schedules based at least in part on the one or more identified uncertainties, the one or more fault risks comprises a cumulative fault risk and an incremental fault risk;
automatically, via the computer processor, determining an optimal schedule for the at least one ATM device based on the one or more fault risks; and
initiating, via the computer processor, the optimal schedule for the at least one ATM based on the one or more fault risks.
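
A minimal Python sketch, with invented figures rather than JPMorgan's methodology, of the schedule selection described in the claim above: each candidate ATM schedule carries a servicing cost and a simulated cumulative fault risk, and the optimal schedule here is taken as the cheapest one whose risk stays within an assumed tolerance.

RISK_TOLERANCE = 0.05  # assumed acceptable cumulative fault risk

candidate_schedules = [
    {'name': 'daily', 'cost': 900, 'cumulative_fault_risk': 0.01},
    {'name': 'every_third_day', 'cost': 400, 'cumulative_fault_risk': 0.04},
    {'name': 'weekly', 'cost': 200, 'cumulative_fault_risk': 0.12},
]

feasible = [s for s in candidate_schedules
            if s['cumulative_fault_risk'] <= RISK_TOLERANCE]
optimal = min(feasible, key=lambda s: s['cost'])
print(optimal['name'])  # every_third_day: cheapest schedule within the risk tolerance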

US Pat. No. 10,395,197

TRANSPORTATION SYSTEM DISRUPTION MANAGEMENT APPARATUS AND METHODS

AMERICAN AIRLINES, INC., ...

1. A method for proposing an intentional delay for at least one travel leg from a plurality of travel legs of a transportation system, the method comprising:receiving, using a computer, transportation-related data associated with the plurality of travel legs from at least one of:
a dispatch environmental control computer system;
an enhanced reservation computer system;
an off-schedule operations computer system;
a flight operating computer system; and
an aircraft communication addressing and reporting computer system;
analyzing, using the computer, the transportation-related data to generate a projected departure delay and a projected arrival delay for each travel leg from the plurality of travel legs,
wherein the projected departure delay is the difference between a projected departure time and a scheduled departure time of the travel leg,
wherein the projected arrival delay is the difference between a projected arrival time and a scheduled arrival time of the travel leg,
wherein each of the projected departure delay and the projected arrival delay is not more than the greater of:
a resources delay relating to a delay necessary to provide the travel leg with resources required for the departure of the travel leg, and
an existing delay associated with the travel leg; and
wherein determining the projected departure delay and the projected arrival delay for each travel leg from the plurality of travel legs comprises minimizing the sum of the projected departure delays and the projected arrival delays while:
ensuring that each travel leg departs a departure location with the resources required for the departure of the travel leg; and
preserving an arrival order of two or more of the travel legs at an arrival location;
determining, using the computer, a projected excess gate demand for a plurality of gates within the transportation system and a projected number of passenger misconnects based on the projected departure delays and the projected arrival delays;
outputting on a graphical user interface of the computer a first interface displaying the projected excess gate demand for the plurality of gates at a first location within the transportation system and the projected number of passenger misconnects, comprising:
displaying, in a gate demand display region of the first interface, a plurality of bars representing projected demand for the plurality of gates at the first location over a period of time, wherein a width of each bar—along a time axis—represents a time period within the period of time, and a height of each bar—along a demand axis that is perpendicular to the time axis—represents the total projected demand for gates in that time period;
displaying, in the gate demand display region of the first interface, a first line imposed over the plurality of bars, wherein the first line represents a scheduled demand for the plurality of gates at the first location for each time period within the period of time;
displaying, in the gate demand display region of the first interface, a second line—extending parallel to the time axis—positioned perpendicular to the demand axis at a position representing a physical number of gates that are available at the first location; and
displaying, in the gate demand display region of the graphical user interface, a third line—extending parallel to the demand axis—positioned perpendicular to the time axis at a position representing the current time;
wherein a projected excess gate demand is depicted when a height of any bar extends over the second line;
generating, in response to the projected excess gate demand and the projected number of passenger misconnects illustrated on the first interface, either: a first recommended plan having a first recommended projected departure delay and a first recommended projected arrival delay for each travel leg from the plurality of travel legs; or a second recommended proposed plan having a second recommended projected departure delay and a second recommended projected arrival delay for each travel leg from the plurality of travel legs;
wherein generating the first recommended plan having the first recommended projected departure delay and the first recommended projected arrival delay for each travel leg from the plurality of travel legs comprises:
displaying a second interface on the graphical user interface, wherein the second interface comprises:
a first input field configured to receive, for each time period within the period of time, a user-specified delay on a travel leg from the plurality of travel legs; and
a second input field configured to receive an airport closure time;
receiving first operation parameters from a user via the second interface, the first operation parameters including:
a user-specified delay on a travel leg from the plurality of travel legs for a time period; and
the airport closure time;
wherein the first recommended projected departure delay is the difference between a first recommended projected departure time and the scheduled departure time of the travel leg,
wherein the first recommended projected arrival delay is the difference between a first recommended projected arrival time and the scheduled arrival time of the travel leg, and
wherein each of the first recommended projected departure delay and the first recommended projected arrival delay is not more than the greater of:
 the resources delay,
 the existing delay associated with the travel leg, and
 the user-specified delay on the travel leg; and
minimizing the sum of the first recommended projected departure delays and the first recommended projected arrival delays while:
ensuring that each travel leg departs the departure location with the resources required for the departure of the travel leg; and
preserving the arrival order of two or more of the travel legs at the arrival location; and
wherein generating the second recommended proposed plan having the second recommended projected departure delay and the second recommended projected arrival delay for each travel leg from the plurality of travel legs comprises:
receiving, using the computer, second operation parameters from the user, the second operation parameters including the airport closure time;
wherein the second recommended projected departure delay is the difference between a second recommended projected departure time and the scheduled departure time of the travel leg, and
wherein the second recommended projected arrival delay is the difference between a second recommended projected arrival time and the scheduled arrival time of the travel leg; and
minimizing the sum of the second recommended projected departure delays, the second recommended projected arrival delays, the projected number of passenger misconnects, and the projected excess gate demand, while:
ensuring that each travel leg departs the departure location with the resources required for the departure of the travel leg; and
preserving the arrival order of two or more of the travel legs at the arrival location;
outputting on a third interface on the graphical user interface at least one of the first recommended projected departure delay, the first recommended projected arrival delay, the second recommended projected departure delay, and the second recommended projected arrival delay as the proposed intentional delay that reduces at least one of the projected excess gate demand and the projected number of passenger misconnects; and that minimizes operations beyond the airport closure time;
and
implementing the proposed intentional delay to transform a state of an aircraft associated with one of the plurality of travel legs to a delayed state.
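
A minimal Python sketch of the per-leg delay rule recited above: each leg's projected delay is capped at the greater of its resources delay and its existing delay, the smallest delay that still covers the resources requirement is chosen so the summed delay stays low, and a post-check confirms the arrival order is preserved. The leg data and times are illustrative assumptions, not American Airlines' optimizer.

legs = [
    {'id': 'AA101', 'resources_delay': 20, 'existing_delay': 5, 'scheduled_arrival': 600},
    {'id': 'AA202', 'resources_delay': 0, 'existing_delay': 15, 'scheduled_arrival': 630},
]

for leg in legs:
    cap = max(leg['resources_delay'], leg['existing_delay'])
    # smallest delay that still provides the required resources, never above the cap
    leg['projected_delay'] = min(cap, leg['resources_delay'])
    leg['projected_arrival'] = leg['scheduled_arrival'] + leg['projected_delay']

# preserve arrival order: a leg scheduled to arrive earlier must still arrive no later
ordered = sorted(legs, key=lambda l: l['scheduled_arrival'])
assert all(a['projected_arrival'] <= b['projected_arrival']
           for a, b in zip(ordered, ordered[1:]))
print(sum(l['projected_delay'] for l in legs))  # total projected delay: 20 minutes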

US Pat. No. 10,395,196

TWO-STAGE CONTROL SYSTEMS AND METHODS FOR ECONOMICAL OPTIMIZATION OF AN ELECTRICAL SYSTEM

Enel X North America, Inc...

1. An electrical system controller to optimize overall economics of operation of an electrical system, the controller comprising:a first computing device to determine a control plan for managing control of the electrical system during an upcoming time domain and provide the control plan as output, the control plan including a plurality of sets of parameters each to be applied for a different time segment within the upcoming time domain; and
a second computing device to determine a set of control values for a set of control variables for a given time segment of the upcoming time domain and provide the set of control values to the electrical system, the second computing device separate from the first computing device, wherein the second computing device determines the set of control values based on a set of values for a given set of parameters of the plurality of sets of parameters of the control plan, wherein the given set of parameters corresponds to an upcoming time segment;
wherein the second computing device is configured to modify operation of one or more electrical components of the electrical power system based on the set of control values, the one or more electrical components including at least one of one or more loads, one or more electrical power generators, or one or more energy storage systems.
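
A minimal Python sketch of the two-stage split described in the claim above: a planning stage produces one parameter set per upcoming time segment, and a separate real-time stage turns the parameter set for the current segment plus live measurements into control values for the electrical system. The parameter names and the control rule are invented assumptions.

def build_control_plan(segments):
    # first computing device: one parameter set per time segment of the upcoming domain
    return {t: {'battery_setpoint_kw': 50 if t < 12 else -30,  # charge early, discharge later
                'demand_limit_kw': 400}
            for t in segments}

def control_values_for_segment(plan, segment, measured_load_kw):
    # second computing device: combine the planned parameters with live data
    params = plan[segment]
    curtail_kw = max(0, measured_load_kw - params['demand_limit_kw'])
    return {'battery_kw': params['battery_setpoint_kw'], 'curtail_kw': curtail_kw}

plan = build_control_plan(range(24))              # 24 one-hour segments
print(control_values_for_segment(plan, 13, 430))  # {'battery_kw': -30, 'curtail_kw': 30}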

US Pat. No. 10,395,195

PROVISIONING VIRTUAL MACHINES TO OPTIMIZE APPLICATION LICENSING COSTS

INTERNATIONAL BUSINESS MA...

1. A computer-implemented method comprising:establishing, by a provisioning engine executing on at least one processor, one or more shared processor pools of physical processing units on one or more servers of a cluster of servers;
provisioning, by the provisioning engine, virtual machines into the one or more shared processor pools and assigning the physical processing units to the one or more shared processor pools, the provisioning and assigning comprising:
provisioning at least two virtual machines for different tenants into a common shared processor pool of one or more shared processor pools;
receiving a tenant request by a tenant of the different tenants to provision a virtual machine of the virtual machines to execute an application;
determining based on the received request that at least one shared processor pool for the application does not yet exist;
identifying based on determining that at least one shared processor pool for the application does not yet exist, a server of the one or more servers of the cluster of servers having greatest unallocated capacity;
establishing a target shared processor pool on the identified server;
provisioning the virtual machine into the established target shared processor pool on the identified server;
obtaining utilization data by continuously monitoring the one or more shared processor pools of physical processing units on one or more servers of a cluster of servers;
determining based on the obtained utilization data that at least one physical processing unit of at least one shared processor pool of the one or more shared processor pools provides excess capacity; and
resizing the at least one shared processor pool of the one or more shared processor pools by removing the at least one physical processing unit from the at least one shared processor pool of the one or more shared processor pools; and
executing the virtual machines using the one or more shared processor pools, wherein the executing executes at least one virtual machine of the virtual machines using the resized at least one shared processor pool.
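
A minimal Python sketch, not IBM's provisioning engine, of two steps named in the claim above: placing a new shared processor pool on the server with the greatest unallocated capacity, and shrinking a pool when monitored utilization shows an unused physical processing unit. Server names, capacities, and the utilization threshold are illustrative assumptions.

servers = [
    {'name': 'host-a', 'capacity': 32, 'allocated': 28},
    {'name': 'host-b', 'capacity': 32, 'allocated': 12},
]

def pick_server_for_new_pool(servers):
    # identify the server with the greatest unallocated capacity
    return max(servers, key=lambda s: s['capacity'] - s['allocated'])

def resize_pool(pool, avg_utilization, threshold=0.5):
    # remove one physical processing unit when utilization data shows excess capacity
    if avg_utilization < threshold and pool['units'] > 1:
        pool['units'] -= 1
    return pool

print(pick_server_for_new_pool(servers)['name'])                    # host-b: 20 free units vs 4
print(resize_pool({'app': 'db', 'units': 4}, avg_utilization=0.3))  # pool resized to 3 units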

US Pat. No. 10,395,194

RESOURCE ALLOCATION FOR INFRASTRUCTURE ENGINEERING

WALMART APOLLO, LLC, Ben...

1. A method comprising:identifying, by a computer system using one or more processors, a plurality of resources for agile infrastructure engineering with respect to an organization, wherein the plurality of resources comprise human resources and physical resources, wherein the physical resources comprise equipment infrastructure, and wherein the agile infrastructure engineering comprises a collaboration-based methodology associated with one or more projects of an e-commerce work item or a market;
sorting, by a resource system, the plurality of resources based at least in part on a plurality of skills, a plurality of attribute information, and a plurality of roles associated with the plurality of resources, wherein first attribute information of the plurality of attribute information is associated with the human resources and comprises education attributes, changeability attributes, and human fragmentation attributes, and wherein second attribute information of the plurality of attribute information is associated with the physical resources and comprises geographic attributes, cost attributes, and supply attributes;
determining, by an allocation system, multiple teams based at least in part on the plurality of resources, as sorted, at least one first individual team of the multiple teams having one or more first resources of the plurality of resources for one or more first skills of the plurality of skills, the first attribute information, the second attribute information, and the multiple teams sharing one or more roles of the plurality of roles aligned to the one or more projects of the e-commerce work item and including particular role attributes to optimize cross-functional learning among the multiple teams, wherein the market comprises the e-commerce work item, and wherein the multiple teams comprise one or more agile teams;
aligning, by an association system, the multiple teams with the one or more projects of the e-commerce work item, wherein estimates of time required for completion of the one or more projects of the e-commerce work item are tracked by a report generator;
obtaining, by the allocation system, a first set of parameters comprising a technology, a size, a demand, a location, and a business priority to allocate among the multiple teams based on at least the technology associated with the one or more projects of the e-commerce work item, the size of the one or more projects of the e-commerce work item, an amount of the demand associated with the one or more projects of the e-commerce work item, the location associated with the one or more projects of the e-commerce work item, the business priority of the one or more projects of the e-commerce work item, and a skill staffing with a primary backup and a secondary backup to provide cross-sharing of skills of the plurality of skills required by the one or more projects of the e-commerce work item;
dynamically allocating, by the allocation system, the plurality of resources among the multiple teams according to the first set of parameters aligned with the one or more projects of the e-commerce work item;
generating, by the report generator in data communication with the allocation system and the association system, a market workload report associated with the market indicating at least a comparison between a work volume and a monetary allocation of the one or more projects of the e-commerce work item based on the agile infrastructure engineering among the multiple teams, the market workload report comprising audio or video information reporting on at least a status of the dynamically allocating of the plurality of resources aligned with the one or more projects of the e-commerce work item; and
determining, by the allocation system, other multiple teams for one or more remaining projects of the e-commerce work item based at least in part on the market workload report associated with the market and the plurality of resources, as sorted, at least one second individual team of the other multiple teams having one or more second resources of the plurality of resources for one or more second skills of the plurality of skills, one or more attributes of a plurality of attributes, and the other multiple teams sharing the one or more roles of the plurality of roles aligned to the one or more projects of the e-commerce work item.

US Pat. No. 10,395,192

SYSTEM AND METHOD FOR INSTRUCTING PERSONNEL ON WASHROOM MAINTENANCE REQUIREMENTS

Kimberly-Clark Worldwide,...

1. A method for maintenance of a plurality of washroom facilities by maintenance personnel, wherein each of the washroom facilities has one or more consumable product dispensers that require periodic refill, the method comprising:for each of the washroom facilities, configuring the product dispensers with a sensor that detects a product level or amount condition of the product dispenser, the sensors in communication with a monitoring station assigned to the washroom facility;
generating a set of instructions unique to each of the washroom facilities based upon the detected product level or amount conditions of the dispensers in the respective washroom facility, the set of instructions including instructions as to the amount of product refill to be added to the dispensers; and
with an identification (ID) system configured within each washroom facility, identifying a maintenance personnel that enters the washroom facility and providing the unique set of instructions to the maintenance personnel in a message delivered to the maintenance personnel.

US Pat. No. 10,395,190

METHOD AND SYSTEM FOR DETERMINING TOTAL COST OF OWNERSHIP

JPMorgan Chase Bank, N.A....

1. An apparatus comprising:a computer memory storing instructions;
a display having a display screen; and
at least one computer processor configured to access the computer memory, control the display screen of the display, and execute the stored instructions to control the display screen to simultaneously
a) display a name of an asset included in an asset hierarchy of assets having different levels of assets received from an external server of a configuration management system,
b) display a charge incurred by the asset received from an external server of a financial system,
c) display the names of applications using data about the asset received from the external server of the configuration management system,
d) display a portion of the charge incurred by the asset and allocated to each of the applications using the asset, based on the actual usage of the asset by each of the applications, the allocated portion of the charge being displayed closer to the name of its associated application than the asset name and the charge incurred by the asset, and
e) display a weight factor for each application that is applied to the charge incurred by the asset to determine the displayed portions of the charge allocated to each of the plurality of applications, the weight factor being displayed closer to the name of its associated application than the asset name and the charge incurred by the asset,
thereby i) simultaneously displaying the asset charge, the portions of the charge allocated to each of the applications, and the manner in which the allocation was arrived at, and ii) visually associating the charge with the asset and visually associating the portions of the charge and its weight factor with each application,
in response to
(1) receiving a charge request via a network;
(2) importing a charge information data file from the external server of the financial system over the network that includes the charge;
(3) importing an asset information data file from the external server of the configuration management system over the network, the asset information data file including data identifying the applications and the asset hierarchy of assets, which is listed in a hierarchical matching criteria list in order from the least desirable asset to the ideal charging asset;
(4) traversing the asset hierarchy to match the charge in the imported charge information data file imported from the external server of the financial system with the asset whose identifying data is imported from the configuration management system by
traversing the hierarchical matching criteria list imported from the external server of the configuration management system in order from the least desirable asset to the ideal charging asset or from the ideal charging asset to the least desirable asset to match the charge in the imported charge information data file imported from the external server of the financial system with the asset whose identifying data is imported from the configuration management system;
(5) determining whether the asset is an information technology (IT) asset, based on the asset level, and whether the asset is associated with the applications in imported data received from the external server of the configuration management system;
(6) determining actual usage of the asset by each of the applications;
(7) determining whether weight factors reflecting the actual usage of the asset by each of the applications are listed in an inventory for the applications, when the IT determination indicates that the asset is an IT asset associated with the applications; and
(8) allocating portions of the charge in the imported charge information data file from the external server of the financial system to each of the applications whose identifying data is imported from the configuration management system, based on at least one weight factor that reflects the actual usage of the asset by each of the applications when the weight factor determination determines that the weight factors are listed in the inventory for the applications.
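
A minimal Python sketch, with invented numbers, of the allocation step in item (8) above: a charge incurred by an asset is split across the applications that use it in proportion to per-application weight factors reflecting actual usage.

def allocate_charge(charge, weights):
    total = sum(weights.values())
    return {app: round(charge * w / total, 2) for app, w in weights.items()}

weights = {'payments-api': 3, 'reporting': 1}  # actual-usage weight factors
print(allocate_charge(1000.0, weights))        # {'payments-api': 750.0, 'reporting': 250.0}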

US Pat. No. 10,395,189

OPTIMIZING A BUSINESS MODEL OF AN ENTERPRISE

International Business Ma...

1. A method for operating an enterprise in accordance with an optimized enterprise-level business model including optimizing a computer resource's capacity to reduce data throughput delay and increase throughput of bottleneck operations, said method comprising:receiving, by a processor of a computer system, a first set of data representing a business strategy, a business goal and a constraint;
receiving, by the processor, a second set of data representing relationships between the input business strategy, business goal and constraint;
receiving, by the processor, a third set of data to define an enterprise-level business model, wherein the enterprise-level business model comprises an enterprise component, a customer component and a partner component and provides a structure of services within the enterprise defining relationships with customers, partners and vendors, the enterprise component comprising one or more business components that provide business services and that are associated with business processes and service performance indicators (SPIs), the enterprise-level business model being defined based on interrelated business strategy, business goal and business constraint data, wherein the business strategy comprises one or more strategic intents that provide one or more strategic goals to be achieved by the enterprise;
monitoring, by the processor in real time, metrics of the enterprise at a service level to dynamically determine, in real time, a real-time actual performance value of business service;
dynamically displaying, on a computer display device of the computer system in real time, the real-time actual performance value;
determining an initial benchmark value for a resource of the enterprise;
processing, by the processor, a model optimization engine resident in the computer system based on the defined enterprise-level business model, the input business strategy, business goal and constraint to iteratively generate an output benchmark value, to update the initial benchmark value based on the output benchmark value, and to update the defined enterprise-level business model;
iteratively processing, by the processor, the model optimization engine based on the updated benchmark value and model, until updating the benchmark value involves changing the benchmark value by less than a predetermined benchmark value error threshold to generate the optimized enterprise-level business model;
operating the enterprise in accordance with the optimized enterprise-level business model, said operating the enterprise in accordance with the optimized enterprise-level business model including:
generating, in real time by the processor, performance measures of usage of a computer resource used by the computer system executing a business process of the enterprise;
dynamically displaying, in real time on the computer display device, a dashboard of the performance measures of the computer resource's usage during said executing the business process;
determining, by the processor from the performance measures displayed on the dashboard, that the computer resource is a current bottleneck or is likely to become a bottleneck in the near future; and
optimizing the computer resource's usage, by the processor using the performance measures displayed on the dashboard, to reduce data throughput delay and increase throughput of bottleneck operations during said executing the business process, wherein said optimizing the computer resource's usage comprises modifying the computer system to make the computer system work more efficiently, use fewer resources, or both work more efficiently and use fewer resources.
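
A minimal Python sketch of the convergence loop described in the claim above: the model optimization engine is re-run with the updated benchmark value until the change in the benchmark falls below a predetermined error threshold. The engine itself is stubbed with an invented update rule purely for illustration.

def run_optimization_engine(benchmark):
    # stand-in for the model optimization engine: nudges the benchmark toward 100
    return benchmark + 0.5 * (100.0 - benchmark)

def optimize(initial_benchmark, error_threshold=0.01):
    benchmark = initial_benchmark
    while True:
        updated = run_optimization_engine(benchmark)
        if abs(updated - benchmark) < error_threshold:
            return updated  # benchmark change is below the threshold: model is optimized
        benchmark = updated

print(round(optimize(60.0), 2))  # converges close to 100 under the stubbed engine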

US Pat. No. 10,395,184

SYSTEM AND METHOD FOR MANAGING ROUTING OF CUSTOMER CALLS TO AGENTS

Massachusetts Mutual Life...

1. A processor-based method, comprising:receiving a customer call from an identified customer at an inbound call receiving device;
in response to receiving the customer call:
retrieving, by a processor, customer demographic data for the identified customer;
executing, by the processor, a predictive machine-learning model configured to determine, for each lead profile of a plurality of lead records, a value prediction signal by inputting the customer demographic data for the identified customer, payment data, marketing costs data, and lapse data into a logistic regression model operating in conjunction with a tree based model, the predictive machine-learning model outputting a first subset of the plurality of lead records into a first value group and a second subset of the plurality of lead records into a second value group,
wherein the value prediction signal comprises one or more of a first signal representative of a likelihood that the identified customer will accept an offer to purchase a product, a second signal representative of a likelihood that the identified customer will lapse in payments for a purchased product, and a third signal representative of a likelihood that the identified customer will accept an offer to purchase the product and will not lapse in payments for the purchased product, and
wherein the predictive machine-learning model is continually trained using updated customer demographic data, updated payment data, updated marketing costs data, and updated lapse data;
classifying, by the processor, the identified customer into one of the first value group and the second value group; and
directing, by the processor, the inbound call receiving device,
to route the identified customer to a first call queue for connection to one of a first pool of call center agents in the event the processor classifies the identified customer into the first value group; and
to route the identified customer to a second call queue for connection to one of a second pool of call center agents in the event the processor classifies the identified customer into the second value group.
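
A minimal Python sketch, not MassMutual's model, of the routing decision described in the claim above: a value-prediction signal (here a hand-rolled logistic score blended with a stubbed tree-based score) classifies the caller into a value group, and the group selects the call queue. All features, coefficients, and the blending rule are invented assumptions.

import math

def logistic_score(features, weights, bias=0.0):
    z = bias + sum(weights[k] * features[k] for k in weights)
    return 1.0 / (1.0 + math.exp(-z))

def tree_score(features):
    # stand-in for a tree-based model: a couple of hand-written splits
    if features['age'] > 50:
        return 0.7 if features['prior_purchases'] > 0 else 0.4
    return 0.3

def route_call(features, weights, cutoff=0.5):
    value_signal = 0.5 * logistic_score(features, weights) + 0.5 * tree_score(features)
    group = 'first' if value_signal >= cutoff else 'second'
    return 'queue_1' if group == 'first' else 'queue_2'

weights = {'age': 0.02, 'prior_purchases': 0.8}
print(route_call({'age': 58, 'prior_purchases': 2}, weights))  # queue_1 (first value group)
print(route_call({'age': 23, 'prior_purchases': 0}, weights))  # queue_2 (second value group)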

US Pat. No. 10,395,183

REAL-TIME FILTERING OF DIGITAL DATA SOURCES FOR TRAFFIC CONTROL CENTERS

NEC CORPORATION, Tokyo (...

1. A system for filtering data for a traffic control center, comprising:a plurality of data sources, comprising a plurality of traffic-related data sources and a weather-related data source;
one or more network computing devices, configured to:
obtain predictions of incidents, wherein each predicted incident indicates a future time of the predicted incident and a location of the predicted incident;
determine predicted causes of each of the predicted incidents according to a machine learning model utilizing historical data from the plurality of data sources;
assign probabilistic incident scores to the locations corresponding to the predicted incidents, wherein the probabilistic incident score for a respective location corresponding to a respective predicted incident is based on the predicted cause of the respective predicted incident;
rank the locations corresponding to the predicted incidents based on the assigned probabilistic incident scores; and
select a subset of data from the plurality of data sources for output to the traffic control center based on the ranking; and
one or more output devices, located at the traffic control center, configured to display the subset of data selected by the one or more network computing devices.
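
A minimal Python sketch, with invented scores and locations, of the filtering pipeline described in the claim above: predicted incidents carry probabilistic scores tied to their predicted causes, the locations are ranked by score, and only the data for the top-ranked locations is forwarded to the traffic control center.

predicted_incidents = [
    {'location': 'I-5 at exit 12', 'cause': 'heavy_rain', 'score': 0.82},
    {'location': 'Hwy 99 bridge', 'cause': 'rush_hour', 'score': 0.55},
    {'location': 'Main St tunnel', 'cause': 'roadwork', 'score': 0.31},
]

data_by_location = {
    'I-5 at exit 12': ['camera_14', 'loop_sensor_7'],
    'Hwy 99 bridge': ['camera_3'],
    'Main St tunnel': ['camera_9'],
}

def select_data_for_center(incidents, data_by_location, top_k=2):
    ranked = sorted(incidents, key=lambda i: i['score'], reverse=True)
    return {i['location']: data_by_location.get(i['location'], []) for i in ranked[:top_k]}

print(select_data_for_center(predicted_incidents, data_by_location))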

US Pat. No. 10,395,182

PRIVACY AND MODELING PRESERVED DATA SHARING

International Business Ma...

1. A method for generating a classification model of original sensitive data that is private to a data owner, the method comprising:accessing, by a processor, one or more records at one or more computing devices, wherein each record includes original sensitive data and unsensitive data;
generating, by the processor, an original data matrix that represents the original sensitive data, wherein the original data matrix includes a set of sensitive features and a feature label set for use in training a first classification model and classifying the original sensitive data, the training of the first classification model further uses the unsensitive data, and the training of the first classification model being performed by a model building tool of the processor;
generating, by the processor, a random feature matrix sharing a same subspace as a column space of the set of sensitive features of the original data matrix, such that the random feature matrix includes entries that lie in the same subspace as the column space of the set of sensitive features;
computing, by the processor, one or more intermediate data structures, wherein each intermediate data structure corresponds to a product of original data matrix of a record and the random feature matrix that shares the same subspace as the column space of the sensitive features of the original matrix;
forming, by the processor, a convex optimization problem having an objective function based on the original data matrix, the corresponding feature label set, and the one or more intermediate data structures;
solving, by the processor, the convex optimization problem to generate one or more masked data sets, wherein each masked data set includes masked data and a masked feature label set for use in classifying the masked data, the masked data is different from the original sensitive data, and the masked feature label set is different from the feature label set;
inputting, by the processor, the masked data and the masked feature label sets into a machine learning program being executed by the model building tool of the processor, wherein the masked data and masked feature label sets provide an amount of datasets, in addition to the unsensitive data of the one or more records, that is used to train a second classification model; and
implementing, by the processor, the model building tool executing the machine learning program to train the second classification model based on the masked data, the masked feature label sets, and the unsensitive data, wherein the second classification model classifies the masked data, and wherein the second classification model is the same as the first classification model trained from the original sensitive data and the unsensitive data, the original sensitive data is hidden from the second classification model, and the original sensitive data and feature label set cannot be recovered even when the masked data, the masked feature label set, and a classification model of the masked data are known.
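
A minimal numpy sketch of one building block named in the claim above: a random feature matrix whose columns lie in the same column space as the sensitive features (formed by multiplying the sensitive-feature matrix by a random coefficient matrix), and the intermediate product that can be shared instead of the raw data. The convex optimization that produces the final masked data is not reproduced here, and all shapes and values are illustrative.

import numpy as np

rng = np.random.default_rng(0)
X_sensitive = rng.normal(size=(6, 3))  # 6 records, 3 sensitive features

# every column of R_feat is a linear combination of X_sensitive's columns,
# so R_feat shares the column space of the sensitive features
coeffs = rng.normal(size=(3, 4))
R_feat = X_sensitive @ coeffs

intermediate = X_sensitive.T @ R_feat  # intermediate data structure to be shared

print(np.linalg.matrix_rank(np.hstack([X_sensitive, R_feat])))  # 3: same subspace
print(intermediate.shape)                                       # (3, 4)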

US Pat. No. 10,395,181

MACHINE LEARNING SYSTEM FLOW PROCESSING

Facebook, Inc., Menlo Pa...

1. A computer-implemented method, comprising:initializing a workflow run in a machine learning system by identifying a text string defining a workflow, the text string including descriptions of a plurality of data processing operator instances, descriptions of an input data source for each of the data processing operating instances, and descriptions of an output data source for each of the data processing operating instances;
traversing syntax of the text string to determine an interdependency graph of the plurality of data processing operator instances of the workflow by generating directed edges between pairs of data processing operator instances in which an output data source of a first data processing operating instance in a given pair of data processing operating instances matches an input data source of a second data processing operating instance;
detecting in the interdependency graph, independent data processing operating instances that have input data sources that are independent of any output data sources of other data processing operating instances, and a dependent data processing operating instance that has an input data source matching an output data source of a connected data processing operating instance;
generating an execution schedule of the workflow run based on the interdependency graph in which the independent data processing operating instances are scheduled to execute in parallel, and in which the dependent processing operating instance is scheduled to execute upon completion of the connected data processing operating instance;
causing execution of the workflow run on one or more computing devices according to the execution schedule; and
indexing an output of a data processing operator instance from among the data processing operator instances in a memoization repository, wherein the output is indexed as a result of processing an identifiable input through a data processing operator type associated with the data processing operator instance.
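
A minimal Python sketch, not Facebook's workflow runner, of the dependency analysis described in the claim above: operator instances declare input and output data sources, directed edges are added where one operator's output matches another's input, operators with no incoming edges are scheduled to run in parallel, and dependent operators run after the operators they depend on. The one-line-per-operator text format is an invented stand-in for the workflow-defining text string.

workflow_text = """
load_users | src:users_db | out:users
load_events | src:events_db | out:events
join | users,events | out:joined
"""

def parse(text):
    ops = {}
    for line in text.strip().splitlines():
        name, inputs, output = [part.strip() for part in line.split('|')]
        ops[name] = {'inputs': inputs.split(','), 'output': output.replace('out:', '')}
    return ops

def schedule(ops):
    produced = {spec['output']: name for name, spec in ops.items()}
    deps = {name: [produced[i] for i in spec['inputs'] if i in produced]
            for name, spec in ops.items()}
    independent = [n for n, d in deps.items() if not d]  # can execute in parallel
    dependent = [n for n, d in deps.items() if d]        # execute after their dependencies
    return independent, dependent

print(schedule(parse(workflow_text)))  # (['load_users', 'load_events'], ['join'])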

US Pat. No. 10,395,180

PRIVACY AND MODELING PRESERVED DATA SHARING

International Business Ma...

1. A system for generating a classification model of original sensitive data that is private to a data owner comprising:a memory storage device;
a first hardware processor configured to be in communication with the memory storage device, the first hardware processor being configured to train a first classification model using original sensitive data and unsensitive data of one or more records;
a second hardware processor in communication with the first hardware processor, the second hardware processor being configured to:
generate an original data matrix that represents the original sensitive data, wherein the original data matrix includes a set of sensitive features and a feature label set for use in training the first classification model and classifying the original sensitive data;
generate a random feature matrix sharing a same subspace as a column space of the set of sensitive features of the original data matrix, such that the random feature matrix includes elements that lie in the same subspace as the column space of the set of sensitive features;
compute one or more intermediate data structures, wherein each intermediate data structure corresponds to a product of original data matrix of a record and the random feature matrix that shares the same subspace as the column space of the sensitive features of the original matrix;
form a convex optimization problem having an objective function based on the original data matrix, the corresponding feature label set, and the one or more intermediate data structures;
solve the convex optimization problem to generate one or more masked data sets, wherein each masked data set includes masked data and a masked feature label set for use in classifying the masked data, the masked data is different from the original sensitive data, and the masked feature label set is different from the feature label set;
send the masked data sets comprising the masked data and the masked feature label set to the first hardware processor;
the first hardware processor being further configured to:
input the masked data and the masked feature label sets into a machine learning program, wherein the masked data and masked feature label sets provide an amount of datasets, in addition to the unsensitive data of the one or more records, that is used to train a second classification model; and
implement the machine learning program to train the second classification model based on the masked data, the masked feature label sets, and the unsensitive data, wherein the second classification model classifies the masked data, and wherein the second classification model is the same as the first classification model trained from the original sensitive data and the unsensitive data, the original sensitive data is hidden from the second classification model, and the original sensitive data and feature label set cannot be recovered even when the masked data, the masked feature label set, and a classification model of the masked data are known.

US Pat. No. 10,395,179

METHODS AND SYSTEMS OF VENUE INFERENCE FOR SOCIAL MESSAGES

FUJI XEROX CO., LTD., To...

1. A method for inferring venues from social messages, comprising:at a computer system with one or more processors and memory storing instructions for execution by the processor, the memory further including a data storage component:
accessing a collection of venues stored in the data storage component;
training a classifier, using a set of training social messages, that predicts whether or not a social message is linked to a venue in the collection of venues;
receiving a new social message that is not geo-tagged and does not include geographical identification metadata;
for each venue in the collection of venues:
identifying, for the new social message, corresponding meta-paths to the particular venue;
encoding the corresponding meta-paths as a feature vector for the trained classifier, wherein each element of the feature vector includes a measure based on a respective type of social message connected to the particular venue;
computing, by the trained classifier, a score for each venue in the collection of venues indicating whether the new social message is linked or not linked to the venue; and
based on the scores, identifying at least one candidate venue as a predicted venue for the new social message; and
associating the predicted venue with the new social message in the data storage component, thereby providing the computer system with geographic context of the new social message to facilitate subsequent query search or information presentation related to the predicted venue.
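
A minimal Python sketch, with invented meta-path features, of the scoring step described in the claim above: for each candidate venue, counts of the different meta-path types connecting the new message to the venue are encoded as a feature vector, a pre-trained (here hand-set) linear classifier scores the vector, and the highest-scoring venue is taken as the predicted venue.

meta_path_features = {  # new message -> venue meta-path counts, per venue
    'Cafe Aroma': {'same_author_checkin': 2, 'shared_hashtag': 5, 'friend_checkin': 1},
    'City Library': {'same_author_checkin': 0, 'shared_hashtag': 1, 'friend_checkin': 0},
}
classifier_weights = {'same_author_checkin': 1.5, 'shared_hashtag': 0.4, 'friend_checkin': 0.8}

def score(features, weights):
    return sum(weights[k] * features.get(k, 0) for k in weights)

scores = {venue: score(f, classifier_weights) for venue, f in meta_path_features.items()}
predicted_venue = max(scores, key=scores.get)
print(predicted_venue, scores[predicted_venue])  # Cafe Aroma 5.8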

US Pat. No. 10,395,178

RISK ASSESSMENT SYSTEM AND DATA PROCESSING METHOD

Wistron Corporation, New...

1. A risk assessment system, comprising:an analysis device, generating at least one decision table according to a plurality of data and context features of the plurality of data, wherein each of the decision tables has a plurality of entries, and each of the entries comprises at least one determining condition and probability information corresponding to a specific result; and
an electronic device, communicating with the analysis device, receiving the at least one decision table, and comparing the at least one determining condition of each of the entries in the at least one decision table with at least one current condition of an assessee, wherein when the at least one current condition is the same as the at least one determining condition of at least one specific entry, the electronic device displays the at least one determining condition and the probability information corresponding to the at least one specific entry, to improve a usage efficiency of the at least one decision table.

US Pat. No. 10,395,177

OPTIMIZED EXECUTION ORDER CORRELATION WITH PRODUCTION LISTING ORDER

Microsoft Technology Lice...

1. An execution reporting process, comprising:obtaining a set of rules listed in a listing order, each rule including at least one partial condition and at least one action;
building an execution structure which imposes an execution order on the partial conditions, the execution order being different from the listing order;
logging in an execution log, during an execution of the rules according to the execution structure, at least the following: inputs matched to partial conditions, results of evaluating partial conditions according to matched inputs, and which rule was executing when partial conditions were evaluated; and
deriving an execution report from the rule set and the execution log, the execution report showing, in the listing order and for each of the rules, whether the rule was executed and also showing for each executed rule the one or more inputs matched to the one or more partial conditions of the executed rule and the results of evaluating the one or more partial conditions according to the one or more matched inputs.
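
A minimal Python sketch of the reporting idea described in the claim above: rules are listed in one order, executed in a different order imposed by the execution structure, the execution is logged, and the report is rebuilt in the original listing order showing which rules ran, with which matched inputs, and with what condition results. The rule contents are invented.

rules = [  # listing order
    {'name': 'R1', 'condition': lambda x: x > 10, 'action': 'alert'},
    {'name': 'R2', 'condition': lambda x: x % 2 == 0, 'action': 'log'},
]
execution_order = ['R2', 'R1']  # execution structure differs from the listing order

def execute(rules, order, value):
    by_name, log = {r['name']: r for r in rules}, []
    for name in order:
        result = by_name[name]['condition'](value)
        log.append({'rule': name, 'input': value, 'result': result})
    return log

def report(rules, log):
    by_rule = {entry['rule']: entry for entry in log}
    # back in listing order, with matched input and evaluation result per executed rule
    return [(r['name'], by_rule.get(r['name'])) for r in rules]

for name, entry in report(rules, execute(rules, execution_order, 12)):
    print(name, entry)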

US Pat. No. 10,395,176

DATA BASED TRUTH MAINTENANCE

International Business Ma...

1. A method comprising:receiving, by a computer processor of a computing device from a plurality of data sources, first health event data associated with a first plurality of health care records associated with a plurality of patients, said computer processor controlling a cloud hosted mediation system comprising an inference engine software application, a truth maintenance system database, and non monotonic logic, wherein said non monotonic logic comprises code for enabling a Dempster Shafer theory;
deriving, by said computer processor executing said inference engine software application, first health related assumption data associated with each portion of portions of said first health event data associated with associated patients of said plurality of patients and related records in said truth maintenance system database, wherein said first health related assumption data comprises multiple sets of assumptions associated with said plurality of patients, wherein each set of said multiple sets comprises assumed medical conditions and an associated plausibility percentage value, wherein at least two sets of said multiple sets are associated with each patient of said plurality of patients, wherein a first set of said multiple sets comprises evidence supporting a first fact indicating that a first patient of said plurality of patients has a first medical condition of said assumed medical conditions with a first plausibility percentage value, wherein a second set of said multiple sets comprises evidence supporting a second fact indicating that said first patient has a second medical condition of said assumed medical conditions with a second plausibility percentage value, wherein said first medical condition differs from said second medical condition, and wherein said first plausibility percentage value differs from said second plausibility percentage value;
determining, by said computer processor, based on results of executing the Dempster Shafer theory with respect to said first set and said second set, that said first set comprises a higher belief assignment value than said second set;
generating, by said computer processor based on results of said determining, said deriving and said first executing, an initial diagnosis and treatment recommendation for said first patient, said initial diagnosis and treatment recommendation associated with said first set;
retrieving, by said computer processor from said truth maintenance system database, previous health related assumption data derived from and associated with previous portions of previous health event data retrieved from said plurality of data sources, said previous health related assumption data derived at a time differing from a time of said deriving, said previous health related event data associated with previous health related events occurring at a different time from said first health event data;
additionally executing, by said computer processor executing said non monotonic logic, the Dempster Shafer theory with respect to said first set, said second set, said first patient, and said previous health related assumption data;
modifying, by said computer processor based on results of said additionally executing, said first plausibility percentage value of said first set and said second plausibility percentage value of said second set;
determining, by said computer processor, based on results of said additionally executing and said modifying, that said second set comprises a higher belief assignment value than said first set;
generating, by said computer processor based on said results of said additionally executing and said modifying, an updated diagnosis and treatment recommendation for said first patient; and
generating, by said computer processor executing said non monotonic logic and said inference engine software application, first updated health related assumption data associated with said first health related assumption data and said previous health related assumption data, wherein said previous health related assumption data, said first health related assumption data, and said first updated health related assumption data each comprise assumptions associated with detected medical conditions of said plurality of patients.
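
A minimal Python sketch of Dempster's rule of combination, the Dempster Shafer step named in the claim above: two mass functions over candidate medical conditions are combined so that later evidence can raise the belief in one diagnosis over another, mirroring the shift from the initial to the updated recommendation. The conditions and mass values are invented.

from itertools import product

def combine(m1, m2):
    combined, conflict = {}, 0.0
    for (a, w1), (b, w2) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + w1 * w2
        else:
            conflict += w1 * w2  # mass assigned to contradictory evidence
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

FLU, PNEUMONIA = frozenset({'flu'}), frozenset({'pneumonia'})
EITHER = FLU | PNEUMONIA

initial = {FLU: 0.6, EITHER: 0.4}             # first set of assumptions
new_evidence = {PNEUMONIA: 0.7, EITHER: 0.3}  # derived from later health event data
updated = combine(initial, new_evidence)
print({tuple(sorted(k)): round(v, 3) for k, v in updated.items()})
# pneumonia now carries the higher belief assignment (about 0.483 vs 0.310)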

US Pat. No. 10,395,175

DETERMINATION AND PRESENTMENT OF RELATIONSHIPS IN CONTENT

Amazon Technologies, Inc....

1. A method comprising:receiving, by a source device comprising at least one processor, an electronic book (“eBook”) comprising a story including a first character having a first character name;
identifying, by the source device, a match between a keyword and a first word in the eBook;
determining, by the source device, a bookmarked location in the eBook, wherein the bookmarked location indicates a current reading location in the eBook;
determining, by the source device, an occurrence relating the first character name to the first word;
determining, by the source device, a number of words between the first character name and the first word in the eBook;
determining, by the source device, that the number of words between the first character name and the first word in the eBook is less than a threshold number of words;
determining, by the source device, a connection score for the first character and the first word in the eBook, wherein the connection score is based at least in part on the occurrence;
determining, by the source device and based at least in part on the connection score, that the first character is connected to the first word;
identifying, by the source device, that a second character is connected to the first word;
generating, by the source device and based at least in part on the number of words between the first character name and the first word in the eBook being less than the threshold number of words, data that represents a family structure including a parental relationship between the first character and the second character; and
sending, by the source device, the eBook and the data representing the family structure to a reader device.
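
A minimal Python sketch, with an invented passage and scoring rule, of the proximity test described in the claim above: if a keyword such as "father" appears within a threshold number of words of a character name, the character is treated as connected to that keyword, which supports generating the parental-relationship data.

THRESHOLD_WORDS = 5

def words_between(text, a, b):
    tokens = text.lower().split()
    ia, ib = tokens.index(a.lower()), tokens.index(b.lower())
    return abs(ia - ib) - 1

def connected(text, character, keyword, threshold=THRESHOLD_WORDS):
    return words_between(text, character, keyword) < threshold

passage = "Maria smiled as her father Henry closed the book"
print(words_between(passage, "Henry", "father"))  # 0 words between keyword and name
print(connected(passage, "Henry", "father"))      # True -> supports a parental relationship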

US Pat. No. 10,395,174

METHOD FOR PERFORMING INSIGHT OPERATIONS WITHIN A COGNITIVE ENVIRONMENT

Cognitive Scale, Inc., A...

1. A method for providing cognitive insight via a cognitive information processing system environment, the cognitive information processing system environment comprising a cognitive inference and learning system and a cognitive application, comprising:receiving data from a plurality of data sources, the plurality of data sources comprising a social data source stored in a social data repository, a public data source stored in a public data repository, a licensed data source stored in a licensed data repository and a proprietary data source stored in a proprietary data repository;
encapsulating an operation for providing a desired cognitive insight via an insight engine; and,
applying the operation to a target cognitive graph to generate a cognitive insight based upon the operation, the target cognitive graph being stored within a repository of cognitive graphs, the target cognitive graph providing a representation of expert knowledge, associated with individuals and groups over a period of time, to depict relationships between people, places and things, the target cognitive graph providing a machine-readable formalism for knowledge representation, the cognitive inference and learning system executing on a hardware processor of an information processing system, the information processing system being deterministic, the cognitive inference and learning system comprising a cognitive platform executing on the information processing system, the cognitive platform and the information processing system performing a cognitive computing function, the cognitive platform comprising a cognitive engine, the cognitive engine comprising the insight engine, the insight engine processing streams of data from the plurality of data sources, the cognitive inference and learning system using the insight engine to generate a plurality of cognitive insights; and,
providing the plurality of cognitive insights generated by the insight engine to a destination, the destination comprising the cognitive application, the cognitive application enabling a user to interact with the cognitive insights, the cognitive application being a cloud-based application.

US Pat. No. 10,395,172

COLLABORATIVE DECISION MAKING

AIRBUS OPERATIONS LIMITED...

1. A method of generating decision options, the method comprising operating a computer system to:receive and store sensor data from a plurality of sensors;
present a visualisation of at least some of the sensor data to a first user;
receive and store first tag data from the first user in response to the presentation of the visualisation to the first user;
present a visualisation of at least some of the sensor data to a second user which is the same visualisation that is presented to the first user or a different visualisation;
receive and store second tag data from the second user in response to the presentation of the visualisation to the second user;
generate decision options with a computer implemented decision support algorithm in accordance with the first and second tag data, a stored operational plan, and at least some of the sensor data; and
output the decision options generated by the decision support algorithm.

US Pat. No. 10,395,171

PROVIDING EVENT-PROCESSING RULES

INTERNATIONAL BUSINESS MA...

1. A method of storing a plurality of general rules capable of representing a larger plurality of customized rules as computer readable data on computer data storage hardware in a storage space efficient manner that does not require storing all of the larger plurality of customized rules, the method comprising:determining a plurality of rule expression parameters including at least: an event field, an arithmetic operator, a first operand, and a logical operator;
determining an order for the plurality of rule expression parameters;
storing the plurality of rule expression parameters in the determined order as a particular general rule of the plurality of general rules; and
generating a first customized rule from the particular general rule at least in part by determining a first respective parameter value for each rule expression parameter of the plurality of rule expression parameters;
storing, on the computer data storage hardware, each determined first parameter value in the determined order;
generating a second customized rule from the particular general rule at least in part by determining a second respective parameter value for each rule expression parameter of the plurality of rule expression parameters;
storing, on the computer data storage hardware, each determined second parameter value in the determined order;
executing the first customized rule at least in part by calling the particular general rule and applying the particular general rule using each first respective parameter value specified by the first customized rule in accordance with the determined order; and
executing the second customized rule at least in part by calling the particular general rule a second time, shifting each rule expression parameter from a corresponding first respective parameter value specified by the first customized rule to a corresponding second respective parameter value specified by the second customized rule, and applying the particular general rule using each second respective parameter value in accordance with the determined order.
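
The general-rule/customized-rule split in the claim above can be illustrated with a short Python sketch; the parameter order, the operator table, and the two example rules are assumptions made for illustration only.

import operator

GENERAL_RULE_ORDER = ("event_field", "arithmetic_operator", "first_operand", "logical_operator")
OPS = {">": operator.gt, "<": operator.lt, "==": operator.eq}

def execute_general_rule(event, parameter_values):
    """Apply the stored general rule using parameter values given in GENERAL_RULE_ORDER."""
    field, op, operand, logical = parameter_values
    result = OPS[op](event[field], operand)
    return (not result) if logical == "NOT" else result

# Two customized rules reuse the same general rule; only their parameter values are stored.
rule_a = ("temperature", ">", 80, "PASS")
rule_b = ("pressure", "<", 30, "NOT")
event = {"temperature": 85, "pressure": 25}
print(execute_general_rule(event, rule_a))  # True
print(execute_general_rule(event, rule_b))  # False: NOT (25 < 30)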

US Pat. No. 10,395,167

IMAGE PROCESSING METHOD AND DEVICE

BOE TECHNOLOGY GROUP CO.,...

9. An image processing device, comprising:a first Convolutional Neural Network (CNN) circuit configured to extract one or more features of an inputted first image by a first CNN, the inputted first image being inputted to the first one of the first convolutional layers, wherein the first CNN comprises a plurality of first convolutional layers connected sequentially to each other and a plurality of first pooling layers each connected to and arranged between respective adjacent first convolutional layers, and each of the first convolutional layers is configured to generate and output a first convolutional feature; and
a second CNN circuit configured to reconstruct the inputted first image and output the reconstructed image after reconstruction by a second CNN, wherein the second CNN comprises a plurality of second convolutional layers connected sequentially to each other and a plurality of second composite layers each connected to and arranged between respective adjacent second convolutional layers, and each of the second composite layers is an up-sampling layer, wherein
the number of the first convolutional layers is identical to the number of the second convolutional layers,
an outputted image from the last one of the first convolutional layers is applied to the first one of the second convolutional layers,
apart from the first one of the plurality of second convolutional layers, at least one of the second convolutional layers is configured to receive the first convolutional feature outputted from the corresponding first convolutional layer, and
an output from the second composite layer at an identical level started from the first one of the second convolutional layers is combined with the first convolutional feature outputted from the corresponding first convolutional layer to acquire a final output image data.
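
A rough PyTorch sketch of the two-network structure described above: a convolution-plus-pooling branch whose per-layer features are combined with a convolution-plus-up-sampling reconstruction branch. The layer counts, channel sizes, and activation choices are illustrative assumptions, and PyTorch modules stand in for the claimed circuits.

import torch
import torch.nn as nn

class EncoderDecoder(nn.Module):
    def __init__(self, channels=16):
        super().__init__()
        self.enc1 = nn.Conv2d(3, channels, 3, padding=1)       # first convolutional layer
        self.enc2 = nn.Conv2d(channels, channels, 3, padding=1)
        self.pool = nn.MaxPool2d(2)                             # first pooling layer
        self.up = nn.Upsample(scale_factor=2)                   # second composite (up-sampling) layer
        self.dec1 = nn.Conv2d(channels, channels, 3, padding=1)
        self.dec2 = nn.Conv2d(channels, 3, 3, padding=1)

    def forward(self, x):
        f1 = torch.relu(self.enc1(x))         # first convolutional feature
        f2 = torch.relu(self.enc2(self.pool(f1)))
        d1 = torch.relu(self.dec1(f2))        # last encoder output feeds the first decoder layer
        return self.dec2(self.up(d1) + f1)    # up-sampled output combined with the matching encoder feature

reconstructed = EncoderDecoder()(torch.randn(1, 3, 32, 32))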

US Pat. No. 10,395,166

SIMULATED INFRARED MATERIAL COMBINATION USING NEURAL NETWORK

Lockheed Martin Corporati...

1. A mipping system, comprising:processing circuitry configured to:
receive combinations of a plurality of pixels N at a time, each pixel having material codes directed to respective materials of the pixels, where the material codes relate to infrared properties of the respective materials sensed by a sensor of the mipping system, and N is a positive integer greater than 1; and
train an artificial neural network having a classification space by providing respective neurons for each unique combination of material codes, and condition the artificial neural network so that the respective neurons activate when presented with their unique combinations of material codes;
calculate an average value of the material codes;
replace a stored maximum value with the average value when the average value exceeds the stored maximum value;
replace a stored minimum value with the average value when the average value falls below the stored minimum value; and
normalize the material codes using the replaced maximum and minimum values;
train the artificial neural network starting with a vigilance setting of a first value; and
when the artificial neural network reaches a state in which no more new patterns are to be learned by the artificial neural network, adjust the vigilance setting of the first value to a second value that is lower than the first value and greater than zero, and retrain the artificial neural network with the vigilance setting of the second value.
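
Only the running max/min tracking and normalisation steps of the claim above are sketched here; the ART-style training with a decreasing vigilance setting is omitted, and all values and names are illustrative assumptions.

stored_max = float("-inf")
stored_min = float("inf")

def normalize(material_codes):
    """Update the stored max/min with the batch average, then scale the codes against them."""
    global stored_max, stored_min
    average = sum(material_codes) / len(material_codes)
    stored_max = max(stored_max, average)   # replace stored maximum when the average exceeds it
    stored_min = min(stored_min, average)   # replace stored minimum when the average falls below it
    span = (stored_max - stored_min) or 1.0
    return [(code - stored_min) / span for code in material_codes]

normalize([2.0, 4.0, 6.0])          # first batch seeds the stored extremes
print(normalize([8.0, 10.0, 12.0]))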

US Pat. No. 10,395,165

NEURAL NETWORK UNIT WITH NEURAL MEMORY AND ARRAY OF NEURAL PROCESSING UNITS THAT COLLECTIVELY PERFORM MULTI-WORD DISTANCE ROTATES OF ROW OF DATA RECEIVED FROM NEURAL MEMORY

VIA ALLIANCE SEMICONDUCTO...

1. An apparatus, comprising:an array of N processing units (PU) each having:
an accumulator having an output;
an arithmetic unit having first, second and third inputs and that performs an operation thereon to generate a result to store in the accumulator, the first input receives the output of the accumulator;
a weight input that is received by the second input to the arithmetic unit; and
a multiplexed register having first, second, third and fourth data inputs, an output received by the third input to the arithmetic unit, and a control input that controls selection of the first, second, third and fourth data inputs;
a first memory that holds rows of N weight words and provides the N weight words of a row to the corresponding weight inputs of the N PUs of the PU array;
a second memory that holds rows of N data words and provides the N data words of a row to the corresponding first data inputs of the multiplexed register of the N PUs of the PU array;
wherein the output of the multiplexed register is also received by:
the second data input of the multiplexed register of a PU one PU away;
the third data input of the multiplexed register of a PU 2^J PUs away, wherein J is an integer greater than 1; and
the fourth data input of the multiplexed register of a PU 2^K PUs away, wherein K is an integer greater than J;
wherein the multiplexed registers of the N PUs collectively operate as an N-word rotater that rotates by one word when the control input specifies the second data input;
wherein the multiplexed registers of the N PUs collectively operate as an N-word rotater that rotates by 2^J words when the control input specifies the third data input; and
wherein the multiplexed registers of the N PUs collectively operate as an N-word rotater that rotates by 2^K words when the control input specifies the fourth data input.
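
A software model of the multi-distance rotation claimed above, with the hardware multiplexed registers modelled as a Python list; N, J, and K are arbitrary values consistent with the claim (K > J > 1).

def rotate(words, distance):
    """Each of the N positions takes the word held `distance` positions away, wrapping around."""
    n = len(words)
    return [words[(i + distance) % n] for i in range(n)]

N, J, K = 16, 2, 3
row = list(range(N))          # one row of N data words from the second memory
print(rotate(row, 1))         # second mux input selected: rotate by one word
print(rotate(row, 2 ** J))    # third mux input selected: rotate by 2^J words
print(rotate(row, 2 ** K))    # fourth mux input selected: rotate by 2^K words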

US Pat. No. 10,395,164

FINGERPRINT SENSING MODULE AND METHOD FOR MANUFACTURING THE FINGERPRINT SENSING MODULE

1. A fingerprint sensing module comprising:a fingerprint sensor device having a sensing array arranged on a first side of the device, the sensing array comprising an array of fingerprint sensing elements, wherein said fingerprint sensor device comprises connection pads arranged on said first side of said fingerprint sensing device for connecting said fingerprint sensor device to external circuitry;
a fingerprint sensor device cover structure arranged to cover said fingerprint sensor device, said cover structure having a first side configured to be touched by a finger, thereby forming a sensing surface of said sensing module, and a second side facing said sensing array, wherein said cover structure comprises conductive traces arranged on the second side of the cover structure, for electrically connecting said fingerprint sensing module to external circuitry, and wherein a surface area of said cover structure is larger than a surface area of said sensor device; and
a carrier having a first side attached to a second side of said fingerprint sensor device, opposite of said first side of said fingerprint sensor device;
wherein said fingerprint sensor device further comprises wire-bonds electrically connecting said connection pads of said fingerprint sensor device to said conductive traces of said cover structure, said wire-bonds comprising:
wire-bonds between said connection pads of said fingerprint sensor device and said first side of said carrier, and
wire-bonds arranged between a second side of said carrier, opposite of said first side of said carrier, and said conductive traces of said cover structure.

US Pat. No. 10,395,163

METAL CHIP CARD CAPABLE OF SUPPORTING RADIO FREQUENCY COMMUNICATION AND PAYMENT

Hightec Technology Co., L...

1. A metal chip card supporting radio frequency communication and payment, wherein, an antenna circuit module of the metal chip card supporting radio frequency communication and payment comprises a flexible printed circuit/printed circuit board assembly (FPC/PCBA) antenna circuit board, an IC chip, a chip sealing adhesive and a two-side gold-plated touch electrode, an ultrathin ferrite wave absorption electromagnetic shielding layer is stuck below the antenna circuit module, the two-side gold-plated touch electrode is disposed on a surface of the FPC/PCBA antenna circuit board, and the antenna circuit module and a metal substrate which is provided with a milled groove and an inner wall of which is coated with a hot melt adhesive layer are packaged into the metal chip card by means of hot pressing,wherein a lower surface of the FPC/PCBA antenna circuit board is stuck with the IC chip and the sealing adhesive used for fixing and protecting gold wire solder joints.

US Pat. No. 10,395,162

ULTRA-LOW POWER AND COST PURELY ANALOG BACKSCATTER SENSORS WITH EXTENDED RANGE SMARTPHONE/CONSUMER ELECTRONICS FM RECEPTION

1. A device comprising:an antenna configured to receive and backscatter a RF signal;
a sensing element;
a base (first) oscillator coupled to the sensing element;
a modulation (second) oscillator configured to be controlled by the base oscillator;
and an impedance modulator coupled to the antenna and controlled by the modulation oscillator.

US Pat. No. 10,395,158

METHOD FOR MAKING AN ANTI-CRACK ELECTRONIC DEVICE

GEMALTO SA, Meudon (FR)

1. A method for manufacturing an intermediate electronic-device for a device having an electronic module covered with a cover sheet or layer, said method comprising a step of forming a carrier body comprising:a cavity formed in the carrier body and extending through opposed outer surfaces of the carrier body,
an electrical circuit inside the cavity, said electrical circuit comprising a conductive path and at least one electrical interconnection area electrically connected to said conductive path,
an electronic module comprising a protective coating and at least one connection pad connecting said interconnection area, said electronic module, including said at least one connection pad, being disposed in the cavity,
a cover sheet or layer disposed outside the cavity and covering said electronic module from outside the cavity, and
a space or gap existing at the interface between the module and the cavity formed in the carrier body,
wherein the space or gap is at least partially filled by a conductive material arranged in the device in contact with the at least one electrical interconnection area and the at least one connection pad.

US Pat. No. 10,395,157

SMART CARD MODULE ARRANGEMENT, SMART CARD, METHOD FOR PRODUCING A SMART CARD MODULE ARRANGEMENT AND METHOD FOR PRODUCING A SMART CARD

Infineon Technologies AG,...

1. A method for producing a smart card module arrangement, the method comprising:arranging a smart card module on a first carrier layer, wherein the first carrier layer is free of a prefabricated smart card module receptacle cutout for receiving the smart card module, and wherein the smart card module comprises:
a substrate;
wherein the substrate comprises a first side and a second side,
wherein the second side of the substrate is opposite the first side,
a chip on the substrate;
a first mechanical reinforcement structure arranged on the first side, between the chip and the substrate, wherein the first mechanical reinforcement structure covers at least one part of a surface of the chip; and
a second mechanical reinforcement structure arranged on the second side, wherein the second mechanical reinforcement structure covers at least one part of the chip;
applying a second carrier layer to the smart card module, wherein the second carrier layer is free of a prefabricated smart card module receptacle cutout for receiving the smart card module; and
at least one of laminating or pressing the first carrier layer with the second carrier layer, such that the smart card module is enclosed by the first carrier layer and the second carrier layer.

US Pat. No. 10,395,155

BILLBOARD CONTAINING ENCODED INFORMATION

1. A billboard containing encoded information, the billboard comprising:a billboard body having a front face presenting advertisement content;
a plurality of color blocks, the plurality of color blocks being distributed in a preset mode on the front face and each of which individually covering a part of the front face,
wherein the plurality of color blocks are encoded as color geometric graphic code elements and the entire front face can be optically identified and decoded to obtain the encoded information when captured by a mobile terminal, and wherein the encoded information, or the information decoded by the mobile terminal and imported to a web page, is associated with the advertisement content.

US Pat. No. 10,395,154

DIGITAL LABEL AND ASSET TRACKING INTERFACE

GENERAL ELECTRIC COMPANY,...

1. A product information display device for application to a product, the product information display device comprising:a controller;
a non-transitory electronic memory unit operably connected to the controller and configured to store product information therein;
a securing mechanism configured to secure the device to a surface of the product; and
a display operably connected to the controller and configured to present the product information thereon, wherein the controller includes a transceiver, and wherein the memory unit retains and stores updated product information from the transceiver on newly installed components on the product for presentation on the label,
wherein the controller includes at least one sensor, and
wherein the display device is further configured to obtain and provide real-time usage and performance data of the product and environmental condition data of the product from the at least one sensor on the display.

US Pat. No. 10,395,153

DURABLE CARD

COMPOSECURE, LLC, Somers...

1. A process for forming a card, the process comprising the steps of:forming a first core subassembly comprised of two or more layers which include one or more elements that define functionality of the card, said first core subassembly having a top layer and a bottom layer;
forming a second subassembly including a hard coat layer attached to a release layer mounted on a carrier layer;
attaching the second subassembly to the top layer of the first core subassembly so the hard coat layer is closest to the first core subassembly to form a first card assembly;
laminating the first card assembly under predetermined temperature and pressure such that the carrier layer imparts a finish to the hard coat layer of the card; and
removing the release layer and the carrier layer to form a resultant card.

US Pat. No. 10,395,152

AMASSING PICK AND/OR STORAGE TASK DENSITY FOR INTER-FLOOR TRANSFER

Amazon Technologies, Inc....

1. A method comprising:instructing retrieval of a first storage rack from a storage area of a storage floor to a consolidation area of the storage floor based on the first storage rack bearing a first container including a first inventory item designated for removal from the first container on a processing floor, the processing floor being separate from the storage floor;
instructing transfer of the first container including the first inventory item from the first storage rack to a transfer rack in the consolidation area;
instructing retrieval of a second storage rack from the storage area of the storage floor to the consolidation area of the storage floor based on the second storage rack bearing a second container including a second inventory item designated for removal from the second container on the processing floor;
instructing transfer of the second container including the second inventory item from the second storage rack to the transfer rack in the consolidation area;
instructing movement of the transfer rack to the processing floor;
instructing movement of the first container including the first inventory item from the transfer rack to a first shuttle rack to facilitate movement to a destination on the processing floor for removal of the first inventory item from the first container; and
instructing movement of the second container including the second inventory item from the transfer rack to a second shuttle rack to facilitate movement to a destination on the processing floor for removal of the second inventory item from the second container.

US Pat. No. 10,395,151

SYSTEMS AND METHODS FOR LOCATING GROUP MEMBERS

Symbol Technologies, LLC,...

1. A method for tracking individuals within a venue, the method comprising:grouping, with at least one processor, a set of wearable articles, each of the wearable articles comprising a radio frequency identification (RFID) tag;
receiving, from an RFID positioning system within the venue, RFID position data indicating a location for each wearable article in the set of wearable articles;
detecting, based on the RFID position data, that a first wearable article within the set of wearable articles is no longer within a permitted location of the venue; and
transmitting, using the at least one processor, an alert to a mobile device associated with a second wearable article within the set of wearable articles,
wherein the permitted location is a proximity to another wearable article; and
wherein detecting that the first wearable article is no longer within the permitted location of the venue comprises detecting, based on the RFID position data, that a distance between the first wearable article and another wearable article within the set of wearable articles exceeds a threshold distance.
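
A simplified sketch of the distance-threshold check in the claim above; the threshold value, the tag coordinates, and the use of math.dist for planar distance are illustrative assumptions.

import math

THRESHOLD_M = 15.0  # hypothetical permitted separation in metres

def strays(positions, threshold=THRESHOLD_M):
    """Return tags farther than `threshold` from every other tag in the grouped set."""
    out = []
    for tag, pos in positions.items():
        others = [p for t, p in positions.items() if t != tag]
        if others and all(math.dist(pos, p) > threshold for p in others):
            out.append(tag)
    return out

positions = {"tag_1": (0.0, 0.0), "tag_2": (2.0, 1.0), "tag_3": (40.0, 5.0)}
for tag in strays(positions):
    print(f"alert: {tag} left the permitted proximity")  # would be sent to the paired mobile device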

US Pat. No. 10,395,150

PRINTING CONTROL APPARATUS, CONTROL METHOD OF PRINTING CONTROL APPARATUS, AND PROGRAM

Seiko Epson Corporation, ...

1. A printing control apparatus that controls a printing apparatus executing printing based on print data, comprising:a storage unit having a nonvolatile memory in which reading and writing are executed in n cell units (n is 2 or more);
a writing unit that writes the print data to the nonvolatile memory;
a reading unit that reads the print data from the nonvolatile memory;
a measuring unit that measures a cumulative amount of the print data written in the nonvolatile memory;
a reporting unit that reports information; and
a control unit that controls erasing of the print data that have been printed from the nonvolatile memory according to a read state of the reading unit and causes the writing unit to write new print data,
wherein the reporting unit reports information on a reduction in a printing speed of the printing apparatus in a case where the cumulative amount measured by the measuring unit is equal to or more than a predetermined amount.

US Pat. No. 10,395,147

METHOD AND APPARATUS FOR IMPROVED SEGMENTATION AND RECOGNITION OF IMAGES

RAKUTEN, INC., Tokyo (JP...

1. A method of determining a floorplan using a specially programmed machine, the machine comprising a processor, a memory and a display in communication with one another, the method comprising:obtaining a first floorplan image into said machine;
obtaining semantic segmentation data of the floorplan image;
obtaining optical character recognition (OCR) data for the floorplan image;
using the machine to compare the results of the OCR data to the semantic segmentation data with respect to a room size; and
outputting a second floorplan image based on a result of the comparison.

US Pat. No. 10,395,146

FACE RECOGNITION IN BIG DATA ECOSYSTEM USING MULTIPLE RECOGNITION MODELS

International Business Ma...

1. A computer-implemented method of training a facial recognition modeling system using an extremely large data set of facial images, the method comprising:distributing a plurality of facial recognition models across a plurality of nodes within the facial recognition modeling system; and
optimizing a facial matching accuracy of the facial recognition modeling system by increasing a facial image set variance among the plurality of facial recognition models, wherein, to optimize the facial matching accuracy of the facial recognition modeling system, the program code when executed is further operable to:
match each facial image of the data set of facial images with at least one of the facial recognition models;
determine the least closely matching facial image associated with a maximum eigenvector distance between the facial image and each most closely matching facial image of the plurality of facial recognition models; and
insert a facial image of the data set of facial images into a facial recognition model of the plurality of facial recognition models, wherein the facial recognition model is associated with a least closely matching facial image.

US Pat. No. 10,395,144

DEEPLY INTEGRATED FUSION ARCHITECTURE FOR AUTOMATED DRIVING SYSTEMS

GM GLOBAL TECHNOLOGY OPER...

1. A sensor fusion system for an autonomous driving system, comprising:a sensor system for providing environment condition information;
a camera for providing camera data;
a range data processing unit configured to receive the environment condition information and produce a range data map; and
a convolutional neural network comprising:
a receiving interface configured to receive the environment condition information, from the sensor system and to receive the camera data from the camera,
a common convolutional layer configured to, by a processor, extract traffic information from the camera data based on the range data map and to produce a plurality of feature maps associated with the traffic information,
a plurality of fully connected layers configured to, by a processor, detect objects belonging to different object classes based on the extracted traffic information and the range data map, wherein the object classes include at least one of a road feature class, a static object class, and a dynamic object class;
an environment representation layer configured to, by a processor, provide environment information; and
an object-level fusion layer configured to, by a processor, track the detected objects by fusing information from the range data map and data from the environment representation layer and to provide estimates for the position and velocity of the tracked objects, perform fusion in a free-space representation using the range data map and the plurality of feature maps to produce a fused free-space output, and perform fusion in a stixel representation using the range data map, the camera data, and the plurality of feature maps to produce fused stixels.

US Pat. No. 10,395,143

SYSTEMS AND METHODS FOR IDENTIFYING A TARGET OBJECT IN AN IMAGE

International Business Ma...

1. A computer implemented method of identifying a plurality of target objects in a digital image, the method comprising:receiving a digital image including a plurality of target objects;
extracting a plurality of query descriptors from a respective plurality of locations in the digital image;
comparing each one of said plurality of query descriptors with a plurality of training descriptors for identifying a plurality of matching training descriptors, each one of the plurality of training descriptors is associated with one of a plurality of reference object identifiers and with relative location data comprising an estimated distance and an estimated direction from a center point of a reference object indicated by the respective associated reference object identifier from the plurality of reference object identifiers;
computing a plurality of object-regions of the digital image by clustering the query descriptors having common center points defined by the matching training descriptors, each object-region approximately bounding one target object of the plurality of target objects of the digital image, each object-region is associated with another common center point of said common center points and with a scale relative to a reference object size,
wherein each of the plurality of object-regions is computed independently of the respective reference object identifier associated with said each of the plurality of object-regions; and
classifying the bound target object of each object-region of the plurality of object-regions according to the reference object identifier of a respective cluster according to a statistically significant correlation requirement between a common center point of the respective cluster and the center point of the reference object associated with the reference object identifier of the respective cluster;
wherein the comparing is performed by finding a set of Euclidean nearest neighbors of the respective extracted query descriptors, wherein each member of the set of Euclidean nearest neighbors is one of the plurality of matching training descriptors;
wherein the set of Euclidean nearest neighbors are identified for a first subset of the extracted query descriptors, wherein a second subset of extracted query descriptors are unmatched, wherein for each member of the second subset of extracted query descriptors that are unmatched, a matching training descriptor is computed such that the difference between the center point of the relative location data of the identified matching training descriptors and the center point of the relative location data of the computed training descriptor matched to the unmatched second subset of query descriptors is equal to the difference between the relative location of the query descriptor matched to the identified matching training descriptor and the relative location of the unmatched second subset of query descriptors for which the matching training descriptor is computed.

US Pat. No. 10,395,141

WEIGHT INITIALIZATION FOR MACHINE LEARNING MODELS

SAP SE, Walldorf (DE)

1. A system, comprising:at least one data processor; and
at least one memory storing instructions which, when executed by the at least one data processor, result in operations comprising:
processing an image set with a convolutional neural network configured to detect, in the image set, a first feature and a second feature;
determining a first effectiveness of the first feature and a second effectiveness of the second feature, the first effectiveness of the first feature corresponding to a first quantity of images in the image set that the convolutional neural network is able to classify based on the presence of the first feature, and the second effectiveness of the second feature corresponding to a second quantity of images in the image set that the convolutional neural network is able to classify based on the presence of the second feature;
determining, based at least on the first effectiveness of the first feature and the second effectiveness of the second feature, a first initial weight for the first feature and a second initial weight for the second feature; and
initializing the convolutional neural network prior to training the convolutional neural network, the initialization of the convolutional neural network comprising configuring the convolutional neural network to apply, during the training of the convolutional neural network, the first initial weight and the second initial weight.
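
A rough sketch of effectiveness-proportional initial weights for the claim above; how an effectiveness count maps to a numeric weight is an assumption on our part, since the claim does not specify the mapping.

def initial_weights(effectiveness):
    """Scale each feature's initial weight by the share of images it can classify."""
    total = sum(effectiveness.values())
    return {feature: count / total for feature, count in effectiveness.items()}

# first feature classifies 300 images in the set, second feature 100
weights = initial_weights({"feature_1": 300, "feature_2": 100})
print(weights)  # {'feature_1': 0.75, 'feature_2': 0.25} -- applied before training begins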

US Pat. No. 10,395,139

INFORMATION PROCESSING APPARATUS, METHOD AND COMPUTER PROGRAM PRODUCT

Kabushiki Kaisha Toshiba,...

1. An information processing apparatus comprising:a memory; and
processing circuitry configured to:
acquire an input image captured by an image-capturing device installed in a specific location;
perform adaptation processing of adapting an estimation model, which is used for detecting positions or the number of objects contained in an image, to the specific location by sequentially selecting a parameter of the estimation model from a lower level toward a higher level, and by modifying the selected parameter in such a manner to reduce an estimation error in the positions or the number of the objects contained in the input image;
acquire a termination condition for the adaptation processing; and
terminate the adaptation processing when the termination condition is satisfied.

US Pat. No. 10,395,124

THERMAL IMAGE OCCUPANT DETECTION

OSRAM SYLVANIA Inc., Wil...

1. A method for determining occupancy of an area, the method comprising:receiving a first thermal image of the area collected at a first time, the first thermal image including a first plurality of thermal intensity values corresponding to a plurality of pixels of a sensor;
receiving a second thermal image of the area collected at a second time after the first time, the second thermal image including a second plurality of thermal intensity values corresponding to the plurality of pixels of the sensor;
identifying a change in thermal intensity values between the second plurality of thermal intensity values and the first plurality of thermal intensity values;
comparing the change in thermal intensity values to a level of expected change in thermal intensity values corresponding to at least one of a single occupant entering the area and a single occupant leaving the area;
determining a rate of change for the change in thermal intensity values;
identifying a presence of an occupant in the area when:
the compared change in thermal intensity values corresponds to one or more occupants, and
the determined rate of change is equal to or greater than an occupant threshold; and
identifying the number of occupants in the area by determining a multiple of the change in thermal intensity values to the level of expected change in thermal intensity values.
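
A hedged sketch of the intensity-change comparison in the claim above; the per-occupant expected change, the rate threshold, and the flat 8x8 frames are invented for illustration and not taken from the patent.

EXPECTED_CHANGE_PER_OCCUPANT = 120.0  # hypothetical expected intensity change for one occupant
RATE_THRESHOLD = 10.0                 # hypothetical minimum rate of change

def occupancy_change(frame_t0, frame_t1, dt):
    change = sum(b - a for a, b in zip(frame_t0, frame_t1))
    rate = abs(change) / dt
    if rate < RATE_THRESHOLD:
        return 0  # change too slow to correspond to an occupant entering or leaving
    return round(change / EXPECTED_CHANGE_PER_OCCUPANT)  # multiple of the per-occupant change

print(occupancy_change([20.0] * 64, [25.0] * 64, dt=2.0))  # roughly three occupants' worth of change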

US Pat. No. 10,395,095

FACE MODEL MATRIX TRAINING METHOD AND APPARATUS, AND STORAGE MEDIUM

TENCENT TECHNOLOGY (SHENZ...

1. A face model matrix training method, comprising:obtaining a face image library, the face image library comprising k groups of face images, and each group of face images comprising at least one face image of at least one person, wherein k>2, and k is an integer;
separately parsing each group of the k groups of face images, and calculating a first matrix and a second matrix according to parsing results, the first matrix being an intra-group covariance matrix of facial features of each group of face images, and the second matrix being an inter-group covariance matrix of facial features of the k groups of face images; and
training face model matrices according to the first matrix and the second matrix,
wherein the training face model matrices according to the first matrix and the second matrix comprises:
calculating a third matrix and a fourth matrix according to the first matrix and the second matrix, wherein the third matrix is a covariance matrix of facial features in the face image library, and the fourth matrix is a covariance matrix among facial features of different persons in the face image library; and
training the face model matrices according to the third matrix and the fourth matrix.
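
An illustrative numpy sketch of the intra-group and inter-group covariance step from the claim above; the feature dimensions and group sizes are arbitrary, and the subsequent training of the face model matrices is out of scope here.

import numpy as np

def covariance_matrices(groups):
    """groups: list of (n_i, d) arrays of facial features, one array per group of face images."""
    intra = np.mean([np.cov(g, rowvar=False) for g in groups], axis=0)   # first matrix (intra-group)
    group_means = np.stack([g.mean(axis=0) for g in groups])
    inter = np.cov(group_means, rowvar=False)                            # second matrix (inter-group)
    return intra, inter

rng = np.random.default_rng(0)
groups = [rng.normal(size=(20, 4)) for _ in range(3)]   # k = 3 groups of 4-dimensional features
intra, inter = covariance_matrices(groups)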

US Pat. No. 10,395,090

SYMBOL DETECTION FOR DESIRED IMAGE RECONSTRUCTION

MorphoTrak, LLC, Anaheim...

1. A computer-implemented method comprising:obtaining data indicating an image comprising a latent fingerprint and a template that surrounds the latent fingerprint, and
obtaining reference data that (i) identifies a known symbol associated with the template, and (ii) includes characteristics of the known symbol;
processing the image;
identifying, based on processing the image, one or more candidate regions of the image that are predicted to include the known symbol;
extracting image characteristics represented within the one or more candidate regions; and
determining, based on the characteristics of the known symbol and the extracted image characteristics represented within the one or more candidate regions, that the one or more candidate regions include the known symbol.

US Pat. No. 10,395,078

DIGITAL FINGERPRINT GENERATION USING SENSOR EMBEDDED PACKAGING ELEMENTS

International Business Ma...

1. A method for detecting package tampering, comprising:performing first scanning of a container comprising a packaged item and a plurality of packaging elements surrounding the packaged item;
wherein the plurality of packaging elements are integrated on a base material wrapped around the packaged item;
wherein each packaging element of the plurality of packaging elements is a cushioning element comprising a sensing component comprising a stress sensor;
wherein each stress sensor measures a stress value on a corresponding packaging element in one or more directions; and
wherein each sensing component wirelessly transmits one or more of the measured stress values to one or more scanning devices;
determining at least one stress on each of the plurality of packaging elements surrounding the packaged item from the first scanning;
performing second scanning of the container comprising the packaged item and the plurality of packaging elements surrounding the packaged item;
determining at least one stress on each of the plurality of packaging elements surrounding the packaged item from the second scanning; and
comparing the at least one stress on each of the plurality of packaging elements surrounding the packaged item from the first scanning with the at least one stress on each of the plurality of packaging elements surrounding the packaged item from the second scanning;
wherein the method is performed by at least one computer system comprising at least one memory and at least one processor coupled to the memory.

US Pat. No. 10,395,069

RESTRICTING ACCESS TO A DEVICE

PAYPAL, INC., San Jose, ...

1. A system comprising:one or more computer-readable memories storing program instructions; and
one or more processors configured to execute the program instructions to cause the system to perform operations comprising:
determining that a first mobile device, associated with a first user, is in a process of falling or has fallen during a first time period;
in response to the determining that the first mobile device is in the process of falling or has fallen during the first time period, determining if the first mobile device is located within a safe space; and
in response to determining that the first mobile device is not located within the safe space, switching the first mobile device to stealth mode, wherein switching the first mobile device to stealth mode includes determining an image that visually matches at least a portion of a surface that is underneath the first mobile device, and displaying the image on at least one display of the first mobile device.

US Pat. No. 10,395,035

PHOTON EMISSION ATTACK RESISTANCE DRIVER CIRCUITS

Intel Corporation, Santa...

1. An apparatus comprising:diffusion regions located adjacent each other in a substrate, the diffusion regions including first diffusion regions, second diffusion regions, and third diffusion regions, wherein one of the second diffusion regions and one of the third diffusion regions are between two of the first diffusion regions, and one of the first diffusion regions and one of the third diffusion regions are between two of the second diffusion regions, wherein the first, second, and third diffusion regions have a same conductivity type;
a first connection coupled to each of the first diffusion regions;
a second connection coupled to each of the second diffusion regions; and
a third connection coupled to each of the third diffusion regions.

US Pat. No. 10,395,034

DATA TRACKING IN USER SPACE

International Business Ma...

1. A method comprising:marking, by a set of processors, a first location in a storage, wherein (i) the first location is a store for a set of data based, at least in part, on a first section of code in a program, (ii) the set of data is requested in a set of data requests from the program, and (iii) the program is encrypted;
determining, by the set of processors, that a second section of code in the program attempts to access the first location;
injecting, by the set of processors, a set of instrumentation code into the program according to a dynamic tracing framework, wherein the set of instrumentation code (i) is a dynamic binary instrumentation, (ii) is injected subsequent to the second section of code in an instruction execution stream, and (iii) does not modify the second section of code;
determining, by the set of processors, the instrumentation code executes;
examining, by the set of processors, the first section of code and a set of subsequent instructions in the program, wherein the set of subsequent instructions references the first location;
scanning, by the set of processors, the first location for a set of threats;
determining, by the set of processors, the set of threats exist; and
taking, by the set of processors, a defensive measure.

US Pat. No. 10,395,029

VIRTUAL SYSTEM AND METHOD WITH THREAT PROTECTION

FireEye, Inc., Milpitas,...

1. A computing device comprising:one or more hardware processors; and
a memory coupled to the one or more processors, the memory comprises one or more software components that, when executed by the one or more hardware processors, provide a virtualization software architecture including (i) a virtual machine, (ii) a plurality of hyper-processes and (iii) a hypervisor, wherein
the virtual machine to operate in a guest environment and includes a process that is configured to monitor behaviors of data under analysis within the virtual machine,
the plurality of hyper-processes to operate in a host environment and isolated from each other within an address space of the memory, the plurality of hyper-processes include a threat protection process to classify the data under analysis as malicious or non-malicious based on the monitored behaviors and a guest monitor process configured to manage execution of the virtual machine and operate with the process to obtain and forward metadata associated with the monitored behaviors to the threat protection process, and
the hypervisor is configured to enforce temporal separation of the plurality of hyper-processes and enable inter-process communications between the plurality of hyper-processes.

US Pat. No. 10,395,017

SELECTIVELY REDACTING DIGITAL FOOTPRINT INFORMATION IN ORDER TO IMPROVE COMPUTER DATA SECURITY

International Business Ma...

1. A computer-implemented method for protecting user privacy, the computer-implemented method comprising:retrieving, by one or more processors, a historical digital footprint of a user, wherein the historical digital footprint is a record of past digital data about the user that is available to a public, and wherein the historical digital footprint describes a pattern of routine activities related to social communications from the user;
generating, by one or more processors, a simulated digital footprint for the user, wherein the simulated digital footprint conforms to the pattern of routine activities related to the social communications from the user, and wherein the simulated digital footprint describes simulated current activities of the user;
transmitting, by one or more processors, the simulated digital footprint to the public while a current real digital footprint of real-time activities of the user is being created for the user, wherein use of the pattern of routine activities related to the social communications from the user provides an imperceptible transition from the historical digital footprint to the simulated digital footprint, and wherein the simulated digital footprint prevents the public from accessing the current real digital footprint of the user; and
adjusting, by one or more processors, the simulated digital footprint of the user to simulate a new routine of the user while at a second location, wherein the user is actually at a different first location.

US Pat. No. 10,395,015

MULTI-LEVEL MATRIX PASSWORDS

INTERNATIONAL BUSINESS MA...

1. A method comprising:traversing, during a password entry, a matrix to select a position, wherein the matrix comprises a plurality of levels, each level in the plurality of levels comprising at least one position where data can be entered, wherein a second level in the matrix forms a sub-level of a first level, and wherein the second level is reachable only from a particular position in the first level;
changing, responsive to an input, a mode of the selected position such that the position becomes unchangeable and unselectable during a remainder of the password entry;
encoding the selected position in an auth-step; and
transmitting, responsive to an indication of an end of the password entry, an auth-code, the auth-code comprising a set of auth-steps, the set of auth-steps including the auth-step.

US Pat. No. 10,395,002

OPTICAL RULE CHECKING FOR DETECTING AT RISK STRUCTURES FOR OVERLAY ISSUES

INTERNATIONAL BUSINESS MA...

1. A method of performing lithography and detecting at risk structures due to a lithographic mask overlay comprising:performing a lithography process;
performing the lithographic mask overlay; and
the method of detecting being implemented in a computer infrastructure having computer executable code tangibly embodied on a computer readable storage medium having programming instructions operable to:
determine a probability that an arbitrary point (x, y) on a metal layer is covered by a via by calculating a statistical coverage area metric followed by a summing function; and
detect at risk structures of a semiconductor device by detecting a lithography error occurring from a misalignment of the lithography mask overlay during the lithography process in which the misalignment of the lithography mask overlay occurs when the metal layer is covered by the via based on the determined probability,
wherein determining the probability that the arbitrary point (x, y) on the metal layer is covered by the via comprises:
determining that the metal layer is inside the via by calculating:

wherein:
Pin is representative of a probability that the via covers the metal layer, at the arbitrary point;
determining that the metal layer is outside the via by calculating:

wherein:
Pout is representative of the probability that the via covers the metal layer, at the arbitrary point outside the nominal via shape; and
Ox and Oy follow Gaussian distributions used to calculate Pin and Pout;
Rv represents the radius of the nominal via shape,
wherein the arbitrary point (x, y) is defined by an x coordinate and a y coordinate in a cartesian coordinate system,
wherein P is a probability, and Ox and Oy is an overlay in an x direction and a y direction, respectively, and
wherein the summing function includes mathematical approximations, and the mathematical approximations including engineering approximations to detect the at risk structures of overlay error.
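
The claim's closed-form Pin/Pout expressions are not reproduced in this listing, so the following Monte Carlo estimate is a generic stand-in built only from the surrounding definitions (Gaussian overlays Ox and Oy, nominal via radius Rv, arbitrary point (x, y)); the sigma values and sample count are illustrative.

import math
import random

def coverage_probability(x, y, via_center, Rv, sigma_x, sigma_y, samples=100_000):
    """Estimate the probability that the via, shifted by a Gaussian overlay, still covers (x, y)."""
    hits = 0
    for _ in range(samples):
        ox = random.gauss(0.0, sigma_x)              # overlay in the x direction
        oy = random.gauss(0.0, sigma_y)              # overlay in the y direction
        dx = x - (via_center[0] + ox)
        dy = y - (via_center[1] + oy)
        hits += math.hypot(dx, dy) <= Rv             # point still inside the shifted via
    return hits / samples

print(coverage_probability(0.0, 0.0, via_center=(2.0, 0.0), Rv=3.0, sigma_x=1.0, sigma_y=1.0))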

US Pat. No. 10,394,999

ANALYSIS OF COUPLED NOISE FOR INTEGRATED CIRCUIT DESIGN

International Business Ma...

1. A computer-implemented method comprising:generating, by a processor coupled to the computer, an electronic representation of a circuit design based on the output of a Simulation Program with Integrated Circuit Emphasis (SPICE) for one or more variations of the circuit, derived from the described circuit as expressed in a hardware description language (HDL), or derived from actual data measured from one or more manufactured prototypes of the circuit;
identifying a noise cluster from within the circuit design;
representing said noise cluster according to a variational model, wherein the variational model supports variational analysis of a maximum and a minimum noise given asserted levels of pessimism, the maximum and the minimum noise are expressed through assumed and/or nominal values that are passed through the variational model to represent the noise cluster;
projecting said variational model onto one or more corners to yield a projected noise cluster; and
determining a computed noise for said projected noise cluster.

US Pat. No. 10,394,992

WIRE LINEEND TO VIA OVERLAP OPTIMIZATION

International Business Ma...

1. A computer-implemented method for shifting a cut associated with a lineend of an interconnect in an advanced manufacturing system, the method comprising:selecting, by a circuit design component, one or more polygons associated with a lineend of an interconnect;
determining, by the circuit design component, whether a first cut is spanning the one or more polygons;
determining, by the circuit design component, a presence of a first via on a first interconnect;
determining, by the circuit design component, a first distance of the first via to the first cut;
determining, by the circuit design component, whether the first distance is greater than a first pre-determined threshold;
determining, by the circuit design component, a second distance of the first cut to a second cut;
determining, by the circuit design component, whether the second distance is greater than a second pre-determined threshold;
generating, by the circuit design component, a shift associated with the first cut; and
outputting, by the circuit design component, the shift for moving the first cut.

US Pat. No. 10,394,972

SYSTEM AND METHOD FOR MODELLING TIME SERIES DATA

Dell Products, LP, Round...

8. A method comprising:acquiring data, the data including time series data;
selecting a first level of granularity for the data;
using a processor to isolate one or more time series from the data, assign a unique time series identifier to each time series, and store the time series and the time series identifiers in a data store;
selecting a set of models based on a type of the data;
training the set of models against a first portion of the data;
testing the set of models against a second portion of the data;
forecasting additional time points for the one or more time series using the set of models;
determining a fit statistic for each model for each time series;
using the processor to select a preferred model for each time series based on the fit statistics of the models for the time series;
determining a confidence value for the model for each time series;
adjusting a granularity level for any time series where the confidence value is below a threshold and repeating the forecast at the adjusted granularity level, adjusting the granularity level until the confidence value meets or exceeds the threshold;
storing the fit statistics and an execution history along with the time series in the data store; and
providing a forecast for each time series.
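
A hedged sketch of the model-selection step in the claim above: train candidate models on one portion of a series, score them on a held-out portion, and keep the one with the best fit statistic. The naive candidate models, the MAE fit statistic, and the holdout length are placeholders, and the confidence-driven granularity adjustment loop is omitted.

def mae(actual, predicted):
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

def last_value_model(train, horizon):
    return [train[-1]] * horizon                  # naive "repeat last value" forecast

def mean_model(train, horizon):
    return [sum(train) / len(train)] * horizon    # naive "overall mean" forecast

def pick_model(series, models, holdout=4):
    train, test = series[:-holdout], series[-holdout:]
    scored = [(mae(test, m(train, holdout)), name) for name, m in models.items()]
    return min(scored)                            # (best fit statistic, preferred model)

series = [10, 12, 11, 13, 12, 14, 13, 15]
print(pick_model(series, {"last_value": last_value_model, "mean": mean_model}))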

US Pat. No. 10,394,965

CONCEPT RECOMMENDATION BASED ON MULTILINGUAL USER INTERACTION

SAP SE, Walldorf (DE)

1. A system comprising:a database storing entries for a plurality of concepts, each entry comprising a multilingual vector of counterpart expressions for the respective concept in a source language and in multiple target languages; and
one or more hardware processors configured to perform operations comprising:
receiving, via a network, a first request for translation recommendations, the first request including a source-language expression;
in response to the first request for translation recommendations, recommending a set of entries selected from the stored entries for the plurality of concepts based on the set of entries each including the source-language expression included in the first request for translation recommendations;
receiving, via the network, a first translation decision that specifies an entry among the recommended set of entries, the specified entry including a first target-language expression and the source-language expression, the specified entry identifying a subset of the recommended set of entries, each entry in the identified subset including the first target-language expression and the source-language expression;
receiving, via the network, a second request for translation recommendations, the second request including the source-language expression included in the first request for translation recommendations;
in response to the second request for translation recommendations, recommending the identified subset of entries that each include the first target-language expression and the source-language expression; and
receiving a second translation decision that specifies an entry among the recommended subset of entries, the entry specified by the second translation decision including a second target-language expression, the first target-language expression, and the source-language expression, the entry specified by the second translation decision identifying a portion of the recommended subset of the set of entries, each entry in the identified portion including the first and second target-language expressions and the source-language expression.

US Pat. No. 10,394,963

NATURAL LANGUAGE PROCESSOR FOR PROVIDING NATURAL LANGUAGE SIGNALS IN A NATURAL LANGUAGE OUTPUT

INTERNATIONAL BUSINESS MA...

1. A computer-implemented method of operating a speech synthesizer (SS) circuit configured to convert natural language inputs to natural language outputs and provide a natural language alert that communicates that the natural language outputs may contain error, the computer-implemented method comprising:converting, using a machine translation circuit of the SS circuit, a natural language input to natural language input data, wherein the natural language input comprises a human source language;
performing, using the machine translation circuit of the SS circuit, a translation operation on the natural language input data to translate the natural language input data to a natural language output that represents a target human language;
wherein the translation operation comprises performing a confidence level analysis on at least one portion of the translation operation to generate at least one confidence level signal that represents a confidence level that the natural language output that results from the translation operation contains error;
wherein the translation operation further comprises, based at least in part on the at least one confidence level that the natural language output contains an error, selecting a portion of a disfluency natural language data stored in a memory and embedding the selected portion of the disfluency natural language data into the natural language output;
wherein the selected portion of the disfluency natural language data is embedded into the natural language output in a location selected to communicate that a portion of the natural language output may contain an error; and
converting, using the SS circuit, the natural language output that has been embedded with the selected portion of the disfluency natural language data into speech or text comprising a natural language output having disfluency, wherein the natural language output is in the target human language and the disfluency is in the target human language.

US Pat. No. 10,394,764

REGION-INTEGRATED DATA DEDUPLICATION IMPLEMENTING A MULTI-LIFETIME DUPLICATE FINDER

International Business Ma...

1. A computer program product for performing deduplication in conjunction with random read and write operations across a namespace, the computer program product comprising a computer readable storage medium having program instructions embodied therewith, wherein the computer readable storage medium is not a transitory signal per se, the program instructions executable by a computer to cause the computer to perform a method comprising:receiving, at the computer, a write request comprising a data chunk;
computing, by the computer, a fingerprint of the data chunk;
determining, by the computer, whether a short term dictionary corresponding to the namespace comprises an entry corresponding to the fingerprint;
in response to determining the short term dictionary comprises the entry corresponding to the fingerprint, writing, by the computer, the data chunk to a data store corresponding to the namespace in a deduplicating manner;
in response to determining the short term dictionary does not comprise the entry corresponding to the fingerprint, determining, by the computer, whether a long term dictionary corresponding to the namespace comprises the entry corresponding to the fingerprint;
in response to determining the long term dictionary comprises the entry corresponding to the fingerprint, writing, by the computer, the data chunk to the data store in the deduplicating manner;
in response to determining the long term dictionary does not comprise the entry corresponding to the fingerprint, writing, by the computer, the data chunk to the data store in a non-deduplicating manner; and
in response to determining the long term dictionary comprises the entry corresponding to the fingerprint, repopulating the short term dictionary with the entry corresponding to the fingerprint,
wherein the short term dictionary comprises a first eviction policy,
wherein the long term dictionary comprises a second eviction policy,
wherein the first eviction policy is configured to evict one or more entries of the short term dictionary in response to a new entry being inserted into the short term dictionary, and
wherein the second eviction policy is configured to evict one or more entries of the long term dictionary in response to a new entry being inserted into the long term dictionary.
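
A simplified sketch of the two-dictionary lookup path in the claim above; SHA-256 fingerprints, OrderedDict-based insertion-order eviction, and populating both dictionaries on a non-deduplicating write are all assumptions standing in for the claimed structures.

import hashlib
from collections import OrderedDict

short_term = OrderedDict()   # small dictionary with the first eviction policy
long_term = OrderedDict()    # larger dictionary with the second eviction policy
SHORT_CAP, LONG_CAP = 4, 64

def write_chunk(chunk, store):
    fp = hashlib.sha256(chunk).hexdigest()             # fingerprint of the data chunk
    if fp in short_term:
        store.setdefault(fp, chunk)                    # deduplicating write
    elif fp in long_term:
        store.setdefault(fp, chunk)                    # deduplicating write
        short_term[fp] = True                          # repopulate the short term dictionary
    else:
        store[fp] = chunk                              # non-deduplicating write
        short_term[fp] = True                          # populating both dictionaries here is an assumption
        long_term[fp] = True
    while len(short_term) > SHORT_CAP:
        short_term.popitem(last=False)                 # first eviction policy: evict on new insert
    while len(long_term) > LONG_CAP:
        long_term.popitem(last=False)                  # second eviction policy: evict on new insert

store = {}
write_chunk(b"hello world", store)
write_chunk(b"hello world", store)                     # second write is deduplicated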

US Pat. No. 10,394,713

SELECTING RESOURCES TO MAKE AVAILABLE IN LOCAL QUEUES FOR PROCESSORS TO USE

INTERNATIONAL BUSINESS MA...

1. A computer program product for managing access to resources in a computer system, the computer program product comprising a computer readable storage medium having computer readable program code embodied therein that when executed performs operations, the operations comprising:maintaining, by each processor of a plurality of processors, a queue of resources for the processor to use when needed for processor operations;
maintaining a global queue indicating available resources available for use by the processors;
in response to the queue for one of the processors indicating no available resources, obtaining, by the processor for that queue having no available resources, a lock for the global queue to access available resources from the global queue to indicate in the queue having no available resources as available for use by the processor;
in response to the queue for one of the processors indicating a maximum number of available resources, obtaining, by the processor for the queue having the maximum number of available resources, the lock to the global queue;
indicating in the global queue a plurality of the available resources indicated in the queue having the maximum number of available resources to reduce a number of the available resources indicated in the queue;
selecting one of the processors;
accessing, by the selected processor, at least one available resource; and
including the accessed at least one available resource in the queue of the selected processor.
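
A rough sketch of the local-queue/global-queue interplay in the claim, using Python threading primitives; the refill count, maximum local size, and resource objects are arbitrary example values:

    import threading
    from collections import deque

    class ResourcePool:
        def __init__(self, resources, max_local=4, refill=2):
            self.global_queue = deque(resources)   # global queue of available resources
            self.global_lock = threading.Lock()    # lock protecting the global queue
            self.max_local = max_local
            self.refill = refill
            self.local_queues = {}                 # processor id -> local queue

        def _local(self, processor_id):
            return self.local_queues.setdefault(processor_id, deque())

        def acquire(self, processor_id):
            queue = self._local(processor_id)
            if not queue:
                # local queue indicates no available resources: obtain the lock
                # and move some resources over from the global queue
                with self.global_lock:
                    for _ in range(min(self.refill, len(self.global_queue))):
                        queue.append(self.global_queue.popleft())
            return queue.popleft() if queue else None

        def release(self, processor_id, resource):
            queue = self._local(processor_id)
            queue.append(resource)
            if len(queue) > self.max_local:
                # local queue at its maximum: indicate part of it in the global
                # queue under the lock to reduce the local count
                with self.global_lock:
                    while len(queue) > self.max_local // 2:
                        self.global_queue.append(queue.popleft())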

US Pat. No. 10,394,710

STORAGE CLASS MEMORY (SCM) MEMORY MODE CACHE SYSTEM

Dell Products L.P., Roun...

1. A Storage Class Memory (SCM) memory mode persistent memory cache system, comprising:a first Storage Class Memory (SCM) subsystem that provides first data communication speeds;
a persistent memory subsystem that includes at least one non-volatile memory device and that provides second data communication speeds that are greater than the first data communication speeds; and
a memory controller that is coupled to the first SCM subsystem and the persistent memory subsystem, wherein the memory controller is configured to:
write a plurality of data to the persistent memory subsystem and, in response, update a cache tracking database;
write a first subset of the plurality of data to the first SCM subsystem subsequent to the writing of the plurality of data to the persistent memory subsystem and, in response, update the cache tracking database; and
receive a shutdown signal and, in response, copy the cache tracking database to the persistent memory subsystem, wherein the persistent memory subsystem is configured to store at least some of the plurality of data and the cache tracking database in the at least one non-volatile memory device during a shutdown associated with the shutdown signal.
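
A toy model of the claimed write flow, assuming dictionary-backed tiers and a tracking database keyed by address; all structural details are assumptions made only for illustration:

    class ScmMemoryModeCache:
        def __init__(self):
            self.persistent_memory = {}   # faster, non-volatile tier
            self.scm = {}                 # slower SCM tier
            self.cache_tracking_db = {}   # address -> set of locations holding the data

        def write(self, address, data):
            # all writes land in the persistent memory subsystem first
            self.persistent_memory[address] = data
            self.cache_tracking_db[address] = {"persistent_memory"}

        def stage_to_scm(self, addresses):
            # write a subset of the previously written data to the SCM subsystem
            for address in addresses:
                if address in self.persistent_memory:
                    self.scm[address] = self.persistent_memory[address]
                    self.cache_tracking_db[address].add("scm")

        def shutdown(self):
            # copy the cache tracking database into the persistent tier so it
            # survives the power cycle alongside the cached data
            self.persistent_memory["__cache_tracking_db__"] = dict(self.cache_tracking_db)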

US Pat. No. 10,394,709

FUNCTION ANALYSIS METHOD AND MEMORY DEVICE

Silicon Motion, Inc., Jh...

1. A function analysis method for a memory device, comprising:analyzing a mapping relationship of at least one application programming interface (API) function and at least one normal function;
analyzing a calling relationship of the at least one normal function through the mapping relationship;
developing a two-dimensional array to analyze whether there is a loop or not in the calling relationship, wherein when one normal function of the at least one normal function calls another normal function of the at least one normal function, the name of the called normal function is inspected to determine whether or not it is identical to the name of the normal function and identical to the names of all normal functions which call the normal function.
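
A toy version of the loop check sketched in the claim: the call relationships of the normal functions are recorded in a two-dimensional array, and a call is flagged when the callee's name matches the caller or any function that transitively calls the caller. The function names are made up:

    def build_call_matrix(calls, names):
        index = {name: i for i, name in enumerate(names)}
        matrix = [[0] * len(names) for _ in names]
        for caller, callee in calls:
            matrix[index[caller]][index[callee]] = 1
        return matrix, index

    def has_loop(calls, names):
        matrix, index = build_call_matrix(calls, names)

        def callers_of(target, seen):
            # collect every function that directly or indirectly calls `target`
            for name, i in index.items():
                if matrix[i][index[target]] and name not in seen:
                    seen.add(name)
                    callers_of(name, seen)
            return seen

        for caller, callee in calls:
            if callee == caller or callee in callers_of(caller, set()):
                return True
        return False

    # Example: f calls g and g calls f -> a loop is detected.
    print(has_loop([("f", "g"), ("g", "f")], ["f", "g"]))  # True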

US Pat. No. 10,394,688

METHOD FOR DETECTING COMPUTER MODULE TESTABILITY PROBLEMS

1. A method for detecting testability problems of a computer module defined:by first code instructions in a modeling language, said first code instructions representing a plurality of blocks of said computer module distributed in one or more components and a plurality of relationships between the blocks and/or the components; and
by second code instructions in a textual language, said second code instructions representing a list of specifications each associated with a capability and defining at least one information flow at the level of the capability;
the method being characterized in that it comprises implementing by data processing means of a device steps of:
(a) Expressing the first and second code instructions each in the form of a model instantiating a common metamodel matching the blocks with the capabilities, and matching said relationships with said information flows;
(b) Synchronizing the models associated respectively with the first and second code instructions in a consolidated model;
(c) Expressing from said consolidated model a graph in which the blocks and the components are nodes, and the relationships are edges;
(d) Calculating by means of a graph traversal algorithm the width and/or the depth of said graph;
(e) Emitting a signal indicating a testability problem of the computer module if at least one of said calculated width and depth is greater than a predefined threshold;
wherein the width of the graph corresponds to the largest number of blocks along a path in the edges of the graph representative of data type information flows, between two observable data type pieces of information; and/or the depth of the graph corresponds to the largest number of blocks along a path in the edges of the graph representative of command type information flows, between two observable command type pieces of information.
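
A minimal illustration of steps (d) and (e), assuming the block graph is acyclic and represented as a plain adjacency dictionary; only the depth is computed here, and the node names and threshold are arbitrary examples:

    def graph_depth(edges, node):
        # largest number of blocks along a path starting at `node`
        children = edges.get(node, [])
        if not children:
            return 1
        return 1 + max(graph_depth(edges, child) for child in children)

    def check_testability(edges, roots, threshold):
        depth = max(graph_depth(edges, root) for root in roots)
        if depth > threshold:
            print(f"testability problem: depth {depth} exceeds threshold {threshold}")
        return depth

    edges = {"sensor": ["filter"], "filter": ["controller"], "controller": ["actuator"]}
    check_testability(edges, ["sensor"], threshold=3)  # depth 4 > 3 -> signal emitted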

US Pat. No. 10,394,685

EXTENSIBLE MARKUP LANGUAGE (XML) PATH (XPATH) DEBUGGING FRAMEWORK

International Business Ma...

1. An extensible markup language (XML) path (XPATH) expression debugging method comprising:receiving an XPATH input expression in a portion of a graphical user interface (GUI) of a debugger tool executing in memory of a computer;
parsing the XPATH input expression to produce a plurality of sub-expressions corresponding to intermediate steps of evaluation for the XPATH input expression, the parsing of the XPATH input expression revealing expression nodes, step nodes, function nodes, predicates to the step nodes and parenthesis nodes;
constructing an XPATH traversal tree (XTT) model as an extension of a pattern tree model used for computing XPATH containment and comprising additional node and token types to seamlessly model XPATH expressions, the pattern tree model comprising a directed and unranked tree modeling an XPATH expression with an XTT model by associating each of the revealed sub-expressions with a node in the XTT model, the node comprising a composite tree node in the XTT modeling different kinds of XPATH expressions, the XTT model being expressed as an aggregation of (a) one or more expression XTT nodes each including either a literal expression, a numerical expression, a path expression or a function expression, (b) one or more step XTT nodes modeling a step in the XPATH input expression, and (c) one or more function path nodes each encapsulating a function name as an instance of an XTT token comprising an atomic string token for an XPATH expression;
receiving a selection of an XML document from a hierarchical list including different XML documents from a second portion of the GUI of the debugger tool;
applying each of the ordered sub-expressions in the XTT model to the selected XML document;
rendering a visual representation of the model in a third portion of the GUI of the debugger tool; and,
responsive to receiving a selection of one of the sub-expressions rendered as a node in the model in the third portion of the GUI, differentially visually emphasizing, in a fourth portion of the GUI of the debugger tool, each portion of the selected XML document corresponding to a result set resulting from the application of the selection of the one of the sub-expressions to the selected XML document.

US Pat. No. 10,394,680

TECHNIQUES FOR TRACKING GRAPHICS PROCESSING RESOURCE UTILIZATION

Microsoft Technology Lice...

1. A method for reporting memory resource usage by a graphics processing unit (GPU), comprising:receiving, by a memory tracking application, a list of memory resources allocated for the GPU;
displaying, by the memory tracking application, an indication of memory resource utilization based on the list of memory resources;
receiving, by the memory tracking application and as transmitted by the GPU executing on a separate device from the memory tracking application, multiple indications that one or more of the memory resources allocated for the GPU are accessed; and
updating, by the memory tracking application, the indication of memory resource utilization based at least in part on the multiple indications.

US Pat. No. 10,394,676

GENERATION DEVICE, GENERATION METHOD, AND PROGRAM

International Business Ma...

1. A generation device for generating a test sequence to be supplied to a test target without defining an operation sequence in detail, the generation device comprising:a first reception unit receiving, from a user terminal, prohibition rule information for defining combinations of values that cannot be used, for parameters as factors included in a test vector;
a test vector generation unit selecting, for each of a plurality of parameters to be included in the test vector, one value from among possible values for the parameter to generate a plurality of test vectors whose combinations of values are different from each other and whose use is permitted, on the basis of the prohibition rule information,
wherein the test vector generation unit generates the plurality of test vectors that cover all possible patterns taken by combinations of values, according to an orthogonal table, for a predetermined number of parameters of the plurality of parameters;
a second reception unit receiving commutativity information indicating a condition of values of the plurality of parameters under which order of two or more test vectors is changeable;
an extraction unit extracting, as a plurality of partial sequences each including one or more test vectors, a plurality of portions of a series comprising an output of the plurality of test vectors by the test vector generation unit, wherein the extraction unit extracts the plurality of partial sequences from the series including the plurality of test vectors, based on the commutativity information; and
a test sequence generation unit generating a test sequence based on the extracted plurality of partial sequences, wherein the test sequence generation unit generates the test sequence having a length suppressed within a realistically executable range.

US Pat. No. 10,394,675

VEHICLE CONTROL DEVICE

HITACHI AUTOMOTIVE SYSTEM...

1. A vehicle control device comprising:a plurality of processing cores that include a first processing core and a second processing core, wherein each processing core is assigned one or more in-vehicle functions so that the first processing core is assigned a first in-vehicle function and the second processing core is assigned a second in-vehicle function; and
a storage area that is communicatively coupled to the plurality of processing cores, wherein the storage area includes a plurality of portions that are each assigned to a respective processing core;
wherein each respective processing core from the plurality of processing cores is configured to:
detect a fault by performing a hardware diagnosis by the respective processing core when the respective processing core is started, and
perform a software diagnosis on the portion of the storage area assigned to the respective processing core after the hardware diagnosis is completed; and
wherein, on a condition that the fault is detected in the first processing core, the vehicle control device is configured to:
reassign the first in-vehicle function from the first processing core to the second processing core, wherein the second processing core executes both the first in-vehicle function and the second in-vehicle function,
perform the software diagnosis on the portion of the storage area assigned to the first processing core using a third processing core, and
restart the first processing core.

US Pat. No. 10,394,672

CLUSTER AVAILABILITY MANAGEMENT

International Business Ma...

1. A method, comprising:operating a first logical partition having transferable partition resources in a first physical processing complex of a server cluster in an active mode which includes operating an operating system and actively performing input/output operations between a host and a storage system, and a second logical partition having transferable partition resources in the same first physical processing complex and in a quiesced standby mode which includes operating an operating system but is otherwise substantially inactive as compared to said active mode wherein the transferable partition resources transferred to the second logical partition in the quiesced standby mode are reduced as compared to the transferrable partition resources transferred to the first logical partition in the active mode;
detecting a failure in a second physical processing complex different from the first physical processing complex of the server cluster;
in response to said failure detection, activating the standby logical partition in the first physical processing complex to operate in an active mode so that both the first and second logical partitions of the first physical processing complex operate in the active mode; and
subsequent to activating the second logical partition, transferring partition resources from the first logical partition to the second logical partition while the first logical partition remains in the active mode;
wherein said active mode operating includes providing access to a shared resource of data storage disk drives for a logical partition operating in an active mode and wherein said quiesced standby mode operating includes denying access to said shared resource of data storage disk drives for a logical partition operating in a quiesced standby mode.

US Pat. No. 10,394,670

HIGH AVAILABILITY AND DISASTER RECOVERY SYSTEM ARCHITECTURE

Verizon Patent and Licens...

1. A system, comprising:a set of interfaces to provide a first device with connectivity to a first data center;
a second device to provide a uniform resource identifier (URI) resolution or routing service among the first data center and a second data center,
the first data center and the second data center being physically separated,
the URI configured to access the first data center when the first data center is not experiencing an outage and configured to access the second data center when the first data center experiences the outage,
the second device providing a first failover service among devices associated with the first data center for the set of interfaces;
a first set of devices to provide a first resource to provide a first application or a first environment to run the first application;
a second set of devices to provide a second resource to provide a second application or a second environment to run the second application,
the second set of devices including a set of process orchestration (PO) application devices;
the second device providing a second failover service for the first set of devices and the second set of devices; and
a first database cluster to provide first software or a first service related to clustering a third set of devices or providing a threshold level of availability for the third set of devices,
the first database cluster providing a third failover service for the third set of devices.

US Pat. No. 10,394,668

MAINTAINING CONSISTENCY USING REVERSE REPLICATION DURING LIVE MIGRATION

VMware, Inc., Palo Alto,...

1. A system for effectively reversing replication during live migration, said system comprising:a memory area associated with a computing device, said memory area storing a consistency group (CG) of a plurality of source processes; and
a processor programmed to:
in response to receiving a request to perform a live migration of the CG of source processes on one or more source hosts and storage to a plurality of destination processes on one or more destination hosts and storage, perform the live migration of the CG of the source processes by transferring data representing the source processes to the destination hosts and storage;
during the live migration of the CG, intercept input/output (I/O) writes to the migrated source processes and apply the intercepted I/O writes to the CG on the source hosts; and
restore, in response to a failure during the live migration of the CG, the destination processes using the CG on the source hosts.

US Pat. No. 10,394,667

SYSTEM AND METHODS FOR BACKING UP AND RESTORING DATABASE OBJECTS

1. A method for backing up and restoring one or more updates to metadata of one or more files containing content stored in a database, comprising:receiving the one or more updates to the metadata of the one or more files, wherein the metadata is to be included in the one or more files;
entering the one or more updates to the metadata of the one or more files into a first database table of the database, wherein the first database table comprises dirty data, the dirty data indicating that the one or more updates to the metadata has not yet been included in the one or more files;
generating at least one backup file of the first database table in computer-readable storage that is communicatively connected with the database; and
restoring the one or more updates to the metadata by:
creating a recovery table within the database, wherein the recovery table is populated with data from the at least one backup file;
determining which of the one or more files to apply the one or more updates to by searching one or more entries in the recovery table for an identifier of the one or more files; and
adding the one or more updates to the metadata from the recovery table to the corresponding one or more files that matches the identifier identified from the recovery table such that the one or more updates to the metadata are included with the content in the one or more files.

US Pat. No. 10,394,665

MANAGING REMOTE DATA REPLICATION

INTERNATIONAL BUSINESS MA...

1. A system comprising:a first site having a first disk for storing input/output (I/O) data for a host system, the first site further having a second disk being a point-in-time copy of the first disk;
a second site that is remote from the first site, the second site having a first disk and a second disk, the first disk of the second site being a synchronous replication of the first disk of the first site, and the second disk of the second site being a point-in-time copy of the first disk of the second site;
a third site that is remote from the first site and the second site, the third site having a first disk, a second disk, and a third disk, the first disk of the third site being a synchronous replication of the second disk of the first site; and
a processor configured to:
responsive to a loss of the first site, transfer the storing of I/O data from the first disk of the first site to the first disk of the second site;
determine whether a replication of the I/O data from the second disk of the first site to the first disk of the third site was being performed at the loss; and
responsive to determining that the replication of the I/O data from the second disk of the first site to the first disk of the third site was being performed at the loss, start a synchronous replication of the second disk of the second site to the third disk of the third site.

US Pat. No. 10,394,664

IN-MEMORY PARALLEL RECOVERY IN A DISTRIBUTED PROCESSING SYSTEM

EMC IP Holding Company LL...

1. An apparatus comprising:a distributed processing system comprising a plurality of processing nodes;
each of the processing nodes comprising a processor coupled to a memory and being configured to communicate over one or more networks with other ones of the processing nodes;
the processing nodes comprising respective buffers and respective components of a distributed checkpoint manager of the distributed processing system;
the processing nodes implementing respective ones of a plurality of operators for processing a data stream in the distributed processing system;
each of the operators being configured to interact with its corresponding one of the buffers and its corresponding one of the components of the distributed checkpoint manager on the corresponding one of the processing nodes;
responsive to a detected fault in a given one of the operators processing the data stream, partitioning other ones of the operators processing the data stream into one or more upstream operators, one or more immediately downstream operators, and one or more further downstream operators, relative to the given faulted operator;
recovering the given faulted operator from a checkpoint captured by its corresponding component of the distributed checkpoint manager; and
in parallel with recovering the given faulted operator, performing different sets of operations for respective ones of the upstream operators, immediately downstream operators and further downstream operators.

US Pat. No. 10,394,663

LOW IMPACT SNAPSHOT DATABASE PROTECTION IN A MICRO-SERVICE ENVIRONMENT

Red Hat, Inc., Raleigh, ...

11. A non-transitory computer-readable storage medium comprising instructions that when executed, by a processing device, cause the processing device to:identify, by the processing device, a transaction queue comprising a plurality of transactions associated with a storage device in a cloud computing environment, each of the transactions comprising an operation to be executed by an application in the storage device, the transaction queue storing identifiers of operations performed by the application;
evaluate whether the transaction queue is in compliance with a snapshot policy associated with the storage device, wherein the snapshot policy indicates a priority level that the operations of the transaction queue are to satisfy, and wherein the priority level of the snapshot policy is configured to be adjusted in view of current available resources for executing a snapshot command;
provide, in view of the transaction queue being in compliance with the snapshot policy, a schedule for executing a snapshot command to generate a point-in-time snapshot associated with the application;
compare a priority status of at least one operation comprised by the transaction queue with a status threshold level associated with the snapshot policy; and
responsive to determining that the priority status of the at least one operation meets the status threshold level, execute, in view of the schedule and subsequent to an execution of the at least one operation, the snapshot command to generate the point-in-time snapshot for at least a portion of the storage device, the point-in-time snapshot comprising state information corresponding to the application.

US Pat. No. 10,394,662

STORAGE APPARATUS AND STORAGE APPARATUS MIGRATION METHOD

Hitachi, Ltd., Tokyo (JP...

1. A method for migrating a first set to a second set, the first set being composed of a first primary volume and a first secondary volume in one of a plurality of systems, the second set being composed of a second primary volume and a second secondary volume in at least another one of the plurality of systems, the method comprising:executing a copy of data stored in the first primary volume to the second primary volume and the second secondary volume;
receiving a write request from an application program operating in a computer during the copy to the second primary volume and the second secondary volume;
storing the data regarding the write request in both of the first primary volume and the first secondary volume until the copy to both of the second primary volume and the second secondary volume is completed;
providing a virtual volume to which the first primary volume is mapped as a storage area of the virtual volume, and
wherein the copy of the data stored in the first primary volume to the second secondary volume is a replication from the virtual volume to the second secondary volume as a replication pair;
providing the virtual volume as a volume having the same volume identifier with the first primary volume; and
providing the second secondary volume as a volume having the same volume identifier with the first secondary volume,
wherein the copy of the data stored in the first primary volume to the second primary volume is a migration from the virtual volume to the second primary volume, and
wherein after completion of the copy of the data stored in the first primary volume to both of the second primary volume and the second secondary volume, the method further comprises:
swapping volume identifiers of the virtual volume and the second primary volume, providing the second primary volume as the volume having the same identifier with the first primary volume; and
configuring the second set of the second primary volume and the second secondary volume instead of the replication pair of the virtual volume and the second secondary volume.

US Pat. No. 10,394,661

POLICY DRIVEN DATA UPDATES

International Business Ma...

1. A method, executed by at least one processor, the method comprising:generating a snapshot for a plurality of data files within a filesystem, wherein:
the plurality of data files includes a first data file and a second data file;
the first data file references a first data block and the second data file references a second data block; and
the snapshot includes a first snapshot file that references the first data block and a second snapshot file that references the second data block;
receiving a first update request for the first data file, wherein the first update request indicates to update the first data block;
determining the first data file is subject to a backup policy, wherein:
the backup policy includes a file specification and a time specification; and
the file specification includes one or more filename filters;
storing the determination to memory;
responsive to determining that the first data file is subject to the backup policy:
copying the first data block to a third data block; and
updating the first data block, wherein:
subsequent to the first update, the first data file references the updated first data block and the first snapshot file references the third data block; and
the third data block includes data from the first data block from before the update;
receiving a second update request for the second data file, wherein the second update request indicates to update the second data block;
determining that the second data file is not subject to the backup policy; and
responsive to determining that the second data file is not subject to the backup policy, updating the second data block, wherein subsequent to the second update, the second data file references the updated second data block and the second snapshot file references the updated second data block.
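
A simplified model of the policy-driven update behaviour in the claim: files matching a filename filter of the backup policy get a copy-on-write block so the snapshot keeps the pre-update data, while other files are updated in place and the snapshot simply shares the updated block. Block layout, filter syntax, and file names are illustrative:

    import fnmatch

    class Filesystem:
        def __init__(self, backup_filters):
            self.blocks = {}          # block id -> data
            self.files = {}           # file name -> block id
            self.snapshot = {}        # snapshot file name -> block id
            self.backup_filters = backup_filters
            self.next_block = 0

        def create(self, name, data):
            block = self.next_block
            self.next_block += 1
            self.blocks[block] = data
            self.files[name] = block

        def take_snapshot(self):
            self.snapshot = dict(self.files)   # snapshot files share the data blocks

        def update(self, name, data):
            block = self.files[name]
            if any(fnmatch.fnmatch(name, f) for f in self.backup_filters):
                # subject to the backup policy: copy the old block for the snapshot
                copy = self.next_block
                self.next_block += 1
                self.blocks[copy] = self.blocks[block]
                self.snapshot[name] = copy
            self.blocks[block] = data          # the file now references the updated block

    fs = Filesystem(backup_filters=["*.db"])
    fs.create("ledger.db", "v1"); fs.create("scratch.tmp", "v1")
    fs.take_snapshot()
    fs.update("ledger.db", "v2"); fs.update("scratch.tmp", "v2")
    print(fs.blocks[fs.snapshot["ledger.db"]])    # "v1" preserved by the policy
    print(fs.blocks[fs.snapshot["scratch.tmp"]])  # "v2" shared with the live file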

US Pat. No. 10,394,660

SNAPSHOT RESTORE WORKFLOW

NetApp, Inc., Sunnyvale,...

1. A method comprising:receiving a first input/output (I/O) request directed towards a logical unit (LUN), the first I/O request processed by a small computer systems interface (SCSI) target at a storage system connected to a storage array, the LUN associated with a host-visible serial number and mapped to a first volume on the storage array, the SCSI target including a first volume identifier associated with the first volume;
creating a snapshot of the first volume;
creating a second volume associated with a second volume identifier based on the snapshot;
updating the SCSI target to replace the first volume identifier with the second volume identifier so as to re-direct a second I/O request directed towards the LUN to the second volume; and
deleting the first volume, wherein the deleting and updating are performed as an atomic operation, wherein the host-visible serial number of the LUN is not restored from the snapshot so as to avoid changing an identity of the LUN.

US Pat. No. 10,394,658

HIGH SPEED SNAPSHOTS MECHANISM

EMC IP Holding Company LL...

1. A system, comprising:a processor; and
a memory coupled with the processor, wherein the memory is configured to provide the processor with instructions which when executed cause the processor to:
use a smart snap controller to configure backup storage to automatically take one or more snapshots of a protected device, wherein the smart snap controller configures the backup storage, including by:
selecting, from a plurality of application program interfaces, an application program interface for the backup storage based at least in part on the type of the backup storage; and
using the selected application program interface to configure a snapshot schedule in the backup storage that is associated with one or more times at which to automatically take a snapshot of the protected device; and
use the smart snap controller to communicate with the backup storage according to a cataloging schedule in order to generate cataloged metadata associated with any uncatalogued snapshots of the protected device that have been automatically taken by the backup storage wherein:
the cataloged metadata is sent from the smart snap controller to a cataloged metadata table associated with a backup server; and
the cataloging schedule is based at least in part on the snapshot schedule such that there are at least two uncatalogued snapshots to be cataloged when the smart snap controller contacts the storage device to begin cataloging.

US Pat. No. 10,394,656

USING A RECOVERY SNAPSHOT DURING LIVE MIGRATION

VMware, inc., Palo Alto,...

1. A system for restoring consistency after performing consistency-breaking operations during live migration, said system comprising:a memory area associated with a computing device, said memory area storing a plurality of source objects in a consistency group (CG); and
a processor programmed to:
in response to receiving a request to perform a live migration of the plurality of source objects on a source host to a destination host, create a snapshot of the CG of the plurality of source objects;
perform the live migration of the source objects from the source host to the destination host, wherein consistency is not maintained during the live migration;
restore, in response to a failure during the live migration, the source objects using the snapshot; and
complete the live migration.

US Pat. No. 10,394,653

COMPUTING IN PARALLEL PROCESSING ENVIRONMENTS

Mellanox Technologies, Lt...

1. A compute node comprises:a multicore processor device including:
a plurality of cores, with multiple ones of the plurality of cores each comprising a processor; and
switching circuitry configured to couple the processor to a network among the plurality of cores; the node configured to:
detect presence of a potential deadlock condition between a device that communicates with a node over a serial peripheral interconnect, and memory;
generate, by the node in response to the detection of the potential deadlock condition, a transaction to cause the device to rollback all write transactions that are currently in progress at the serial peripheral interconnect to temporarily remove the write transactions from the serial peripheral interconnect; and
cancel the rolled back write transaction.

US Pat. No. 10,394,652

MEMORY SYSTEM FOR PERFORMING READ RETRY OPERATION AND OPERATING METHOD THEREOF

SK hynix Inc., Gyeonggi-...

1. A memory system, comprising:a semiconductor memory device configured to include a plurality of memory blocks and a read retry storing unit, wherein each memory block includes a plurality of memory cells; and
a controller configured to control the semiconductor memory device to perform a read operation for selected memory cells among the plurality of memory cells and configured to transmit read retry table information to the semiconductor memory device when a read operation for the selected memory cells fails,
wherein the semiconductor memory device is further configured to determine a read retry voltage based on a read retry table stored in the read retry storing unit and the read retry table information received from the controller, and to perform a read retry operation with the read retry voltage,
wherein the controller does not fetch the read retry table from a Random-Access Memory (RAM) of the controller when the read retry voltage is determined by the semiconductor memory device,
wherein the read retry table is stored in one memory block among the plurality of memory blocks,
wherein the semiconductor memory device reads the read retry table from the one memory block and stores the read retry table in the read retry storing unit when power is supplied to the memory system, and
wherein the read retry table information is a set number indicating one among a plurality of offset voltages included in the read retry table.
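
A hedged sketch of the device-side lookup implied by the claim: the controller only transmits a set number, and the semiconductor memory device combines it with the read retry table it loaded into its read retry storing unit at power-up; all voltages and offsets are made-up values:

    class SemiconductorMemoryDevice:
        def __init__(self, retry_table_block):
            self.retry_table_block = retry_table_block   # table kept in one memory block
            self.read_retry_storing_unit = None

        def power_up(self):
            # read the retry table from the memory block into the storing unit
            self.read_retry_storing_unit = list(self.retry_table_block)

        def read_retry(self, default_read_voltage, set_number):
            # the set number picks one offset voltage from the stored table
            offset = self.read_retry_storing_unit[set_number]
            return default_read_voltage + offset   # voltage used for the retry

    device = SemiconductorMemoryDevice(retry_table_block=[-0.2, -0.1, 0.1, 0.2])
    device.power_up()
    print(device.read_retry(default_read_voltage=2.5, set_number=2))  # 2.6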

US Pat. No. 10,394,650

MULTIPLE WRITES USING INTER-SITE STORAGE UNIT RELATIONSHIP

INTERNATIONAL BUSINESS MA...

1. A method comprises:determining, by a first computing device of a plurality of computing devices of a dispersed storage network (DSN), whether the first computing device is to write a set of encoded data slices to a sharing group of sites;
in response to a determination by the first computing device to write a set of encoded data slices to a sharing group of sites, utilizing, by the first computing device, a first writing pattern of a plurality of writing patterns to write a set of encoded data slices to a sharing group of sites, wherein each site of the sharing group of sites includes a set of storage units interconnected via a local area network, wherein the first writing pattern includes writing a write threshold number of encoded data slices to storage units of a first site of the sharing group of sites and writing a remaining number of encoded data slices to one or more storage units of one or more other sites of the sharing group of sites, and wherein the first computing device is affiliated with the first site;
sending, by at least some of the storage units of the set of storage units of the first site, one or more copies of encoded data slices of up to the write threshold number of encoded data slices to other storage units in the sharing group of sites in accordance with an inter-site storage unit relationship; and
based on the inter-site storage unit relationship, sending, by the one or more storage units of one or more other sites of the sharing group of sites, one or more copies of encoded data slices of the remaining number of encoded data slices to still other storage units in the sharing group of sites in accordance with the inter-site storage unit relationship, wherein each site of the sharing group of sites stores encoded data slices numbering the write threshold number.

US Pat. No. 10,394,649

FIRST READ SOLUTION FOR MEMORY

SanDisk Technologies LLC,...

1. An apparatus, comprising:word line layers separated by dielectric layers in a stack;
a set of memory cells arranged along vertical pillars in the stack; and
for each word line layer, a respective pulldown circuit comprising a transistor and a resistor in a path which connects the word line layer to ground.

US Pat. No. 10,394,648

METHOD TO DELIVER IN-DRAM ECC INFORMATION THROUGH DDR BUS

SAMSUNG ELECTRONICS CO., ...

1. A data chip, comprising:a data array;
read circuitry to read raw data from the data array;
a buffer to store the raw data read from the data array by the read circuitry;
a mask register to store a pollution pattern, wherein the pollution pattern includes both 0s and 1s;
a data pollution engine to modify the raw data stored in the buffer using the pollution pattern stored in the mask register to produce a polluted data; and
transmission circuitry to transmit the polluted data from the buffer.

US Pat. No. 10,394,647

BAD BIT REGISTER FOR MEMORY

INTERNATIONAL BUSINESS MA...

1. A computer-implemented method, comprising:configuring, in a non-volatile random access memory, a suspect bit register to store addresses of bits that are determined to have had errors; and
configuring, in the non-volatile random access memory, a bad bit register to store addresses of bits that both (i) appeared in the suspect bit register due to a first error and (ii) are determined to have had a second error occur after the addresses of the bits have already been stored in the suspect bit register.
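
A toy model of the two registers: an address enters the suspect bit register on its first observed error and is promoted to the bad bit register when a second error occurs for an address already recorded as suspect. The addresses and the set-based storage are illustrative:

    class BitErrorRegisters:
        def __init__(self):
            self.suspect_bits = set()   # suspect bit register
            self.bad_bits = set()       # bad bit register

        def record_error(self, address):
            if address in self.suspect_bits:
                self.bad_bits.add(address)       # second error after the first
            else:
                self.suspect_bits.add(address)   # first observed error

    registers = BitErrorRegisters()
    registers.record_error(0x1F0)
    registers.record_error(0x1F0)
    print(0x1F0 in registers.bad_bits)   # True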

US Pat. No. 10,394,646

INCREMENTAL DATA VALIDATION

EMC IP Holding Company LL...

1. A method of performing data validation comprising:determining, using a processor, an expected sequence of characters including a plurality of groups, each of the plurality of groups including a first expected sequence of one or more characters representing encoded information and a second expected sequence of one or more data validation characters determined in accordance with a corresponding portion of the expected sequence, the portion including at least the first expected sequence of one or more characters of said each group, wherein the expected sequence of characters includes a space character between each of the plurality of groups, and wherein, for each of the plurality of groups, the one or more data validation characters of the second expected sequence of said each group is determined using all non-space characters of the expected sequence of characters preceding said each group in the expected sequence of characters and excluding any of the space characters; and
performing, using a processor, data validation processing incrementally as data for each of the plurality of groups is received, wherein the data validation processing performed as data for said each group is received uses a received sequence of one or more data validation characters corresponding to the second expected sequence of one or more data validation characters of said each group, wherein performing data validation processing incrementally includes:
receiving a first input string including a first portion corresponding to a first of the plurality of groups, the first portion including a first received data sequence of one or more characters corresponding to the first expected sequence of one or more characters of the first group, the first portion including a first received data validation sequence of one or more data validation characters corresponding to the second expected sequence of one or more data validation characters of the first group;
performing data validation processing for the first input string after receiving the first portion;
receiving a second input string including the first portion and a second portion corresponding to a second of the plurality of groups different from the first group, said first group occurring in the expected sequence prior to the second group, the second portion including a second received data sequence of one or more characters corresponding to the first expected sequence of one or more characters of the second group, the second portion including a second received data validation sequence of one or more data validation characters corresponding to the second expected sequence of one or more data validation characters of the second group; and
performing data validation processing for the second input string after receiving the second portion.
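
An illustrative rendering of the incremental validation scheme: each group has the form "<data><check>", the check character is computed over every non-space character that precedes the group plus the group's own data, and groups are validated one at a time as they arrive. The check function (sum of code points modulo 10) is an assumption made only for this example:

    def check_digit(non_space_chars):
        return str(sum(ord(c) for c in non_space_chars) % 10)

    def make_group(prefix_chars, data):
        return data + check_digit(prefix_chars + data)

    def validate_incrementally(received):
        seen = ""                      # all validated non-space characters so far
        for group in received.split(" "):
            data, received_check = group[:-1], group[-1]
            if received_check != check_digit(seen + data):
                return False           # fail as soon as a group's check is wrong
            seen += data + received_check
        return True

    g1 = make_group("", "AB")
    g2 = make_group(g1, "CD")
    print(validate_incrementally(g1 + " " + g2))      # True
    print(validate_incrementally(g1 + " " + "CD0"))   # False (wrong check character)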

US Pat. No. 10,394,645

CORRECTING OPERATIONAL STATE AND INCORPORATING ADDITIONAL DEBUGGING SUPPORT INTO AN ONLINE SYSTEM WITHOUT DISRUPTION

Cisco Technology, Inc., ...

1. A computer-implemented method of using a set of loadable functions to facilitate diagnosis and correction of error states of one or more running processes without requiring process termination, the computer-implemented method comprising:dynamically by operation of one or more computer processors, and without terminating any of the one or more running processes:
upon determining that the one or more running processes are in an error state, loading a debug function into a library statically linked to the one or more running processes;
extracting diagnostic information of the one or more running processes by invoking the debug function;
loading a change function into the library statically linked to the one or more running processes, for invocation in order to correct the error state of the one or more running processes, wherein the change function is based on the extracted diagnostic information; and
for each of at least one of the debug function and the change function, removing the respective function from the library statically linked to the one or more running processes, after the respective function is invoked.

US Pat. No. 10,394,644

PROCESSOR SYSTEM, ENGINE CONTROL SYSTEM AND CONTROL METHOD

RENESAS ELECTRONICS CORPO...

1. A processor system comprising:a master processor;
a checker processor; and
a control circuit that controls the master processor and the checker processor,
wherein when an address fetched by the master processor is a predetermined address, the control circuit controls the master processor and the checker processor to process a task associated with the address in lock-step mode,
wherein the control circuit performs control so that a period from when a task is processed in lock-step mode to when another task is processed in lock-step mode is equal to or shorter than a maximum test period,
wherein the maximum test period is defined by subtracting a sum of a fault reaction time and a time necessary for a test process, from a fault tolerant time interval,
wherein the fault reaction time is a period from when a fault is detected to when the processor system changes to a stopped state, and
wherein the fault tolerant time interval is a period from when the fault occurs in the processor system to when the processor system changes to the stopped state.
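
The timing relationship defined in the claim, with example millisecond values chosen purely for illustration:

    fault_tolerant_time_interval = 100.0   # ms: fault occurrence -> stopped state
    fault_reaction_time = 10.0             # ms: fault detection -> stopped state
    test_process_time = 5.0                # ms: time necessary for the test process

    maximum_test_period = fault_tolerant_time_interval - (fault_reaction_time + test_process_time)
    print(maximum_test_period)   # 85.0 ms between successive lock-step task runs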

US Pat. No. 10,394,642

DATA LIFECYCLE MANAGEMENT

INTERNATIONAL BUSINESS MA...

1. A method for managing metrics from a monitored system comprising:identifying a fault from the monitored system;
storing in a memory and identifying from the monitored system one or more metrics that are related to the fault;
identifying a lifespan condition associated with the fault;
adding or changing a lifespan for the one or more metrics based on the identified lifespan condition; and
removing the one or more metrics from the memory if their associated lifespans are over.

US Pat. No. 10,394,641

APPARATUS AND METHOD FOR HANDLING MEMORY ACCESS OPERATIONS

ARM Limited, Cambridge (...

1. An apparatus comprising:processing circuitry to execute program instructions including memory access instructions; and
a memory interface to couple the processing circuitry to a memory system;
the processing circuitry being switchable between a synchronous fault handling mode and an asynchronous fault handling mode, when in the synchronous fault handling mode the processing circuitry applying a constraint on execution of the program instructions such that a fault resulting from a memory access operation processed by the memory system will be received by the memory interface before the processing circuitry has allowed program execution to proceed beyond a recovery point for the memory access instruction associated with said memory access operation, and when in the asynchronous fault handling mode the processing circuitry removing said constraint;
the processing circuitry arranged to switch between the synchronous fault handling mode and the asynchronous fault handling mode during execution of the program instructions, in dependence on a current context of the processing circuitry;
wherein the processing circuitry is arranged to switch to the synchronous fault handling mode when executing program instructions identified as being within a critical code portion, and to otherwise operate in the asynchronous fault handling mode.

US Pat. No. 10,394,640

REQUIREMENT RUNTIME MONITOR USING TEMPORAL LOGIC OR A REGULAR EXPRESSION

Infineon Technologies Aus...

1. A hardware monitor, comprising:one or more hardware components to:
receive information that identifies a requirement for a hardware system,
the requirement being associated with operation of the hardware system during a runtime operation of the hardware system in an intended operating environment;
program the one or more hardware components to analyze the hardware system based on the requirement;
receive a runtime signal, associated with a component of the hardware system, from the hardware system during the runtime operation of the hardware system in the intended operating environment;
analyze the runtime signal during the runtime operation of the hardware system based on programming the one or more hardware components to analyze the hardware system;
monitor another signal associated with a software module of the hardware system or another component of the hardware system;
determine, during the runtime operation of the hardware system, that the requirement was violated during the runtime operation of the hardware system based on analyzing the runtime signal and the other signal; and
output information indicating that the requirement was violated.
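
A small regular-expression illustration of runtime requirement checking in the spirit of the claim: the monitored signal trace is flattened to a string of event codes, and the requirement "every request (R) is eventually followed by an acknowledge (A)" is expressed as a pattern. The encoding and the requirement itself are examples, not the patent's own:

    import re

    def requirement_violated(trace):
        # a violation exists if some request is never followed by an acknowledge
        return re.search(r"R[^A]*$", "".join(trace)) is not None

    print(requirement_violated(["R", "X", "A", "R", "A"]))   # False (requirement satisfied)
    print(requirement_violated(["R", "X", "X"]))             # True  (requirement violated)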

US Pat. No. 10,394,639

DETECTING AND SURFACING USER INTERACTIONS

Microsoft Technology Lice...

1. A computing system, comprising:a processor; and
memory storing instructions executable by the processor, wherein the instructions, when executed, configure the computing system to provide:
a data aggregation system configured to:
obtain incident data indicative of an incident that results in performance degradation of a hosted service, wherein the hosted service is hosted by a service computing system and accessible by a set of users, associated with a tenant, over a computing network; and
obtain tenant data, corresponding to the tenant, indicative of user activity for the set of users;
data mining logic configured to:
identify, based on tenant map information corresponding to the tenant, a plurality of servers that host the hosted service for the tenant;
identify, based on the incident data, a time corresponding to the incident and a set of servers, in the plurality of servers, that were impacted by the incident;
identify, based on the user activity, a set of impacted users of the tenant, who were actively using the set of servers during the time corresponding to the incident;
generate a metric indicative of a measure of the impacted users, impacted by the incident; and
data surfacing logic configured to:
generate a computer control signal that controls surfacing of a representation of the identified metric, based on the generated metric.

US Pat. No. 10,394,638

APPLICATION HEALTH MONITORING AND REPORTING

STATE FARM MUTUAL AUTOMOB...

1. A computer-implemented method comprising:retrieving, via a computer network, data about the health of a plurality of applications executing in a computing environment,
wherein at least some of the data about the health of the plurality of applications is generated while at least some of the plurality of applications are performing respective functions in the computing environment,
wherein the data about the health of the plurality of applications includes more than one dissimilar metric;
determining, by one or more processors operating a health indicator generation module, a plurality of normalized indications of health based upon the data about the health of the plurality of applications,
wherein each of the plurality of normalized indications of health corresponds to one of the plurality of applications,
wherein at least one of the plurality of normalized indications of health is based upon data generated directly by a corresponding one of the plurality of applications,
and wherein each of the plurality of normalized indications of health indicates one of:
(i) an availability of the corresponding one of the plurality of applications to perform the respective functions of the corresponding one of the plurality of applications, or
(ii) a performance of the corresponding one of the plurality of applications in performing the respective functions of the corresponding one of the plurality of applications,
wherein determining the plurality of normalized indications of health based upon the data about the health of the plurality of applications includes transforming the more than one dissimilar metric into comparable indications of health;
determining, by the one or more processors, an indication of an overall health of a portion of the computing environment based upon the plurality of normalized indications of health of the plurality of applications, the portion of the computing environment implementing two or more of the plurality of applications;
generating, by the one or more processors, a plurality of visual elements in a dashboard to be displayed on remote user devices, the dashboard including a plurality of tiles, each corresponding to one of the plurality of applications, and each including at least one of the normalized indications of health,
wherein one of the plurality of visual elements (a) presents the indication of the overall health of the portion of the computing environment, (b) is expandable, upon a selection by a user of one of the remote user devices, to present details about the performance or the availability of subdivisions of the portion of the computing environment, and
wherein other of the plurality of visual elements present at least some of the plurality of normalized indications of health of the plurality of applications; and
sending, via the computer network, the plurality of visual elements to at least one of the remote user devices.
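
A sketch of the normalisation step only, assuming two dissimilar metrics (availability in percent and average latency in milliseconds) that are mapped onto a common 0-100 health score and averaged into an overall indication; the scaling rules and application names are invented:

    def normalize(metric_name, value):
        if metric_name == "availability_percent":
            return value                              # already on a 0-100 scale
        if metric_name == "avg_latency_ms":
            return max(0.0, 100.0 - value / 10.0)     # 0 ms -> 100, 1000 ms -> 0
        raise ValueError(f"unknown metric: {metric_name}")

    def overall_health(app_metrics):
        scores = [normalize(name, value) for name, value in app_metrics.items()]
        return sum(scores) / len(scores)

    apps = {
        "claims-portal": {"availability_percent": 99.5, "avg_latency_ms": 250},
        "billing-api": {"availability_percent": 97.0, "avg_latency_ms": 900},
    }
    per_app = {app: overall_health(metrics) for app, metrics in apps.items()}
    print(per_app)
    print(sum(per_app.values()) / len(per_app))   # overall health of the portion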

US Pat. No. 10,394,637

SYSTEMS AND METHODS FOR DATA VALIDATION AND PROCESSING USING METADATA

AMERICAN EXPRESS TRAVEL R...

1. A method comprising:receiving, by a processor, a source,
identifying, by the processor, the source with a source type and a file type;
receiving, by the processor, a metadata layer that describes the source,
wherein the source comprises source records with source data fields containing source data,
wherein the metadata layer includes metadata comprising at least one of a field data type, a field data length, a field description, or a record length;
validating, by the processor, the metadata layer against the source;
validating, by the processor and using rules, a quality of the metadata layer;
correcting, by the processor, the metadata in response to the metadata being inaccurate;
completing, by the processor, the metadata in response to the metadata being incomplete;
writing, by the processor, results to a log;
transforming, by the processor, the source records into transformed records in an ASCII readable format for a load ready file,
performing, by the processor, data conversions by reading metadata describing source columns;
dynamically creating, by the processor and in the load ready file, target columns corresponding to the source columns;
determining, by the processor, a structure of the load ready file based on the target columns;
deriving, by the processor, new data fields from the source data fields to create derived fields in the transformed records;
tracking, by the processor, a history of the transforming in the load ready file;
detecting, by the processor, a number of failed transforms due to bad records during the importing;
writing, by the processor, the bad record to the log in response to a failed transformation;
evaluating, by the processor, the number of failed transforms with the bad records versus a threshold for the bad records to determine if the importing is a success or failure;
balancing, by the processor, a number of records in the source against a number of transformed records in the load ready file to generate a transformation failure rate;
deciding, by the processor, a state of the transforming the source records in response to the transformation failure rate and a predetermined acceptable failure rate; and
outputting, by the processor, the load ready file.

US Pat. No. 10,394,636

TECHNIQUES FOR MANAGING A HANG CONDITION IN A DATA PROCESSING SYSTEM WITH SHARED MEMORY

International Business Ma...

1. A method of operating a data processing system, comprising:detecting, by a master, that a processing unit within a first group of processing units in the data processing system has a hang condition;
in response to detecting that the processing unit has a hang condition, reducing, by an arbiter, a command issue rate for the first group of processing units;
notifying, by the master, one or more other groups of processing units in the data processing system that the first group of processing units has reduced the command issue rate for the first group of processing units; and
in response to the notifying, changing, by respective arbiters of the one or more other groups of processing units, respective command issue rates of the other groups of processing units to reduce a number of commands received by the first group of processing units from the other groups of processing units.

US Pat. No. 10,394,635

CPU WITH EXTERNAL FAULT RESPONSE HANDLING

Hewlett Packard Enterpris...

1. A system, comprising:a central processing unit (CPU) to process data;
a first memory management unit (MMU) in the CPU to generate an external request to a bus for data located external to the CPU; and
an external fault handler in the CPU to process a fault response received via the bus, wherein the fault response is generated externally to the CPU and relates to a fault being detected with respect to the external request;
wherein the CPU retires an operation from a protocol layer that caused the fault and signals a thread that is blocked on a given cache miss to execute an external fault corresponding to the fault response, and
the CPU comprises a cache that re-issues a cache miss request to the protocol layer in response to execution of the external fault to enable the external request to be completed after the fault is detected.

US Pat. No. 10,394,633

ON-DEMAND OR DYNAMIC DIAGNOSTIC AND RECOVERY OPERATIONS IN CONJUNCTION WITH A SUPPORT SERVICE

Microsoft Technology Lice...

1. A method to provide on-demand, dynamic diagnostic and recovery operations in conjunction with a support service, the method comprising:collecting hardware and software environment information associated with a user device at an assistance client application executed on the user device, wherein at least some of the hardware and software environment information being collected is received from an operating system executed on the user device;
receiving, at the assistance client application executed on the user device, hardware and software environment information associated with one or more servers from the one or more servers executing a hosted service, wherein a component of the hosted service is executed on the user device;
in response to exhausting a set of automatic diagnostic and recovery actions associated with the component of the hosted service, engaging the support service;
providing the collected hardware and software environment information associated with the user device and the received hardware and software environment information associated with the one or more servers to the support service;
automatically facilitating a communication between a user associated with the user device and an operator of the support service through the assistance client application based on one or more contact preferences of the user; and
performing one or more diagnostic and recovery actions on one or more of the component of the hosted service and the user device instructed by the operator of the support service.

US Pat. No. 10,394,632

METHOD AND APPARATUS FOR FAILURE DETECTION IN STORAGE SYSTEM

International Business Ma...

1. A method for improving failure detection in a storage system, the method comprising:determining, by one or more processors of a computing system, an amount of data received by a plurality of switches in the storage system within a predetermined time window to obtain a plurality of data amounts, the determining including excluding, for a first switch of the plurality of switches, a particular amount of data received from a host of the storage system;
determining, by the one or more processors of the computing system, a count of check errors detected in the amount of data received by the plurality of switches to obtain a plurality of check error counts;
requesting, in response to a given switch of the plurality of switches detecting a check error in data received from a neighboring device connected to the given switch, the neighboring device to retransmit the data to the given switch; and
calculating, by the one or more processors of the computing system, a failure risk for the plurality of switches based on the plurality of data amounts and the plurality of check error counts.
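
One plausible way, assumed for illustration only, to turn the per-switch data amounts and check error counts from the claim into a failure risk: a simple check-error rate per byte received within the time window:

    def failure_risk(data_amounts, check_error_counts):
        risks = {}
        for switch, amount in data_amounts.items():
            errors = check_error_counts.get(switch, 0)
            risks[switch] = errors / amount if amount else 0.0
        return risks

    data_amounts = {"switch-a": 10_000_000, "switch-b": 2_000_000}   # bytes in window
    check_error_counts = {"switch-a": 3, "switch-b": 12}
    print(failure_risk(data_amounts, check_error_counts))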

US Pat. No. 10,394,630

ESTIMATING RELATIVE DATA IMPORTANCE IN A DISPERSED STORAGE NETWORK

INTERNATIONAL BUSINESS MA...

1. A method for execution by one or more processing modules of one or more computing devices of a dispersed storage network (DSN) having a plurality of storage units, the plurality of storage units storing a plurality of data objects in the form of encoded data slices, the method comprises:generating a first importance ranking for a first data object of the plurality of data objects;
generating a second importance ranking for a second data object of the plurality of data objects, the first importance ranking and the second importance ranking based on one or more ranking factors;
detecting a plurality of the encoded data slices that require rebuilding, wherein each encoded data slice of the plurality of the encoded data slices is a dispersed storage error encoded portion of a respective one of the plurality of data objects, and wherein the plurality of the encoded data slices that require rebuilding include at least one encoded data slice of the first data object and at least one encoded data slice of the second data object;
performing a comparison of the first importance ranking and the second importance ranking; and
based on the comparison, assigning respective rebuilding priority levels to the at least one encoded data slice of the first data object and the at least one encoded data slice of the second data object.
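
A short Python sketch of the ranking-and-prioritization step, assuming hypothetical ranking factors (access_frequency, replication_shortfall), weights, and priority labels that the claim does not specify: the object with the higher importance ranking has its slices queued for rebuilding at the higher priority.

    def importance_ranking(data_object, weights):
        # Weighted sum of ranking factors; the factors and weights are illustrative.
        return sum(weights[f] * data_object.get(f, 0) for f in weights)

    def assign_rebuild_priorities(obj_a, obj_b, slices_to_rebuild, weights):
        rank_a = importance_ranking(obj_a, weights)
        rank_b = importance_ranking(obj_b, weights)
        higher = obj_a["id"] if rank_a >= rank_b else obj_b["id"]
        # Slices of the more important object are rebuilt at the higher priority level.
        return {slice_name: ("high" if owner == higher else "normal")
                for slice_name, owner in slices_to_rebuild.items()}

    weights = {"access_frequency": 0.7, "replication_shortfall": 0.3}
    obj_a = {"id": "A", "access_frequency": 0.9, "replication_shortfall": 0.2}
    obj_b = {"id": "B", "access_frequency": 0.1, "replication_shortfall": 0.5}
    print(assign_rebuild_priorities(obj_a, obj_b,
                                    {"slice-A-3": "A", "slice-B-7": "B"}, weights))
    # {'slice-A-3': 'high', 'slice-B-7': 'normal'}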

US Pat. No. 10,394,629

MANAGING A PLUG-IN APPLICATION RECIPE VIA AN INTERFACE

Oracle International Corp...

1. One or more non-transitory machine-readable media storing instructions which, when executed by one or more processors, cause:storing a first mapping between:
a set of user-exposed fields selectable via a plug-in application recipe (“PIAR”) creation interface associated with a PIAR management engine, and
another set of fields exposed by an Application Programming Interface (API) of a third-party application,
wherein the PIAR management engine manages PIAR definitions, each PIAR definition identifying
(a) a trigger for which one or more trigger variables, values of which are necessary to evaluate the trigger on an ongoing basis, are exposed by a first plug-in application to the PIAR management engine, wherein an instance of evaluating the trigger comprises determining whether a condition is satisfied based at least in part on one or more values of the one or more trigger variables, and
(b) an action for which a second plug-in application exposes an interface to the PIAR management engine for causing the second plug-in application to carry out the action, wherein an instance of evaluating the action comprises carrying out the action based on one or more values of one or more input variables that are input to the action in the PIAR definition,
wherein the PIAR management engine makes the action conditional on the trigger on an ongoing basis, and
wherein the PIAR definition comprises a particular trigger and a particular action;
receiving, via one or more PIAR creation interfaces, a plurality of PIAR definitions based at least on a user-selected field of the set of user-exposed fields;
wherein the user-selected field is mapped, in the first mapping, to a first third-party application field exposed by the API of the third-party application, wherein the first third-party application field is associated with the particular trigger or the particular action;
managing a particular PIAR in an active state, wherein the particular PIAR corresponds to a PIAR definition of the plurality of PIAR definitions, and wherein managing the particular PIAR comprises periodically receiving and checking, against a condition of the particular PIAR, data from the first third-party application field as provided by the third-party application via the API;
during or after managing the particular PIAR in the active state, storing information comprising an update from the first mapping to a second mapping, wherein the second mapping maps the user-selected field to a second third-party application field, wherein the second third-party application field differs from the first third-party application field;
without modifying the particular PIAR, managing the particular PIAR in the active state at least in part by periodically receiving and checking, against the condition of the particular PIAR, data from the second third-party application field as provided by the third-party application via the API.

US Pat. No. 10,394,628

IN-LINE EVENT HANDLERS ACROSS DOMAINS

Microsoft Technology Lice...

1. A computing system, comprising:a computer processor;
a communication system that communicates, through a network interface, with a first domain computing system and a second domain computing system that is different from the first domain computing system, the first and second domain computing systems communicating with one another through corresponding network interfaces; and
an event handler orchestrator service that stores a first event handler record corresponding to a first event handler in the first domain computing system, the first event handler record including filter criteria identifying an event of interest raised by an invoking process running the second domain computing system, the event handler orchestrator service receiving a call from the invoking process when the event of interest is raised by the invoking process in the second domain computing system and returning, to the invoking process, an endpoint in the first domain computing system, corresponding to the first event handler, for invoking the first event handler.

US Pat. No. 10,394,626

EVENT FLOW SYSTEM AND EVENT FLOW CONTROL METHOD

HITACHI, LTD., Tokyo (JP...

1. An event flow system connecting nodes, for which processes are defined, from an upstream side to a downstream side by an event which is generated due to a certain process and is used by another process to realize a process flow, the event flow system comprising:
at least a memory and a processor;
a process flow builder configured to build a process flow via a flow table using a plurality of nodes, one or more events, and a reverse event that sends a predetermined request, from a downstream node to an upstream node disposed further toward an upstream side than the downstream node;
wherein the process flow builder receives information for changing the flow table by any one of adding at least one node, deleting at least one node, adding at least one link, deleting at least one link or updating a node parameter,
wherein a link is a connection between at least two nodes that indicates the event or the reverse event,
wherein when the process flow builder deletes at least one node from the flow table, the process flow builder searches for related nodes to the at least one deleted node and deletes the related nodes,
wherein when the process flow builder adds at least one link to the flow table, the process flow builder adds destination information and source information for the at least one added link, and
wherein when the process flow builder deletes at least one link from the flow table, the process flow builder deletes destination information and source information for the at least one deleted link; and
a process flow executer configured to execute a process, stored on the flow table, defined for each of the plurality of nodes according to the event and the reverse event.

US Pat. No. 10,394,625

REACTIVE COINCIDENCE

Microsoft Technology Lice...

1. An event processing method, comprising:executing, on a processor, instructions stored in a memory that cause an event processing system to perform the following acts:
creating a second event stream, embedded within a first event stream, to represent duration of a first point event in the first event stream, wherein creation of the second event stream represents start of the duration of the first point event;
creating a third event stream, embedded within the first event stream, to represent duration of a second point event in the first event stream, wherein creation of the third event stream represents the start of the duration of the second point event; and
determining coincidence between the first point event and the second point event based on a comparison of the second event stream and third event stream.

US Pat. No. 10,394,624

ATTACHING APPLICATIONS BASED ON FILE TYPE

VMware, Inc., Palo Alto,...

1. A method of operating an application attaching system to dynamically make applications available to a computing device, the method comprising:identifying an application attach triggering event based on a file selection of a certain file type on the computing device;
in response to the application attach triggering event, identifying an application within an application volume based on the certain file type, wherein the application volume comprises a virtual or physical storage element;
in response to identifying the application, mounting the application volume to the computing device;
modifying one or more registry keys on the computing device to make the application executable on the computing device from the application volume; and
executing files for the application stored on the application volume to support the file selection.
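
A rough Python sketch of the attach flow, with the application-volume catalog, the mount step, and the registry keys all reduced to in-memory stand-ins (APP_VOLUMES, mounted_volumes, registry, and the key path are invented names); real volume mounting and Windows registry modification are platform specific and omitted here.

    # Illustrative stand-ins for an application-volume catalog and a registry.
    APP_VOLUMES = {".psd": {"app": "ImageEditor", "volume": "vol-image-editor"},
                   ".dwg": {"app": "CadViewer", "volume": "vol-cad-viewer"}}
    mounted_volumes = set()
    registry = {}                      # hypothetical stand-in for per-machine registry keys

    def on_file_selected(path):
        """Application-attach trigger: a file of a certain type was selected."""
        ext = "." + path.rsplit(".", 1)[-1].lower()
        entry = APP_VOLUMES.get(ext)
        if entry is None:
            return None
        mounted_volumes.add(entry["volume"])                         # mount the application volume
        registry[f"HKLM/Software/{entry['app']}/InstallPath"] = entry["volume"]
        return f"launch {entry['app']} from {entry['volume']} for {path}"

    print(on_file_selected("C:/designs/tower.dwg"))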

US Pat. No. 10,394,622

MANAGEMENT SYSTEM FOR NOTIFICATIONS USING CONTEXTUAL METADATA

INTERNATIONAL BUSINESS MA...

1. A computer-implemented method comprising:determining that an application executing on a mobile computing device of a user generated a notification on the mobile computing device for an event, the event comprising an upcoming calendar appointment of the user, a missed telephone call of the user, or a communication from a social media contact of the user, and the application one of a calendar application, a conferencing application, and a social media application;
collecting contextual data from one or more sources associated with the mobile computing device, the sources comprising at least one selected from the group consisting of a sensor on the mobile computing device, the social media application, the calendar application, and the conferencing application;
generating contextual metadata using the contextual data;
analyzing the contextual metadata;
generating, based on the contextual metadata, a determination that an action associated with the notification has been implemented by the user, wherein the implementation of the action renders the notification obsolete; and
dismissing the notification from a notification queue of the user based on the determination that the action associated with the notification has been implemented by the user.
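
A small Python sketch of the dismissal logic, assuming invented event kinds and contextual signals (joined_meeting, outgoing_calls) as the metadata that indicates the user already acted on the event and the notification is therefore obsolete.

    from dataclasses import dataclass, field

    @dataclass
    class Notification:
        event_id: str
        kind: str                      # e.g. "calendar", "missed_call", "social"

    @dataclass
    class NotificationQueue:
        items: list = field(default_factory=list)

        def dismiss(self, event_id):
            self.items = [n for n in self.items if n.event_id != event_id]

    def action_implemented(notification, contextual_metadata):
        # Illustrative inference: the context shows the user already joined the
        # meeting or returned the call, so the notification is obsolete.
        if notification.kind == "calendar":
            return contextual_metadata.get("joined_meeting") == notification.event_id
        if notification.kind == "missed_call":
            return notification.event_id in contextual_metadata.get("outgoing_calls", [])
        return False

    queue = NotificationQueue([Notification("standup-0900", "calendar")])
    context = {"joined_meeting": "standup-0900"}
    if action_implemented(queue.items[0], context):
        queue.dismiss("standup-0900")
    print(queue.items)                 # []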

US Pat. No. 10,394,621

METHOD AND COMPUTER READABLE MEDIUM FOR PROVIDING CHECKPOINTING TO WINDOWS APPLICATION GROUPS

OPEN INVENTION NETWORK LL...

9. A method, comprising:launching one or more applications each comprising one or more processes and threads;
initializing a checkpointer using one or more of a checkpoint library or checkpoint kernel module;
creating, by said checkpointer, a set of objects that are used to record data and a computation state of application processes and threads;
creating a synchronization point for said one or more applications;
triggering one or more checkpoints of application processes and threads using one or more of user-mode or kernel-mode Asynchronous Procedure Calls (APC) and signaling application threads to enter a checkpoint APC signal handler;
removing, by said checkpoint APC handler, one or more of said user-mode or kernel-mode APCs from the applications' APC queues when at said synchronization point for said one or more applications; and
checkpointing one or more joining applications jointly with said one or more applications by launching said one or more joining applications, initializing said one or more of a checkpoint library and kernel module, including said one or more joining applications in said synchronization point for said one or more applications, and including the processes and threads of said joining applications in said triggering of one or more checkpoints.

US Pat. No. 10,394,620

METHOD FOR CHANGING ALLOCATION OF DATA USING SYNCHRONIZATION TOKEN

INTERNATIONAL BUSINESS MA...

1. A system configured to process data with data processing modules provided in parallel, the system comprising:a processor; and
one or more computer readable mediums collectively including instructions that, when executed by the processor, cause the processor to:
input a synchronization token into at least one data processing module that is in an operational state from among the data processing modules provided in parallel, in response to a request to change allocation of the data;
change the allocation of the data to the data processing modules provided in parallel, after the synchronization token is input, two or more of the data processing modules being configured for processing in parallel; and
in response to the synchronization token having arrived at a data processing module that receives data at a later stage than the at least one data processing module into which the synchronization token was input, process data for which processing has been stopped by the at least one data processing module among the data processing modules after the synchronization token is input to the at least one data processing module;
wherein pieces of data are respectively provided with key values indicating groups to which the respective pieces of data belong;
wherein an order in which the pieces of data in each of the groups are to be processed within each of the groups is determined; and
wherein the key values are allocated to every data module that processes the pieces of data among the data processing modules provided in parallel.

US Pat. No. 10,394,619

SIGNATURE-BASED SERVICE MANAGER WITH DEPENDENCY CHECKING

Western Digital Technolog...

1. A computer-implemented method comprising:monitoring a plurality of services, wherein:
each service is a process managed by an operating system and running on a computer, wherein the process is uniquely identifiable in a process table of the operating system based on a signature;
the signature is based on a combination of at least two service attributes in an entry in the process table for the process, wherein at least one service attribute of the at least two service attributes comprises one or more of a file system location and a command line argument associated with the service;
the signature excludes a unique process identifier assigned to the process and included in the entry for the process in the process table; and
the signature is used to identify each of the plurality of services in the process table for monitoring the plurality of services;
receiving a request to add a new service to the plurality of services;
determining, using signatures to look up processes in the process table, whether service dependencies of the plurality of services in the process table and the new service are compatible; and
responsive to determining that the service dependencies of the plurality of services in the process table and the new service are compatible:
starting the new service; and
determining a new signature for the new service based on a new entry in the process table for the new service.
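
A Python sketch of the signature scheme, assuming the two attributes combined are the file-system location and the command-line arguments (the claim allows either or both) and that "compatible" simply means every declared dependency is already present and monitored; all names here are illustrative.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class ProcessEntry:
        pid: int                        # excluded from the signature
        exe_path: str                   # file-system location
        cmdline: str                    # command-line arguments

    def signature(entry):
        # Signature from two attributes of the process-table entry; the PID is
        # deliberately left out so a service can be re-identified across restarts.
        return (entry.exe_path, entry.cmdline)

    def dependencies_compatible(process_table, monitored_signatures, new_service_deps):
        present = {signature(e) for e in process_table}
        return all(dep in present and dep in monitored_signatures
                   for dep in new_service_deps)

    table = [ProcessEntry(412, "/usr/bin/metricsd", "--port 9100"),
             ProcessEntry(977, "/usr/bin/logshipd", "--dest collector:514")]
    monitored = {signature(e) for e in table}
    new_deps = {("/usr/bin/metricsd", "--port 9100")}
    if dependencies_compatible(table, monitored, new_deps):
        print("start new service, then record its signature from its new process-table entry")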

US Pat. No. 10,394,618

THERMAL AND POWER MEMORY ACTIONS

International Business Ma...

1. A system comprising:a memory module including a volatile memory, a non-volatile memory, and one or more sensors; and
one or more processing circuits, wherein the one or more processing circuits are configured to perform a method comprising:
obtaining, from the one or more sensors, a set of volatile memory sensor data;
obtaining, from the one or more sensors, a set of non-volatile memory sensor data;
analyzing the set of volatile memory sensor data and the set of non-volatile memory sensor data, wherein analyzing the set of volatile memory sensor data and the set of non-volatile memory sensor data comprises:
comparing the set of volatile memory sensor data to a set of volatile memory thresholds; and
comparing the set of non-volatile memory sensor data to a set of non-volatile memory thresholds;
determining, based on the analyzing, that a memory condition exists, wherein determining that a memory condition exists further comprises:
determining, in response to the set of volatile memory sensor data not satisfying the set of volatile memory thresholds, that a volatile memory condition exists; and
issuing, in response to determining that the memory condition exists, one or more memory actions, wherein issuing the one or more memory actions further comprises:
generating, in response to determining that the volatile memory condition exists, a set of parity volatile memory data;
storing the set of parity volatile memory data in the non-volatile memory; and
reducing the refresh rate of the volatile memory.
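
A compact Python sketch of the analyze-and-act path, assuming that "not satisfying" a threshold means a sensor reading exceeds its limit and that parity is a simple XOR over the volatile data; sensor names, the refresh-rate factor, and the non-volatile action are illustrative.

    def check_thresholds(readings, thresholds):
        # A memory condition exists when any reading exceeds its threshold (assumed semantics).
        return any(readings.get(name, 0) > limit for name, limit in thresholds.items())

    def handle_memory_condition(volatile_readings, nonvolatile_readings,
                                volatile_thresholds, nonvolatile_thresholds,
                                volatile_data):
        actions = []
        if check_thresholds(volatile_readings, volatile_thresholds):
            # Volatile-memory condition: protect the data, then run cooler.
            parity = bytes(a ^ b for a, b in zip(volatile_data[0::2], volatile_data[1::2]))
            actions.append(("store_parity_in_nonvolatile", parity))
            actions.append(("reduce_refresh_rate", 0.5))           # illustrative factor
        if check_thresholds(nonvolatile_readings, nonvolatile_thresholds):
            actions.append(("throttle_writes", None))
        return actions

    print(handle_memory_condition({"dram_temp_c": 92}, {"nand_temp_c": 60},
                                  {"dram_temp_c": 85}, {"nand_temp_c": 70},
                                  volatile_data=b"\x01\x02\x03\x04"))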

US Pat. No. 10,394,617

EFFICIENT APPLICATION MANAGEMENT

International Business Ma...

1. A method for application management, the method comprising:receiving an application and system configuration, wherein the application and system configuration details which of one or more systems each of one or more applications are configured to operate on;
establishing baseline energy consumptions of each of the one or more applications on each of the one or more systems by briefly operating each of the one or more applications on each of the one or more systems and measuring an energy consumption of each of the one or more systems and individual energy consuming hardware components of each of the one or more systems;
assigning operation of a first application of the one or more applications to a first system of the one or more systems based on the measured energy consumption of the first system being a least amount of the one or more systems while operating the first application;
assigning operation of a second application of the one or more applications to a second system of the one or more systems based on the measured energy consumption of the second system being a least amount of the one or more systems while operating the second application;
determining whether the second application of the one or more applications can operate with less energy consumption than it is currently operating with by:
referencing the energy consumption baseline of the second application to obtain the energy consumption of the first system and individual energy consumptions of each one or more energy consuming hardware components of the first system, and
comparing a real-time energy consumption of the energy consuming hardware component utilized by the second application on the second system with the obtained energy consumption of a same energy consuming hardware component on the first system; and
operating the second application on the first system based on determining that the second application operates with less energy consumption when using the energy consuming hardware component of the first system than it is currently operating on the second system,
wherein one or more steps of the above method are performed using one or more computers.

US Pat. No. 10,394,616

EFFICIENT APPLICATION MANAGEMENT

International Business Ma...

8. A computer system for application management, the computer system comprising:one or more computer processors, one or more computer-readable storage media, and program instructions stored on one or more of the computer-readable storage media for execution by at least one of the one or more processors, the program instructions comprising:
program instructions to receive an application and system configuration, wherein the application and system configuration details which of one or more systems each of one or more applications are configured to operate on;
program instructions to establish baseline energy consumptions of each of the one or more applications on each of the one or more systems by briefly operating each of the one or more applications on each of the one or more systems and measuring an energy consumption of each of the one or more systems and individual energy consuming hardware components of each of the one or more systems;
program instructions to assign operation of a first application of the one or more applications to a first system of the one or more systems based on the measured energy consumption of the first system being a least amount of the one or more systems while operating the first application;
program instructions to assign operation of a second application of the one or more applications to a second system of the one or more systems based on the measured energy consumption of the second system being a least amount of the one or more systems while operating the second application;
program instructions to determine whether the second application of the one or more applications can operate with less energy consumption than it is currently operating with by:
referencing the energy consumption baseline of the second application to obtain the energy consumption of the first system and individual energy consumptions of each one or more energy consuming hardware components of the first system, and
comparing a real-time energy consumption of the energy consuming hardware component utilized by the second application on the second system with the obtained energy consumption of a same energy consuming hardware component on the first system; and
program instructions to operate the second application on the first system based on determining that the second application operates with less energy consumption when using the energy consuming hardware component of the first system than it is currently operating on the second system.

US Pat. No. 10,394,615

INFORMATION PROCESSING APPARATUS AND JOB MANAGEMENT METHOD

FUJITSU LIMITED, Kawasak...

1. An information processing apparatus comprising:a processor configured to perform a procedure including:
taking currently executing jobs respectively as candidate jobs, and specifying, when a migration of a candidate job to a migration destination node selected from free nodes, which are not executing any jobs, is expected to expand a contiguous range of free nodes, the migration of the candidate job to the migration destination node as a possible migration;
determining, when a plurality of possible migrations is specified, a possible migration to be performed from among the plurality of possible migrations, based on amounts of communication needed to perform individual migrations indicated by the plurality of possible migrations and numbers of nodes used for executing candidate jobs to be migrated in the individual migrations; and
performing the determined possible migration,
the determining of the possible migration to be performed includes:
repeatedly extracting, as a pair of possible migrations, two possible migrations that have not been excluded from comparisons, from among the plurality of possible migrations;
calculating a first amount of communication needed to perform a first migration indicated by a first possible migration included in the pair of possible migrations, and a second amount of communication needed to perform a second migration indicated by a second possible migration included in the pair of possible migrations;
calculating a first number of nodes used for executing a first candidate job to be migrated in the first migration, and a second number of nodes used for executing a second candidate job to be migrated in the second migration;
calculating, as a first evaluation value, an amount of communication by dividing the second amount of communication by the first amount of communication;
calculating, as a second evaluation value, a sum of values for the amount of communication and the number of nodes being calculated by dividing the second number of nodes by the first number of nodes; and
carrying out a first evaluation using the first evaluation value to make a determination on which of the possible migrations included in the pair takes a shorter time for the job migration than another of the possible migrations included in the pair, and carrying out, if the first evaluation value is within a prescribed range, a second evaluation using the second evaluation value to determine which of the possible migrations included in the pair takes a shorter time for the job migration than another of the possible migrations included in the pair.
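
Worked as Python, the two evaluation values in the claim are just ratios: the first compares the communication needed by the two migrations, and the second adds the node-count ratio when the communication ratio alone is too close to call. The "prescribed range" bounds and the 2.0 cut-off below are assumed values for illustration only.

    def choose_migration(comm1, comm2, nodes1, nodes2, range_lo=0.8, range_hi=1.25):
        """Pick which possible migration from a pair is expected to finish sooner.

        comm1/comm2: amounts of communication needed for the first/second migration.
        nodes1/nodes2: numbers of nodes executing the first/second candidate job.
        range_lo/range_hi stand in for the claim's 'prescribed range'.
        """
        first_eval = comm2 / comm1                       # communication ratio
        if not (range_lo <= first_eval <= range_hi):
            # Communication amounts differ enough to decide on their own.
            return "first" if first_eval > 1.0 else "second"
        second_eval = first_eval + nodes2 / nodes1       # add the node-count ratio
        return "first" if second_eval > 2.0 else "second"   # 2.0 = "equal on balance"

    print(choose_migration(comm1=40, comm2=120, nodes1=8, nodes2=8))    # -> first
    print(choose_migration(comm1=100, comm2=110, nodes1=16, nodes2=4))  # -> second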

US Pat. No. 10,394,614

TASK QUEUING AND DISPATCHING MECHANISMS IN A COMPUTATIONAL DEVICE

INTERNATIONAL BUSINESS MA...

1. A method comprisingmaintaining a plurality of ordered lists of dispatch queues corresponding to a plurality of processing entities, wherein each dispatch queue includes one or more task control blocks or is empty;
determining whether a primary dispatch queue of a processing entity is empty in an ordered list of dispatch queues for the processing entity;
in response to determining that the primary dispatch queue of the processing entity is empty, selecting a task control block for processing by the processing entity from another dispatch queue of the ordered list of dispatch queues for the processing entity, wherein the another dispatch queue from which the task control block is selected meets a threshold criteria for the processing entity, wherein a data structure indicates that the task control block that was selected was last executed in the processing entity, and wherein in response to determining that the primary dispatch queue of the processing entity is not empty, processing at least one task control block in the primary dispatch queue of the processing entity;
determining that another task control block is ready to be dispatched; and
in response to determining that the another task control block was dispatched earlier, placing the another task control block in a primary dispatch queue of a processing entity on which the another task control block was dispatched earlier.
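
A Python sketch of the dispatch policy, with an assumed threshold criteria ("the other queue has at least one waiting task") and invented names: when its primary queue is empty the processing entity steals a task control block from a later queue in its ordered list, remembers where that block last ran, and re-dispatches it to the same entity's primary queue next time.

    from collections import deque

    class ProcessingEntity:
        def __init__(self, name, ordered_queues):
            self.name = name
            self.queues = ordered_queues       # first queue is the primary dispatch queue
            self.last_ran_here = set()         # data structure: TCBs last executed on this entity

        def next_task(self, min_queue_depth=1):
            primary = self.queues[0]
            if primary:
                return primary.popleft()
            # Primary empty: take a TCB from a later queue that meets the threshold
            # criteria (here simply "has at least min_queue_depth waiting tasks").
            for q in self.queues[1:]:
                if len(q) >= min_queue_depth:
                    tcb = q.popleft()
                    self.last_ran_here.add(tcb)
                    return tcb
            return None

        def redispatch(self, tcb):
            # A TCB dispatched here before goes back to this entity's primary queue.
            if tcb in self.last_ran_here:
                self.queues[0].append(tcb)

    entity = ProcessingEntity("cpu0", [deque(), deque(["tcb-7", "tcb-9"])])
    print(entity.next_task())     # tcb-7, taken from the secondary queue
    entity.redispatch("tcb-7")
    print(entity.next_task())     # tcb-7 again, now from the primary queue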

US Pat. No. 10,394,613

TRANSFERRING TASK EXECUTION IN A DISTRIBUTED STORAGE AND TASK NETWORK

PURE STORAGE, INC., Moun...

1. A method for execution by a computer to manage distributed computing of a task, the method comprises:encoding a data object using an encoding matrix having a unity matrix portion to produce a plurality of sets of encoded data slices, wherein a set of encoded data slices of the plurality of sets of encoded data slices includes data encoded slices and redundancy encoded slices, wherein the data encoded slices result from the unity matrix portion of the encoding matrix;
dividing the task into a set of partial tasks, wherein a number of partial tasks corresponds to a number of data encoded slices in a set of encoded data slices;
determining processing speeds of a set of distributed storage and task (DST) execution units allocated for storing the plurality of sets of encoded data slices;
mapping storage and partial task assignments regarding the data encoded slices of the plurality of sets of encoded data slices to the set of DST execution units based on the processing speeds to produce storage-task mapping; and
outputting the data encoded slices of the plurality of sets of encoded data slices and the set of partial tasks to the set of DST execution units in accordance with the storage-task mapping.

US Pat. No. 10,394,612

METHODS AND SYSTEMS TO EVALUATE DATA CENTER PERFORMANCE AND PRIORITIZE DATA CENTER OBJECTS AND ANOMALIES FOR REMEDIAL ACTIONS

VMware, Inc., Palo Alto,...

1. In a process that prioritizes objects of a data center for application of remedial actions to correct performance problems with the objects, the specific improvement comprising:calculating an object rank of each object of the data center over a period of time, wherein each object rank is calculated as a weighted function of relative frequencies of alerts that occur within the period of time for the object;
calculating an object trend of each object of the data center, wherein each object trend is calculated as a weighted function of differences between a first relative frequency of alerts at a first time stamp and a second relative frequency of alerts at a second time stamp for the object;
determining an order of priority of the objects for applying remedial actions based on the object ranks and object trends; and
executing the remedial actions based on the order of priority, thereby correcting performance problems of the objects.
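
One possible reading of the rank and trend formulas, sketched in Python: the rank weights the relative frequency of each alert type, the trend weights the change in those frequencies between two time stamps, and objects are remediated in descending rank order with the trend as a tie-breaker. The severity weights and alert types are assumptions, not taken from the patent.

    SEVERITY_WEIGHTS = {"critical": 3.0, "warning": 1.0, "info": 0.1}    # illustrative

    def object_rank(alert_counts, total_alerts):
        # Weighted function of the relative frequencies of each alert type in the period.
        return sum(SEVERITY_WEIGHTS[sev] * count / total_alerts
                   for sev, count in alert_counts.items())

    def object_trend(freqs_t1, freqs_t2):
        # Weighted function of the change in relative frequency between two time stamps.
        return sum(SEVERITY_WEIGHTS[sev] * (freqs_t2.get(sev, 0.0) - freqs_t1.get(sev, 0.0))
                   for sev in SEVERITY_WEIGHTS)

    objects = {
        "db-01":  {"rank": object_rank({"critical": 4, "warning": 10}, 200),
                   "trend": object_trend({"critical": 0.01}, {"critical": 0.04})},
        "web-03": {"rank": object_rank({"warning": 2, "info": 50}, 200),
                   "trend": object_trend({"warning": 0.05}, {"warning": 0.02})},
    }
    # Remediate the highest-ranked objects first, breaking ties by trend.
    order = sorted(objects, key=lambda o: (objects[o]["rank"], objects[o]["trend"]), reverse=True)
    print(order)                       # ['db-01', 'web-03']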

US Pat. No. 10,394,611

SCALING COMPUTING CLUSTERS IN A DISTRIBUTED COMPUTING SYSTEM

Amazon Technologies, Inc....

1. A system, comprising:a plurality of computing devices configured to implement:
a current cluster having a plurality of nodes storing cluster data, wherein each node comprises a respective at least one storage device that stores a respective portion of the cluster data, wherein the current cluster receives access requests for the cluster data at a network endpoint for the current cluster; and
a cluster control interface configured to:
receive a cluster scaling request for the current cluster, wherein said cluster scaling request indicates a change in a number or type of the plurality of nodes in the current cluster;
in response to receiving the cluster scaling request:
create a new cluster having a plurality of nodes as indicated in the cluster scaling request, wherein the new cluster comprises the change in the number or type of the plurality of nodes from the current cluster; and
initiate a copy of the cluster data from the plurality of nodes of the current cluster to the plurality of nodes in the new cluster, wherein after completion of the copy of a respective portion of the cluster data from one of the plurality of nodes of the current cluster and before completion of the copy of another respective portion of the cluster data from another one of the plurality of nodes of the current cluster, the current cluster continues to respond to all read requests directed to the network endpoint for the current cluster including a read request directed to the respective portion that has already been copied to the new cluster; and
subsequent to completion of the copy of the cluster data to the plurality of nodes in the new cluster: move the network endpoint for the current cluster to the new cluster, wherein after the network endpoint is moved, access requests directed to the network endpoint are sent to the new cluster; and
disable the current cluster.

US Pat. No. 10,394,610

MANAGING SPLIT PACKAGES IN A MODULE SYSTEM

Oracle International Corp...

1. One or more non-transitory machine-readable media storing instructions which, when executed by one or more processors, cause:generating a first module membership record, for a first module in a module system that specifies accessibility of each module in a plurality of modules to other modules in the plurality of modules, at least by:
identifying a first set of one or more packages in the first module;
determining that a first package, from the first set of one or more packages, comprises a first set of executable code;
based at least in part on determining that the first package comprises the first set of executable code: including, in the first module membership record, an indication that the first package belongs to the first module;
generating a second module membership record, for a second module in the module system, at least by:
identifying a second set of one or more packages in the second module;
determining that a second package, from the second set of one or more packages, does not comprise any sets of executable code;
based at least in part on determining that the second package does not comprise any sets of executable code: omitting, from the second module membership record, any indication that the second package belongs to the second module;
determining, based at least on the first module membership record and the second module membership record, whether a code conflict exists in the module system,
wherein generating the first module membership record, generating the second module membership record, and determining whether the code conflict exists are performed by executable code associated with one or more of: an integrated development environment (IDE), a compiler, a loader, a runtime environment, a module assembler, or a runtime image assembler.

US Pat. No. 10,394,609

DATA PROCESSING

INTERNATIONAL BUSINESS MA...

1. A computer-implemented method for data processing in a multi-threaded processing arrangement, the method comprising:receiving a data processing task to be executed on a data file comprising a plurality of data records in a nested records structure, where one or more data records are within one or more other data records, the data file and the plurality of data records each having an associated record description that defines a data layout, including information relating to parameters or attributes of the plurality of data records, wherein the record description comprises metadata;
based on the received data processing task, pre-processing the data file to analyze the record descriptions associated with the data file and the plurality of data records, and determine therefrom characteristics of the data records;
dividing the data file into a plurality of data sets based on the analyzing of the record descriptions associated with the data file, and a comparing of the determined characteristics of the data records, wherein one or more data records of the plurality of data records are divided between different data sets of the plurality of data sets;
based on the determined plurality of data sets, allocating the data sets of the divided data file to processing threads for parallel processing by the multi-threaded processing arrangement; and
wherein the record descriptions comprise a record descriptor associated with each data record, each record descriptor comprising information relating to parameters or attributes of the associated data record.

US Pat. No. 10,394,608

PRIORITIZATION OF LOW ACTIVE THREAD COUNT VIRTUAL MACHINES IN VIRTUALIZED COMPUTING ENVIRONMENT

International Business Ma...

1. A method of scheduling virtual machines in a virtualized computing environment, the method comprising:determining that a first virtual machine among a plurality of virtual machines active in the virtualized computing system has a low active thread count, wherein determining that the first virtual machine among the plurality of virtual machines has a low active thread count includes determining that a workload of the first virtual machine is currently on a low number of active software threads, wherein the first virtual machine includes a multi-threading aware operating system, and wherein the low number of active software threads is insufficient for the multi-threading aware operating system to spread out onto available hardware threads of execution for the first virtual machine;
determining that a high system load exists in the virtualized computing environment; and
in response to determining that the high system load exists in the virtualized computing environment, prioritizing scheduling of the first virtual machine over at least one other virtual machine among the plurality of virtual machines active in the virtualized computing system based upon the determination that the first virtual machine has the low active thread count.

US Pat. No. 10,394,607

PRIORITIZATION OF LOW ACTIVE THREAD COUNT VIRTUAL MACHINES IN VIRTUALIZED COMPUTING ENVIRONMENT

International Business Ma...

1. An apparatus, comprising:at least one processor; and
program code configured upon execution by the at least one processor to schedule virtual machines in a virtualized computing environment by:
determining that a first virtual machine among a plurality of virtual machines active in the virtualized computing system has a low active thread count, wherein determining that the first virtual machine among the plurality of virtual machines has a low active thread count includes determining that a workload of the first virtual machine is currently on a low number of active software threads, wherein the first virtual machine includes a multi-threading aware operating system, and wherein the low number of active software threads is insufficient for the multi-threading aware operating system to spread out onto available hardware threads of execution for the first virtual machine;
determining that a high system load exists in the virtualized computing environment; and
in response to determining that the high system load exists in the virtualized computing environment, prioritizing scheduling of the first virtual machine over at least one other virtual machine among the plurality of virtual machines active in the virtualized computing system based upon the determination that the first virtual machine has the low active thread count.

US Pat. No. 10,394,606

DYNAMIC WEIGHT ACCUMULATION FOR FAIR ALLOCATION OF RESOURCES IN A SCHEDULER HIERARCHY

Hewlett Packard Enterpris...

1. A method for resource allocation, the method comprising:assigning a plurality of weights to a plurality of child schedulers, each child scheduler associated with a multiplier, the child schedulers being in a scheduler hierarchy with a plurality of parent schedulers, wherein each of the plurality of parent schedulers is associated with a unique group of the child schedulers;
for each child scheduler that is active, propagating a value based on the assigned weight of the child scheduler upwards in said scheduler hierarchy through a respective chain of schedulers to cause a given scheduler at a given level in the respective chain of schedulers to be associated with an accumulation of values based on the weights of descendent schedulers of the given scheduler along the respective chain;
for the given scheduler at the given level, factoring in the multiplier applied to said accumulated values of the descendant schedulers in the respective chain of schedulers to generate a multiplied value;
propagating said multiplied value upwards through said respective chain of schedulers; and
distributing a given set of resources assigned to said scheduler hierarchy based on the multiplied value at each level of schedulers to cause the schedulers in the scheduler hierarchy to be proportioned resources from said given set of resources based on said multiplied value.
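
A Python sketch of the accumulation and distribution over a two-level hierarchy, with invented scheduler names, weights, and multipliers: active children propagate their weights upward, each parent scales the accumulated value by its multiplier, and resources are then shared in proportion to the multiplied values.

    class Scheduler:
        def __init__(self, name, weight=0.0, multiplier=1.0, active=True):
            self.name, self.weight, self.multiplier, self.active = name, weight, multiplier, active
            self.children, self.accumulated = [], 0.0

        def add_child(self, child):
            self.children.append(child)
            return child

        def accumulate(self):
            # Child (leaf) schedulers contribute their weight only while active;
            # schedulers above them accumulate their children's values, scaled by a multiplier.
            if not self.children:
                self.accumulated = self.weight if self.active else 0.0
            else:
                self.accumulated = self.multiplier * sum(c.accumulate() for c in self.children)
            return self.accumulated

    def distribute(root, total_resources):
        root.accumulate()
        shares = {}
        for parent in root.children:
            for child in parent.children:
                # Each child's share is proportional to its multiplied contribution to the root total.
                shares[child.name] = total_resources * (parent.multiplier * child.accumulated) / root.accumulated
        return shares

    root = Scheduler("root")
    gold = root.add_child(Scheduler("gold", multiplier=2.0))
    std = root.add_child(Scheduler("standard", multiplier=1.0))
    gold.add_child(Scheduler("vm-a", weight=1.0))
    std.add_child(Scheduler("vm-b", weight=1.0))
    std.add_child(Scheduler("vm-c", weight=1.0, active=False))    # inactive: not counted
    print(distribute(root, total_resources=300))                  # vm-a gets twice vm-b's share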

US Pat. No. 10,394,605

MUTABLE CHRONOLOGIES FOR ACCOMMODATION OF RANDOMLY OCCURRING EVENT DELAYS

Ab Initio Technology LLC,...

1. A method for causing a computing system to process events from a sequence of events that defines a correct order for said events independent from an order in which those events are received over an input device or port, said method including:defining a first variable,
defining, for said first variable, a first chronology of operations on said first variable associated with received events,
receiving a first event that pertains to said first variable,
executing a first operation on said first variable, wherein said first operation results in a first update of said first chronology,
after having received said first event, receiving a delayed event that pertains to said first variable,
executing a second operation on said first variable, wherein said second operation results in a second update of said first chronology, wherein said first update occurred earlier than said second update,
determining whether to reprocess said previously executed first operation based on a determination, using said first chronology, of whether said earlier first update is valid or invalid, and
reprocessing said previously executed first operation based on the determination that said earlier first update is invalid;
wherein said delayed event precedes said first event in said sequence,
wherein said first update is based on said first event, and
wherein said second update is based on said delayed event.

US Pat. No. 10,394,604

METHOD FOR USING LOCAL BMC TO ALLOCATE SHARED GPU RESOURCES INSIDE NVME OVER FABRICS SYSTEM

SAMSUNG ELECTRONICS CO., ...

1. A system comprising:a non-volatile memory (NVM) device, wherein the NVM device stores data and manages execution of a task,
and wherein the NVM device comprises:
a network interface configured to receive data and the task,
an NVM processor configured to determine if the NVM processor will execute the task or if the task will be assigned to a shared resource within the system based on the shared resource more efficiently performing the task than the NVM processor, and
a local communication interface configured to communicate with at least one other device within the system;
a main board sub-system comprising:
a switched fabric in communication with the NVM device, wherein the switched fabric sends the data and task to the NVM device as a destination for the task, and
a resource arbitration circuit configured to:
receive a request to assign the task to the shared resource, and
manage the execution of the task by the shared resource; and
the shared resource configured to execute the task.

US Pat. No. 10,394,601

MEDIA BALANCER EMPLOYING SELECTABLE EVALUATION SOURCE

iHeartMedia Management Se...

15. A system comprising:at least one processor and associated memory configured to implement a media balancer, the media balancer configured to:
receive option parameters indicating preferences related to generation of a target schedule, wherein generation of the target schedule is based on a master schedule;
select a selected media scheduler from a plurality of potential media schedulers based on the option parameters;
transmit from the media balancer to the selected media scheduler:
first information associated with the option parameters;
a request to perform, based on the first information, an evaluation of potential replacement media items to be inserted into the target schedule in place of original media items included in the master schedule;
receive, in response to the request, second information indicating results of the evaluation; and
at least one processor and associated memory configured to implement a local scheduling system, the local scheduling system configured to:
generate the target schedule by replacing at least one original media item included in the master schedule with a replacement media item selected based on the second information.

US Pat. No. 10,394,600

SYSTEMS AND METHODS FOR CACHING TASK EXECUTION

CAPITAL ONE SERVICES, LLC...

1. A method for processing a job in a form of computer-executable code, comprising:receiving, at a client device over a network, information representing the job;
receiving the job at a job scheduler of a master device;
dividing, by the job scheduler, the job into at least two tasks comprising a first task and a second task;
for the first task:
generating, by the job scheduler, a signature corresponding to the first task representative of whether the first task has been processed;
searching, by a task scheduler of the master device, a data structure for the generated signature;
if the signature is found in the data structure, retrieving a result associated with the first task by the task scheduler;
if the signature is not found in the data structure,
sending the first task by the task scheduler over the network to a task executor device,
processing the first task by the task executor device,
receiving a result of the first task processing by the task scheduler, and
storing the task result and a signature corresponding to the processed first task in the data structure by the task scheduler;
aggregating, by the job scheduler, the task result into a job result;
sending, by the job scheduler and over the network, the job result to the client device; and
processing the job result by the client device.
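
A minimal Python sketch of the caching step, assuming the signature is a hash over the task's code identifier and its inputs (the claim does not fix the scheme) and reducing the remote task executor to a local stand-in: a task whose signature is already in the data structure is answered from the cache instead of being re-processed.

    import hashlib

    result_cache = {}                                  # signature -> cached task result

    def task_signature(task_code, task_input):
        # Signature over the task's code identifier and its inputs; the exact
        # scheme (SHA-256 over a repr) is an illustrative assumption.
        digest = hashlib.sha256()
        digest.update(task_code.encode())
        digest.update(repr(task_input).encode())
        return digest.hexdigest()

    def run_task(task_code, task_input, executor):
        sig = task_signature(task_code, task_input)
        if sig in result_cache:                        # already processed: reuse the stored result
            return result_cache[sig]
        result = executor(task_code, task_input)       # otherwise send the task out for processing
        result_cache[sig] = result
        return result

    def executor(code, data):                          # stand-in for a remote task executor device
        operations = {"sum": sum, "max": max}
        return operations[code](data)

    tasks = [("sum", [1, 2, 3]), ("sum", [1, 2, 3]), ("max", [1, 2, 3])]
    job_result = [run_task(code, data, executor) for code, data in tasks]
    print(job_result)                                  # [6, 6, 3] - the second "sum" task is a cache hit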

US Pat. No. 10,394,599

BREAKING DEPENDENCE OF DISTRIBUTED SERVICE CONTAINERS

International Business Ma...

1. A computer-implemented method for managing service container dependency, the computer-implemented method comprising:receiving, by a computer, a notification that a first service container is running on a host environment;
determining, by the computer, whether the first service container is dependent on a second service container being up and running on the host environment;
responsive to the computer determining that the first service container is dependent on a second service container being up and running on the host environment, determining, by the computer, whether the second service container is running on the host environment;
responsive to the computer determining that the second service container is not running on the host environment, responding, by the computer, to service requests from the first service container to the second service container using stub data running on the computer that corresponds to the second service container; and
responsive to the computer determining that the second service container is running on the host environment, generating, by the computer, the stub data corresponding to the second service container based on an image, an image identifier, and a port number identifier corresponding to the second service container, the service requests received from the first service container to the second service container, and responses to the service requests sent from the second service container to the first service container.

US Pat. No. 10,394,598

SYSTEM AND METHOD FOR PROVIDING MSSQ NOTIFICATIONS IN A TRANSACTIONAL PROCESSING ENVIRONMENT

ORACLE INTERNATIONAL CORP...

1. A system for providing multiple servers, single queue (MSSQ) notifications in a transactional processing environment, comprising:a transactional processing environment executing on one or more microprocessors;
a first server of the transactional processing environment, wherein the first server includes a first main thread and a subsidiary thread, wherein the first server provides a unanimous service and a specific service, wherein the first server is associated with a specific request queue, and wherein the first server includes an internal memory queue;
a second server of the transactional processing environment, wherein the second server includes a second main thread, and wherein the second server provides the unanimous service;
a main request queue of the transactional processing environment, wherein the first server and the second server share the main request queue; and
an application programming interface (API) for use by the first server and the second server, wherein the first server and the second server use the API to advertise the unanimous service on the main request queue, and wherein the first server uses the API to advertise the specific service on the specific request queue;
wherein the main request queue receives and queues a plurality of request messages for the unanimous service, wherein a first request message of the plurality of request messages is dequeued by the first main thread of the first server, and wherein a second request message of the plurality of request messages is dequeued by the second main thread of the second server;
wherein the specific request queue of the first server receives and queues request messages for the specific service, wherein each of the queued request messages for the specific service is dequeued by the subsidiary thread of the first server, and stored in the internal memory queue of the first server; and
wherein the first main thread of the first server checks the internal memory queue of the first server before checking the main request queue for request messages to process.

US Pat. No. 10,394,595

METHOD TO MANAGE GUEST ADDRESS SPACE TRUSTED BY VIRTUAL MACHINE MONITOR

Intel Corporation, Santa...

1. A processor comprising:a register to store a first reference to a context data structure specifying a virtual machine context, the context data structure comprising a second reference to a target array; and
an execution unit comprising a logic circuit to:
execute a virtual machine (VM) based on the virtual machine context, wherein the VM comprises a guest operating system (OS) associated with a page table comprising a first memory address mapping between a guest virtual address (GVA) space and a guest physical address (GPA) space;
receive a request by the guest OS to switch from the first memory address mapping to a second memory address mapping, the request comprising an index value and a first root value;
retrieve an entry, identified by the index value, from the target array, the entry comprising a second root value; and
responsive to determining that the first root value matches the second root value, cause a switch from the first memory address mapping to the second memory address mapping.

US Pat. No. 10,394,594

MANAGEMENT OF A VIRTUAL MACHINE IN A VIRTUALIZED COMPUTING ENVIRONMENT BASED ON A CONCURRENCY LIMIT

International Business Ma...

1. A method of managing a virtualized computing environment, the method comprising:monitoring active virtual machine management operations on a first host among a plurality of hosts in the virtualized computing environment, wherein each active virtual machine management operation includes a plurality of sub-operations with associated concurrency limits that represent maximum numbers of concurrent sub-operations;
receiving a request to perform a virtual machine management operation, wherein the virtual machine management operation includes at least first and second sub-operations, the first sub-operation associated with a first concurrency limit that is a hypervisor concurrency limit, a storage system concurrency limit, a virtualization library concurrency limit, or a network concurrency limit, and the second sub-operation associated with a second concurrency limit that is a hypervisor concurrency limit, a storage system concurrency limit, a virtualization library concurrency limit, or a network concurrency limit;
in response to receiving the request, determining whether any of the first and second concurrency limits associated with the first and second sub-operations for the requested virtual machine management operation has been met based at least in part on the monitored active virtual machine management operations on the first host; and
initiating performance of the requested virtual machine management operation on a second host among the plurality of hosts in response to determining that at least one of the first and second concurrency limits associated with the first and second sub-operations for the requested virtual machine management operation has been met.

US Pat. No. 10,394,593

NONDISRUPTIVE UPDATES IN A NETWORKED COMPUTING ENVIRONMENT

International Business Ma...

1. A computer-implemented method for facilitating nondisruptive maintenance on a virtual machine (VM) in a networked computing environment, comprising:creating, in response to a receipt of a request to implement an update on an active VM, a copy of the active VM, wherein the copy is a snapshot VM;
installing, while saving any incoming changes directed to the active VM to a storage system, the update on the snapshot VM, wherein the update is not installed on the active VM;
applying, when the update on the snapshot VM is complete, the saved incoming changes on the snapshot VM; and
switching from the active VM to the snapshot VM so the snapshot VM becomes a new active VM and the active VM becomes an inactive VM,
wherein the storage system includes a first-in-first-out (FIFO) queue, and
wherein the saving includes inserting the incoming change in the FIFO queue with a reference count that indicates that the incoming change needs to be executed on both the active VM and the snapshot VM.
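
A short Python sketch of the FIFO with reference counts described in the wherein clauses, using invented class names: each incoming change is queued with a count of 2 because it must be executed on both the active VM and, once its update finishes, the snapshot VM; the change is only removed when both have applied it.

    from collections import deque

    class ChangeQueue:
        """FIFO of incoming changes saved while the snapshot VM is being updated."""
        def __init__(self):
            self.fifo = deque()

        def save(self, change):
            # Reference count 2: the change must be applied to both the active VM
            # and, later, to the snapshot VM.
            self.fifo.append({"change": change, "refs": 2})

        def apply_to(self, vm):
            for entry in list(self.fifo):
                vm.apply(entry["change"])
                entry["refs"] -= 1
            # Drop only the changes that both VMs have executed.
            self.fifo = deque(e for e in self.fifo if e["refs"] > 0)

    class VM:
        def __init__(self, name):
            self.name, self.applied = name, []
        def apply(self, change):
            self.applied.append(change)

    active, snapshot = VM("active"), VM("snapshot")
    queue = ChangeQueue()
    queue.save("write:/etc/app.conf")          # arrives while the snapshot is being patched
    queue.apply_to(active)                     # the active VM keeps serving and applying changes
    queue.apply_to(snapshot)                   # replayed on the snapshot once its update completes
    print(snapshot.applied, len(queue.fifo))   # ['write:/etc/app.conf'] 0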

US Pat. No. 10,394,592

DEVICE AND METHOD FOR HARDWARE VIRTUALIZATION SUPPORT USING A VIRTUAL TIMER NUMBER

HUAWEI TECHNOLOGIES CO., ...

1. A device for hardware virtualization support to handle an interrupt targeted to a running virtual machine (VM), the device comprising:memory; and
a processor coupled to the memory and configured to:
run the VM;
access, from the running VM, a host system of the device, wherein the host system is accessible by a hypervisor configured to launch the VM;
process, by the host system, a configuration flag (CF) in the host system that enables delivery of a virtual timer of the host system to a guest operating system (OS), wherein the virtual timer is controlled by the host system;
record, by the host system, a virtual interrupt request (IRQ) number of the virtual timer when the CF is set, the virtual IRQ number identifying which of the running VM or the host system a physical interrupt targets; and
process, by the hypervisor, the virtual IRQ number to deliver the virtual timer to the guest OS in a first manner when the virtual IRQ number indicates that the physical interrupt is targeted to the running VM and in a second manner when the virtual IRQ number indicates that the physical interrupt is targeted to the host system, wherein the first manner delivers the virtual timer to the guest OS without loading a host OS state.

US Pat. No. 10,394,591

SANITIZING VIRTUALIZED COMPOSITE SERVICES

International Business Ma...

1. A computer-implemented method for sanitizing a virtualized composite service, the computer-implemented method comprising:providing, by one or more processors, a sanitization policy for each image within a virtualized composite service, wherein the virtualized composite service employs multiple virtual machine (VM) instances that initially use different policies for sanitizing sensitive data within each VM instance;
analyzing, by one or more processors, sanitization policies for multiple images in the virtualized composite service in order to detect inconsistencies among the sanitization policies;
analyzing, by one or more processors, the sanitization policies for images within the virtualized composite service for inconsistencies with sanitization policies for entities external to the virtualized composite service;
in response to detecting inconsistencies between the sanitization policies for images within the virtualized composite service and sanitization policies for entities external to the virtualized composite service, modifying, by one or more processors, the sanitization policies for images within the virtualized composite service to match the sanitization policies for entities external to the virtualized composite service;
responsive to finding inconsistencies between the sanitization policies for multiple images within the virtualized composite service, resolving, by one or more processors, the inconsistencies to produce a consistent sanitization policy;
using, by one or more processors, the consistent sanitization policy to sanitize the virtualized composite service to create a sanitized virtualized composite service;
receiving, by one or more processors, a request for the virtualized composite service from a requester; and
responding, by one or more processors, to the request for the virtualized composite service by returning the sanitized virtualized composite service to the requester.

US Pat. No. 10,394,590

SELECTING VIRTUAL MACHINES TO BE MIGRATED TO PUBLIC CLOUD DURING CLOUD BURSTING BASED ON RESOURCE USAGE AND SCALING POLICIES

International Business Ma...

1. A method for selecting virtual machines to be migrated to a public cloud during cloud bursting, the method comprising:determining current resource usage for each of a plurality of virtual machine instances running in a private cloud;
obtaining one or more scaling policies for said plurality of virtual machine instances running in said private cloud;
computing additional resource usage for each of said plurality of virtual machine instances with a scaling policy when scaled out;
receiving a cost for running virtual machine instances in said public cloud based on resource usage;
determining, by a processor, a cost of running a virtual machine instance of said plurality of virtual machine instances in said public cloud using said current resource usage and said additional resource usage when said virtual machine instance of said plurality of virtual machine instances is scaled out based on said received cost for running said virtual machine instances in said public cloud;
selecting said virtual machine instance of said plurality of virtual machine instances to be migrated from said private cloud to said public cloud in response to said cost being less than a threshold value; and
migrating said selected virtual machine instance of said plurality of virtual machine instances to said public cloud from said private cloud.
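
A Python sketch of the cost-based selection, with an assumed linear pricing model (per-vCPU and per-GB rates) and invented names; the claim itself only requires that the cost combine current and scaled-out resource usage with the public cloud's pricing and be compared against a threshold.

    def public_cloud_cost(vm, pricing):
        """Hourly cost of running a VM instance in the public cloud when scaled out.

        vm["current"] / vm["scaled_out"] are resource-usage dicts; pricing gives a
        per-unit price for each resource. All numbers are illustrative.
        """
        usage = {r: vm["current"].get(r, 0) + vm["scaled_out"].get(r, 0)
                 for r in set(vm["current"]) | set(vm["scaled_out"])}
        return sum(pricing[r] * amount for r, amount in usage.items())

    def select_for_migration(vms, pricing, threshold):
        # Select instances whose projected public-cloud cost stays under the threshold.
        return [name for name, vm in vms.items()
                if public_cloud_cost(vm, pricing) < threshold]

    pricing = {"vcpu": 0.04, "ram_gb": 0.01}            # $/unit/hour (illustrative)
    vms = {"web": {"current": {"vcpu": 2, "ram_gb": 4},
                   "scaled_out": {"vcpu": 2, "ram_gb": 4}},
           "db":  {"current": {"vcpu": 8, "ram_gb": 64},
                   "scaled_out": {"vcpu": 8, "ram_gb": 64}}}
    print(select_for_migration(vms, pricing, threshold=0.50))   # ['web']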

US Pat. No. 10,394,589

VERTICAL REPLICATION OF A GUEST OPERATING SYSTEM

International Business Ma...

1. A method for testing a host machine including a processing unit, the method comprising:creating one or more virtual disks in memory assigned to a host operating system;
assessing available virtual storage space within the memory assigned to the host operating system; and
assessing an operational parameter associated with the available virtual storage space, wherein the operational parameter corresponds to a performance capacity and comprises a performance limitation of the processing unit, a performance limitation of program code executable by the processing unit, and a performance limitation of the host operating system, including:
creating a hierarchy of guest operating systems utilizing the one or more virtual disks, including assigning a first guest operating system to a first layer in the hierarchy and assigning the first guest operating system in a replication role;
vertically replicating the first layer, including creating a second guest operating system and one or more additional virtual disks in the virtual storage assigned to the first guest operating system and placing the second guest operating system in a second layer of the hierarchy; and
repeating the vertical replication, including placing the second guest operating system in the replication role, wherein a conclusion of the vertical replication is responsive to a characteristic of the parameter, including initiating paging responsive to determining a sum of utilization of the host operating system memory by the created hierarchy of guest operating systems exceeds a predetermined amount, wherein paging augments the virtual storage space with secondary storage.

US Pat. No. 10,394,588

SELF-TERMINATING OR SELF-SHELVING VIRTUAL MACHINES AND WORKLOADS

International Business Ma...

1. A method, comprising:receiving, by a cloud tuning service from a first workload, a first abstract request to perform a shelving operation on the first workload, wherein the first workload is executing on a first virtual machine on a first host in a first cloud computing environment, of a plurality of cloud computing environments, wherein the cloud tuning service executes on a system external to each of the plurality of cloud computing environments, wherein the first abstract request identifies the first workload but does not identify the first host or the first cloud computing environment, does not include required credentials, and wherein the first abstract request does not include specific operations required to shelve the first workload;
determining, by the cloud tuning service, that use of a first system resource of a plurality of system resources of the first host by the first virtual machine does not exceed a threshold;
upon receiving the first abstract request, generating, by the cloud tuning service operating external to each of the plurality of cloud computing environments, a specific request that is compatible with the first cloud computing environment by:
determining, by the cloud tuning service, based on a predefined configuration, that the first workload is executing on the first host in the first cloud computing environment;
identifying a first set of commands that are specific to the first cloud computing environment based on the predefined configuration, wherein the first set of commands cause the first cloud computing environment to shelve the first workload and wherein the first set of commands includes at least one command that was not specified in the first abstract request; and
identifying a set of login credentials needed to access the first cloud computing environment; and
initiating, by the cloud tuning service, performance of the shelving operation on the first workload using the specific request, wherein the specific request includes the set of login credentials and the first set of commands, and wherein shelving the first workload removes the first workload from the first virtual machine and the first host and stores an image of the first workload in a data store.
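A minimal sketch of translating the abstract request into a cloud-specific one, assuming a predefined configuration table keyed by workload; the command strings and credential handling are placeholders, not the claimed implementation:

```python
# Hedged sketch of turning an abstract "shelve this workload" request into a
# cloud-specific request. The configuration table, commands, and credentials
# are invented for illustration.
PREDEFINED_CONFIG = {
    "workload-42": {
        "cloud": "cloud-east",
        "host": "host-7",
        "shelve_commands": ["snapshot-workload", "detach-volumes", "store-image"],
        "credentials": {"user": "tuning-svc", "token": "<redacted>"},
    },
}

def build_specific_request(abstract_request):
    """The abstract request only names the workload; everything else is looked up."""
    cfg = PREDEFINED_CONFIG[abstract_request["workload"]]
    return {
        "cloud": cfg["cloud"],
        "host": cfg["host"],
        "commands": cfg["shelve_commands"],   # includes commands never named in the abstract request
        "credentials": cfg["credentials"],
    }

if __name__ == "__main__":
    print(build_specific_request({"workload": "workload-42", "operation": "shelve"}))
```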

US Pat. No. 10,394,587

SELF-TERMINATING OR SELF-SHELVING VIRTUAL MACHINES AND WORKLOADS

International Business Ma...

1. A system, comprising:a computer processor; and
a memory containing a program, which when executed by the processor, performs an operation comprising:
monitoring, by a cloud tuning service, use of each of a plurality of system resources by a first workload executing on a first virtual machine on a first host in a first cloud computing environment, of a plurality of cloud computing environments, wherein the cloud tuning service executes on the system, wherein the system is external to each of the plurality of cloud computing environments;
determining, by the cloud tuning service, that the use of a first system resource of the plurality of system resources by the first workload does not exceed a threshold;
determining, based on the use of the first system resource by the first workload not exceeding the threshold, that the first workload has completed processing a set of tasks;
receiving, by the cloud tuning service from the first workload, a first abstract request to shelve the first workload, wherein the first abstract request identifies the first workload but does not identify the first host or the first cloud computing environment, does not include required credentials, and wherein the first abstract request does not include specific operations required to shelve the first workload;
upon receiving the first abstract request, generating, by the cloud tuning service operating external to each of the plurality of cloud computing environments, a specific request that is compatible with the first cloud computing environment by:
determining, by the cloud tuning service, based on a predefined configuration, that the first workload is executing on the first host in the first cloud computing environment;
identifying a first set of commands that are specific to the first cloud computing environment based on the predefined configuration, wherein the first set of commands cause the first cloud computing environment to shelve the first workload and wherein the first set of commands includes at least one command that was not specified in the first abstract request; and
identifying a set of login credentials needed to access the first cloud computing environment; and
initiating, by the cloud tuning service, shelving of the first workload by transmitting the specific request to the first cloud computing environment, wherein the specific request includes the set of login credentials and the first set of commands, and wherein shelving the first workload removes the first workload from the first virtual machine and the first host and stores an image of the first workload in a data store.
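The monitoring step that precedes shelving can be illustrated as follows; the sampling interface and the idle threshold are assumptions for this sketch:

```python
# Illustrative sketch of the monitoring step: if a workload's use of a system
# resource stays under a threshold, treat its task set as complete and request
# shelving. Sample values and the threshold are made up.
def workload_appears_idle(cpu_samples, threshold=0.05):
    """True when every recent CPU-utilisation sample is below the threshold."""
    return all(sample < threshold for sample in cpu_samples)

recent = [0.04, 0.02, 0.01, 0.03]
if workload_appears_idle(recent):
    print("workload idle below threshold; sending abstract shelve request")
```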

US Pat. No. 10,394,586

USING CAPABILITY INDICATORS TO INDICATE SUPPORT FOR GUEST DRIVEN SURPRISE REMOVAL OF VIRTUAL PCI DEVICES

Red Hat Israel, Ltd., Ra...

1. A method for removing a virtual device from a virtual machine having a guest operating system (OS), the virtual machine managed by a hypervisor executing on a processing device, comprising:receiving, by the hypervisor, a notification from the guest OS, the notification comprising a capability indicator value indicating a support level for surprise removal of a virtual device of the guest OS, the surprise removal of the virtual device comprising removal of the virtual device from the virtual machine without first providing a warning to the guest OS;
storing, by the hypervisor, the capability indicator value corresponding to the virtual device in a mapping table;
subsequently receiving, by the processing device executing the hypervisor, a request to remove the virtual device from the virtual machine;
responsive to receiving the request to remove the virtual device, accessing, by the hypervisor, the mapping table to obtain the capability indicator value corresponding to the virtual device;
identifying, by the processing device executing the hypervisor, in view of the obtained capability indicator value, a particular set of actions associated with the obtained capability indicator value, the particular set of actions to be performed to remove the virtual device from the virtual machine, the particular set of actions including at least removing the virtual device from the virtual machine without first providing the warning to the guest OS when the capability indicator indicates a safe support level, or at least first providing the warning to the guest OS before removing the virtual device from the virtual machine when the capability indicator indicates an unsafe support level; and
removing the virtual device from the virtual machine using the particular set of actions.
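A hedged sketch of the capability-indicator lookup, assuming a dictionary stands in for the hypervisor's mapping table and two illustrative support levels:

```python
# Minimal sketch: the hypervisor records a per-device capability indicator
# reported by the guest and, on a removal request, looks it up to choose the
# removal procedure. Indicator names and actions are illustrative.
CAPABILITY_TABLE = {}   # virtual device id -> "safe" | "unsafe"

def register_capability(device_id, indicator):
    CAPABILITY_TABLE[device_id] = indicator

def remove_virtual_device(device_id):
    indicator = CAPABILITY_TABLE.get(device_id, "unsafe")
    if indicator == "safe":
        actions = ["detach-device"]                        # surprise removal, no warning
    else:
        actions = ["warn-guest", "await-ack", "detach-device"]
    return actions

register_capability("virtio-net-0", "safe")
print(remove_virtual_device("virtio-net-0"))   # ['detach-device']
```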

US Pat. No. 10,394,585

MANAGING GUEST PARTITION ACCESS TO PHYSICAL DEVICES

Microsoft Technology Lice...

1. A method, implemented in a computing device comprising a memory space, the method comprising:identifying, in a host of the computing device, a physical device to be made accessible to a guest partition of the computing device, the physical device comprising a memory-mapped I/O device where first and second portions of the physical device are mapped to first and second regions of the memory space, respectively;
virtualizing, by the host of the computing device, the first portion of the physical device for indirect access to the physical device by the guest partition, the first portion including at least part of a control plane for the physical device, the virtualizing providing an exposed control plane available to guest partitions through the host;
the virtualizing comprising intermediating, by the host of the computing device, accesses to the first portion of the physical device by the guest partition, wherein the guest partition interfaces with the exposed control plane, and wherein the host virtualizes access to the control plane by mapping the accesses between the first portion and the first memory region; and
allowing the guest partition to directly access the non-virtualized second portion of the physical device, the non-virtualized second portion including at least part of a data plane for the physical device, wherein the guest partition directly accesses the second portion by directly accessing the second memory region.
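One way to picture the control-plane/data-plane split is the dispatch below; the region boundaries and the behaviour strings are invented for illustration:

```python
# Rough sketch of the split the claim describes: accesses falling in the
# control-plane region are intermediated by the host, while accesses in the
# data-plane region go straight to the physical device. Region bounds are
# placeholders, not real MMIO layouts.
CONTROL_REGION = range(0x0000, 0x1000)   # virtualized, host-mediated
DATA_REGION    = range(0x1000, 0x9000)   # mapped directly into the guest

def guest_access(offset):
    if offset in CONTROL_REGION:
        return "host intermediates and maps the access to the physical control plane"
    if offset in DATA_REGION:
        return "guest accesses the physical data plane directly"
    raise ValueError("offset outside the device's memory-mapped regions")

print(guest_access(0x0010))
print(guest_access(0x2000))
```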

US Pat. No. 10,394,584

NATIVE EXECUTION BRIDGE FOR SANDBOXED SCRIPTING LANGUAGES

Atlassian Pty Ltd, Sydne...

1. A computer-implemented method, comprising:receiving, at a scripting language execution sandbox, a request to execute one or more scripting language commands, the scripting language execution sandbox executing using one or more processors of a single computing device and programmed to execute scripting language computer program scripts with restrictions on certain memory accesses and program calls, wherein the scripting language execution sandbox includes a scripting language component communicatively coupled to a native execution component of a native execution bridge, the native execution component executing using the one or more processors of the single computing device and programmed to securely programmatically communicate with the scripting language component and to execute natively executable commands without the same restrictions on certain memory accesses and program calls;
sending the one or more scripting language commands from the scripting language component of the native execution bridge to the native execution component of the native execution bridge;
determining, using the native execution component of the native execution bridge, based at least in part on a security policy, whether to execute the one or more scripting language commands as corresponding native commands outside the scripting language component;
in response to determining to execute the one or more scripting language commands, translating the one or more scripting language commands into one or more natively executable commands;
in response to translating the one or more scripting language commands into the one or more natively executable commands, executing, at the native execution component of the native execution bridge, the one or more natively executable commands;
in response to determining not to execute the one or more scripting language commands as corresponding native commands, executing, at the scripting language execution sandbox, the one or more scripting language commands,
wherein the method is performed on the single computing device.
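A small sketch of the bridge's policy decision, assuming a whitelist-style security policy and a trivial command format; neither is drawn from the claim:

```python
# Hedged sketch of the bridge: the sandboxed scripting side hands commands to a
# native component, which consults a security policy before either running a
# translated native equivalent or bouncing the command back to the sandbox.
ALLOWED_NATIVE = {"read_file", "list_dir"}     # illustrative security policy

def run_in_sandbox(command):
    return f"sandboxed execution of {command!r}"

def run_natively(command):
    return f"native execution of translated {command!r}"

def bridge_execute(commands):
    results = []
    for cmd in commands:
        if cmd.split()[0] in ALLOWED_NATIVE:   # policy check in the native component
            results.append(run_natively(cmd))
        else:
            results.append(run_in_sandbox(cmd))
    return results

print(bridge_execute(["read_file /tmp/report.txt", "spawn_process rm -rf /"]))
```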

US Pat. No. 10,394,583

AUTOMATED MODEL GENERATION FOR A SOFTWARE SYSTEM

CA, Inc., Islandia, NY (...

1. A method comprising:accessing transaction data generated from monitoring of a plurality of transactions in a system comprising a plurality of software components, wherein at least a particular one of the plurality of transactions comprises data generated by a particular model simulating operation of a particular one of the plurality of software components in the particular transaction;
analyzing the transaction data, using a data processing apparatus, to identify respective sets of attributes for each of the plurality of transactions;
determining, using the data processing apparatus, that the set of attributes of the particular transaction meets a particular one of a set of conditions, wherein the particular transaction involves a subset of the plurality of software components and the particular model;
selecting a portion of the transaction data describing the particular transaction based on the particular transaction meeting the particular condition;
determining that the set of attributes of another one of the plurality of transactions does not satisfy the particular condition, wherein the other transaction involves the subset of software components;
identifying another portion of the transaction data describing the other transaction; and
autonomously generating an additional model of another one of the subset of software components using the portion of the transaction data based on the particular transaction meeting the particular condition, wherein the other portion of the transaction data describing the other transaction is excluded from use in generation of the additional model based on the other transaction failing to meet the particular condition, and the additional model is used to launch a computer-implemented simulation of the other software components within subsequent transactions of the system.
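The data-selection step can be illustrated as below; the condition, the transaction attributes, and the notion of a "model" are placeholders chosen for the sketch:

```python
# Illustrative sketch of the selection step: only transactions whose attributes
# meet the condition contribute to the generated model; the others are excluded.
transactions = [
    {"id": 1, "status": 200, "latency_ms": 40, "response": "ok"},
    {"id": 2, "status": 500, "latency_ms": 900, "response": "error"},
]

def meets_condition(txn):
    return txn["status"] == 200            # assumed condition on the attribute set

selected = [t for t in transactions if meets_condition(t)]
model = ({"simulated_response": selected[0]["response"],
          "simulated_latency_ms": selected[0]["latency_ms"]}
         if selected else None)
print(model)   # only the qualifying transaction shapes the simulation model
```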

US Pat. No. 10,394,582

METHOD AND ARRANGEMENT FOR MANAGING PERSISTENT RICH INTERNET APPLICATIONS

TELEFONAKTIEBOLAGET LM ER...

1. An application execution server comprising:a processor and a memory, the memory containing instructions executable by the processor whereby the application execution server is configured to:
responsive to receiving a request from a Rich Internet Application (RIA) executing on a user device, create a background process on the application execution server;
after execution of the RIA on the user device has terminated and in response to the background process recognizing an event associated with the RIA, trigger, by the background process, re-execution of the RIA on the user device;
wherein the RIA is accessible via a web browser of the user device.
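A minimal sketch, assuming the server-side background process watches for an event tied to the RIA and pushes a relaunch signal toward the browser; the event source and the push mechanism are assumptions:

```python
# The background process outlives the RIA session; when it recognises an event
# associated with the RIA, it pushes a message that would relaunch the RIA in
# the browser. The queue stands in for a push connection to the user device.
import queue
import threading

push_channel = queue.Queue()

def background_process(events):
    for event in events:
        if event == "new-message-for-user":
            push_channel.put("re-execute RIA")

threading.Thread(target=background_process,
                 args=(["heartbeat", "new-message-for-user"],)).start()
print(push_channel.get())   # browser side would re-execute the RIA here
```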

US Pat. No. 10,394,581

OPTIMIZED USER INTERFACE RENDERING

International Business Ma...

1. A computer program product for optimized user interface rendering, the computer program product comprising:one or more tangible computer-readable hardware storage devices and program instructions stored on at least one of the one or more tangible storage devices, the program instructions comprising:
program instructions to identify one or more functional elements having a priority level, and one or more device characteristics of a device, wherein each one of the one or more functional elements is a segment of computer software composed in one or more technology layers;
program instructions to determine a hardware index that represents computation capabilities of the device based on values and weights associated with each component of the device;
program instructions to determine a video index that represents video rendering capabilities of the device based on values and weights of the device components that are associated with at least one of visually rendering and auditorily rendering a user interface;
program instructions to determine a selection index based on scaling a result of dividing the hardware index by the video index;
program instructions to determine a first functional element of the one or more functional elements that has a highest priority level from the priority level;
program instructions to determine whether there is an appropriate technology layer for the first functional element based on comparing the selection index to one or more technology layer ranges corresponding to one or more technology layers associated with the first functional element;
based on determining that there is an appropriate technology layer for the first functional element:
program instructions to determine a second functional element of the one or more functional elements that has a next highest priority level from the priority level;
program instructions to determine an appropriate technology layer for the second functional element based on comparing the selection index to one or more technology layer ranges corresponding to one or more technology layers associated with the second functional element;
program instructions to determine a cumulative index based on adding the rendering index of the appropriate technology layer of the first functional element and the rendering index of the appropriate technology layer of the second functional element;
program instructions to determine that the cumulative index exceeds the hardware index; and
program instructions to determine whether there is another appropriate technology layer for the second functional element, based on determining whether another appropriate technology layer has a lower rendering index than the appropriate technology layer of the second functional element.
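The index arithmetic can be worked through with made-up component values and weights; the layer ranges and the scaling factor of 10 are illustrative only:

```python
# Worked sketch of the claimed indexes: a hardware index and a video index are
# weighted sums over device components, the selection index scales their ratio,
# and that index is compared against per-technology-layer ranges to choose how
# a functional element is rendered. All numbers are invented.
components = {"cpu": (8.0, 0.5), "ram": (16.0, 0.3), "gpu": (4.0, 0.2)}   # value, weight
video_components = {"gpu": (4.0, 0.7), "display": (2.0, 0.3)}

hardware_index = sum(v * w for v, w in components.values())        # 4 + 4.8 + 0.8 = 9.6
video_index = sum(v * w for v, w in video_components.values())     # 2.8 + 0.6 = 3.4
selection_index = 10 * hardware_index / video_index                # scaled ratio ~= 28.2

layer_ranges = {"plain-html": (0, 15), "css-animations": (15, 30), "webgl": (30, 100)}
chosen_layer = next(name for name, (lo, hi) in layer_ranges.items()
                    if lo <= selection_index < hi)
print(round(selection_index, 1), chosen_layer)   # 28.2 'css-animations'
```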

US Pat. No. 10,394,580

DYNAMIC ADDITION AND REMOVAL OF OPERATING SYSTEM COMPONENTS

Microsoft Technology Lice...

1. A method in a computing device, comprising:receiving a call from an application executing on the computing device;
determining an operating system component intended to receive the call that does not exist in an operating system of the computing device; and
hydrating the component into the operating system of the computing device based at least in part on said determining, said hydrating comprising dynamic installation by the computing device of the component into the operating system to handle the call.
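A minimal sketch of on-demand hydration, assuming a dispatcher that installs a missing component before handing it the call; the component names and the installation step are placeholders:

```python
# Sketch of hydration: when a call targets an operating system component that
# is not yet installed, the component is dynamically installed ("hydrated")
# and then handles the call.
installed_components = {"filesystem"}

def hydrate(component):
    installed_components.add(component)        # dynamic installation into the OS
    return component

def dispatch(call, target_component):
    if target_component not in installed_components:
        hydrate(target_component)
    return f"{target_component} handles {call!r}"

print(dispatch("print_document()", "print-spooler"))   # component hydrated on first use
```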

US Pat. No. 10,394,578

INTERNET OF THINGS DEVICE STATE AND INSTRUCTION EXECUTION

International Business Ma...

1. A computer-implemented method, comprising:intercepting, by one or more processors in a computing device, an instruction, upon receipt of the instruction, by the one or more processors in the computing device on a communications network, via the communications network, prior to execution of the instruction by the one or more processors in the computing device, wherein the computing device comprises an Internet of Things computing device;
determining, by the one or more processors, a state of the computing device is a first state, wherein the state of the computing device is accessible only to the one or more processors;
based on the computing device being in the first state and a portion of the instruction, determining, by the one or more processors, that the instruction is precluded from executing on the computing device, wherein the determining the instruction is precluded from executing on the computing device further comprises:
mapping, by the one or more processors, the first state to a hierarchy of rules stored on a memory comprising a rule based engine in the computing device; and
determining, by the one or more processors, that the hierarchy of the rules precludes execution of the instruction when the computing device is in the first state;
based on the determining that the hierarchy of the rules precludes execution of the instruction, queuing, by the one or more processors, the instruction on a memory in the computing device while the computing device is in the first state;
changing, by the one or more processors, the state of the computing device from the first state to a second state, wherein the state is changed exclusively in the rule based engine by the one or more processors;
based on the computing device being in the second state and a portion of the instruction, determining, by the one or more processors, that the queued instruction is allowed to execute on the computing device; and
automatically transmitting, by the one or more processors, the queued instruction, from the memory, for execution on the computing device, wherein the queued instruction is executed upon receipt from the transmitting.
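The rule-based gating can be sketched as below; the states, the rule hierarchy (flattened to a dictionary here), and the instruction names are assumptions for illustration:

```python
# Illustrative sketch: the device's current state is mapped to rules, a
# precluded instruction is queued, and the queue is drained once the state
# changes to one whose rules allow the instruction to execute.
from collections import deque

RULES = {"charging": {"firmware_update"}, "in_use": set()}   # state -> allowed instructions
state, pending = "in_use", deque()

def handle(instruction):
    if instruction in RULES[state]:
        return f"execute {instruction}"
    pending.append(instruction)
    return f"queued {instruction} while in state {state!r}"

print(handle("firmware_update"))        # precluded while 'in_use', so queued
state = "charging"                      # state change recorded by the rule engine
while pending and pending[0] in RULES[state]:
    print("execute", pending.popleft()) # queued instruction now allowed to run
```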

US Pat. No. 10,394,576

CONTROL FOR THE SAFE CONTROL OF AT LEAST ONE MACHINE

SICK AG, Waldkirch (DE)

1. A control for the safe control of at least one machine, the control comprising:at least one input unit for receiving input signals from at least one signal generator;
at least one output unit for outputting output signals to the at least one machine;
a control unit for generating the output signals in dependence on the input signals; and
a connection unit having at least one connection socket for connecting an external input device that can be used for configuring the control,
wherein the connection unit has at least one connection terminal for connecting the signal generators and/or the machine and is separable from the control,
wherein the connection socket can be removed from the connection unit or from the control and comprises a memory with configuration data of the control,
and wherein the connection unit in the connected state provides a first connection and a second connection between the connection socket and the control unit in the control.