US Pat. No. 11,113,174

METHODS AND SYSTEMS THAT IDENTIFY DIMENSIONS RELATED TO ANOMALIES IN SYSTEM COMPONENTS OF DISTRIBUTED COMPUTER SYSTEMS USING TRACES, METRICS, AND COMPONENT-ASSOCIATED ATTRIBUTE VALUES

VMware, Inc., Palo Alto,...


1. A system that determines relevant attribute dimensions correlated with anomalous operational behaviors of components of a distributed computer system, the system comprising:one or more processors;
one or more memories; and
computer instructions, stored in one or more of the one or more memories that, when executed by one or more of the one or more processors, control the system to collect metric data comprising a series of timestamped metric values associated with each metric of multiple metrics, wherein each metric of the multiple metrics is associated with a component or component type of the distributed computer system,
identify components of the distributed computer system which exhibit anomalous operational behaviors using the collected metric data,
access collected call traces from a call-tracing service,
access attribute values for selected components of the distributed computer system,
employ decision-tree-based analyses to determine relevant attribute dimensions of component types that are correlated with the identified components of the distributed computer system which exhibit anomalous operational behaviors, and
transmit the determined relevant attribute dimensions of the component types to a computational entity to facilitate amelioration of the anomalous operational behaviors.
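
A minimal Python sketch of the kind of decision-tree-based analysis the claim describes: attribute values for components are encoded, a tree is fit against an anomaly label derived from the metric data, and the attribute dimensions that contribute most are reported. The column names, depth, and aggregation rule are illustrative assumptions, not details taken from the patent.

    # Rank attribute dimensions by how well they separate anomalous components,
    # using a decision tree's feature importances (pandas/scikit-learn assumed).
    import pandas as pd
    from sklearn.tree import DecisionTreeClassifier

    def relevant_dimensions(attributes: pd.DataFrame, is_anomalous: pd.Series, top_k=3):
        encoded = pd.get_dummies(attributes)          # one-hot encode categorical dimensions
        tree = DecisionTreeClassifier(max_depth=4, random_state=0)
        tree.fit(encoded, is_anomalous)
        scores = {}
        for col, importance in zip(encoded.columns, tree.feature_importances_):
            dimension = col.split("_")[0]             # map one-hot column back to its dimension
            scores[dimension] = scores.get(dimension, 0.0) + importance
        return sorted(scores, key=scores.get, reverse=True)[:top_k]

    # Hypothetical components with attribute values and an anomaly flag from metric analysis.
    attrs = pd.DataFrame({"zone": ["a", "a", "b", "b", "a", "b"],
                          "version": ["v1", "v2", "v2", "v2", "v1", "v1"]})
    anomalous = pd.Series([0, 1, 1, 1, 0, 0])
    print(relevant_dimensions(attrs, anomalous))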


US Pat. No. 11,113,173

SYSTEMS AND METHODS FOR DETECTING, ANALYZING, AND EVALUATING INTERACTION PATHS

Rovi Guides, Inc., San J...


1. A method for generating interaction path details from a plurality of user interactions, the method comprising:retrieving a time threshold, wherein the time threshold is an amount of time used in determining whether two consecutive interactions are in one interaction path;
retrieving, from memory, a plurality of user interaction records, wherein each user interaction record comprises a timestamp and wherein the plurality of user interaction records correspond to interactions of a plurality of users with a content delivery platform across a plurality of devices;
sorting the plurality of user interaction records based on the timestamp in each user interaction record;
identifying a path-start record from the plurality of user interaction records, wherein the difference between the timestamp in the path-start record and the timestamp of a user interaction record immediately before the path-start record exceeds the time threshold;
identifying a path-end record from the plurality of user interaction records, wherein the difference between the timestamp in the path-end record and the timestamp of a user interaction record immediately after the path-end record exceeds the time threshold;
creating a first interaction path record comprising the path-start record, the path-end record, and a plurality of user interaction records between the path-start record and the path-end record;
storing, in the memory, the first interaction path record;
identifying, based on the first interaction path record, a second interaction path record;
determining, based on a first timestamp corresponding to the first interaction path path-end record and a second timestamp corresponding to the second interaction path path-start record, an optimal time threshold;
creating, based on the optimal time threshold, a candidate interaction path; and
generating, for display, a user interface to display details of the first interaction path record, the second interaction path record, and the candidate interaction path, wherein the details comprise respective summary statistics for each respective path.
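
The path-start/path-end identification in this claim amounts to time-gap sessionization. A short Python sketch follows, assuming each interaction record is a dict with a timestamp; the threshold value and record fields are illustrative.

    # Split time-sorted interaction records into interaction paths: a gap larger
    # than the threshold between consecutive records ends one path and starts the next.
    from datetime import datetime, timedelta

    def build_paths(records, time_threshold: timedelta):
        records = sorted(records, key=lambda r: r["timestamp"])
        paths, current = [], []
        for rec in records:
            if current and rec["timestamp"] - current[-1]["timestamp"] > time_threshold:
                paths.append(current)          # previous record was a path-end record
                current = []                   # this record becomes a path-start record
            current.append(rec)
        if current:
            paths.append(current)
        return paths

    records = [
        {"user": "u1", "timestamp": datetime(2021, 1, 1, 10, 0)},
        {"user": "u1", "timestamp": datetime(2021, 1, 1, 10, 2)},
        {"user": "u1", "timestamp": datetime(2021, 1, 1, 11, 30)},  # new path after a long gap
    ]
    for path in build_paths(records, timedelta(minutes=30)):
        print(len(path), path[0]["timestamp"], path[-1]["timestamp"])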

US Pat. No. 11,113,172

METHOD, TERMINAL, AND COMPUTER-READABLE STORAGE MEDIUM FOR DISPLAYING ACTIVITY RECORD INFORMATION

Beijing Xiaomi Mobile Sof...


1. A method for displaying activity record information, the method applicable to a mobile phone comprising a processor, the mobile phone terminal having a plurality of applications for performing activities installed therein, the method comprising:in response to switching to a specified interface in the mobile phone, configuring the processor to extract and integrate activity record information of all applications of the plurality of applications in an activity category to acquire specified activity record information in the activity category, wherein the activity category is determined according to a user selection; and
displaying the specified activity record information in the activity category determined by the user selection in the specified interface in the mobile phone, wherein the specified interface is a hiboard interface; and
wherein the specified activity record information includes a money transaction value, a money transaction time, or a money transaction name, and the specified activity record information is obtained by statistics for at least two pieces of activity record information during a specified period of time.

US Pat. No. 11,113,171

EARLY-CONVERGENCE DETECTION FOR ONLINE RESOURCE ALLOCATION POLICIES FOR ITERATIVE WORKLOADS

EMC IP Holding Company LL...


1. A method, comprising:obtaining a dynamic system model based on a relation between an amount of at least one resource for a plurality of workloads in a controlled workload list and at least one predefined service metric, wherein the plurality of workloads in the controlled workload list participate in an adaptation cycle;
obtaining an instantaneous value of the at least one predefined service metric;
obtaining an adjustment to the amount of the at least one resource for a given one of the plurality of workloads in the controlled workload list based at least in part on a difference between the instantaneous value of the at least one predefined service metric and a target value for the at least one predefined service metric;
determining whether the given one of the plurality of workloads has converged based on an evaluation of one or more predefined convergence criteria;
removing the given one of the plurality of workloads from the controlled workload list when the given one of the plurality of workloads has converged; and
initiating an application of the determined adjustment to the amount of the at least one resource to the given one of the plurality of workloads,
wherein the method is performed by at least one processing device comprising a processor coupled to a memory.
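
The adaptation cycle described here is essentially a feedback loop with per-workload convergence tests. Below is a hedged Python sketch under simple assumptions (a proportional adjustment, a fixed tolerance window); the real dynamic system model and convergence criteria are not specified by this sketch.

    # One adaptation cycle: adjust each workload's resource share in proportion to
    # the gap between the measured service metric and its target, and drop workloads
    # from the controlled list once a simple convergence criterion holds.
    def adaptation_cycle(workloads, target, gain=0.5, tolerance=0.02, window=3):
        still_controlled = []
        for w in workloads:
            error = w["metric"] - target                      # instantaneous error
            w["resource"] = max(0.0, w["resource"] - gain * error)
            w.setdefault("recent_errors", []).append(abs(error))
            w["recent_errors"] = w["recent_errors"][-window:]
            converged = (len(w["recent_errors"]) == window and
                         max(w["recent_errors"]) < tolerance)
            if not converged:
                still_controlled.append(w)                    # keep in the controlled workload list
        return still_controlled

    workloads = [{"name": "job-a", "resource": 4.0, "metric": 1.10},
                 {"name": "job-b", "resource": 2.0, "metric": 0.99}]
    workloads = adaptation_cycle(workloads, target=1.0)
    print([w["name"] for w in workloads])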

US Pat. No. 11,113,170

TECHNOLOGIES FOR MANAGING MEMORY ON COMPUTE DEVICE

Intel Corporation, Santa...


1. A compute device for managing memory use on the compute device, the compute device comprising:an interface; and
low memory application killer circuitry to:determine a current combination of applications running on the compute device;
retrieve, from a database of the compute device, one or more entries of a plurality of entries of the database, wherein each entry of the plurality of entries is associated with a combination of applications of a plurality of combinations of applications and includes an indication of a quality of a user experience associated with the corresponding combination of applications;
determine an expected quality of a user experience associated with the current combination of applications based on the one or more indications of the quality of the user experiences associated with the one or more entries; and
kill one or more of the current combination of applications based on the expected quality of the user experience associated with the current combination of applications.
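
One way the low-memory-application-killer logic could be prototyped in Python: look up stored quality indications for the current combination of applications and kill a member when the expected user-experience quality falls below a threshold. The database contents, scoring scale, threshold, and kill policy are assumptions.

    # Estimate expected user-experience quality for the running combination from
    # stored entries, then pick an application to kill when it is too low.
    QUALITY_DB = {
        frozenset({"browser", "video"}): 0.9,
        frozenset({"browser", "video", "game"}): 0.4,
    }

    def apps_to_kill(running, threshold=0.6):
        combo = frozenset(running)
        # Use entries for the combination itself or its subsets as evidence.
        scores = [q for apps, q in QUALITY_DB.items() if apps <= combo]
        expected = min(scores) if scores else 1.0
        if expected >= threshold:
            return []
        return [running[-1]]        # illustrative policy: kill the most recently started app

    print(apps_to_kill(["browser", "video", "game"]))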


US Pat. No. 11,113,169

AUTOMATIC CREATION OF BEST KNOWN CONFIGURATIONS

Dell Products L.P., Roun...


1. A method for automatically generating a best known configuration for a platform, the method comprising:receiving, by a best known configuration engine and from a first set of end user devices that have the platform, health reports pertaining to a first driver or firmware that is installed on the first set of end user devices, each health report including performance information of the first driver or firmware on the corresponding end user device;
evaluating, by the best known configuration engine, the performance information of the first driver or firmware that is included in the health reports;
based on the evaluation, determining that the first driver or firmware is functioning properly on the first set of end user devices;
based on the determination, including the first driver or firmware in the best known configuration for the platform; and
publishing the best known configuration for the platform.

US Pat. No. 11,113,168

DISTRIBUTED ARCHITECTURE FOR FAULT MONITORING

University of Connecticut...


1. A distributed architecture system for detecting an anomaly in a power semiconductor device, the system comprising:a server computing device; and
one or more local components communicatively coupled to the server computing device and remote from the server computing device, each one of the one or more local components comprising one or more sensors positioned adjacent to the power semiconductor device for sensing properties of the power semiconductor device comprising a device cycling current, a change in device temperature, a normalized junction-to-ambient thermal resistance, a power step, a device maximum junction temperature, a device minimum junction temperature, a Vcold measurement, a Vhot measurement, and a Von measurement,
wherein:each one of the one or more local components:receives sensed data corresponding to the properties,
designates a first portion of the sensed data as testing data and designates a second portion of the sensed data as training data, and
transmits the training data to the server computing device,

the server computing device:utilizes the training data, via one or more of a machine learning algorithm, a fault diagnostic algorithm, and a fault prognostic algorithm, to generate a set of eigenvalues and associated eigenvectors based on the properties, and
selects a selected set of eigenvalues and associated eigenvectors, and

the processing device of the one or more local components conducts a statistical analysis of the selected set of eigenvalues and associated eigenvectors to determine that the testing data is indicative of the anomaly.
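
The server-side step of generating eigenvalues and eigenvectors from training data resembles a principal-component decomposition, with the local component scoring test data against the retained subspace. A NumPy sketch under those assumptions follows; the statistic and thresholds are placeholders rather than the patented algorithms.

    # Fit an eigenspace from training measurements and score a test sample by its
    # reconstruction error outside the retained subspace.
    import numpy as np

    def fit_eigenspace(training, keep=2):
        mean = training.mean(axis=0)
        cov = np.cov(training - mean, rowvar=False)
        eigvals, eigvecs = np.linalg.eigh(cov)
        order = np.argsort(eigvals)[::-1][:keep]      # largest eigenvalues first
        return mean, eigvals[order], eigvecs[:, order]

    def anomaly_score(sample, mean, eigvecs):
        centered = sample - mean
        projected = eigvecs @ (eigvecs.T @ centered)  # component inside the retained subspace
        return float(np.linalg.norm(centered - projected))

    rng = np.random.default_rng(0)
    training = rng.normal(size=(200, 9))              # nine sensed properties per cycle
    mean, vals, vecs = fit_eigenspace(training)
    print(anomaly_score(training[0], mean, vecs), anomaly_score(training[0] + 5, mean, vecs))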


US Pat. No. 11,113,167

SYSTEM TESTING INFRASTRUCTURE WITH HIDDEN VARIABLE, HIDDEN ATTRIBUTE, AND HIDDEN VALUE DETECTION

INTERNATIONAL BUSINESS MA...


1. A method for detecting and localizing a fault when testing a system under test (SUT), the method comprising:modeling inputs to the SUT as a collection of attribute-value pairs;
generating an initial set of test vectors that provides complete n-wise coverage of a test space represented by the attribute-value pairs;
generating a set of testcases from the initial set of test vectors using combinatorics test design;
executing the set of testcases to obtain a set of execution results, the execution results being in binary form indicative that a testcase succeeded or failed, the set of testcases executed a plurality of times;
updating, for each execution of the set of testcases, for each testcase, a non-binary success rate (ST) based on the set of execution results;
in response to a first success rate corresponding to a particular testcase being below a predetermined threshold:generating a second set of testcases based on the test vectors, the second set of testcases generated using inverse combinatorics test design;
executing the second set of testcases to obtain a second set of execution results using a second set of test vectors, the second set of testcases executed at least a predetermined number of times; and
updating, for each execution of the second set of testcases, for each testcase, a second success rate (ST′) based on the second set of execution results; and
in response to the second success rate corresponding to the particular testcase being within a predetermined threshold of the first success rate, notifying a user of a defect in the modeling of the inputs to the SUT.


US Pat. No. 11,113,166

MONITORING SYSTEM AND METHOD WITH BASEBOARD MANAGEMENT CONTROLLER

Wiwynn Corporation, New ...


1. A monitoring system, comprising:a baseboard management controller (BMC) disposed on a same baseboard as a system under test including a plurality of target devices;
an administrator device electrically connected to the BMC;
a software test fixture stored in the BMC, the software test fixture including a plurality of in-target probe (ITP) firmware programs each generating an electrical signal, which is transferred to a corresponding target device of the system under test to access a register of the corresponding target device; and
a plurality of bridge integrated circuits each being connected between the BMC and the corresponding target device, each bridge integrated circuit having an associated Internet Protocol (IP) address used to select the corresponding target device;
wherein the administrator device updates firmware of the system under test according to a monitoring result of the system under test.

US Pat. No. 11,113,165

METHOD FOR DETECTING REPAIR-NECESSARY MOTHERBOARDS AND DEVICE USING THE METHOD

HONGFUJIN PRECISION ELECT...


1. A method of implementing repairable board test of a repairable board detection device, the method comprising:obtaining, by the repairable board detection device, repair-relevant information of a plurality of sample repairable boards;
extracting, by the repairable board detection device, predetermined features from the repair-relevant information of the plurality of the sample repairable boards;
obtaining, by the repairable board detection device, a features list which comprises the predetermined features;
encoding a value of each feature of the features list and converting feature values of the predetermined features to truth-values based on a predetermined converting rule to establish a features truth-value list, by the repairable board detection device;
applying, by the repairable board detection device, the features truth-value list and detection results of the plurality of the sample repairable boards as training features;
establishing and training, by the repairable board detection device, a board detection model based on the training features; and
receiving, by the repairable board detection device, repair-relevant information of a repairable board and transmitting, by the repairable board detection device, the repair-relevant information of the repairable board to the board detection model to obtain a detection result of the repairable board;
wherein each feature value is trained to obtain a weighted value, and the board detection model calculates the detection result of the repairable board according to the weighted value of each feature value; and
wherein the predetermined features are selected from the group consisting of agent identification (ID), category, severity, timestamp, message, message ID, fully qualified device descriptor (FQDD), argument (ARG), and raw event data.

US Pat. No. 11,113,164

HANDLING ERRORS IN BUFFERS

Arm Limited, Cambridge (...


1. An apparatus comprising:a buffer comprising a plurality of entries to buffer items associated with data processing operations performed by at least one processing circuit; and
buffer control circuitry having a redundant allocation mode in which:when allocating a given item to the buffer, the buffer control circuitry is configured to allocate the given item to each entry of a set of N redundant entries of the buffer, where N?2; and
when reading or removing the given item from the buffer, the buffer control circuitry is configured to compare the items stored in said set of N redundant entries and to trigger an error handling response when a mismatch is detected between the items stored in said set of N redundant entries.
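
A small Python sketch of the redundant allocation mode: every item is written to N ≥ 2 entries, and a read compares the copies and triggers an error-handling response on mismatch. The in-memory list and the wrap-around policy stand in for the hardware buffer and are assumptions.

    class RedundantBuffer:
        def __init__(self, size, n_copies=2):
            assert n_copies >= 2
            self.entries = [None] * size
            self.n = n_copies
            self.head = 0

        def allocate(self, item):
            slots = [(self.head + i) % len(self.entries) for i in range(self.n)]
            for s in slots:
                self.entries[s] = item          # same item in N redundant entries
            self.head = (self.head + self.n) % len(self.entries)
            return slots

        def read(self, slots):
            copies = [self.entries[s] for s in slots]
            if any(c != copies[0] for c in copies):
                raise RuntimeError("mismatch between redundant entries")  # error handling response
            return copies[0]

    buf = RedundantBuffer(size=8)
    slots = buf.allocate("op-42")
    buf.entries[slots[1]] = "corrupted"          # injected fault
    try:
        buf.read(slots)
    except RuntimeError as e:
        print("error handling triggered:", e)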


US Pat. No. 11,113,163

STORAGE ARRAY DRIVE RECOVERY

International Business Ma...


1. A computer implemented method, the method comprising:determining that a first drive in a storage array has met a criterion that dictates that the first drive be replaced;
performing a predictive failure analysis of each spare drive in a set of spare drives;
ranking each spare drive in the set of spare drives based on respective predicted risk of failure of each spare drive, wherein a lower risk of failure dictates a higher ranking;
responsive to a determination that a risk of failure associated with a given spare drive exceeds a threshold, flagging the given spare drive for replacement, and removing the given spare drive, as a selectable option for array drive replacement, from the set of spare drives; and
replacing the first drive in the storage array with a highest ranked spare drive of the set of spare drives.
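
A hedged Python sketch of the spare-selection flow: a stand-in predictive-failure-analysis function scores each spare, spares above a risk threshold are flagged and removed from the selectable set, and the lowest-risk spare replaces the failed drive. The risk model and threshold are illustrative.

    def predict_failure_risk(drive):
        # Toy predictive-failure-analysis stand-in based on SMART-like counters.
        return min(1.0, 0.01 * drive["reallocated_sectors"] + 0.02 * drive["age_years"])

    def choose_replacement(spares, risk_threshold=0.5):
        usable = []
        for drive in spares:
            risk = predict_failure_risk(drive)
            if risk > risk_threshold:
                drive["flag_for_replacement"] = True      # removed from selectable spares
            else:
                usable.append((risk, drive))
        usable.sort(key=lambda pair: pair[0])             # lower risk ranks higher
        return usable[0][1] if usable else None

    spares = [{"id": "s1", "reallocated_sectors": 80, "age_years": 4},
              {"id": "s2", "reallocated_sectors": 2, "age_years": 1}]
    print(choose_replacement(spares)["id"])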

US Pat. No. 11,113,162

APPARATUSES AND METHODS FOR REPAIRING MEMORY DEVICES INCLUDING A PLURALITY OF MEMORY DIE AND AN INTERFACE

Micron Technology, Inc., ...


1. An apparatus comprising:a first stack comprising a plurality of first dies stacked with one another, the plurality of first dies comprising a plurality of first channels;
a second stack comprising a plurality of second dies stacked with one another, the second stack being stacked with the first stack, the plurality of second dies comprising a plurality of second channels, wherein each of the plurality of second channels corresponds to a respective channel in the plurality of first channels in the plurality of first dies; and
a control circuit configured to, responsive to a command for accessing a channel in the plurality of first channels:access a channel in the plurality of second channels corresponding to the channel in the plurality of first channels in place of accessing the channel if the channel is defective; and
access the channel in the plurality of first channels, otherwise.


US Pat. No. 11,113,161

LOCAL STORAGE CLUSTERING FOR REDUNDANCY CODED DATA STORAGE SYSTEM

Amazon Technologies, Inc....


1. A computer-implemented method, comprising:obtaining a data storage request sent to an interface of a plurality of data transfer devices, the interface being shared with a data storage system, the plurality of data transfer devices including a local version of the interface corresponding to a remote interface provided by the data storage system, the data storage request requesting data storage at the data storage system while an operable connection between the data storage system and a customer device associated with the data storage request is unavailable;
generating, by the plurality of data transfer devices and based at least in part on data of the data storage request, a plurality of redundancy encoded shards such that a subset of the plurality of redundancy encoded shards, each having less than all the data, is sufficient to reconstruct the data; and
distributing the plurality of redundancy encoded shards among at least a subset of the plurality of data transfer devices to be stored.

US Pat. No. 11,113,160

APPARATUS FOR MIGRATING DATA AND METHOD OF OPERATING SAME


1. An apparatus for performing data migration, the apparatus comprising:a processor; and
a memory storing instructions thereon, the instructions when executed by the processor cause the processor to:monitor a change in performance of each application while each application is executed;
calculate an arithmetic intensity of each application, based on a monitoring result; and
select a specific application predicted to have a smallest number of computing arithmetic requests compared to memory access requests as a target application for memory migration, based on the arithmetic intensity.
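
The selection step reduces to computing an arithmetic intensity per application and choosing the most memory-bound one. A minimal Python sketch, with counter names assumed for illustration:

    def pick_migration_target(apps):
        def arithmetic_intensity(app):
            # Compute requests relative to memory accesses from monitoring counters.
            return app["compute_ops"] / max(app["memory_accesses"], 1)
        # Lowest intensity => fewest compute requests per memory access => migration target.
        return min(apps, key=arithmetic_intensity)["name"]

    apps = [{"name": "analytics", "compute_ops": 9_000_000, "memory_accesses": 1_000_000},
            {"name": "kv-cache", "compute_ops": 500_000, "memory_accesses": 4_000_000}]
    print(pick_migration_target(apps))   # kv-cache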


US Pat. No. 11,113,159

LOG STRUCTURE WITH COMPRESSED KEYS

Intel Corporation, Santa...


1. An electronic processing system, comprising:a processor;
a system memory communicatively coupled to the processor;
a solid state drive communicatively coupled to the processor;
a logger communicatively coupled to the processor and the solid state drive, the logger including logic to log memory access data in the solid state drive;
a log indexer communicatively coupled to the logger, the log indexer including logic to index the memory access log data in the system memory in an index table; and
a key compressor communicatively coupled to the log indexer, the key compressor including logic to:compress an index key for the index table,
identify a conflict in two or more compressed index keys,
append the compressed index key to the index table when a check of metadata determines a full key match and when the check of metadata confirms authorization after the full key match, and
combine key-value pairs corresponding to the conflicted compressed index keys into a new entry when the check of metadata does not determine the full key match.
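
A Python sketch of a compressed-key log index in the spirit of this claim: keys are shortened (here, truncated hashes), and a collision is resolved by checking the full key in the stored metadata, either updating the matching entry or combining the conflicting pairs into a new one. The compression scheme and the metadata checks (including the authorization step) are simplified assumptions.

    import hashlib

    def compress(key: bytes) -> bytes:
        return hashlib.sha256(key).digest()[:2]          # deliberately small to allow conflicts

    class CompressedIndex:
        def __init__(self):
            self.table = {}                              # compressed key -> list of (full key, value)

        def insert(self, full_key: bytes, value):
            bucket = self.table.setdefault(compress(full_key), [])
            for i, (existing_key, _) in enumerate(bucket):
                if existing_key == full_key:             # full key match in metadata
                    bucket[i] = (full_key, value)        # update the matching entry
                    return
            bucket.append((full_key, value))             # no full-key match: keep both pairs

        def lookup(self, full_key: bytes):
            for existing_key, value in self.table.get(compress(full_key), []):
                if existing_key == full_key:
                    return value
            return None

    idx = CompressedIndex()
    idx.insert(b"lba:100", "log-offset-1")
    idx.insert(b"lba:200", "log-offset-2")
    print(idx.lookup(b"lba:100"), idx.lookup(b"lba:200"))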


US Pat. No. 11,113,158

ROLLING BACK KUBERNETES APPLICATIONS

ROBIN SYSTEMS, INC., San...


1. A method comprising:creating, by a first orchestrator, a first application including a plurality of first objects in a network computing environment, the plurality of first objects including a first portion and a second portion;
mounting, by a storage manager, one or more storage volumes to the first portion of the plurality of first objects;
creating, by a second orchestrator, an application snapshot of the plurality of first objects;
creating, by the storage manager, one or more storage volume snapshots of the one or more storage volumes;
receiving, by the second orchestrator, an instruction to rollback the first application to the application snapshot; and
in response to the instruction, performing, by the second orchestrator:(a) deleting the second portion of the plurality of first objects but not the first portion of the plurality of first objects;
(b) following performing (a), instructing the storage manager to rollback the one or more storage volumes to the one or more storage volume snapshots and mount the one or more storage volumes following rolling back to the first portion of the plurality of first objects; and
(c) following performing (b), instructing the first orchestrator to recreate the second portion of the plurality of first objects according to the application snapshot.


US Pat. No. 11,113,157

PLUGGABLE RECOVERY IN A DATA PROTECTION SYSTEM

EMC IP HOLDING COMPANY LL...


1. A method for preparing a recovery operation, the method comprising:presenting a user interface that includes a first portion and a second portion;
selecting a client backup module from the user interface that was used to perform a backup operation;
retrieving a plug-in based on the selected client backup module, wherein the plug-in is associated with the second portion of the user interface; and
using the retrieved plug-in, configuring the recovery operation through a workflow presented in the first portion and the second portion of the user interface such that the first portion includes first elements of the recovery operation that are common to multiple recovery operations and such that the second portion includes second elements of the recovery operation that are associated with the retrieved plug-in and that are specific to the recovery operation;
displaying the first and second elements in the user interface.

US Pat. No. 11,113,156

AUTOMATED RANSOMWARE IDENTIFICATION AND RECOVERY

KASEYA US LLC, Miami, FL...


1. A method of managing a client system, comprising:recording back-up data for the client system;
analyzing the recorded back-up data;
wherein analyzing the recorded back-up data comprises:
identifying an inconsistent data storage pattern based on a percentage of the back-up data being deduplicated, wherein the percentage of the back-up data being deduplicated diverges from computing systems related to the client system;
detecting, based on the analysis, malicious activity;
identifying, from the recorded back-up data, an infection point; and
restoring the client system to a state prior to the infection point, wherein the client system is restored based on snapshots in both cloud-based object storage and cloud-based block storage.
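
The inconsistent-data-storage-pattern test can be illustrated with a few lines of Python: compare the deduplication percentage of the latest backup against related systems and flag a large divergence, since freshly encrypted data barely deduplicates. The divergence margin is an assumption.

    def dedup_percentage(total_bytes, deduplicated_bytes):
        return 100.0 * deduplicated_bytes / total_bytes

    def looks_malicious(client_pct, peer_pcts, margin=25.0):
        baseline = sum(peer_pcts) / len(peer_pcts)
        return abs(client_pct - baseline) > margin        # divergence from related systems

    peers = [62.0, 58.5, 60.2]                            # related systems dedupe around 60%
    client = dedup_percentage(total_bytes=10_000, deduplicated_bytes=800)   # 8%
    print(looks_malicious(client, peers))                 # True -> locate infection point, restore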

US Pat. No. 11,113,155

ARCHIVING AND RESTORATION OF DISTRIBUTED DATABASE LOG RECORDS

Amazon Technologies, Inc....


1. A method, comprising:obtaining a request to store data associated with a distributed database;
in response to the request, storing the data based at least in part on one or more operations performed using a concurrency parameter and a hash function applied to at least a portion of the data, wherein the data is stored with information indicative of a position of the data in a storage hierarchy of the distributed database;
obtaining a request to retrieve the stored data;
in response to the request to retrieve the stored data, applying the hash function to the portion of the data to generate a hash value;
identifying a computing node to obtain the stored data based on the hash value;
retrieving the stored data; and
storing the retrieved stored data on the computing node.
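
A minimal Python sketch of the hash-based placement and retrieval: the same hash function, bounded by a concurrency parameter, is applied on store and on retrieve so both paths resolve to the same computing node. The node layout and bucket mapping are assumptions.

    import hashlib

    NODES = ["node-a", "node-b", "node-c"]

    def node_for(key: bytes, concurrency: int = 16) -> str:
        digest = hashlib.sha256(key).hexdigest()
        bucket = int(digest, 16) % concurrency          # hash value bounded by the concurrency parameter
        return NODES[bucket % len(NODES)]

    # Write path and read path agree on the node without a central lookup.
    print(node_for(b"log-record-0001"))
    print(node_for(b"log-record-0001"))                 # same node on retrieval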

US Pat. No. 11,113,154

USER-LEVEL QUOTA MANAGEMENT OF DATA OBJECTS STORED IN INFORMATION MANAGEMENT SYSTEMS

Commvault Systems, Inc., ...


1. A computer-implemented method using at least one computing device comprising one or more hardware processors, the method comprising:determining, by a quota manager in an information management system managed by a storage manager, that a total storage usage amount associated with an end-user exceeds a quota amount for the end-user,wherein data associated with the end-user comprises primary data objects stored in primary storage and secondary copy data objects stored in secondary storage;

based on the determination that the total storage usage amount associated with the end-user exceeds the quota amount, allowing the end-user to create more primary data objects in the primary storage while blocking future backups thereof, wherein the blocking overrides one or more storage policies for backing up primary data objects to secondary storage;
receiving a request to delete from the information management system a selection of data that is associated with the end-user, wherein the selection comprises one or more of: a selected primary data object and a selected secondary copy data object;
based on determining that the selected primary data object is under legal hold: generating a secondary copy of the selected primary data object, storing the generated secondary copy to secondary storage, retaining the generated secondary copy under legal hold in the secondary storage, deleting the selected primary data object after generating the secondary copy thereof, and removing an amount of storage used by the generated secondary copy from the total storage usage amount associated with the end-user; and
based on determining that the selected secondary copy data object is under legal hold: retaining the selected secondary copy data object in the secondary storage, and removing an amount of storage used by the selected secondary copy data object from the total storage usage amount associated with the end-user.

US Pat. No. 11,113,153

METHOD AND SYSTEM FOR SHARING PRE-CALCULATED FINGERPRINTS AND DATA CHUNKS AMONGST STORAGE SYSTEMS ON A CLOUD LOCAL AREA NETWORK

EMC IP Holding Company LL...


9. A non-transitory computer readable medium (CRM) comprising computer readable program code, which when executed by a computer processor, enables the computer processor to perform a method for implementing data chunk transfer by a Protection Storage System (PSS), the method comprising:performing an optimization protocol by maintaining a fingerprint hit probability (FHP), for each PSS of a set of PSSs to obtain a set of FHPs, wherein the FHP is a ratio of a number of query responses indicating a fingerprint match to a number of query responses indicating no fingerprint match;
selecting, based on the FHPs, a first subset of PSSs of the set of PSSs;
receiving, by a first protection storage system (PSS), a first backup request from a client, wherein the first backup request comprises a first fingerprint, wherein the first fingerprint is a digital signature that uniquely identifies a first data chunk, wherein the client and the first PSS are connected via a wide area network (WAN);
generating a first fingerprint query comprising the first fingerprint;
transmitting the first fingerprint query to the first subset of PSSs, wherein the first PSS and the first subset of PSSs are connected via a local area network (LAN);
obtaining, in response to the first fingerprint query and from a second PSS of the first subset of PSSs, the first data chunk; and
updating, in response to obtaining the first data chunk from the second PSS, a fingerprint hit probability (FHP) associated with the second PSS,
wherein the performing of the optimization protocol further comprises:receiving a set of backup requests from the client, wherein a cardinality of the set of backup requests meets a predefined request count criterion;
maintaining, based on the set of backup requests, the FHP for each PSS of the set of PSSs to obtain the set of FHPs; and
assessing each FHP of the set of FHPs to select the first subset of PSSs.
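
A short Python sketch of the optimization protocol around fingerprint queries: each peer protection storage system accumulates a fingerprint hit probability, and only the best-scoring subset receives subsequent queries. The subset size and the exact ratio bookkeeping are assumptions.

    class FingerprintRouter:
        def __init__(self, peers, subset_size=2):
            self.stats = {p: {"hits": 0, "misses": 0} for p in peers}
            self.subset_size = subset_size

        def record_response(self, peer, matched: bool):
            self.stats[peer]["hits" if matched else "misses"] += 1

        def fhp(self, peer):
            s = self.stats[peer]
            return s["hits"] / max(s["misses"], 1)       # ratio of match to no-match responses

        def query_targets(self):
            # Select the subset of PSSs with the highest fingerprint hit probabilities.
            return sorted(self.stats, key=self.fhp, reverse=True)[:self.subset_size]

    router = FingerprintRouter(["pss-1", "pss-2", "pss-3"])
    for peer, matched in [("pss-1", True), ("pss-2", False), ("pss-3", True), ("pss-3", True)]:
        router.record_response(peer, matched)
    print(router.query_targets())                        # highest-FHP peers get the next query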


US Pat. No. 11,113,152

SYSTEMS AND METHODS FOR MANAGING FILE BACKUP

NortonLifeLock Inc., Tem...


1. A computer-implemented method for managing file backup, at least a portion of the method being performed by a computing device comprising at least one processor, the method comprising:detecting, by the computing device, an attempt to upload a file to a backup storage;
calculating a degree of difference indicating changes between the file and a previous version of the file on the backup storage;
comparing, by the computing device, a list of acceptable software applications for the file with a list of software applications that have made the changes to the file, wherein the computing device tracks each software application that writes to the file by adding previously undetected software applications to the list of software applications that have made the changes to the file;
calculating a change score for the file that indicates a likelihood of corruption in the file, wherein the change score is based on:the degree of difference; and
the comparison of the list of software applications that have made the changes with the list of acceptable software applications; and

applying, based on the likelihood of corruption in the file indicated by the change score, a backup policy to the attempt to upload the file, wherein the backup policy sets a time-to-live period for the previous version of the file.
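
A hedged Python sketch of the change-score idea: combine the degree of difference with a penalty for writers missing from the acceptable-application list, then map the score to a time-to-live for the previous version. Weights, thresholds, and TTL values are illustrative.

    def change_score(degree_of_difference, writers, acceptable_apps):
        unknown = [w for w in writers if w not in acceptable_apps]
        unknown_penalty = len(unknown) / max(len(writers), 1)
        return 0.7 * degree_of_difference + 0.3 * unknown_penalty   # 0..1 likelihood of corruption

    def backup_policy(score):
        # Higher likelihood of corruption -> keep the previous version longer.
        return {"previous_version_ttl_days": 90 if score > 0.6 else 14}

    score = change_score(degree_of_difference=0.8,
                         writers=["word.exe", "unknown_proc.exe"],
                         acceptable_apps={"word.exe"})
    print(round(score, 2), backup_policy(score))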

US Pat. No. 11,113,151

CLUSTER DIAGNOSTICS DATA FOR DISTRIBUTED JOB EXECUTION

Snowflake Inc., Bozeman,...


1. A method comprising:receiving, by a computing cluster, an application for processing by nodes of the computing cluster, the nodes including a driver node and a plurality of execution nodes for processing of tasks of the application;
distributing, by the driver node, the tasks of the application for processing by the plurality of execution nodes;
processing the tasks by the plurality of execution nodes;
identifying, by the computing cluster, one or more errors in the application received by the computing cluster;
in response to the one or more errors, receiving, by the driver node, telemetry metadata for access to a telemetry network service of a distributed database that provides data to the computing cluster through a database connector, the driver node receiving the telemetry metadata through the database connector of the distributed database;
distributing, by the driver node, the telemetry metadata to the plurality of execution nodes for re-processing;
re-processing the tasks of the application by the plurality of execution nodes; and
transmitting, by the plurality of execution nodes, log data to the telemetry network service using one or more database connectors of the plurality of execution nodes, the log data being generated by the plurality of execution nodes while re-processing the tasks of the application.

US Pat. No. 11,113,150

DISTRIBUTING DATA ON DISTRIBUTED STORAGE SYSTEMS

Google LLC, Mountain Vie...


1. A method of distributing data in a distributed storage system, the method comprising:selecting, by the data processing hardware, a first set of storage devices as storage destinations from a plurality of storage devices of the distributed storage system for storing chunks of stripe replicas of stripes divided from a file, the first set of storage devices being in an active state when the distributed storage system is affected by a power maintenance event or a network maintenance event;
determining, by the data processing hardware, whether the file is accessible from the selected first set of storage devices when the distributed storage system is affected by a power maintenance event or a network maintenance event; and
when the selected first set of storage devices is incapable of maintaining accessibility of the file when the distributed storage system is affected by a power maintenance event or a network maintenance event, selecting, by the data processing hardware, a second set of storage devices as alternative storage destinations from the plurality of storage devices of the distributed storage system for storing the chunks of the stripe replicas of the stripes divided from the file.

US Pat. No. 11,113,149

STORAGE DEVICE FOR PROCESSING CORRUPTED METADATA AND METHOD OF OPERATING THE SAME

SAMSUNG ELECTRONICS CO., ...


1. A method of operating a storage device including a non-volatile memory and a volatile memory, the method comprising:receiving a first logical address from a host, the first logical address being associated with first metadata stored in the volatile memory, and the first metadata indicating the first logical address corresponds to a first physical address of the non-volatile memory;
upon determining that the first metadata is corrupted and uncorrectable, changing the first metadata to indicate that the first logical address corresponds to a second physical address that does not exist in the non-volatile memory;
providing a first error message to the host indicating that an operation cannot be performed on data associated with the first logical address when the first metadata has been changed; and
after the providing of the first error message, receiving a second logical address from the host and performing an operation of accessing the non-volatile memory based on second metadata stored in the volatile memory and associated with the second logical address.

US Pat. No. 11,113,148

METHODS AND SYSTEMS FOR METADATA TAG INHERITANCE FOR DATA BACKUP

International Business Ma...


1. A computer-executed method comprising:maintaining a plurality of data storage systems for storing electronic data, each of the plurality of data storage systems having one or more processors having circuits and logic for processing information and performing logic operations;
maintaining an external metadata management system separate from and in communication with the plurality of data storage systems, wherein the metadata management system has one or more processors having circuitry and logic for processing information and performing logic operations;
operating the metadata management system to collect and store metadata corresponding to all the electronic data residing on the plurality of data storage systems as a plurality of metadata entries in the metadata management system, wherein each of the plurality of metadata entries in the metadata management system comprises metadata, wherein operating the metadata management system comprises applying one or more custom metadata tags by the metadata management system to each metadata entry in the metadata system, wherein applying the one or more custom metadata tags to each metadata entry comprises analyzing the metadata in a respective metadata entry to derive the one or more custom metadata tags to apply to the respective metadata entry, and the one or more custom metadata tags are associated with and relate to the metadata in the metadata entry;
detecting execution of a backup data operation command on a data set of the electronic data residing in at least one data storage system of the plurality of data storage systems that causes creation of a backup copy of the data set in the at least one data storage system in a destination back-up data storage system, wherein the destination back-up data storage system has one or more processors having circuitry and logic for processing information and performing logic operations; and
in response to detecting the execution of the backup data operation command, creating, by the metadata management system, a new metadata entry in the metadata management system corresponding to the execution of the backup data operation command, wherein the new metadata entry includes applying at least one custom metadata tag to the metadata entry before execution of the backup data operation command.

US Pat. No. 11,113,147

SYSTEM AND METHOD FOR DETECTION OF, PREVENTION OF, AND RECOVERY FROM SOFTWARE EXECUTION FAILURE

Rovi Guides, Inc., San J...


1. A method for handling software execution failures on a user device, the method comprising:monitoring execution of a software process on the user device;

detecting a reboot of the user device;
determining whether the reboot was caused by a prerequisite process not having been completed; and
in response to determining that the cause of the reboot is a prerequisite process not having been completed, slowing a speed at which the software process is executed to allow the prerequisite process to complete prior to execution of the software process.

US Pat. No. 11,113,146

CHUNK SEGMENT RECOVERY VIA HIERARCHICAL ERASURE CODING IN A GEOGRAPHICALLY DIVERSE DATA STORAGE SYSTEM

EMC IP HOLDING COMPANY LL...


1. A system, comprising:a processor; and
a memory that stores executable instructions that, when executed by the processor, facilitate performance of operations, comprising:in response to determining that a first segment has become inaccessible, determining an index of the first segment, wherein the first segment is comprised in a first chunk of a first zone of zones of a geographically diverse data storage system, and wherein the zones of the geographically diverse data storage system employ hierarchical erasure coding;
determining a second chunk of a second zone of the zones based on the hierarchical erasure coding, wherein the second chunk and the second zone are determined to be relevant to a recovery of the first segment;
determining a second segment of the second relevant chunk based on the index and the hierarchical erasure coding, wherein the second segment is determined to be relevant to the recovery of the first segment;
generating a first recovered segment based at least in part on the second segment and the hierarchical erasure coding, wherein the first recovered segment represents the same information as was represented on the first segment prior to the first segment becoming inaccessible.


US Pat. No. 11,113,145

MEMORY DEVICE, SEMICONDUCTOR DEVICE, AND SEMICONDUCTOR SYSTEM

SK hynix Inc., Icheon (K...


1. A semiconductor system comprising:a memory device including a plurality of pages and configured to store a page write count for each of the plurality of pages; and
a semiconductor device configured to change address mapping between an input address and a physical address based on a block write count of a specific block corresponding to the input address and the page write counts of pages included in the specific block,
wherein each of the plurality of pages includes:
a data region configured to store data;
an error correction code (ECC) region configured to store ECC data that is used to detect and correct one or more errors occurring in the data stored in the data region; and
a metadata region configured to store a page write count of a corresponding page, and
wherein the semiconductor device generates block section information indicating which one of predetermined sections corresponds to the block write count of the specific block and changes the address mapping based on the block section information, and
wherein the metadata region is accessed independently from the data region and the ECC region by the semiconductor device.

US Pat. No. 11,113,144

METHOD AND SYSTEM FOR PREDICTING AND MITIGATING FAILURES IN VDI SYSTEM

Wipro Limited, Bangalore...


1. A method for predicting and mitigating failures in Virtual Desktop Infrastructure (VDI) systems, the method comprising:receiving, by a computing system, a plurality of system logs from a plurality of VDI systems;
segregating, by the computing system, one or more error logs from the plurality of system logs;
generating, by the computing system, a prediction score for each of the plurality of VDI systems based on respective one or more error logs, using a machine learning model, wherein the prediction score of a VDI system among the plurality of VDI systems is indicative of a possible failure in the VDI system, wherein the machine learning model is trained by performing steps of:receiving a plurality of feature vectors associated with a plurality of training error logs and one or more rules; and
determining a failure in the plurality of VDI systems and a value associated with the plurality of training error logs, based on the plurality of feature vectors and the one or more rules;determining a correlation between the one or more rules, the determined failure and the plurality of feature vectors to train the machine learning model;

wherein the trained machine learning model is used to predict failures in the plurality of VDI systems in real-time;

predicting, by the computing system, a failure in at least one VDI system from the plurality of VDI systems based on the prediction score of the at least one VDI system and the respective one or more error logs using the trained machine learning model; and
determining, by the computing system, at least one response action associated with the predicted failure, thereby mitigating the failure in the plurality of VDI systems.

US Pat. No. 11,113,143

SYSTEM AND METHOD OF DETERMINING COMPATIBLE MODULES

AO Kaspersky Lab, Moscow...


1. A computer-implemented method for detecting system anomalies, comprising:receiving system parameters specifying functionality of a first computing system;
querying a state model using the received system parameters to detect an anomaly within the first computing system; and
responsive to detecting the anomaly in the first computing system based on the state model:determining a recovery method based on a recovery-method model and information about the detected anomaly, wherein the determined recovery method is configured to ensure requirements of the first computing system are met,
selecting, from a tool database, a third-party, system-compatible tool configured to implement the determined recovery method, and
implementing the determined recovery method in response to installation of the selected system-compatible tool.


US Pat. No. 11,113,142

EARLY RISK DETECTION AND MANAGEMENT IN A SOFTWARE-DEFINED DATA CENTER

VMware, Inc., Palo Alto,...


1. A non-transitory machine-readable medium storing instructions executable by a processing resource to cause a computing system to perform operations comprising:receive historical logs associated with a log source of a software-defined data center (SDDC);
parse the historical logs to determine an association rule, wherein the association rule relates a particular risk to the SDDC to a sequence of operations in the historical logs;
monitor logs associated with the log source;
determine a potential risk based on an occurrence of the sequence of operations in the logs; and
provide a notification responsive to a determination that a probability associated with the potential risk exceeds a probability threshold.

US Pat. No. 11,113,141

MESSAGE INPUT/OUTPUT DEVICE, METHOD, AND RECORDING MEDIUM

NEC CORPORATION, Tokyo (...


1. A message input/output device comprising one or more memories storing instructions and one or more processors configured to execute the instructions to:receive a reception message; and
output, when a reception time of the reception message falls within a first predetermined time from reception of a related message related to the reception message, the reception message when a next of the related message is not received within a second predetermined time exceeding the first predetermined time from the reception time.

US Pat. No. 11,113,140

DETECTING ERROR IN EXECUTING COMPUTATION GRAPH ON HETEROGENEOUS COMPUTING DEVICES

Alibaba Group Holding Lim...


1. A method for detecting error in executing a computation graph on heterogeneous computing devices, the method comprising:receiving a first reference value from a reference computing device included in the heterogeneous computing devices, the first reference value being an execution result by the reference computing device for a first node of the computation graph representing a machine learning model;
receiving a first target value from a target computing device included in the heterogeneous computing devices as an execution result by the target computing device for the first node;
comparing the first reference value and the first target value; and
determining whether the first target value is in error based on the comparison of the first reference value and the first target value.
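
A minimal Python/NumPy sketch of the per-node check: execute the same computation-graph node on a reference device and a target device (emulated here by float64 versus float32 arithmetic) and decide whether the target value is in error by a tolerance comparison. The tolerance and the toy node are assumptions.

    import numpy as np

    def node_matmul(x, w):
        return x @ w                                  # one node of the computation graph

    rng = np.random.default_rng(1)
    x, w = rng.normal(size=(4, 8)), rng.normal(size=(8, 2))

    reference_value = node_matmul(x.astype(np.float64), w.astype(np.float64))   # reference device
    target_value = node_matmul(x.astype(np.float32), w.astype(np.float32))      # target device

    in_error = not np.allclose(reference_value, target_value, rtol=1e-3, atol=1e-5)
    print("first node in error:", in_error)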

US Pat. No. 11,113,139

PROACTIVE OUTAGE DETECTION BASED ON USER-REPORTED ISSUES

Espressive, Inc., Santa ...


1. One or more non-transitory computer-readable media storing instructions for detecting outages in one or more objects, wherein each object is accessible by one or more users of a customer, wherein the instructions, when executed by one or more computing devices, cause at least one of the one or more computing devices to:identify a reported incident as a candidate outage incident based upon an input received from a first user of a customer concerning the reported incident;
determine whether the candidate outage incident relates to at least a threshold number of prior candidate outage incidents reported by other users; and
if so, associate the candidate outage incident with the prior candidate outage incidents, identify an outage relating to the candidate outage incident and the associated prior candidate outage incidents, and provide information for presentation to the first user and the other users to inform them of resolution of the candidate outage incident and the prior candidate outage incidents, respectively,
wherein the one or more objects is one or more physical objects or software, and at least one of the candidate outage incidents relating to the identified outage is resolved.

US Pat. No. 11,113,138

SYSTEM AND METHOD FOR ANALYZING AND RESPONDING TO ERRORS WITHIN A LOG FILE

Carrier Corporation, Pal...


1. A method for analyzing an error log file comprising:receiving an error log file at a processor, the error log file including a plurality of distinct entries;
determining a token pattern of each entry by tokenizing each of the distinct entries, wherein tokenizing each of the distinct entries comprises converting each series of sequential alphanumeric characters in each error log entry into an alphanumeric token, converting each series of sequential punctuation characters in each error log entry into a punctuation token, and each series of spaces in each error log entry into a space token;
further comprising converting known sub-patterns of tokens in each tokenized entry into a corresponding secondary token and
grouping the plurality of distinct entries into groups having similar token patterns.
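
A Python sketch of the tokenization and grouping steps: each entry is reduced to a pattern of alphanumeric, punctuation, and space tokens, a known sub-pattern is collapsed into a secondary token, and entries with identical patterns are grouped. The secondary-token rule shown is an illustrative assumption.

    import re
    from collections import defaultdict

    def tokenize(entry):
        tokens = []
        for run in re.findall(r"[A-Za-z0-9]+|[^\sA-Za-z0-9]+|\s+", entry):
            if run.isspace():
                tokens.append("S")          # space token
            elif run[0].isalnum():
                tokens.append("A")          # alphanumeric token
            else:
                tokens.append("P")          # punctuation token
        pattern = "".join(tokens)
        # Collapse a known sub-pattern (a date-like A P A P A run) into a secondary token.
        return pattern.replace("APAPA", "D")

    def group_entries(entries):
        groups = defaultdict(list)
        for entry in entries:
            groups[tokenize(entry)].append(entry)
        return groups

    logs = ["2021-09-07 ERR: disk timeout", "2021-09-08 ERR: fan failure"]
    for pattern, members in group_entries(logs).items():
        print(pattern, len(members))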

US Pat. No. 11,113,137

ERROR INCIDENT FINGERPRINTING WITH UNIQUE STATIC IDENTIFIERS

SAP SE, Walldorf (DE)


1. A computer-implemented method comprising:upon encountering an error during runtime of an application, collecting an error context of the error, wherein the error context comprises a unique static identifier automatically generated at development time of the application responsive to a developer command to create a new application component, and the unique static identifier identifies an application component definition of an application component in which the error occurred;
based at least on the unique static identifier, generating an error incident fingerprint of the error;
generating an error incident record comprising the error incident fingerprint of the error;
finding a match between the generated error incident record and one or more other error incident records based on the error incident fingerprint generated from the unique static identifier; and
storing an association between the generated error incident record and the one or more matching other error incident records.
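
A small Python sketch of the fingerprinting flow: hash the component's unique static identifier together with selected error context, then associate incidents that share the fingerprint. The identifier format and context fields are assumptions.

    import hashlib

    def error_fingerprint(static_component_id: str, error_code: str) -> str:
        return hashlib.sha1(f"{static_component_id}|{error_code}".encode()).hexdigest()

    incident_store = {}    # fingerprint -> list of incident records

    def record_incident(static_component_id, error_code, details):
        fp = error_fingerprint(static_component_id, error_code)
        matches = incident_store.setdefault(fp, [])
        record = {"fingerprint": fp, "details": details, "linked_to": len(matches)}
        matches.append(record)          # association with previously matching incidents
        return record

    record_incident("cmp-9f3a21", "NULL_DEREF", "crash in order view")
    print(record_incident("cmp-9f3a21", "NULL_DEREF", "same crash, different user"))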

US Pat. No. 11,113,136

PROCESSING SYSTEM, RELATED INTEGRATED CIRCUIT AND METHOD

STMicroelectronics Applic...


1. A processing system, comprising:a plurality of circuits configured to produce a plurality of error signals, each of the plurality of circuits being configured to generate a respective error signal of the plurality of error signals;
a first error pad;
a second error pad; and
a fault collection circuit configured to receive, at an input of the fault collection circuit, the plurality of error signals and configured to generate a combined error signal for the first error pad and a combined error signal for the second error pad, wherein the fault collection circuit comprises:a first combinational logic circuit configured to generate the combined error signal for the first error pad and the combined error signal for the second error pad, the combined error signal for the first error pad comprising a first selection of error signals and the combined error signal for the second error pad comprising a second selection of error signals, the first selection of error signals and the second selection of error signals being selected as a function of a first set of configuring bits.


US Pat. No. 11,113,135

MEMORY DEVICE AND METHOD FOR HANDLING INTERRUPTS THEREOF

WINBOND ELECTRONICS CORP....


1. A memory device, comprising:a memory cell array;
a monitoring circuit, configured to detect one or more event parameters of the memory cell array, wherein the one or more event parameters correspond to one or more interrupt events of the memory cell array;
an event-checking circuit, configured to determine whether to assert an interrupt signal according to the one or more event parameters detected by the monitoring circuit; and
a control logic, configured to control the memory cell array according to a command from a processor, wherein the control logic comprises a mode register that pre-records a plurality of operation modes of the memory cell array and a status of the interrupt event corresponding to each event parameter,
wherein in response to the event-checking circuit determining to assert the interrupt signal, the event-checking circuit directly transmits the interrupt signal to the processor via a physical interrupt pin of the memory device, so that the processor handles the one or more interrupt events of the memory device according to the interrupt signal,
wherein in response to the processor having handled each interrupt event sequentially, the processor changes the status of each interrupt event in the mode register to a normal status.

US Pat. No. 11,113,134

COMPUTER SYSTEM, COMMUNICATIONS SYSTEM, CONTROL METHOD BY COMPUTER SYSTEM, AND PROGRAM

NEC Platforms, Ltd., Kan...


1. A computer system comprising:an active system service processor,
a standby system service processor having a memory, and
a unit,
wherein the active system service processor includes
a first controller configured to acquire log information indicating a log of the unit, cause the memory to store the log information, and output a read instruction for reading the log information to the standby system service processor according to an operation of instructing to read the log information, and
the standby system service processor includes
a second controller configured to read the log information from the memory according to the read instruction, and execute processing related to the read log information.

US Pat. No. 11,113,133

CROSS-COMPONENT HEALTH MONITORING AND IMPROVED REPAIR FOR SELF-HEALING PLATFORMS

Intel Corporation, Santa...


1. A computing system comprising:a power supply to provide power to the computing system;
a first firmware component;
a second firmware component communicatively coupled to the first firmware component; and
a cross-component health monitor apparatus communicatively coupled to the first firmware component and the second firmware component, the cross-component health monitor apparatus including a substrate and logic coupled to the substrate, wherein the logic is to:
detect a successful boot of the first firmware component;
receive a signal from the second firmware component; and
detect an incompatibility of the first firmware component with respect to the second firmware component based on the signal.

US Pat. No. 11,113,132

LOCALIZATION OF POTENTIAL ISSUES TO OBJECTS

Hewlett Packard Enterpris...


1. A non-transitory machine-readable storage medium comprising instructions that upon execution cause a system to:based on comparing measurement data acquired at different hierarchical levels of a computing environment, identify a potential issue and a particular hierarchical level of the different hierarchical levels as a potential cause of the potential issue, the different hierarchical levels comprising a physical machine level and a program level comprising a program to execute on one or more hosts of the physical machine level; and
based on measurement data acquired for objects within a particular hierarchical level of the different hierarchical levels:compute respective localization scores for a plurality of subsets of the objects within the particular hierarchical level;
rank the plurality of subsets according to the respective localization scores;
determine that the localization scores for the plurality of subsets of the objects each violate a criterion;
based on the determination, determine that the potential issue is localized to the plurality of subsets of the objects within the particular hierarchical level; and
trigger a remediation action to address the potential issue responsive to the ranking of the plurality of subsets of the objects.


US Pat. No. 11,113,131

METHOD FOR COLLECTING PERFORMANCE DATA IN A COMPUTER SYSTEM

EMC IP Holding Company LL...


1. A computer-implemented method, comprising:configuring an event collection system to capture event data for a plurality of events of a computer system, wherein the event collection system is configured to run an event collection process configured to filter event data as it is collected, in accordance with a predetermined setting, wherein the predetermined setting is configured to determine the event data that is captured by the event collection process, a duration of the event collection process, and to determine how captured event data is stored in an event storage structure of the event collection system; and
configuring the event collection process to run in cooperation with the event collection system, the event collection process comprising:capturing first information relating to a first event of the computer system, the first information comprising information relating to a first respective plurality of event parameters;
generating a first event key during the event collection process, wherein the first event key is generated based on at least some of the first plurality of event parameters;
storing the first event key and the captured first information, as part of a first event storage in the event storage structure, wherein the first event key is configured to index the first event and the captured first information in the event storage structure;
capturing second information relating to a second event of the computer system, the second information comprising information relating to a second respective plurality of event parameters;
generating a second event key during the event collection process, wherein the second event key is generated based on the second plurality of event parameters;
analyzing the first event key and the second event key, during the event collection process, to determine whether the first event key matches the second event key;
if the second event key does not match the first event key, then storing the second event key and the captured second information as a second event in the event storage structure, wherein the second event key is configured to index the second event and the captured second information in the event storage structure, and wherein the second event is distinct from the first event, in the event storage structure; and
if the second event key matches the first event key, then dynamically updating the first event stored in the event collection system, to store the second event information as part of the first stored event, wherein storing the second event information as part of the first stored event is configured to minimize any additional storage space needed, in the event storage structure, for second event storage and wherein dynamically updating the first event takes place during the event collection process.
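
For illustration, a minimal Python sketch of key-based event collection as described in the claim above, assuming a hypothetical key derived from the non-timestamp parameters: events whose keys match are merged into the already-stored entry instead of consuming new storage. The key derivation and merge policy are assumptions.

    import hashlib

    event_store = {}  # event key -> {"info": first captured info, "occurrences": [...]}

    def event_key(params: dict) -> str:
        # Derive a key from a chosen subset of event parameters.
        material = "|".join(f"{k}={params[k]}" for k in sorted(params) if k != "timestamp")
        return hashlib.sha256(material.encode()).hexdigest()

    def collect(params: dict) -> None:
        key = event_key(params)
        if key in event_store:
            # Key matches an earlier event: update it in place, minimizing new storage.
            event_store[key]["occurrences"].append(params["timestamp"])
        else:
            # Distinct event: store key + captured information, indexed by the key.
            event_store[key] = {"info": params, "occurrences": [params["timestamp"]]}

    collect({"type": "io_error", "device": "sda", "timestamp": 1})
    collect({"type": "io_error", "device": "sda", "timestamp": 2})  # merged with the first
    print(len(event_store))  # 1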


US Pat. No. 11,113,130

ELECTRONIC APPARATUS AND CONTROL METHOD THEREOF

SAMSUNG ELECTRONICS CO., ...


14. A server comprising:a communication circuitry configured to communicate with a plurality of terminals; and
a processor configured to:control the communication circuitry to receive error information about one or more errors that occurred in the plurality of terminals from the plurality of terminals;
select an error which occurred most frequently among the one or more errors based on the received error information,
select one or more terminals among the plurality of terminals which transmitted the error information about the selected error,
control the communication circuitry to transmit policy information indicating a data type to be collected to debug the selected error to the selected one or more terminals, and
collect data corresponding to the data type from the selected one or more terminals.
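
A minimal Python sketch of the server-side selection in the claim above, assuming a hypothetical report format: pick the most frequently reported error, select the terminals that reported it, and send them a policy naming the data type to collect.

    from collections import Counter

    reports = [
        {"terminal": "t1", "error": "E42"},
        {"terminal": "t2", "error": "E42"},
        {"terminal": "t3", "error": "E7"},
    ]

    most_common_error, _ = Counter(r["error"] for r in reports).most_common(1)[0]
    selected = {r["terminal"] for r in reports if r["error"] == most_common_error}
    policy = {"error": most_common_error, "data_type": "debug_log"}  # data type to collect

    for terminal in selected:
        print(f"send policy {policy} to {terminal}")  # then collect that data type back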


US Pat. No. 11,113,129

REAL TIME BLOCK FAILURE ANALYSIS FOR A MEMORY SUB-SYSTEM

Micron Technology, Inc., ...


1. A system comprising:a memory array including a plurality of memory cells; and
a processing device coupled to the memory array, the processing device configured to:in response to detection of an error associated with a subset of the plurality of memory cells, sense a state associated with each memory cell of the subset of the plurality of memory cells; and
store state distribution information in a persistent memory, the state distribution information comprising the sensed state associated with each memory cell of the subset,

wherein sensing the state associated with each memory cell comprises stepping a corresponding word line with a plurality of sensing voltages to determine at which sensing voltage a corresponding bitline output changes, and
wherein storing the state distribution information comprises storing the sensed state associated with each memory cell of the subset in a corresponding disjoint category of a histogram, wherein the corresponding disjoint category of the histogram corresponds to the determined sensing voltage.

US Pat. No. 11,113,128

CIRCULAR QUEUE FOR MICROKERNEL OPERATING SYSTEM

Facebook Technologies, LL...


1. A method comprising, by a kernel of an operating system executing on a computing device:receiving a request to store a message to communicate from a producer process to a consumer process using a circular buffer, wherein the circular buffer comprises memory segments;
determining an ownership of a first memory segment of the circular buffer, based on a corresponding first ownership segment of an ownership array for the circular buffer, wherein the ownership array comprises a corresponding ownership segment for each memory segment of the circular buffer; and
determining, based on the determined ownership of the first memory segment, that the first segment is available to the producer process;
determining that criteria are satisfied after a comparison of a current iteration around the circular buffer taken by the producer process to a number of messages previously stored in the first memory segment, wherein the satisfied criteria indicate that the messages previously stored in the first memory segment have been consumed;
responsive to determining that the first memory segment is available to the producer process and that the criteria are satisfied:storing the message in the first memory segment; and

after storing the message in the first memory segment, changing a value stored in the first ownership segment to indicate the first memory segment is owned by the consumer process to allow the consumer process to consume the stored message.
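
For illustration, a minimal single-producer/single-consumer circular buffer in Python with a per-segment ownership array, loosely following the claim above; the availability check stands in for the claim's iteration/consumption criteria, and all names are assumptions.

    PRODUCER, CONSUMER = 0, 1

    class CircularQueue:
        def __init__(self, size):
            self.segments = [None] * size
            self.ownership = [PRODUCER] * size   # one ownership slot per memory segment
            self.head = 0                        # producer position
            self.tail = 0                        # consumer position

        def produce(self, message):
            i = self.head % len(self.segments)
            if self.ownership[i] != PRODUCER:
                return False                     # segment not available to the producer
            self.segments[i] = message
            self.ownership[i] = CONSUMER         # hand the segment to the consumer
            self.head += 1
            return True

        def consume(self):
            i = self.tail % len(self.segments)
            if self.ownership[i] != CONSUMER:
                return None                      # nothing to consume yet
            message = self.segments[i]
            self.ownership[i] = PRODUCER         # return the segment to the producer
            self.tail += 1
            return message

    q = CircularQueue(4)
    q.produce("hello")
    print(q.consume())  # hello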

US Pat. No. 11,113,127

COMMAND LINE OUTPUT REDIRECTION

Micron Technology, Inc., ...


1. A computer-implemented method, comprising:receiving, in a computer, a request to run a command line utility and a routine of an application unable to receive command line outputs via a command line interface of the command line utility, wherein the command line utility is a utility executable in an operating system from a command line prompt;
executing, by the computer in response to the request, the command line utility and the routine;
providing, by the computer according to the request, an output of the command line utility as an input of the routine via a piping function of the operating system of the computer;
receiving, in the routine running in the computer, the output received from the command line utility via the piping function of the operating system of the computer;
receiving, in the routine running in the computer, a storage location identifier associated with the output received, wherein the storage location identifier is configured to selectively identify between a first storage location of a shared memory maintained by the operating system, and a second storage location of a system registry database managed by the operating system, and wherein the first storage location is located outside of the system registry database;
determining a storage location identified by the storage location identifier; storing, by the routine running in the computer, the output received from the command line utility in the determined storage location; and
updating a value associated with the storage location identifier to indicate an amount of data stored.
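
A hedged Python sketch of the flow in the claim above: a command line utility's output is fed to an application routine, which stores it at a location chosen by a storage location identifier. The two-location scheme here (a file path versus a registry-like in-memory store) is only a stand-in for the claim's shared-memory / system-registry split.

    import subprocess

    registry_like_store = {}                      # stand-in for a registry database

    def routine(output: bytes, storage_location_id: str) -> int:
        if storage_location_id.startswith("file:"):
            path = storage_location_id[len("file:"):]
            with open(path, "wb") as f:           # first storage location (outside the registry)
                f.write(output)
        else:
            registry_like_store[storage_location_id] = output   # second storage location
        return len(output)                        # amount of data stored

    completed = subprocess.run(["echo", "hello"], capture_output=True)  # command line utility
    stored = routine(completed.stdout, "file:/tmp/cli_output.txt")
    print(f"stored {stored} bytes")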

US Pat. No. 11,113,126

VERIFYING TRANSFER OF DETECTED SENSITIVE DATA

International Business Ma...


1. A method for verified data transfer, the method comprising:determining a first data type of a copy field including a copied data;
determining a second data type of a paste field intended for receiving the copied data;
identifying a data type mismatch between the first determined data type of the copy field including the copied data and the second determined data type of the paste field; and
preventing an input of the copied data into the paste field responsive to the identified data type mismatch between the first determined data type of the copy field including the copied data and the second determined data type of the paste field;
generating a textual alert including a message indicating the identified data type mismatch between the first determined data type of the copy field and the second determined data type of the paste field; and
prompting, in the generated textual alert, a selection between a first confirmation and a second confirmation, wherein the selection of the first confirmation aborts the input of the copied data into the paste field and wherein the selection of the second confirmation continues the input of the copied data into the paste field.

US Pat. No. 11,113,125

CONTROL DEVICE, SENSOR MANAGEMENT DEVICE, CONTROL METHOD, SENSOR MANAGEMENT METHOD AND PROGRAM

OMRON Corporation, Kyoto...


1. A control device comprising a processor configured with a program to perform operations comprising:operation as a first acquiring unit configured to acquire sensor-side metadata comprising information relating to sensors configured to output sensing data;
operation as a second acquiring unit configured to acquire application-side metadata comprising information relating to an application that uses the sensing data;
operation as a comparison unit configured to execute matching between the sensor-side metadata and the application-side metadata, and extract a sensor that can provide sensing data that meets a request of the application; and
operation as an instruction unit configured to transmit, to a sensor management device that manages the sensor extracted by the comparison unit, a data flow control command comprising information specifying the sensor extracted by the comparison unit, and the application;
wherein the sensor-side metadata comprises information that designates part of the metadata as dynamic data,
the application-side metadata comprises a condition that part of the metadata is required to be dynamic data, and a dynamic handling condition, and
the data flow control command comprises the dynamic handling condition and, in response to the dynamic data output by the sensor extracted by the comparison unit or dynamic data output by another sensor attached to the sensor extracted by the comparison unit satisfying the dynamic handling condition, an instruction for the corresponding sensor to transmit the sensing data to the application.

US Pat. No. 11,113,124

SYSTEMS AND METHODS FOR QUICKLY SEARCHING DATASETS BY INDEXING SYNTHETIC DATA GENERATING MODELS

Capital One Services, LLC...


1. A system for searching datasets, comprising:one or more memory units storing instructions; and
one or more processors that execute the instructions to perform operations comprising:receiving a test dataset from a client device;
generating, using a data model, test activation function values based on the test dataset, the test activation function values corresponding to nodes of the data model and determining whether the corresponding nodes fire;
processing the test activation function values, the processing comprising implementing at least one of an encoding method, a factorizing method, or a vectorizing method on the test activation function values;
retrieving reference activation function values from a dataset index, the reference activation function values being based on a reference dataset;
generating a similarity metric based on the reference activation function values and the processed test activation function values;
classifying the test dataset based on the similarity metric; and
transmitting, to the client device, information comprising the classification of the test dataset.
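
For illustration only: a small Python sketch of comparing a test dataset's activation values against indexed reference activations with a similarity metric. The cosine similarity, the vector values, and the classification threshold are assumptions used to make the flow concrete, not the patented method.

    import math

    def cosine_similarity(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
        return dot / norm if norm else 0.0

    test_activations = [0.9, 0.1, 0.4]  # produced by running the data model on the test dataset
    dataset_index = {"reference_a": [0.8, 0.2, 0.5], "reference_b": [0.1, 0.9, 0.0]}

    similarities = {name: cosine_similarity(test_activations, ref)
                    for name, ref in dataset_index.items()}
    best = max(similarities, key=similarities.get)
    classification = best if similarities[best] > 0.9 else "no close match"
    print(similarities, classification)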


US Pat. No. 11,113,123

SYSTEM AND METHOD FOR AUTOMATED APPLICATION PROGRAMMING INTERFACE GENERATION

Kleeen Software, Inc., S...


1. A computer-implemented method, comprising:receiving entry files including source code and configuration data, wherein the source code is for a user interface program, wherein the source code includes a plurality of Application Programming Interface (API) calls, and wherein the configuration data includes a definition for each of the plurality of API calls;
generating the user interface program by a compiler configured to compile the source code, wherein the user interface program includes executable code; and
generating skeleton API data based on definitions of the plurality of API calls included in the configuration data, wherein the skeleton API data comprises a set of rules generated by the compiler, wherein the set of rules defines a structure for each of the plurality of API calls used by the user interface program, wherein the structure for each of the plurality of API calls includes syntax and semantics to be used by a middleware program when interfacing with the user interface program.

US Pat. No. 11,113,122

EVENT LOOP DIAGNOSTICS

New Relic, Inc., San Fra...


1. A method, comprising, by a circuitry:receiving, from an agent of a host, event loop data defining suspend and resume times of tasks of an event loop executing on the host;
determining, based on the event loop data, event loop analysis data defining execution times for each of the tasks; and
determining, based on the event loop analysis data, a cause of a delay in a task of the event loop as resulting from at least one of (i) a task delay in execution of the task or (ii) a loop delay in execution of another task of the event loop.
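
A minimal Python sketch of the diagnosis in the claim above, assuming a hypothetical record format of suspend/resume timestamps per task: per-task execution times are derived first, then a delay is attributed either to the task's own execution or to another long-running task on the same loop. The threshold is illustrative.

    event_loop_data = [
        {"task": "render", "resume": 0.0, "suspend": 0.005},
        {"task": "db_query", "resume": 0.005, "suspend": 0.250},   # long-running task
        {"task": "render", "resume": 0.250, "suspend": 0.255},
    ]

    execution_times = {}
    for record in event_loop_data:
        execution_times.setdefault(record["task"], 0.0)
        execution_times[record["task"]] += record["suspend"] - record["resume"]

    THRESHOLD = 0.1  # assumed delay threshold in seconds
    for task, total in execution_times.items():
        if total > THRESHOLD:
            print(f"{task}: task delay ({total:.3f}s of its own execution)")
        else:
            others = [t for t, v in execution_times.items() if t != task and v > THRESHOLD]
            if others:
                print(f"{task}: loop delay caused by {others}")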

US Pat. No. 11,113,121

HETEROGENEOUS AUTO-SCALING BIG-DATA CLUSTERS IN THE CLOUD


1. A method for heterogeneously auto-scaling cloud-based big data clusters to use multiple instance types, comprising:determining a primary instance type, the primary instance type selected from a group of approved instances with varying amounts of diskspace and/or computer processing units (CPUs);
determining provisioning requirements based on characteristics of the primary instance type including associated amount of diskspace and/or CPUs;
assigning a weight of 1.0 to the primary instance type;
identifying other approved instance types and determining an instance weight for each other approved instance type, wherein the determination of instance weight for each other approved instance type is based upon a comparison with the characteristics of the primary instance;
determining if using any other instance types is advantageous in that it would result in reduced costs or faster processing;
if using other instance types is advantageous, determining which other instance types to use, and determining the number of other instance types by using the instance weight to determine corresponding processing power.
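
For illustration, a Python sketch of weighting instance types against a primary type and translating a capacity requirement into counts of each alternative type. The instance catalogue and the max(CPU ratio, disk ratio) weighting rule are assumptions, not the patented policy.

    import math

    catalogue = {
        "m-standard": {"cpus": 4, "disk_gb": 100},   # primary instance type, weight 1.0
        "m-large":    {"cpus": 8, "disk_gb": 200},
        "m-small":    {"cpus": 2, "disk_gb": 50},
    }

    primary = catalogue["m-standard"]
    weights = {name: max(spec["cpus"] / primary["cpus"], spec["disk_gb"] / primary["disk_gb"])
               for name, spec in catalogue.items()}          # primary gets weight 1.0

    required_primary_units = 10                               # provisioning requirement
    for name, weight in weights.items():
        count = math.ceil(required_primary_units / weight)    # equivalent processing power
        print(f"{name}: weight {weight:.2f}, need {count} instances")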

US Pat. No. 11,113,120

INTENT-BASED AUTO SCALING OF VIRTUAL COMPUTING RESOURCES

Amazon Technologies, Inc....


1. A computer-implemented method comprising:receiving a request from a user to create a scalable group of compute instances at a service provider network, the request indicating:a primary scaling configuration used by an auto scaling service of the service provider network to scale the scalable group of compute instances over time in response to changes in load on the scalable group of compute instances, and
a secondary scaling configuration used to scale the scalable group of compute instances when the auto scaling service is unable to scale the scalable group of compute instances according to the primary scaling configuration due to capacity limitations or conflicting requests from other sources;

generating, based on the primary scaling configuration and the secondary scaling configuration, a scaling fulfillment plan used to scale the scalable group of compute instances at one or more points in time in the future according to capacity available to the auto scaling service;
scaling the scalable group of compute instances over time according to the scaling fulfillment plan; and
monitoring a total capacity available to the auto scaling service and a rate at which the auto scaling service can process new scaling requests over time, wherein scaling the scalable group of compute instances over time according to the scaling fulfillment plan is based in part on the total capacity available to the auto scaling service and the rate at which the auto scaling service can process new scaling requests over time.

US Pat. No. 11,113,119

MANAGING COMPUTER RESOURCES

International Business Ma...


1. A computer-automated method of managing resources in applications running on a computer system, the resources being managed in groups, each group having a unique group name which is dynamically resolvable to an address specific to a particular application, the method comprising:receiving a resource placement request to place a resource in a first application having a first address;
processing the resource placement request by assigning a group to the resource and a universally unique resource identifier which combines with the group name to form a unique endpoint for the resource, wherein assigning a group to the resource is based on an amount of memory required attribute of the resource;
receiving a move request to move the resource out of the first application into a second application having a second address; and
acting on the move request for the resource by moving its group from the first application into the second application by reassigning its group name to the second address, thereby moving all resources in that group to the second application.

US Pat. No. 11,113,118

SYSTEM AND METHOD FOR MANAGING NETWORK ACCESS CONTROL PRIVILEGES BASED ON COMMUNICATION CONTEXT AWARENESS

Hewlett Packard Enterpris...


1. A method comprising:detecting, by a centralized network access control server, a collaboration session between at least a first user and a second user;
retrieving, by the centralized network server from a directory server, a first role of the first user, the first role corresponding with privileges including a first quality of service level;
retrieving, by the centralized network access control server from the directory server, a second role of the second user, the second role corresponding with privileges including a second quality of service level;
determining, by the centralized network access control server, that the first quality of service level is greater than the second quality of service level;
assigning, by the centralized network access control server, the second user at least the privileges corresponding to the first role including the first quality of service level; and
conducting the collaboration session between the first user and the second user based on the privileges assigned to the second user that correspond to the first role.

US Pat. No. 11,113,117

CLUSTERING ROUTINES FOR EXTRAPOLATING COMPUTING RESOURCE METRICS

VMWARE, INC., Palo Alto,...


1. A system for extrapolating metrics for computing resources of a computing environment, comprising:at least one computing device;
program instructions stored in memory and executable in the at least one computing device that, when executed by the at least one computing device, direct the at least one computing device to:generate classes of computing resources based at least in part on a time parameter;
for individual ones of the classes, generate clusters of the computing resources using a clustering routine, the clusters being generated such that individual ones of the computing resources in a respective one of the clusters have similar configuration parameters with respect to one another;
determine a number of metrics required to extrapolate the metrics to other ones of the computing resources in a corresponding one of the clusters such that extrapolated metrics generated for the other ones of the computing resources satisfy a predetermined accuracy threshold, the number of metrics determined as a function of a number of the clusters generated;
receive the number of the metrics from a client device, individual ones of the metrics corresponding to one of the computing resources in one of the clusters;
determine a difference between respective ones of the number of the metrics as received and a corresponding metric value stored in a reference library;
determine an average of the difference determined for individual ones of the computing resources; and
update a corresponding one of the extrapolated metrics generated for other ones of the computing resources in the one of the clusters using the individual ones of the metrics and the corresponding metric value stored in the reference library based at least in part on the average of the difference.


US Pat. No. 11,113,116

TASK MAPPING METHOD OF NETWORK-ON-CHIP SEMICONDUCTOR DEVICE

SK hynix Inc., Gyeonggi-...


1. A task mapping method of a network-on-chip (NoC) semiconductor device, which migrates a task of a first node among multiple nodes, the method comprising:checking second nodes to which no tasks are assigned;
computing the number of hops from the first node to the second nodes;
setting candidate nodes to which the task can be migrated, among the second nodes;
analyzing performance overheads of the candidate nodes in ascending order with respect to the number of hops;
selecting a target node having the smallest performance overhead according to the analysis result; and
migrating the task to the target node.
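
A hedged Python sketch of the hop-count-then-overhead selection on a 2D mesh NoC: free nodes are ordered by hop distance from the source node and the candidate with the smallest assumed performance overhead is chosen as the migration target. Coordinates and the overhead model are illustrative assumptions.

    def hops(a, b):
        # Manhattan distance between (x, y) node coordinates on a mesh.
        return abs(a[0] - b[0]) + abs(a[1] - b[1])

    first_node = (0, 0)                                  # node whose task is migrated
    free_nodes = [(1, 0), (2, 2), (0, 3)]                # second nodes with no task assigned
    overhead = {(1, 0): 0.4, (2, 2): 0.1, (0, 3): 0.3}   # assumed performance-overhead model

    candidates = sorted(free_nodes, key=lambda n: hops(first_node, n))
    target = min(candidates, key=lambda n: overhead[n])  # smallest performance overhead
    print(f"migrate task from {first_node} to {target}")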

US Pat. No. 11,113,115

DYNAMIC RESOURCE OPTIMIZATION


1. A method for dynamically allocating a fixed number of CPU resources within a compute platform; the method comprising:

obtaining a first data sample at a first time for a first parameter for a set of workloads running within the compute platform;
obtaining a second data sample at a second time, later than the first time, for the first parameter for the set of workloads;
comparing for each workload within the set of workloads, a value for the first parameter taken at the second time and a value for the first parameter taken at the first time;
based upon a comparison of the value for the first parameter taken at the second time and the value for the first parameter taken at the first time, setting a determination for each workload within the set of workloads whether the workload should be in a Dedicated Class of workloads and assigned to a dedicated CPU resource or
in a Shared Class of workloads that is handled by a set of at least one shared CPU resource, wherein a shared CPU resource may service more than one workload;

based upon the determination for each workload within the set of workloads, mapping each workload within the Dedicated Class of workloads to have exclusive use of a Dedicated CPU resource and mapping each workload in the shared class of workloads to be handled by the set of at least one shared CPU resource;
obtaining a third data sample at a third time after the second time for a second parameter for the set of workloads running within the compute platform;
obtaining a fourth data sample at a fourth time, later than the third time, for the second parameter for the set of workloads;
comparing for each workload within the set of workloads, a value for the second parameter taken at the fourth time and a value for the second parameter taken at the third time;
based upon a comparison of the value for the second parameter taken at the fourth time and the value for the second parameter taken at the third time, setting a determination for each workload within the set of workloads whether the workload should be in the Dedicated Class of workloads and assigned to a dedicated CPU resource or
in the Shared Class of workloads that is handled by a set of at least one shared CPU resource, wherein a shared CPU resource may service more than one workload; and

based upon the determination for each workload within the set of workloads, mapping each workload within the Dedicated Class of workloads to have exclusive use of a Dedicated CPU resource and mapping each workload in the shared class of workloads to be handled by the set of at least one shared CPU resource.

US Pat. No. 11,113,114

DISTRIBUTED OBJECT PLACEMENT, REPLICATION, AND RETRIEVAL FOR CLOUD-SCALE STORAGE AND DATA DELIVERY

Cisco Technology, Inc., ...


1. A method comprising:receiving, at a dispatcher of a storage system, a request for an object from a client;
identifying, by the dispatcher, candidate storage nodes of the storage system for serving the object to the client by generating an ordered list of the candidate storage nodes using a two-dimensional consistent hashing function that depends on both a flow associated with the request for the object and the object itself; and
facilitating distribution of the request for the object through one or more candidate storage nodes of the candidate storage nodes according to the ordered list of the candidate storage nodes for filling the request for the object, wherein the one or more candidate storage nodes are configured to facilitate the distribution of the request for the object by selectively filling the request for the object to the client using cache admission policies formed based on popularity characteristics of requested objects at the one or more candidate storage nodes and the one or more candidate storage nodes are configured to selectively replicate the object across at least a portion of the candidate storage nodes based on selectively filling the request for the object.
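
For illustration, a minimal Python sketch of a two-dimensional consistent hash in the spirit of the claim above: the candidate-node ordering depends on both the request's flow and the object key, so the same object can yield different orderings for different flows. The rendezvous-style hash construction is an assumption, not the patented function.

    import hashlib

    NODES = ["node-a", "node-b", "node-c", "node-d"]

    def ordered_candidates(flow_id: str, object_key: str):
        def score(node):
            material = f"{flow_id}|{object_key}|{node}".encode()
            return hashlib.sha256(material).hexdigest()
        # Rank every node by its score for the (flow, object) pair.
        return sorted(NODES, key=score, reverse=True)

    print(ordered_candidates("client-42:tcp:443", "video/chunk-17"))
    print(ordered_candidates("client-99:tcp:443", "video/chunk-17"))  # same object, different flow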

US Pat. No. 11,113,113

SYSTEMS AND METHODS FOR SCHEDULING VIRTUAL MEMORY COMPRESSORS

Apple Inc., Cupertino, C...


1. An apparatus comprising:a first processor configured to execute one or more applications;
a plurality of codecs comprising one or more hardware codecs and one or more software codecs executable by the first processor; and
a compressor selector configured to:select a first codec of the plurality of codecs for data compression;
monitor one or more factors corresponding to a dynamic behavior of the apparatus, wherein values of the one or more factors vary in response to execution of tasks by the first processor;
select a second codec of the plurality of codecs for data compression, based at least in part on a determination that the one or more factors indicate a first condition is satisfied during execution of the tasks by the first processor, wherein one of the first codec and the second codec is a hardware codec and another one of the first codec and the second codec is a software codec; and
maintain the first codec as a selected codec for data compression, based at least in part on a determination that the first condition is not satisfied during execution of tasks by the first processor, wherein the determination that the first condition is satisfied comprises a determination that a difference between a target compression ratio and an expected compression ratio of the first codec exceeds a threshold.
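
A hedged Python sketch of the codec-switch condition in the claim above: keep the current codec unless the difference between the target compression ratio and the codec's expected ratio exceeds a threshold, in which case switch between a hardware and a software codec. Codec names, ratios, and the threshold are illustrative.

    codecs = {
        "hw_lz":   {"kind": "hardware", "expected_ratio": 2.0},
        "sw_zstd": {"kind": "software", "expected_ratio": 3.1},
    }

    def select_codec(current: str, target_ratio: float, threshold: float = 0.5) -> str:
        gap = target_ratio - codecs[current]["expected_ratio"]
        if gap <= threshold:
            return current                              # condition not satisfied: keep codec
        # Condition satisfied: switch to a codec of the other kind (hardware <-> software).
        other_kind = "software" if codecs[current]["kind"] == "hardware" else "hardware"
        return next(name for name, c in codecs.items() if c["kind"] == other_kind)

    print(select_codec("hw_lz", target_ratio=2.2))  # hw_lz (difference within threshold)
    print(select_codec("hw_lz", target_ratio=3.0))  # sw_zstd (threshold exceeded)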


US Pat. No. 11,113,111

MACHINE LEARNING TASK COMPARTMENTALIZATION AND CLASSIFICATION

Bank of America Corporati...


1. An apparatus comprising:one or more processors; and
memory comprising instructions that, when executed by the one or more processors, cause the apparatus to at least:identify a task associated with a plurality of skillsets, wherein the task is comprised of at least one task component;
query a database server for at least one work profile associated with at least one skillset of the plurality of skillsets, wherein the at least one work profile is a combined profile comprising at least a first work profile and a second work profile, wherein the first work profile is associated with a first skillset of the plurality of skillsets and the second work profile is associated with a second skillset of the plurality of skillsets;
generate an affinity between the at least one work profile and the at least one task component of the task, the at least one task component associated with the at least one skillset; and
in response to determining that the affinity satisfies a threshold, allocate, to the at least one work profile, the at least one task component.


US Pat. No. 11,113,110

INTELLIGENT POOLING OF ISOLATED HIERARCHICAL RUNTIMES FOR CLOUD SCALE DATABASES IN A MULTI-TENANT ENVIRONMENT

ORACLE INTERNATIONAL CORP...


1. A method, comprising:allocating an allocation of operating system resources of a container database management system (CDBMS) to each generic nest of a pool of generic nests, said allocation including a quota of one or more processors, and a quota of memory;
wherein for a particular generic nest of said pool, an amount of operating system resources in the allocation of operating system resources allocated to said particular generic nest differs from an amount of operating system resources in the allocation of operating system resources allocated to at least one other generic nest in said pool;
after said allocating, establishing a database session for a particular PDB in the CDBMS, wherein establishing the database session for the particular PDB includes:determining a PDB configuration profile for the particular PDB, wherein the PDB configuration profile includes information regarding CPU usage, memory usage, and isolation characteristics of the particular PDB;
based on the PDB configuration profile determined for the particular PDB, determining a matching generic nest from said pool;
assigning said matching generic nest to said particular PDB; and
configuring the matching generic nest for said particular PDB, thereby converting the matching generic nest to a PDB nest;

limiting how much operating system resources may be used by said database session to the respective allocation of operating system resources allocated to said PDB nest.

US Pat. No. 11,113,109

CLUSTER RESOURCE MANAGEMENT USING ADAPTIVE MEMORY DEMAND

VMWARE, INC., Palo Alto,...


1. A system, comprising:at least one computing device;
at least one memory comprising instructions, wherein the instructions, when executed by at least one processor, cause the at least one computing device to at least:identify workload data for a plurality of workloads currently executed in a computing environment, the workload data comprising, for a respective workload: a granted memory, a consumed memory, and a page sharing saved memory;
determine local memory estimates for the plurality of workloads, a respective local memory estimate being determined based on at least one of: the consumed memory, and a full memory estimate reduced by the page sharing saved memory;
determine a destination memory estimate for a candidate workload, the destination memory estimate being determined based on the granted memory for the candidate workload;
determine a plurality of goodness scores corresponding to the candidate workload being executed on a plurality of hosts of the computing environment, the plurality of goodness scores being determined based on the local memory estimates for the plurality of workloads, and wherein at least one of the plurality of goodness scores is further determined based on the destination memory estimate for the candidate workload; and
balance the plurality of workloads based on a comparison of the plurality of goodness scores for the candidate workload.


US Pat. No. 11,113,108

MANAGING PROGRAMMABLE LOGIC-BASED PROCESSING UNIT ALLOCATION ON A PARALLEL DATA PROCESSING PLATFORM

ThroughPuter, Inc., Will...


1. A method on a parallel data processing platform comprising a plurality of programmable logic-based processing units for accelerating a service comprising at least a first program and a second program, different than the first program, the method comprising:forming a first set of inter-task communication paths connecting a first set of programmable logic-based processing units of the plurality of programmable logic-based processing units into a first multi-stage program instance, wherein the first multi-stage program instance is configured to accelerate the first program, and wherein the first set of programmable logic-based processing units are interconnected using reconfigurable cross-connects;
forming a second set of inter-task communication paths connecting a second set of programmable logic-based processing units of the plurality of programmable logic-based processing units to a second multi-stage program instance, wherein the second multi-stage program instance is configured to accelerate the second program, and wherein the second set of programmable logic-based processing units are interconnected using reconfigurable cross-connects;
in response to a first increased demand for the first multi-stage program instance determined by monitoring a first characteristic corresponding to the first set of programmable logic-based processing units, based on an adaptive optimized resource allocation policy, adding at least one more programmable logic-based processing unit to the first set of programmable logic-based processing units from the plurality of programmable logic-based processing units; and
in response to a second increased demand for the second multi-stage program instance determined by monitoring a second characteristic different from the first characteristic, corresponding to the second set of programmable logic-based processing units, based on the adaptive optimized resource allocation policy, adding at least one more programmable logic-based processing unit to the second set of programmable logic-based processing units from the plurality of programmable logic-based processing units.

US Pat. No. 11,113,107

HIGHLY EFFICIENT INEXACT COMPUTING STORAGE DEVICE

SAMSUNG ELECTRONICS CO., ...


1. A system, comprising:a receiver to receive at least a task, the task representing an iteration of an inexact algorithm and including a first power level and a first precision;
an identifier to identify at least an Arithmetic Logic Unit (ALU), the ALU including a second power level and a second precision; and
an assignment module to assign the task to the ALU in order to optimize a total ALU power and to assign the task to the ALU having the second precision that is greater than the first precision.

US Pat. No. 11,113,106

COORDINATING DISTRIBUTED TASK EXECUTION

Microsoft Technology Lice...


1. A method comprising:retrieving, by a task poller executing on a first processor, a first batch of tasks from multiple message queues in a distributed messaging system;
assigning, by the task poller, the first batch of tasks to multiple task executors in a thread pool based on availabilities of the multiple task executors;
tracking, by an offset manager executing on a second processor, statuses associated with processing the first batch of tasks based on communications from the multiple task executors; and
periodically committing, by the offset manager based on the tracked statuses, offsets of completed tasks in the multiple message queues to the distributed messaging system;
wherein periodically committing the offsets of the completed tasks in the multiple message queues to the distributed messaging system comprises:when a pending task blocks completion of a series of tasks represented by sequential offsets that immediately follow a last-committed offset in a message queue:notifying a producer of messages in the message queue to resend a message representing the pending task; and
committing, to the distributed messaging system, a highest offset associated with the series of tasks in the message queue.
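
For illustration, a small Python sketch of offset commits for a single message queue in the style of the claim above: only the highest offset of the contiguous run of completed tasks after the last commit is committed, and a pending task that blocks the run triggers a resend request to the producer. The data structures are assumptions.

    def commit_offsets(last_committed: int, completed: set, pending: set):
        offset = last_committed + 1
        while offset in completed:
            offset += 1                       # walk the contiguous completed run
        highest = offset - 1
        if offset in pending:
            print(f"ask producer to resend message at offset {offset}")  # blocking task
        if highest > last_committed:
            print(f"commit offset {highest}")
            return highest
        return last_committed

    # Offsets 5-7 done, 8 pending, 9 done: commit 7 and request a resend of 8.
    commit_offsets(last_committed=4, completed={5, 6, 7, 9}, pending={8})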



US Pat. No. 11,113,105

COMPUTER IMPLEMENTED SYSTEM AND METHOD FOR GENERATING PLATFORM AGNOSTIC DIGITAL WORKER

INFOSYS LIMITED, Bangalo...


1. A computer implemented method for generating of platform agnostic digital worker comprising:identifying, by a computing device, a task to be performed by a digital worker;
generating, by the computing device, a specification for the digital worker, wherein the specification comprises at least a predefined standard for configuration of input and output parameters and error handling, wherein the digital worker is further configured to communicate across heterogeneous platforms based on the predefined specification using a common data communication format, and wherein the predefined specification comprises a template to define input, output and exception details;
receiving, by the computing device, a digital worker created by an agent based on the specification to deploy the digital worker at an orchestration engine;
sequencing, by the computing device, the digital worker, into a repository of digital workers in a predefined manner of workflow, wherein the digital worker is configured to execute sequentially to perform a scheduled task and wherein the repository of digital workers is generated based on the digital worker metadata information, digital worker binaries, and digital worker source code.

US Pat. No. 11,113,104

TASK PARALLEL PROCESSING METHOD, APPARATUS AND SYSTEM, STORAGE MEDIUM AND COMPUTER DEVICE

Shanghai Cambricon Inform...


1. A computer system, comprising:first and second processors;
first and second memories; and
a secure application,
wherein:the first memory stores offline models and corresponding input data of a plurality of original networks, and a runtime system configured to run on the first processor;
the second memory stores an operating system configured to run on the first processor or the second processor;
the runtime system is a secure runtime system built based on a trusted operating environment;
the first memory is a secure storage medium;
when the runtime system runs on the first processor:the runtime system obtains an offline model and corresponding input data of an original network from the first memory, and controls the second processor to run the offline model of the original network; and
the runtime system causes the first processor to implement a plurality of virtual devices comprising:a data processing device configured to:
provide an offline model Application Programming Interface (API) and an input data API; and
obtain the offline model and the corresponding input data of the original network from the first memory;
an equipment management device configured to:
provide a driving API for the second processor; and
control the second processor to turn on or turn off; and
a task execution device configured to:
provide an operation API for the second processor; and
control the second processor to run the offline model of the original network;


the offline model of the original network comprises model parameters, instructions, and interface data of respective computation nodes of the original network; and
the secure application is configured to run on the runtime system, wherein the secure application is configured to call the offline model API, the input data API, the driving API, and the operation API.


US Pat. No. 11,113,103

TASK PARALLEL PROCESSING METHOD, APPARATUS AND SYSTEM, STORAGE MEDIUM AND COMPUTER DEVICE

Shanghai Cambricon Inform...


1. A method for scheduling an instruction list, the method comprising:obtaining an instruction set in the instruction list to be scheduled;
determining data dependencies among instructions in the instruction set by performing a data dependency analysis on the instruction set;
obtaining, based on the data dependencies, selection nodes for performing instruction selections during the scheduling of the instruction list; and
determining, based on a preset rule, an order of instructions in a scheduled instruction list according to a corresponding order of the selection nodes, comprising:accessing a selection node and obtaining a longest execution time corresponding to the selection node; and
when the longest execution time is shorter than an initial execution time:determining an order of sorted instructions of the selection node as the order of corresponding instructions in the scheduled instruction list; and
changing the initial execution time to the longest execution time,
wherein the initial execution time corresponds to an execution time of an instruction sequence in the instruction list to be scheduled.



US Pat. No. 11,113,102

DATA STORAGE DEVICE

SILICON MOTION, INC., Jh...


1. A data storage device, comprising:a non-volatile memory; and
a controller, operating the non-volatile memory and including a central processing unit,
wherein:
the central processing unit has a multi-stage architecture, and different processing systems of the different stages communicate with each other;
in a first processing system within the central processing unit of the controller of the data storage device, a command controller is provided to implement the first processing system as a transmitting end, and the command controller includes a plurality of command queues;
when a first command is queued in a first command queue within the command controller for transmission, a first processor of the first processing system fills a second command into a second command queue within the command controller for transmission;
after transferring the first command queued in the first command queue to a receiving end, implemented by another processing system in another stage, through a communication interface, the command controller waits for an acknowledgement signal from the receiving end to confirm the transfer of the first command;
after receiving the acknowledgement signal, the command controller uses the communication interface to transfer the second command queued in the second command queue to output the second command from the first processing system without holding the second command to wait for a complete execution of the first command; and
the communication interface is an Advanced Extensible Interface.

US Pat. No. 11,113,101

METHOD AND APPARATUS FOR SCHEDULING ARBITRATION AMONG A PLURALITY OF SERVICE REQUESTORS

MARVELL ASIA PTE, LTD., ...


8. A method for scheduling arbitration among a numbered plurality of service requestors, comprising:enqueuing at the plurality of service requestors, requests generated by at least one host and at least one engine, the at least one engine comprising at least one digital signal processor and/or at least one hardware accelerator, wherein each request comprises a job or a non-job;
performing, by a job arbitrator and assignor communicatively coupled to the plurality of service requestors, the actions of:designating among the plurality of service requestors all the service requestors that have an active request at a top entry indicating that a job is ready to be processed and that at least one engine capable of processing the job is available;
determining whether at least one of the designated service requestors has an un-served status indicator which is set; and in response to the determination being positive:
selecting one of the at least one designated service requestors in accordance with a pre-determined policy;
clearing the un-served status indicator for the selected service requestor; and
submitting the active request from the selected service requestor to one of the at least one engine capable of processing the job.
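
A hedged Python sketch of the arbitration step above: among requestors whose top entry is a ready job with a capable engine available, prefer those whose un-served flag is still set, pick one by a simple policy, clear its flag, and dispatch the request. The data structures and the lowest-id policy are assumptions.

    requestors = [
        {"id": 0, "ready": True,  "unserved": False, "job": "fft"},
        {"id": 1, "ready": True,  "unserved": True,  "job": "fir"},
        {"id": 2, "ready": False, "unserved": True,  "job": "fft"},
    ]
    available_engines = {"fft", "fir"}            # engines able to take a job right now

    designated = [r for r in requestors if r["ready"] and r["job"] in available_engines]
    unserved = [r for r in designated if r["unserved"]]

    pool = unserved if unserved else designated   # favor requestors not yet served
    selected = min(pool, key=lambda r: r["id"])   # pre-determined policy: lowest id wins
    selected["unserved"] = False                  # clear the un-served status indicator
    print(f"submit job {selected['job']} from requestor {selected['id']}")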


US Pat. No. 11,113,100

METHOD AND A MIGRATION COMPONENT FOR MIGRATING AN APPLICATION

Telefonaktiebolaget LM Er...


1. A method comprising:performing by a migration component, for migrating an application executing in a source compute sled to a target compute sled, wherein the application is associated with data stored in a set of source pages of a source local memory of the source compute sled, wherein the data comprises a respective content stored in a respective source page of the source local memory, wherein at least a portion of the data is stored in a set of target pages of a target local memory of the target compute sled when the application executes in the target compute sled after the migration of the application, wherein a memory is capable of supporting the migration of the application, and wherein the memory is excluded from the source and target compute sleds;
selecting a first sub-set of source pages, wherein a respective source status of each source page of the first sub-set is modified according to a source table of the source compute sled, wherein the source table indicates the respective source status of each source page, wherein the respective source status for any source page indicates that said any source page is modified or unmodified, wherein each source page has a respective utility indication relating to at least one of access frequency, latency, and memory type;
setting a target table of the target compute sled to indicate that a first sub-set of target pages are modified, wherein the first sub-set of target pages is associated with the first sub-set of source pages, wherein the target table indicates a respective target status of each target page of the target local memory, wherein the respective target status for any target page indicates that said any target page is modified or unmodified;
migrating the respective content stored in the first sub-set of source pages to the first sub-set of target pages;
selecting a second sub-set of source pages, wherein the respective source status of each source page of the second sub-set is modified according to the source table, wherein the first sub-set of source pages is different from the second sub-set of source pages;
setting the target table to indicate that a second sub-set of target pages is allocated in the memory, wherein the second sub-set of target pages is associated with the second sub-set of source pages, wherein the first sub-set of target pages is different from the second sub-set of target pages; and
moving the respective content stored in the second sub-set of source pages to the memory.

US Pat. No. 11,113,099

METHOD AND APPARATUS FOR PROTECTING A PROGRAM COUNTER STRUCTURE OF A PROCESSOR SYSTEM AND FOR MONITORING THE HANDLING OF AN INTERRUPT REQUEST

Robert Bosch GmbH, Stutt...


1. A method for protecting a program counter structure of a processor system in case of an interrupt request, the processor system comprising at least the program counter structure, an interrupt control device, and a memory, the method comprising:responding to the interrupt request with the interrupt control device by providing the program counter structure with an address associated with the interrupt request, the address being one of an interrupt vector and a starting address of an interrupt routine associated with the interrupt request;
outputting the address with the program counter structure via a memory interface with respect to the memory;
reading in the address from the memory interface;
comparing the address with a target address assigned to the interrupt request, in order to obtain a comparison result; and
providing a match signal based on the comparison result.

US Pat. No. 11,113,098

INTERRUPT PROCESSING METHOD, MASTER CHIP, SLAVE CHIP, AND MULTI-CHIP SYSTEM

SHENZHEN GOODIX TECHNOLOG...


1. An interrupt processing method applied to a master chip, comprising:obtaining all current interrupt requests of a slave chip when an interrupt transport request sent by the slave chip through an interrupt line is detected, wherein the interrupt transport request is generated by a first peripheral of the slave chip, wherein the interrupt transport request is a level signal; and
obtaining an interrupt subroutine corresponding to each interrupt request of all the current interrupt requests, and processing the corresponding interrupt request by using the interrupt subroutine, wherein all current interrupt requests of the slave chip are mapped to the master chip in advance, and each interrupt request mapped to the master chip has a corresponding interrupt subroutine on the master chip.

US Pat. No. 11,113,097

SYSTEM AND METHOD FOR PROVISIONING INTEGRATION INFRASTRUCTURE AT RUNTIME INDIFFERENT TO HYBRID NATURE OF ENDPOINT APPLICATIONS

BOOMI, INC., Round Rock,...


1. An information handling system operating a hybrid endpoint integration process liaison system comprising:a memory storing code instructions of the hybrid endpoint integration process liaison system; and
a processor operatively connected to the memory, the processor executing the code instructions to:determine an estimated volume of electronic data to be migrated via a Virtual Private Network (VPN) tunnel through a secure firewall pursuant to an integration process modeled by a plurality of visual modeling elements selected from a menu of visual modeling elements, each of the plurality of visual modeling elements being associated with one of a plurality of code sets, wherein the integration process is executable by a customized software integration application for transforming the electronic data to enable access and manipulation of the electronic data at an electronic data storage location located behind the secure firewall;
estimate a time duration for migration of the estimated volume of the electronic data via the VPN tunnel based on the estimated volume of the electronic data to be migrated via the VPN tunnel;
when the estimated time duration for migration of the estimated volume of the electronic data via the VPN tunnel exceeds a preset threshold time:determine an optimal configuration for execution of the plurality of code sets, wherein the optimal configuration includes an execution location behind the secure firewall for execution of one or more of the plurality of code sets by a first runtime engine; and
transmit the first runtime engine to the execution location behind the secure firewall, wherein the first runtime engine is configured to access or manipulate at least a portion of the estimated volume of electronic data behind the secure firewall to reduce calls through the VPN tunnel and to reduce a time to execute the integration process.
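
For illustration only: a minimal Python sketch of the placement decision described above, assuming an illustrative tunnel bandwidth and threshold. If the estimated migration time for the data volume over the VPN tunnel exceeds the threshold, the integration code sets are executed behind the firewall rather than moving the data out.

    def plan_execution(volume_gb: float, tunnel_gbps: float = 0.1,
                       threshold_seconds: float = 3600.0) -> str:
        estimated_seconds = (volume_gb * 8) / tunnel_gbps   # GB -> gigabits / throughput
        if estimated_seconds > threshold_seconds:
            return "deploy runtime engine behind the firewall (local execution)"
        return "execute in the cloud, migrate data through the VPN tunnel"

    print(plan_execution(volume_gb=2))     # small volume: move it through the tunnel
    print(plan_execution(volume_gb=500))   # large volume: execute behind the firewall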



US Pat. No. 11,113,096

PERMISSIONS FOR A CLOUD ENVIRONMENT APPLICATION PROGRAMMING INTERFACE

Hewlett Packard Enterpris...


1. A non-transitory machine-readable storage medium comprising instructions that upon execution cause a system to:test a program that comprises code to invoke calls of an application programming interface (API) for managing resources of a cloud environment;
as part of the testing of the program, determine permissions for the invoked calls of the API;
create a collection of the determined permissions; and
associate the collection of the determined permissions with an access policy of the cloud environment, the access policy to control use of the API for managing resources of the cloud environment.
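
A hedged Python sketch of building a permission collection from the API calls a program invokes under test and attaching it to an access policy. The call-to-permission mapping and the policy format are assumptions for illustration, not any real cloud provider's API.

    CALL_PERMISSIONS = {
        "create_instance": "compute:instances.create",
        "delete_volume": "storage:volumes.delete",
        "list_networks": "network:networks.list",
    }

    def record_calls(program):
        # Stand-in for instrumented testing: the program reports the API calls it invoked.
        return program()

    def build_policy(invoked_calls):
        permissions = sorted({CALL_PERMISSIONS[c] for c in invoked_calls})
        return {"policy": "program-under-test", "allow": permissions}

    calls = record_calls(lambda: ["create_instance", "list_networks", "create_instance"])
    print(build_policy(calls))
    # {'policy': 'program-under-test', 'allow': ['compute:instances.create', 'network:networks.list']}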

US Pat. No. 11,113,095

ROBOTIC PROCESS AUTOMATION SYSTEM WITH SEPARATE PLATFORM, BOT AND COMMAND CLASS LOADERS

Automation Anywhere, Inc....


1. A robotic process automation system comprising:data storage for storing,a plurality of sets of task processing instructions, each set of task processing instructions operable to interact at a user level with one or more designated user level application programs; and
a plurality of work items, each work item stored for subsequent processing by executing a corresponding set of task processing instructions;

a server processor operatively coupled to the data storage and configured to execute instructions that when executed cause the server processor to respond to a request to perform an automation task to process a work item from the plurality of work items, by:initiating a java virtual machine on a second device;
initiating on the second device, a first user session, employing credentials of a first user, for managing execution of the automation task;
permitting retrieval of the set of task processing instructions that correspond to the work item;
loading into the java virtual machine, with a platform class loader, a logging module and a security module wherein the logging module logs events that occur during operation of the platform class loader and wherein the security module provides security functions during operation of the platform class loader;
loading with a first class loader a first set of task processing instructions;
loading each instruction in the first set of task processing instructions with a separate class loader; and
causing execution, under control of the first user session, on the second device, of the task processing instructions that correspond to the work item.


US Pat. No. 11,113,094

PHYSICAL MEMORY MANAGEMENT FOR VIRTUAL MACHINES

Parallels International G...


1. A system comprising:a non-transitory memory storing data;
one or more memories storing computer executable instructions for execution by one or more microprocessors;
the one or more microprocessors in communication with the non-transitory memory and the one or more memories; wherein
the computer executable instructions when executed by the one or more microprocessors configure the one or more microprocessors to:execute a virtual machine comprising at least a virtual processor;
execute a first guest operating system (OS) process of a plurality of guest OS processes with the virtual machine;
inject a guest hardware interrupt into the virtual processor executing the first guest OS process of the plurality of guest OS processes to switch the virtual processor to execute another guest OS process of the plurality of guest OS processes whilst the first guest OS process of the plurality of guest OS processes waits for data requested by a previous instruction or the current instruction to become available to the first guest OS process of the plurality of guest OS processes;
provide a hypervisor or virtual machine monitor;
generate a first fault exception from the first guest OS process of the plurality of guest OS processes, the first fault exception associated with a page fault established in dependence upon a read request by the first guest OS process of the plurality of guest OS processes to read a page from the non-transitory memory, the page having a virtual address within the first guest OS process of the plurality of guest OS processes and a physical address associated with the non-transitory memory;
execute an operating system kernel in dependence upon the first fault exception;
generate a second fault exception from the first guest OS process of the plurality of guest OS processes;
configure an operating system of a computer system comprising the one or more microprocessors to allocate memory to store the page;
add the allocated memory into a virtual machine working set of the hypervisor or virtual machine monitor;
map the virtual machine working set to a set of cache tables associated with the hypervisor or virtual machine monitor; and
restart the instruction associated with the first guest OS process of the plurality of guest OS processes which generated the second fault exception; and

the hypervisor or virtual machine monitor controls the virtual machine.

US Pat. No. 11,113,093

INTERFERENCE-AWARE SCHEDULING SERVICE FOR VIRTUAL GPU ENABLED SYSTEMS

VMWARE, INC., Palo Alto,...


1. A system comprising:at least one computing device comprising at least one processor and at least one data store;
machine readable instructions stored in the at least one data store, wherein the instructions, when executed by the at least one processor, cause the at least one computing device to at least:perform a respective workload of a plurality of workloads alone in a virtual graphics processing unit (vGPU)-enabled system to determine baseline parameters for the plurality of workloads;
perform the respective workload co-located in the vGPU-enabled system with each other workload of the plurality of workloads to determine measured interferences for the plurality of workloads;
train a machine learning model to predict interference based on the measured interferences and predictor variables corresponding to at least a subset of the baseline parameters for the plurality of workloads, the predictor variables being provided as inputs to the machine learning model to predict the interference, the predictor variables comprising: an average kernel length of a plurality of kernels of the respective workload;
process a workload using the machine learning model to determine a set of predicted interferences corresponding to a set of graphics processing units (GPUs), a respective one of the predicted interferences being determined based on: a first set of the baseline parameters comprising a first average kernel length for the workload, and a second set of the baseline parameters comprising a second average kernel length for a currently-assigned workload of a respective GPU of the set of GPUs; and
assign the workload to a particular GPU of the set of GPUs based on the particular GPU having a minimum predicted interference of the set of predicted interferences.


US Pat. No. 11,113,092

GLOBAL HRTF REPOSITORY

Sony Corporation, Tokyo ...


1. A system comprising:a. a computer readable non-transitory medium, said medium further comprising:
a program memory portion further comprising an application program interface (API) module configured for receiving data items from a client device and for sending data items to a client device,
a first data memory portion comprising client validation data representing at least one of a user-related identifier, a type of device, and
a second data memory portion comprising head related transfer function (HRTF) data sorted according to said client validation data, and
b. a processor in electronic communication with said computer readable non-transitory medium configured to receive a request for a client HRTF, the request comprising client validation data, the API to cause the processor to:
compare the client validation data received in the request with the client validation data stored in the first data memory portion, and responsive to validating that the client validation data received in the request matches the client validation data stored in the first memory portion:
send a selection prompt to the client device, wherein the API executed by the processor is to cause at least one user interface (UI) to be presented at the client device to generate transfer mode selection data, the UI comprising a perceptible prompt to select at least one of retrieve HRTF data, upload HRTF data, specify a virtual audio mode, or cancel the transaction, and the at least one UI comprising three or more location cues for identifying the client HRTF, the three or more location cues comprising three or more of center (C), left front (LF), right front (RF), left surround (LS), right surround (RS), center height (CH), left front height (LFH), right front height (RFH), left surround height (LSH), right surround height (RSH), center bottom (CB), right front bottom (RFB), left front bottom (LFB), low frequency effects (LFE), right surround bottom (RSB), and left surround bottom (LSB);
receive the transfer mode selection data and location cues from the client device; and
responsive to receiving from the client device a request for audio content, cause the API to receive audio content from an audio source, such that the audio content can be convoluted with the client HRTF to provide virtualized audio to the client device in accordance with the selected location cues.
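
A minimal sketch of the server-side handling path described above, with hypothetical store and handler names (validated_clients, hrtf_store, handle_request) standing in for the first and second data memory portions and the API module; it only illustrates the validate-then-dispatch flow, not the patented system.

```python
# Sketch: validate the client, then act on the selected transfer mode
# (retrieve, upload, specify virtual audio mode, or cancel).
validated_clients = {"user-123": {"device_type": "headphones"}}          # first data memory portion (assumed layout)
hrtf_store = {("user-123", "LF"): b"...", ("user-123", "RF"): b"..."}    # second portion, keyed by client and location cue

TRANSFER_MODES = {"retrieve", "upload", "virtual_audio_mode", "cancel"}

def handle_request(client_id: str, mode: str, location_cues: list) -> dict:
    if client_id not in validated_clients:        # compare against stored validation data
        return {"error": "client validation failed"}
    if mode not in TRANSFER_MODES:
        return {"error": "unknown transfer mode"}
    if mode == "retrieve":
        return {cue: hrtf_store.get((client_id, cue)) for cue in location_cues}
    if mode == "cancel":
        return {"status": "cancelled"}
    return {"status": f"{mode} accepted"}

print(handle_request("user-123", "retrieve", ["LF", "RF"]))
```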

US Pat. No. 11,113,091

APPARATUS FOR FORWARDING A MEDIATED REQUEST TO PROCESSING CIRCUITRY IN RESPONSE TO A CONFIGURATION REQUEST

Arm Limited, Cambridge (...


1. An apparatus comprising:processing circuitry configured to execute software including first software and second software, in which the second software has a higher privilege level than the first software; and
hardware-implemented interface configured to receive, from the processing circuitry, a configuration request from the first software requesting configuration of a hardware accelerator or input/output device;
in which:
the hardware-implemented interface is responsive to receiving the configuration request from the first software to forward a mediated request to the processing circuitry for consideration by the second software, the mediated request being sent to the second software by the hardware-implemented interface, rather than being sent by the first software;
the mediated request comprises a request that the second software having the higher privilege level than the first software determines a mediated response indicating to the hardware-implemented interface whether the hardware-implemented interface should allow or reject the configuration of the hardware accelerator or the input/output device according to the configuration request received by the hardware-implemented interface from the first software; and
the hardware-implemented interface is configured to receive the mediated response from the processing circuitry in response to sending the mediated request.
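
The mediation flow in this claim can be sketched as follows; class and method names are illustrative assumptions. The point shown is that the interface itself, not the low-privilege caller, forwards a mediated request to the higher-privilege software and applies the configuration only if the mediated response allows it.

```python
# Sketch of the mediation flow: the hardware-implemented interface forwards a
# mediated request to higher-privilege software and applies or rejects the
# original configuration request based on the mediated response.
class HigherPrivilegeSoftware:
    def decide(self, request: dict) -> bool:
        # Policy is up to the privileged software; here, allow only device 7.
        return request["device_id"] == 7

class HardwareInterface:
    def __init__(self, mediator: HigherPrivilegeSoftware):
        self.mediator = mediator
        self.applied = []

    def configure(self, request: dict) -> str:
        allowed = self.mediator.decide(request)   # mediated response from privileged software
        if allowed:
            self.applied.append(request)
            return "configuration applied"
        return "configuration rejected"

iface = HardwareInterface(HigherPrivilegeSoftware())
print(iface.configure({"device_id": 7, "setting": "enable-dma"}))   # applied
print(iface.configure({"device_id": 3, "setting": "enable-dma"}))   # rejected
```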

US Pat. No. 11,113,090

SYSTEMS AND METHODS FOR CONTAINER MANAGEMENT

United Services Automobil...


1. A computer-implemented method for managing containers, the method comprising:receiving, by a computer, a registration of a set of one or more containers to a cluster load balancing service, wherein the cluster load balancing service locates the set of one or more containers based on the registration, and wherein a group of the one or more containers is designated as corresponding to a specific system function;
in response to the registration, updating a dependency map associated with the set of one or more containers;
identifying, based on the dependency map, at least one additional container that needs to be started to maintain integrity of a computing environment executing the group of one or more containers;
providing a user interface for management of at least collections of containers;
receiving user input, provided via the user interface, specifying one or more additional containers for the system function;
in response to the user input, updating the group of the one or more containers designated as corresponding to the specific system function to include the one or more additional containers indicated in the user input;
receiving, by the computer, a request to perform a first action associated with the updated group of one or more containers;
querying, by the computer, a machine-readable computer file comprising the dependency map, wherein the dependency map comprises a block list indicating to the computer at least one container, of the one or more containers in the updated group of one or more containers, that is prohibited from performing a second action, wherein the prohibition to perform the second action is based upon a dependent resource of each respective container of the at least one container;
storing, by the computer and based on the querying, into an action queue a request to perform the second action;
determining that at least one container, from the group of one or more containers, is not fully initialized, and in response, blocking the first action until the group of containers are fully initialized; and
in response to a first container, of the at least one container, performing part of the first action:instructing, by the computer, the first container to perform part of the second action, wherein the performing the part of the second action is based on the action queue and the block list of the dependency map.
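
A compact sketch of the dependency-map gating described in this claim, with illustrative data structures (dependency_map, action_queue) that are assumptions rather than the patented format: a container listed on the block list for an action has that action queued instead of performed immediately.

```python
# Sketch of block-list gating: blocked actions are deferred to an action queue.
from collections import deque

dependency_map = {
    "web": {"depends_on": ["db"], "blocked_actions": {"restart"}},  # block list entry
    "db":  {"depends_on": [],     "blocked_actions": set()},
}
action_queue = deque()

def request_action(container: str, action: str) -> str:
    entry = dependency_map[container]
    if action in entry["blocked_actions"]:
        action_queue.append((container, action))   # defer per the block list
        return f"{action} on {container} queued"
    return f"{action} on {container} performed"

print(request_action("db", "restart"))    # performed
print(request_action("web", "restart"))   # queued until its dependent resource allows it
```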


US Pat. No. 11,113,089

SHARING DATA VIA VIRTUAL MACHINE TO HOST DEVICE BRIDGING


1. A system, comprising:an assigned device including data storage;
a guest device in a first virtual machine (VM); and
a host system including a processor in communication with a memory, wherein the processor is configured to execute a second VM, the second VM including a virtual host device for use by the guest device, a first driver and a second driver, where the guest device communicates directly with the virtual host device on a data path, through the second VM, without a hypervisor executing on the data path, the second VM is in communication with the assigned device via the first driver and in communication with the guest device via the second driver, wherein the first driver and the second driver are configured to:read, by the first driver, a first descriptor available in a first ring supplied by the assigned device,
read, by the second driver, a second descriptor available in a second ring supplied by the guest device,
translate, by the second driver after reading the second descriptor, an address in the first VM of the guest device using an offset within a register of the virtual host device,
perform, by the first driver after reading the first descriptor, a first operation on at least one first packet within the first ring of the assigned device, and
perform, by the second driver, a second operation on at least one second packet within the second ring of the guest device in the first VM based on the translated address without the hypervisor executing on the data path, wherein the second VM is configured to access the register of the virtual host device to write the second descriptor used in the second ring of the guest device and access a register of a pass-through device in the second VM to write the first descriptor used in a first ring supplied by the assigned device.


US Pat. No. 11,113,088

GENERATING AND MANAGING GROUPS OF PHYSICAL HOSTS ASSOCIATED WITH VIRTUAL MACHINE MANAGERS IN AN INFORMATION HANDLING SYSTEM

Dell Products, L.P., Rou...


1. A computer-implemented method for attaching and detaching physical hosts to/from virtual machine manager (VMM) host groups associated with a VMM in a cluster of information handling systems (IHSs), the method comprising:identifying, by a processor, (i) first target physical hosts that are part of a VMM host group but are not part of a VMM host list, and (ii) second target physical hosts from the VMM host list that are not included in the VMM host group;
wherein a VMM host group comprises all of the physical hosts that are running virtual machines (VMs) that are controlled by the VMM, and wherein the VMM host list is generated by the VMM and contains identifiers for all of the physical hosts that are controlled by the VMM within a networked computing system;
wherein the identifying comprises:determining that at least one of the physical hosts of the VMM host group is not included in the VMM host list; and
in response to determining that at least one of the physical hosts of the VMM host group is not included in the VMM host list: identifying the physical hosts of the VMM host group that are not included in the VMM host list; and transmitting an indication of the physical hosts of the VMM host group that are not included in the VMM host list to the controller;

storing the identified first target physical hosts and second target physical hosts to a remote access controller (RAC) memory;
retrieving, by a RAC from the RAC memory, first identifying data of the first target physical hosts;
retrieving, by the RAC, second identifying data of the second target physical hosts;
transmitting, by the RAC, to each of the first target physical hosts a request to detach from the VMM host group; and
transmitting, by the RAC, to each of the second target physical hosts, a request to attach to the VMM host group.
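
The identification step of this claim reduces to two set differences, which the following sketch shows with illustrative host names: hosts to detach are in the VMM host group but not the VMM host list, and hosts to attach are in the list but not the group.

```python
# Sketch of the identification step as two set differences.
def reconcile(vmm_host_group: set, vmm_host_list: set):
    to_detach = vmm_host_group - vmm_host_list   # first target physical hosts
    to_attach = vmm_host_list - vmm_host_group   # second target physical hosts
    return to_detach, to_attach

detach, attach = reconcile({"host1", "host2"}, {"host2", "host3"})
print("detach:", detach)   # {'host1'}
print("attach:", attach)   # {'host3'}
```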

US Pat. No. 11,113,087

TECHNIQUES OF DISCOVERING VDI SYSTEMS AND SYNCHRONIZING OPERATION INFORMATION OF VDI SYSTEMS BY SENDING DISCOVERY MESSAGES AND INFORMATION MESSAGES

AMZETTA TECHNOLOGIES, LLC...


1. A method of managing a plurality of virtual desktop infrastructure (VDI) systems, comprising:broadcasting or multicasting, at a first management service managing a first VDI system of the plurality of VDI systems, discovery messages periodically for discovering the first VDI system by other systems, each of the discovery messages including a first key uniquely associated with the first VDI system and a first network locator for locating the first VDI system in a network, the first VDI system including a first plurality of collections of virtual machines;
receiving, from a second management service managing a second VDI system of the plurality of VDI systems, a web service request in response to the discovery messages, the web service request including the first key, a second key uniquely associated with the second VDI system, a second network locator for locating the second VDI system in the network, and second operation information of the second VDI system, the second VDI system including a second plurality of collections of virtual machines (VMs), the second operation information specifying the second plurality of collections of VMs;
sending, at the first management service, a web service response including first operation information of the first VDI system to the second management service based on the second network locator, the first operation information specifying the first plurality of collections of VMs;
continuing, at the first management service, monitoring changes of the first operation information of the first VDI system; and
resending, at the first management service, a web service response including the first operation information of the first VDI system to the second VDI system when the first operation information of the first VDI system has changed.
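
A simplified, in-process sketch of the discovery and synchronization exchange in this claim; the message layouts and the ManagementService class are illustrative assumptions. It shows the three behaviors: periodic announcement of key and locator, answering a peer's web-service request with operation information, and resending that information when it changes.

```python
# Sketch of a VDI management service's discovery/synchronization behavior.
class ManagementService:
    def __init__(self, key, locator, collections):
        self.key, self.locator, self.collections = key, locator, collections
        self.peers = {}   # peer key -> (locator, last known peer operation info)

    def discovery_message(self):
        # Broadcast/multicast periodically; carries the key and network locator.
        return {"key": self.key, "locator": self.locator}

    def on_web_service_request(self, peer_key, peer_locator, peer_info):
        # Record the peer and answer with this system's operation information.
        self.peers[peer_key] = (peer_locator, peer_info)
        return {"key": self.key, "operation_info": list(self.collections)}

    def on_collections_changed(self, collections):
        # Operation information changed: resend it to every known peer.
        self.collections = collections
        return [{"to": loc, "operation_info": list(collections)}
                for loc, _ in self.peers.values()]

a = ManagementService("key-A", "10.0.0.1", ["pool-1"])
a.on_web_service_request("key-B", "10.0.0.2", ["pool-9"])
print(a.on_collections_changed(["pool-1", "pool-2"]))
```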

US Pat. No. 11,113,086

VIRTUAL SYSTEM AND METHOD FOR SECURING EXTERNAL NETWORK CONNECTIVITY

FireEye, Inc., Milpitas,...


1. A computing device comprising:one or more hardware processors; and
a memory coupled to the one or more hardware processors, the memory comprises one or more software components that, when executed by the one or more hardware processors, operate as (i) a virtualization layer deployed in a host environment of a virtualization software architecture and (ii) a plurality of virtual machines deployed within a guest environment of the virtualization software architecture, the plurality of virtual machines comprises (a) a first virtual machine that is operating under control of a first operating system and including an agent collecting runtime state information of a network adapter and (b) a second virtual machine that is separate from the first virtual machine and is operating under control of a second operating system in response to determining that the first operating system has been compromised, the second virtual machine being configured to drive the network adapter,
wherein after receipt of the state information by the virtualization layer,
transmitting at least a portion of the state information to a threat protection component being deployed within the virtualization layer,
analyzing, by the threat protection component, the state information to determine whether the first operating system is compromised by at least determining whether (i) an external network connection through the network adapter has been disabled or (ii) a kernel of the first operating system is attempting to disable the external network connection through the network adapter, and
upon receipt of the results of the analyzing by the threat protection component that the first operating system is compromised,signaling, by the virtualization layer, to halt operations of the first virtual machine,
installing, by the virtualization layer, a second operating system image retained within the memory of the computing device into the second virtual machine,
reassigning, by the virtualization layer, the network adapter and adapter resources to the second operating system, the second virtual machine configured to drive the network adapter, and
booting the second virtual machine subsequent to the reassignment of the network adapter and the adapter resources from the first operating system to the second operating system.


US Pat. No. 11,113,085

VIRTUAL NETWORK ABSTRACTION

NICIRA, INC., Palo Alto,...


1. A method of defining a virtual network across a plurality of physical host computers, the method comprising:at an agent executing on a particular host computer that hosts a plurality of data compute nodes (DCNs) belonging to one or more tenant logical networks:receiving a command from a network controller that provides commands to a plurality of agents executing on a plurality of different host computers that each host sets of DCNs belonging to one or more tenant logical networks, wherein at least two different host computers execute two different types of network virtualization software provided by two different vendors, the command comprising (i) an identification of a resource associated with a particular tenant logical network to which a DCN executing on the particular host computer connects and (ii) an action to perform on the identified resource;
determining a type of network virtualization software executing on the particular host computer;
translating the received action into a set of configuration commands compatible with the type of network virtualization software executing on the particular host computer; and
sending the configuration commands to a network configuration interface on the host computer to perform the action on the identified resource.
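
The translation step of this claim is essentially a per-vendor dispatch, sketched below with made-up vendor names and command formats (vendor_a, vendor_b); the same controller command is turned into configuration commands compatible with whichever network virtualization software runs on the host.

```python
# Sketch of per-vendor command translation; vendor names/commands are invented.
def translate_for_vendor_a(resource: str, action: str) -> list:
    return [f"vendor-a-cli {action} --resource {resource}"]

def translate_for_vendor_b(resource: str, action: str) -> list:
    return [f"vendor-b-api/{resource}/{action}"]

TRANSLATORS = {"vendor_a": translate_for_vendor_a,
               "vendor_b": translate_for_vendor_b}

def handle_command(host_virt_type: str, command: dict) -> list:
    translate = TRANSLATORS[host_virt_type]      # type determined per host
    return translate(command["resource"], command["action"])

print(handle_command("vendor_a", {"resource": "logical-switch-42", "action": "create"}))
print(handle_command("vendor_b", {"resource": "logical-switch-42", "action": "create"}))
```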


US Pat. No. 11,113,084

METHOD AND SYSTEM FOR APPROXIMATE QUANTUM CIRCUIT SYNTHESIS USING QUATERNION ALGEBRA

Microsoft Technology Lice...


1. A quantum circuit synthesizer system, comprising:a processor; and
at least one memory coupled to the processor and having stored thereon processor-executable instructions for a quantum computer synthesis procedure that comprises:receiving a target unitary described by a target angle and target precision;
determining a corresponding quaternion approximation of the target unitary; and
synthesizing a quantum circuit corresponding to the quaternion approximation, the circuit being over a single qubit gate set, the single qubit gate set being realizable by a target quantum computer architecture, wherein the determining a corresponding quaternion approximation of the target unitary comprises finding a quaternion from an order of a totally definite quaternion algebra defined over a totally real number field F that has the following two properties:

$d(U_q, R_z(\theta)) \le \epsilon$;   (1) and

$\mathrm{nrd}(q)\,\mathbb{Z}_F = \mathfrak{p}_1^{L_1} \cdots \mathfrak{p}_M^{L_M}$,   (2)

wherein $d$ is a distance function, $\mathfrak{p}_1, \ldots, \mathfrak{p}_M$ are the appropriate prime ideals of $\mathbb{Z}_F$, $L_1, \ldots, L_M$ are their respective multiplicities in the decomposition of $\mathrm{nrd}(q)\,\mathbb{Z}_F$, and $\mathrm{nrd}(q)$ is a reduced norm of a quaternion $q$.
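
To make property (1) concrete, the sketch below maps a quaternion q = a + bi + cj + dk to a single-qubit unitary via the standard 2x2 complex representation normalized by sqrt(nrd(q)), and measures its distance to Rz(theta) with a common global-phase-invariant metric. This only illustrates the approximation check; the distance function and the example quaternion are assumptions, and the number-theoretic search over a quaternion order is not shown.

```python
# Sketch of checking d(U_q, Rz(theta)) <= epsilon for a quaternion q.
import cmath, math

def quaternion_to_unitary(a, b, c, d):
    nrd = a*a + b*b + c*c + d*d                 # reduced norm of q
    s = 1.0 / math.sqrt(nrd)
    return [[s*complex(a, b),  s*complex(c, d)],
            [s*complex(-c, d), s*complex(a, -b)]]

def rz(theta):
    return [[cmath.exp(-1j*theta/2), 0], [0, cmath.exp(1j*theta/2)]]

def distance(u, v):
    # d(U, V) = sqrt(1 - |tr(U^dagger V)|/2), invariant under global phase.
    tr = sum(u[r][c].conjugate() * v[r][c] for r in range(2) for c in range(2))
    return math.sqrt(max(0.0, 1.0 - abs(tr) / 2))

theta, eps = math.pi / 8, 1e-1
q = (math.cos(theta/2), -math.sin(theta/2), 0.0, 0.0)   # quaternion matching Rz(theta)
print(distance(quaternion_to_unitary(*q), rz(theta)) <= eps)   # True
```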

US Pat. No. 11,113,083

NOTIFICATION INTERACTION IN A TOUCHSCREEN USER INTERFACE

International Business Ma...


1. A computer-implemented method for notification interaction in a touchscreen user interface, comprising:monitoring, by a computing device, a user interaction with a current application via the touchscreen user interface;
preparing, by a computing device, a notification display for an event occurring on the computing device to be displayed on top of the current application;
recognizing, by the computing device, that the notification display for the event is about to be displayed on top of the current application in use by the user, wherein a notification for display is recognized by at least one of a group consisting of: the current application, a secondary application, or the operating system resident on the computing device;
determining, by the computing device, one or more distinct user interactions for input to the notification display based on a set of rules, wherein the set of rules determines the one or more distinct user interactions that are different from the monitored user interaction with the current application;
selecting, by the computing device, a distinct user interaction within the one or more distinct user interactions for input by the user to the notification display that is distinct from the monitored user interaction with the current application; and
displaying, by the computing device, the notification display with an instruction for input of the distinct user interaction with the notification display.

US Pat. No. 11,113,082

HELP CONTENT BASED APPLICATION PAGE ANALYSIS

Hewlett-Packard Developme...


8. A computer implemented method comprising:ascertaining, for an application, pages that are to be analyzed with respect to application help content;
identifying, for each of the ascertained pages, a user interaction element;
ascertaining, for a page of the ascertained pages, selection of a user interaction element from the identified user interaction elements;
ascertaining, for the selected user interaction element, responses to inquiries from a decision tree that corresponds to the selected user interaction element;
determining, based on the ascertained responses, a score that represents relevancy of page help content to the application, wherein the page help content is associated with the page and is part of the application help content; and
ranking the ascertained pages with respect to the application help content by ranking the page according to the score and determined scores for other pages of the application.

US Pat. No. 11,113,081

GENERATING A VIDEO FOR AN INTERACTIVE SESSION ON A USER INTERFACE

International Business Ma...


1. A method of generating a video comprising:identifying, via at least one processor, a scenario within a document including content of a communication session, wherein the communication session pertains to support for use of a user interface;
extracting from the document, via the at least one processor, one or more items corresponding to the identified scenario and associated with the user interface;
mapping, via the at least one processor, the extracted items to corresponding aspects of the user interface, wherein at least one of the extracted items remains unmapped to the user interface;
determining, via the at least one processor, at least one question for a user to receive information to map an unmapped item to a corresponding aspect of the user interface; and
generating, via the at least one processor, a video based on the mapped aspects of the user interface to reproduce one or more activities performed during the use of the user interface.

US Pat. No. 11,113,080

CONTEXT BASED ADAPTIVE VIRTUAL REALITY (VR) ASSISTANT IN VR ENVIRONMENTS

Tata Consultancy Services...


1. A computer implemented method, comprising:providing an adaptive virtual reality (VR) assistant application executable by at least one processor configured for VR assistance on a computing device;
detecting, by the adaptive VR assistant application, activation of an interface element from a plurality of interface elements, wherein the plurality of interface elements comprises a VR environment interface option and a communication session interface option; and
executing, upon the activation of the interface element, at least one of a first set of instructions and a second set of instructions,
wherein the first set of instructions comprises:displaying, in a real time, one or more VR environments upon the activation of the interface element;
determining, in the real time, a selection of at least one VR environment from the one or more VR environments by a user;
displaying, in the real time, the at least one selected VR environment comprising one of one or more corresponding objects and one or more VR characters on the computing device, and generating a VR character for the user specific to the at least one selected VR environment;
obtaining, in the real time, a first input comprising a first media message from the user, wherein the first media message comprises a text message;
determining in the real time, using a Natural Language Processing (NLP) engine, a first context based on the first media message, wherein text from the first media message is extracted to determine the first context, wherein a VR environment is rendered based on the text extracted from the media message by the adaptive VR assistant application, and the rendering comprises at least one of selecting and retrieving a determined context based VR environment from the one or more VR environments; and
enabling, based on a first determined context, a first interactive communication session between the VR character and the one or more corresponding objects and the one or more presented VR characters in the at least one selected VR environment, and wherein the second set of instructions comprises:
generating an interactive session user interface by the adaptive VR assistant application on the computing device;
obtaining a second input comprising a second media message including one or more queries from the user;
determining a second context of the second input, wherein text from the second input is extracted to determine the second context;
generating, by the adaptive VR assistant application, one or more responses and the one or more VR environments comprising one or more corresponding objects integrated within the generated interactive session user interface, based on one of: the second input, the one or more responses, the second determined context, and any combination thereof, wherein both the generated one or more responses to the one or more queries and the generated one or more VR environments comprising the one or more corresponding objects are displayed within same interactive communication session of the same generated interactive session user interface, wherein the adaptive VR assistant application further enables communication between the user and the one or more corresponding objects in the generated one or more VR environments based on the second determined context within the same interactive communication session of the same generated interactive session user interface, wherein the communication between the user and the one or more corresponding objects in the generated one or more VR environments is enabled by creating a VR character for the user specific to the generated one or more VR environments within the same interactive communication session of the same generated interactive session user interface, and wherein the adaptive VR assistant application further enables the generated one or more VR environments within the same generated interactive session user interface to be maximized based on one or more user inputs provided for full screen view of the generated one or more VR environments and enabling the user to experience the generated one or more VR environments and generating a public speaking simulator, wherein the one or more VR environments are simulated based on user selection and further augmenting the adaptive VR assistant application as a speaker; and
extracting, by the NLP engine, at least the text from the first media message, wherein the text is processed to analyze at least one of the first context and the second context and wherein the extracted text is transferred to an asset loader to load content in the one or more VR environments.


US Pat. No. 11,113,079

MOBILE ASSISTANT

Verizon Media Inc., New ...


1. A method, comprising:displaying a messaging interface comprising a conversation display area, an input text area, a predictive text interface and a keyboard;
receiving a request from a user via the input text area of the messaging interface;
determining a task based upon the request;
determining one or more questions associated with information required to perform the task;
providing a first question of the one or more questions via the conversation display area of the messaging interface, the first question associated with one or more answer choices comprising a first answer choice and a second answer choice;
concurrently providing via the messaging interface:a first visual element, in the predictive text interface, comprising a first abbreviation of the first answer choice, wherein the first abbreviation of the first answer choice is different than the first answer choice and a portion of the first abbreviation is the same as a portion of the first answer choice,
a second visual element, in the predictive text interface, comprising a second abbreviation of the second answer choice, wherein the second abbreviation of the second answer choice is different than the second answer choice and a portion of the second abbreviation is the same as a portion of the second answer choice, wherein the first abbreviation is different than the second abbreviation, and
a third visual element, in the conversation display area and different than the first visual element and the second visual element, comprising a list of answer choices comprising the first answer choice and the second answer choice, the first visual element displayed concurrently with the second visual element and the third visual element, the third visual element displayed as at least part of a message transmitted via the messaging interface;

receiving a selection of the first visual element from the user via the predictive text interface of the messaging interface; and
performing the task, using the first answer choice, based upon the selection of the first visual element.

US Pat. No. 11,113,078

VIDEO MONITORING

Verizon Media Inc., New ...


1. A method, comprising:identifying a video on a user device comprising a visible display region and a non-visible region;
determining a grayscale video of the video and a color video of the video;
playing the grayscale video of the video and the color video of the video within the non-visible region of the user device;
applying grayscale percentages of the grayscale video played within the non-visible region to color pixels of the color video played within the non-visible region to create alpha color pixels; and
displaying, in the visible display region of the user device and based upon the alpha color pixels associated with the grayscale video and the color video, the video within a canvas overlaying content of a webpage displayed through a user interface.
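
As a small illustration of the compositing step in this claim, the NumPy sketch below assumes the grayscale frame encodes a per-pixel opacity percentage that becomes the alpha channel of the corresponding color pixel, producing the alpha color pixels that are drawn to a canvas overlay; the function name and frame shapes are illustrative.

```python
# Sketch: use grayscale values as the alpha channel for the color frame.
import numpy as np

def to_alpha_frame(gray_frame: np.ndarray, color_frame: np.ndarray) -> np.ndarray:
    """gray_frame: (H, W) uint8, color_frame: (H, W, 3) uint8 -> (H, W, 4) uint8."""
    alpha = gray_frame.astype(np.uint8)[..., np.newaxis]   # grayscale percentage as alpha
    return np.concatenate([color_frame, alpha], axis=-1)   # RGBA, ready for a canvas overlay

gray = np.full((2, 2), 128, dtype=np.uint8)    # ~50% opacity everywhere
color = np.zeros((2, 2, 3), dtype=np.uint8)
color[..., 0] = 255                            # red frame
print(to_alpha_frame(gray, color).shape)       # (2, 2, 4)
```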

US Pat. No. 11,113,077

NON-INVASIVELY INTEGRATED MAIN INFORMATION SYSTEM MODERNIZATION TOOLBOX


1. A computing system for controlling and augmenting information associated with Basic Units of a browser-accessible Main Information System, said computing system comprising computerized mechanisms for:a) recognizing Basic Units of said Main Information System and associating said Basic Units with a Basic URL;
b) identifying an active Basic Unit of said Main Information System when a user accesses an active URL through a web browser, wherein if said active URL is associated with a Basic Unit, it is considered a Basic URL associated with said Basic Unit;
c) displaying a toolbox, which is shown in a content area of a screen, wherein said screen also displays, in another content area, an active Basic URL of said browser-accessible Main Information System, said toolbox comprising icons for allowing the user to select an active tool from a plurality of available tools, said tools allowing the user to add, delete, and edit information associated with the active Basic Unit;
d) organizing, storing, and retrieving information extracted from said browser-accessible Main Information System and information generated by said tools in a database; and
e) showing a tool window separate from said toolbox, which is shown in a content area of said screen which also displays said toolbox and said active basic URL, wherein said tool window allows the user to add, delete, and edit information associated with said active Basic Unit by using the selected tool;

wherein said computing system is embodied in a computer having a processor, a screen and browser access;
wherein when the user is browsing a Basic URL which is already stored into said database, and selects any of the Toolbar's tools, said computing system retrieves from said database all the information associated with said Basic Unit for the selected tool, and displays said information in said tool window;
wherein recognizing Basic Units of said Main Information System and assigning a Basic URL to each of them comprises a Unique Pair for identifying each Basic Unit, said Unique Pair consisting of a Main Key and a Basic URL, wherein said Basic URL and said Main Key are extracted from the active Basic URL of the browser-accessible Main Information System, from content areas identified by the user in an initial setup process; and

wherein said tools are non-invasively integrated to said Main Information System, without modifying source code of said Main Information System which is only accessed by said computing system through a web browser.

US Pat. No. 11,113,076

COORDINATING POWER TRANSITIONS BETWEEN A SMART INTERCONNECT AND HETEROGENEOUS COMPONENTS

North Sea Investment Co. ...


1. An apparatus comprising:a semiconductor interconnect substrate electrically coupled to one or more components mounted thereon, wherein the semiconductor interconnect substrate includes within it a logic to handle sequence and/or control of power functions for the one or more components, wherein the semiconductor interconnect substrate includes within it a microcontroller unit; and
an interface coupled to the semiconductor interconnect substrate, wherein the interface is operable to carry a configuration command set to the one or more components in a normal operation mode subsequent to a power-up mode,
wherein the logic, to handle sequence and/or control of power functions for the one or more components, comprises: a power management unit; a power kernel and controller coupled to the power management unit, and a memory coupled to the power kernel and controller,
wherein the power kernel and controller are operable to sequence one or more reset signals to reset the microcontroller unit, and
wherein the semiconductor interconnect substrate includes within it a microelectromechanical system-based temperature compensated crystal oscillator coupled to the microcontroller unit.

US Pat. No. 11,113,075

LAUNCHING A MIDDLEWARE-BASED APPLICATION

INTERNATIONAL BUSINESS MA...


1. A computer-implemented method comprising:preparing, by a device operatively coupled to one or more processing units, a first execution environment for middleware to be included in a container hosted on a machine wherein a proxy agent instructs a container engine to generate a copy of the first execution environment, wherein the middleware is launched based on the copy of the first execution environment;
detecting, by the device, a request to schedule an application to be executed in the machine using the middleware;
in response to the request being detected, launching, by the device, the application within the container based on the first execution environment having been prepared; and
based on the request to schedule the application to be executed in the machine using the middleware, preparing, by the device, a second execution environment for the middleware before the launching of the application.

US Pat. No. 11,113,074

SYSTEM AND METHOD FOR MODEM-DIRECTED APPLICATION PROCESSOR BOOT FLOW

QUALCOMM Incorporated, S...


1. A method for a modem-directed application processor boot sequence in a portable computing device (“PCD”), the method comprising:initializing a direct memory access (“DMA”) engine for the boot sequence, wherein the DMA engine is configured to read data from a memory component;
initializing a crypto engine for the boot sequence, wherein the crypto engine is configured to calculate a hash according to a predetermined hash function;
reading with the DMA engine metadata and data segments associated with an image stored in the memory component and associated with the boot sequence;
calculating with the crypto engine hash values associated with the metadata and data segments; and
transitioning an application processor into an idle state for durations of time coinciding with at least one of reading with the DMA engine during the boot sequence and calculating with the crypto engine during the boot sequence.

US Pat. No. 11,113,073

DUAL MODE HARDWARE RESET

Micron Technology, Inc., ...


1. A host system comprising:a host device comprising a host processor;
a storage system comprising at least one non-volatile memory device, control circuitry coupled to the non-volatile memory device, and a storage register to store at least one control bit; and
a communication interface between the host device and the storage system; and
a reset interface between the host device and the storage system,
wherein the host device is configured to control a power mode of the storage system, the power mode comprising an operational mode and a low power mode in which the communication interface is disabled,
wherein, in the low power mode, the control circuitry is configured to provide one of a first reset or a second reset to transition the storage system from the low power mode to the operational power mode in response to a reset signal from the host device on the reset interface and a value of the at least one control bit,
wherein, in the operational power mode, the host processor is configured to control the value of the at least one control bit, and
wherein, in the low power mode, the control circuitry is configured to control the value of the at least one control bit.

US Pat. No. 11,113,072

BOOT PERSONALITY FOR NETWORK DEVICE

Arista Networks, Inc., S...


1. A method for managing a boot personality, the method comprising:while executing a first operating system on a network device in a first boot personality using a first processor, executing a first command to modify a configuration of a hardware component of the network device to cause the network device to be configured in a second boot personality;
performing a first reboot of the network device after executing the first command;
initializing, based on the configuration of the hardware component and during the first reboot, a second processor before the first processor; and
executing, after the initialization, a second operating system while the network device is in the second boot personality;
while executing the second operating system in the second boot personality, executing a second command to modify the configuration of the hardware component of the network device to cause the network device to be configured in the first boot personality;
performing a second reboot of the network device after executing the second command;
initializing, based on the configuration of the hardware component and during the second reboot, the first processor; and
executing, after the initialization, the first operating system on the first processor while the network device is in the first boot personality.

US Pat. No. 11,113,071

UNIFIED HYPERVISOR IMAGE FOR MULTIPLE PROCESSOR ISAS

VMware, Inc., Palo Alto,...


1. A method for booting a computer system, comprising:loading a first stage bootloader of a plurality of first stage bootloaders from a boot image based on a known configuration of the computer system;
executing the first stage bootloader to identify a selected bootbank of a plurality of bootbanks in the boot image based on the known configuration of the computer system;
executing, by the first stage bootloader, a second stage bootloader from the boot image with an instruction to boot from the selected bootbank; and
executing, by the second stage bootloader, a binary file in the selected bootbank.

US Pat. No. 11,113,070

AUTOMATED IDENTIFICATION AND DISABLEMENT OF SYSTEM DEVICES IN A COMPUTING SYSTEM

AMERICAN MEGATRENDS INTER...


1. A computer-implemented method, comprising:initiating execution of a firmware configured to perform a bootup process of a computing system having multiple system devices, the firmware retained in a non-volatile memory of the computing system;
accessing, by the firmware, program code that defines a procedure to identify a system device for disablement, wherein the accessing includes one of generating the program code or retrieving the program code from a memory element within the non-volatile memory;
sending, by the firmware, the program code to a baseboard management controller (BMC), wherein execution of the program code by the BMC generates data identifying one or more second system devices to be disabled;
accessing, by the firmware, second data from a memory device within the BMC;
determining, using the second data, that a first system device of the multiple system devices is to be disabled; and
disabling the first system device by the firmware.

US Pat. No. 11,113,069

IMPLEMENTING QUICK-RELEASE VLV MEMORY ACCESS ARRAY

HUAXIA GENERAL PROCESSOR ...


1. A method for implementing a quick-release Variable Length Vector (VLV) memory access array, comprising:each time when a pipeline restarts to refresh an out-of-order queue, and a number of times for sending an access request for an entry recorded in a sending counter associated with the entry equals to a number of times for receiving return for the entry recorded in a returning counter associated with the entry, maintaining an Identifier (ID) of the entry unchanged, wherein the ID is used for a next pushed request;
each time when the pipeline restarts to refresh the out-of-order queue, the number of times for sending the access request for the entry recorded in the sending counter does not equal to the number of times for receiving the return for the entry recorded in the returning counter, and mirror resources are not exhausted, executing operations comprising:releasing the entry,
copying the ID, the sending counter and the returning counter associated with the entry to another structure, and
selecting N IDs, each of which is in a non-busy status, from a free list, setting a busy bit of each of the N IDs, and filling the N IDs into an ID field of the entry, wherein N is an integer;

storing copied request information of at least one uncompleted request when the pipeline restarts to refresh the out-of-order queue using the mirror resources, the copied request information comprising the ID, the sending counter and the returning counter of each request;
after the returning counter is copied to the mirror resources, continuing monitoring at least one return response, increasing a counting number of the returning counter based on a count of the at least one return response, releasing a current mirror resource and the ID when the counting number of the sending counter equals the counting number of the returning counter, and updating the free list according to the released ID;
when the currently released mirror resource is the only available resource, copying the released ID back to an entry tagged to be restarted in a same clock cycle of releasing this ID to exchange information with the entry; and
maintaining ID allocation and recycling through the free list; allocating an ID each time when a request is pushed; recycling the ID when the entry is normally released, or recycling the ID when the request is completed in the mirror resources.

US Pat. No. 11,113,068

PERFORMING FLUSH RECOVERY USING PARALLEL WALKS OF SLICED REORDER BUFFERS (SROBS)

Microsoft Technology Lice...


1. A register mapping circuit in a processor device, the register mapping circuit comprising:a rename map table (RMT) comprising a plurality of RMT entries each representing a mapping of a logical register number (LRN) among a plurality of LRNs to a physical register number (PRN) among a plurality of PRNs; and
a reorder buffer (ROB) comprising a plurality of ROB entries;
a sliced reorder buffer (SROB) subdivided into a plurality of SROB slices each comprising a plurality of SROB slice entries, wherein each SROB slice is configured to track only uncommitted instructions that write to a respective LRN among the plurality of LRNs;
the register mapping circuit configured to:detect, within an execution pipeline of the processor device, an uncommitted instruction comprising a write instruction to a destination LRN among the plurality of LRNs;
allocate, to the uncommitted instruction, an SROB slice entry among the plurality of SROB slice entries of an SROB slice corresponding to the destination LRN among the plurality of SROB slices of the SROB;
receive an indication of a pipeline flush from a target instruction within the execution pipeline; and
responsive to receiving the indication of the pipeline flush, restore the plurality of RMT entries to a corresponding plurality of prior mapping states, based on sequential accesses, performed in parallel, of the plurality of SROB slices corresponding to LRNs of the plurality of RMT entries.


US Pat. No. 11,113,067

SPECULATIVE BRANCH PATTERN UPDATE

CENTAUR TECHNOLOGY, INC.,...


1. A microprocessor, comprising:first logic configured to detect that a cache address matches at least one of two previous cache addresses; and
second logic configured to adjust a branch pattern used for conditional branch prediction based on the match and combine the cache address with the adjusted branch pattern to form a conditional branch predictor address.
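
A minimal sketch of the two pieces of logic in this claim: detecting that a cache address matches one of the two previous cache addresses, and combining the (adjusted) branch pattern with the address to form the conditional predictor address. The XOR-based combination and shift-based adjustment are common illustrative choices, not the patent's specific circuits.

```python
# Sketch: match against the two previous cache addresses, adjust the pattern,
# and combine pattern with address to index the conditional branch predictor.
PATTERN_BITS = 12

def predictor_address(cache_addr: int, pattern: int, prev_addrs: list) -> tuple:
    if cache_addr in prev_addrs[-2:]:                                  # match detection
        pattern = ((pattern << 1) | 1) & ((1 << PATTERN_BITS) - 1)     # adjusted branch pattern
    index = (cache_addr ^ pattern) & ((1 << PATTERN_BITS) - 1)         # combine address and pattern
    return index, pattern

idx, pat = predictor_address(0x3A4, 0b0101, [0x100, 0x3A4])
print(hex(idx), bin(pat))
```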

US Pat. No. 11,113,066

PREDICTING A BRANCH INSTRUCTION CLASSIFIED AS SIMPLE OR HARD TO PREDICT BASED ON A CONFIDENCE COUNTER IN A BRANCH TYPE TABLE

International Business Ma...


1. A processor comprising:a processor pipeline comprising one or more execution units configured to execute one or more branch instructions;
a branch predictor coupled to the processor pipeline and configured to predict an outcome of each of the one or more branch instructions; and
a branch classification unit associated with the processor pipeline and the branch predictor, and configured to classify each of the one or more branch instructions as at least one of the following: a simple branch and a hard-to-predict branch, wherein the branch classification unit comprises a direct-mapped branch type table (BTT) and a branch classification table (BCT),
wherein the BTT comprises one or more entries, wherein each of the one or more entries further includes a branch type BTT field and a confidence counter BTT field, and
wherein the branch classification unit is configured to, in response to detecting a first branch instruction, determine a type of the first branch instruction by:extracting an index from an instruction address of the first branch instruction;
using the index to identify a BTT entry in the BTT corresponding to the first branch instruction;
determining whether a value in the confidence counter BTT field in the identified BTT entry is greater than 0; and
if the value in the confidence counter BTT field in the identified BTT entry is greater than 0:determining the type of the first branch instruction using a value in the branch type BTT field of the identified BTT entry;
predicting, by the branch predictor, a predicted outcome of the first branch instruction; and,
updating the value of the confidence counter BTT field in the identified BTT entry based on an actual outcome of the first branch instruction and the predicted outcome of the first branch instruction using one or more of the following rules:

increment the value in the confidence counter BTT field in the identified BTT entry by 1 if the branch type BTT field in the identified BTT entry indicates simple and the predicted outcome of the first branch instruction matches the actual outcome of the first branch instruction,
increment the value in the confidence counter BTT field in the identified BTT entry by 1 if the branch type BTT field in the identified BTT entry indicates hard to predict and the predicted outcome of the first branch instruction does not match the actual outcome of the first branch instruction,
decrement the value in the confidence counter BTT field in the identified BTT entry by 1 if the branch type BTT field in the identified BTT entry indicates simple and the predicted outcome of the first branch instruction does not match the actual outcome of the first branch instruction, and
decrement the value in the confidence counter BTT field in the identified BTT entry by 1 if the branch type BTT field in the identified BTT entry indicates hard to predict and the predicted outcome of the first branch instruction matches the actual outcome of the first branch instruction.
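
The four update rules in this claim translate directly into a small function, sketched below; the saturating bounds and taken/not-taken encoding are assumptions for illustration. Agreement with a "simple" classification or disagreement with a "hard-to-predict" classification raises confidence, and the opposite lowers it.

```python
# Direct transcription of the four confidence-counter update rules.
def update_confidence(branch_type: str, predicted_taken: bool, actual_taken: bool,
                      confidence: int, max_value: int = 7) -> int:
    correct = (predicted_taken == actual_taken)
    if (branch_type == "simple" and correct) or \
       (branch_type == "hard_to_predict" and not correct):
        return min(confidence + 1, max_value)   # increment cases
    return max(confidence - 1, 0)               # decrement cases

print(update_confidence("simple", True, True, 3))           # 4
print(update_confidence("hard_to_predict", True, True, 3))  # 2
```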


US Pat. No. 11,113,065

SPECULATIVE INSTRUCTION WAKEUP TO TOLERATE DRAINING DELAY OF MEMORY ORDERING VIOLATION CHECK BUFFERS

Advanced Micro Devices, I...


1. A method for speculatively executing load-dependent instructions, comprising:detecting that a memory ordering consistency queue is full for a completed load instruction;
storing data loaded by the completed load instruction into a storage location for storing data when the memory ordering consistency queue is full;
speculatively executing instructions that are dependent on the completed load instruction;
in response to a slot becoming available in the memory ordering consistency queue, replaying the completed load instruction as a replayed load instruction; and
in response to receiving loaded data for the replayed load instruction, testing for a data mis-speculation by comparing the loaded data for the replayed load instruction with the data loaded by the completed load instruction that is stored in the storage location.

US Pat. No. 11,113,064

AUTOMATED CONCURRENCY AND REPETITION WITH MINIMAL SYNTAX

SAS INSTITUTE INC., Cary...


1. An apparatus comprising a first processor core and a storage to store instructions that, when executed by the first processor core, cause the first processor core to perform operations comprising:receiving a request to execute executable instructions of application code that comprises an instruction block and a trigger instruction, wherein:the instruction block comprises executable instructions operable to cause a second processor core to perform operations comprising:reading a single row of data values from a first data structure; and
outputting at least one data value generated by a performance of a function using the single row as an input;

the first data structure is divided into multiple portions that each comprise multiple rows; and
the trigger instruction serves to provide an indication that:multiple instances of the instruction block that correspond to the multiple portions of the first data structure are to be executed concurrently; and
each instance of the instruction block is to be executed repetitively until all rows of the corresponding portion of the first data structure are used as input to the function, wherein the repetitive execution of each instance commences independently of the other instances and completes independently of the other instances; and


in response to the request, and in response to identification of the instruction block and the trigger instruction as present within the application code, performing operations comprising:generating multiple instances of a support block corresponding to the multiple instances of the instruction block, wherein each instance of the support block comprises executable instructions operable to cause the second processor core to execute a corresponding instance of the instruction block repetitively until all rows of the corresponding portion of the first data structure are used as input to the function;
assigning each instance of the instruction block and corresponding support block to be executed by a processor core of a set of processor cores, wherein the set of processor cores comprises the second processor core; and
providing each instance of the instruction block with access to the corresponding portion of the first data structure.


US Pat. No. 11,113,063

METHOD AND APPARATUS TO CONTROL THE USE OF HIERARCHICAL BRANCH PREDICTORS BASED ON THE EFFECTIVENESS OF THEIR RESULTS

SAMSUNG ELECTRONICS CO., ...


1. An apparatus comprising:a main-branch target buffer (BTB);
a micro-BTB separate from and smaller than the main-BTB, and configured to produce prediction information associated with a branching instruction;
a micro-BTB confidence counter configured to measure a correctness of the prediction information produced by the micro-BTB;
a micro-BTB misprediction rate counter configured to measure a rate of mispredictions produced by the micro-BTB; and
a micro-BTB enablement circuit configured to enable a usage of the micro-BTB's prediction information, based, at least in part, upon the values of the micro-BTB confidence counter and the micro-BTB misprediction rate counter;
wherein both the micro-BTB confidence counter and the micro-BTB misprediction rate counter are updated in response to a commitment of a branch instruction.
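
The enablement decision in this claim can be sketched as a pair of counters updated at branch commit and a gate over both; the thresholds and increment amounts below are illustrative assumptions, not values from the patent.

```python
# Sketch: use micro-BTB predictions only when confidence is high and the
# misprediction rate is low; update both counters when a branch commits.
class MicroBTBEnable:
    def __init__(self, conf_threshold=8, mispredict_threshold=4):
        self.confidence = 0
        self.mispredict_rate = 0
        self.conf_threshold = conf_threshold
        self.mispredict_threshold = mispredict_threshold

    def on_branch_commit(self, micro_btb_correct: bool):
        # Both counters are updated in response to a branch commitment.
        if micro_btb_correct:
            self.confidence += 1
            self.mispredict_rate = max(0, self.mispredict_rate - 1)
        else:
            self.confidence = max(0, self.confidence - 2)
            self.mispredict_rate += 1

    def use_micro_btb(self) -> bool:
        return (self.confidence >= self.conf_threshold and
                self.mispredict_rate < self.mispredict_threshold)

e = MicroBTBEnable()
for _ in range(10):
    e.on_branch_commit(micro_btb_correct=True)
print(e.use_micro_btb())   # True once confidence has built up
```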

US Pat. No. 11,113,062

INSERTING PREDEFINED PAD VALUES INTO A STREAM OF VECTORS

TEXAS INSTRUMENTS INCORPO...


1. A method of operating a streaming engine in a computer system, the method comprising:receiving stream parameters into control logic of the streaming engine to define a multidimensional array, wherein the stream parameters define a size for each dimension of the multidimensional array and a pad value indicator;
fetching data from a memory coupled to the streaming engine responsive to the stream parameters;
forming a stream of vectors from the data fetched from memory for the multidimensional array responsive to the stream parameters;
forming a padded stream vector for the stream of vectors that includes a pad value specified by the pad value indicator without fetching respective pad data from the memory; and
suppressing a respective access to the memory while forming the padded stream vector.
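
A simplified, one-dimensional sketch of the padding behavior in this claim: when a vector extends past the defined dimension size, the remaining lanes are filled with the pad value from the stream parameters instead of being fetched from memory. The function shape and parameters are assumptions for illustration.

```python
# Sketch: form padded stream vectors without memory accesses for pad lanes.
def form_stream(data: list, dim_size: int, vector_len: int, pad_value: int) -> list:
    vectors = []
    for start in range(0, dim_size, vector_len):
        lanes = []
        for i in range(start, start + vector_len):
            lanes.append(data[i] if i < dim_size else pad_value)  # pad lanes skip memory
        vectors.append(lanes)
    return vectors

print(form_stream([1, 2, 3, 4, 5], dim_size=5, vector_len=4, pad_value=0))
# [[1, 2, 3, 4], [5, 0, 0, 0]]
```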

US Pat. No. 11,113,061

REGISTER SAVING FOR FUNCTION CALLING

Advanced Micro Devices, I...


1. A method for saving registers in the event of a function call, the method comprising:modifying a program including a block of code designated as a calling code that calls a function, the modifying including:
modifying the calling code to:set a register usage mask indicating which registers are in use at the time of the function call; and

modifying the function to:combine the information of the register usage mask with information indicating registers used by the function to generate an indication of which registers are to be saved; and
save the registers to be saved.
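
The combination step of this claim amounts to intersecting two register masks, as sketched below; the bitmask representation and register numbering are illustrative choices. Only registers that are both live at the call site (register usage mask) and written by the callee need to be saved.

```python
# Sketch: combine the caller's usage mask with the callee's used-register mask.
def registers_to_save(caller_usage_mask: int, callee_used_mask: int) -> list:
    combined = caller_usage_mask & callee_used_mask
    return [r for r in range(combined.bit_length()) if combined & (1 << r)]

caller = 0b00101101   # registers live at the time of the function call
callee = 0b00000111   # registers the called function writes
print(registers_to_save(caller, callee))   # [0, 2] -> save r0 and r2
```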


US Pat. No. 11,113,060

COMBINING STATES OF MULTIPLE THREADS IN A MULTI THREADED PROCESSOR

GRAPHCORE LIMITED, Brist...


1. A processing apparatus comprising:one or more processing modules each comprising a respective execution unit for executing machine code instructions, each machine code instruction being an instance of a predefined set of instruction types in an instruction set, each instruction type in the instruction set being defined by a corresponding opcode and zero or more operand fields for taking zero or more operands;
wherein the one or more processing modules are operable to execute a plurality of parallel or concurrent threads; and
the processing apparatus further comprises a storage location for storing an aggregated exit state of said plurality of threads;
wherein the instruction set comprises an exit instruction for inclusion in each of said plurality of threads, the exit instruction taking at least an individual exit state of the respective thread as an operand; and
wherein each of the one or more execution units is configured so as, in response to the opcode of the exit instruction, to terminate the respective thread, and also to cause the individual exit state specified in the operand to contribute to the aggregated exit state in said storage location.

US Pat. No. 11,113,059

DYNAMIC ALLOCATION OF EXECUTABLE CODE FOR MULTI-ARCHITECTURE HETEROGENEOUS COMPUTING

Next Silicon Ltd, Tel Av...


1. An apparatus for executing a software program, the apparatus comprising a plurality of processing units and at least one hardware processor adapted for:in an intermediate representation of the software program, where the intermediate representation comprises a plurality of blocks, each associated with one of a plurality of execution blocks of the software program and comprising a set of intermediate instructions, identifying a calling block and a target block, where the calling block comprises at least one control-flow intermediate instruction to execute at least one target intermediate instruction of the target block;
generating a target set of executable instructions using the target block;
generating a calling set of executable instructions using the calling block and using at least one computer control instruction for invoking the target set of executable instructions, when the calling set of executable instructions is executed by a calling processing unit and the target set of executable instructions is executed by a target processing unit;
configuring the calling processing unit for executing the calling set of executable instructions; and
configuring the target processing unit for executing the target set of executable instructions.

US Pat. No. 11,113,058

RECONFIGURABLE PROCESSING UNIT

Facebook, Inc., Menlo Pa...


1. A central processing unit, comprising:a processing unit configured to handle a predefined instruction set;
a reconfigurable logic unit configured to handle a specialized instruction set including a first macro instruction, wherein the first macro instruction has been identified for inclusion in the specialized instruction set based on a determination of frequencies of occurrence of patterns of instructions belonging to the predefined instruction set based on use of a Poincaré map that plots sequences of instructions using a first axis representing a first order instruction in a specific sequence, a second axis representing a second order instruction in the specific sequence, and a third axis representing a third order instruction in the specific sequence; and
a control unit configured to:prefetch instructions to be executed;
identify sets of instructions included in the prefetched instructions, wherein each set included in the identified sets is able to be replaced with a corresponding macro instruction for potential execution by the reconfigurable logic unit;
identify a sequence length of each of the identified sets of instructions and select a set of instructions in the identified sets determined to have a longest sequence length for replacement with the first macro instruction to be executed by the reconfigurable logic unit;
after selecting the set of instructions determined to have the longest sequence length, identify a next longest set of instructions in the sets of instructions for replacement with a second macro instruction to be executed by the reconfigurable logic unit; and
issue the first macro instruction and the second macro instruction to the reconfigurable logic unit rather than issuing the corresponding replaced instructions to the processing unit.
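
The frequency analysis behind macro-instruction selection can be sketched by counting ordered instruction triples (the three axes of the Poincare map) over a trace and preferring the longest identified sequence for replacement; the dictionary-based counting below is an illustrative stand-in for the plotted map, not the patented mechanism.

```python
# Sketch: count ordered triples of consecutive instructions and prefer the
# longest candidate sequence for macro-instruction replacement.
from collections import Counter

def triple_frequencies(trace: list) -> Counter:
    return Counter(tuple(trace[i:i+3]) for i in range(len(trace) - 2))

def pick_longest(candidates: list) -> list:
    # Select the candidate replacement with the longest sequence length first.
    return max(candidates, key=len)

trace = ["load", "mul", "add", "load", "mul", "add", "store"]
print(triple_frequencies(trace).most_common(1))   # [(('load', 'mul', 'add'), 2)]
print(pick_longest([["mul", "add"], ["load", "mul", "add"]]))
```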


US Pat. No. 11,113,057

STREAMING ENGINE WITH CACHE-LIKE STREAM DATA STORAGE AND LIFETIME TRACKING

Texas Instruments Incorpo...


1. A device comprising:a memory component configured to couple to an instruction decoder, to a memory, and to a functional unit, wherein the memory component includes:a buffer operable to store a set of data elements;
a reference queue operable to store a set of address tags associated with the set of data elements in the buffer;
an address generator operable to:receive an instruction from the instruction decoder; and
in response to the instruction, generate a set of addresses associated with the instruction;

a storage allocation component operable to:receive the set of addresses;
compare a first address of the set of addresses to the set of address tags of the reference queue;
in response to the first address matching a first address tag of the set of address tags, inhibit retrieval of data corresponding to the first address from the memory;
compare a second address of the set of addresses to the set of address tags of the reference queue; and
in response to the second address not matching the set of address tags:allocate a cache line of the buffer for the second address;
store an address tag for the second address in the reference queue;
cause data corresponding to the second address to be retrieved from the memory; and
cause the data corresponding to the second address to be stored in the allocated cache line, wherein the memory component is operable to provide the data corresponding to the first address and the data corresponding to the second address to the functional unit; and


a command queue configured to couple to the memory and operable to store a set of commands to retrieve data associated with the set of addresses from the memory, wherein the storage allocation component is operable to inhibit retrieval of data corresponding to the first address from the memory by cancelling a command of the set of commands stored in the command queue that is associated with the first address.
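
The storage-allocation decision in the claim above reduces to a tag comparison against the reference queue: a hit cancels the pending memory command, a miss allocates a line and lets the fetch proceed. A minimal Python sketch under those assumptions; the class and field names are invented for illustration.

class StreamStorage:
    """Toy model of the allocation decision: hit -> cancel the pending memory
    command; miss -> allocate a line, record a tag, keep the command."""

    def __init__(self):
        self.reference_queue = set()   # address tags for data already buffered
        self.buffer = {}               # tag -> cache line (data elements)
        self.command_queue = []        # pending memory fetch commands

    def handle_address(self, addr):
        if addr in self.reference_queue:
            # Hit: inhibit the redundant fetch by cancelling its command.
            self.command_queue = [c for c in self.command_queue if c != addr]
            return "reuse"
        # Miss: allocate a line, remember the tag, and let the fetch proceed.
        self.reference_queue.add(addr)
        self.buffer[addr] = None       # filled in when the memory returns data
        self.command_queue.append(addr)
        return "fetch"

engine = StreamStorage()
print(engine.handle_address(0x200))    # miss -> "fetch", command queued
print(engine.handle_address(0x200))    # hit  -> "reuse", command cancelled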


US Pat. No. 11,113,056

TECHNIQUES FOR PERFORMING STORE-TO-LOAD FORWARDING

Advanced Micro Devices, I...


1. A method for performing store-to-load forwarding for a load instruction, the method comprising:determining a virtual address for data to be loaded for the load instruction;
identifying a matching store instruction from one or more store instruction memories by comparing a virtual-address-based comparison value for the load instruction to one or more virtual-address-based comparison values of one or more store instructions;
placing the load instruction into a load waiting buffer in response to detecting that the matching store instruction has not yet received an address translation when initiating validation of the load instruction;
and
validating the load instruction based on a comparison between a physical address of the load instruction and a physical address of the matching store instruction.
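
The claim above describes matching on a virtual-address-based comparison value, parking the load when the store's translation is missing, and validating with physical addresses. A rough Python sketch of that sequence; the dictionary fields and return values are assumptions made for illustration.

def try_forward(load, store_queue, load_waiting_buffer):
    """Match on a virtual-address-based value, park the load if the store has
    no translation yet, and validate with physical addresses once known."""
    for store in reversed(store_queue):                 # youngest matching store wins
        if store["va_cmp"] == load["va_cmp"]:
            if store["pa"] is None:                     # no address translation yet
                load_waiting_buffer.append(load)
                return "waiting"
            if store["pa"] == load["pa"]:               # physical-address validation
                return store["data"]
            return "no-forward"                         # false virtual-address match
    return "no-forward"

stores = [{"va_cmp": 0xABC, "pa": 0x1ABC, "data": 42}]
print(try_forward({"va_cmp": 0xABC, "pa": 0x1ABC}, stores, []))   # -> 42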

US Pat. No. 11,113,055

STORE INSTRUCTION TO STORE INSTRUCTION DEPENDENCY

INTERNATIONAL BUSINESS MA...


1. A computer-implemented method for creating a store instruction dependency in a processor pipeline comprising:detecting, by a processor, a second store instruction subsequent to a first store instruction in an instruction stream, wherein the first store instruction and the second store instruction respectively include operand address information, and wherein there is a memory image overlap in an issue queue between the operand address information of the first store instruction and operand address information of a load instruction;
comparing, by the processor, the operand address information of the first store instruction with the operand address information of the second store instruction to determine whether there is a match between the operand address information of the second store instruction and the operand address information of the first store instruction; and
writing, by the processor, a scoreboard bit to a scoreboard in response to determining a match between the operand address information of the second store instruction and the operand address information of the first store instruction;
analyzing, by the processor and prior to issuance of the second store instruction, the scoreboard to detect a determined match between the operand address information of the second store instruction and the operand address information of the first store instruction; and
delaying, by the processor, the second store instruction in the processor pipeline until a dependency is created between the second store instruction and the first store instruction,
wherein comparing the operand address information of the first store instruction with the operand address information of the second store instruction comprises:
translating a vector representing a base register address, an index register address, and a displacement register address to a memory image vector for the first store instruction,
translating a vector representing a base register address, an index register address, and a displacement register address to a memory image vector for the second store instruction, and
comparing the memory image vector for the first store instruction to the memory image vector for the second store instruction.
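
The comparison step in the claim above hinges on translating each store's base/index/displacement registers into a memory-image vector and testing the two vectors for a match before writing the scoreboard bit. A toy Python sketch; the one-bit-per-slot image and the scoreboard dictionary are simplifying assumptions.

def memory_image(base, index, displacement, width=16):
    """Toy memory-image vector: one bit per tracked address slot (assumption)."""
    return 1 << ((base + index + displacement) % width)

def scoreboard_dependency(first_store, second_store, scoreboard):
    """If the two stores' memory images match, write a scoreboard bit so the
    second store is delayed until a dependency on the first is created."""
    if memory_image(*first_store) & memory_image(*second_store):
        scoreboard["match"] = True      # second store must wait
    return scoreboard

print(scoreboard_dependency((1, 2, 3), (4, 1, 1), {}))   # -> {'match': True}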

US Pat. No. 11,113,054

EFFICIENT HARDWARE INSTRUCTIONS FOR SINGLE INSTRUCTION MULTIPLE DATA PROCESSORS: FAST FIXED-LENGTH VALUE COMPRESSION

Oracle International Corp...


1. One or more non-transitory, computer-readable media storing one or more instructions that, when executed by one or more processors, cause producing a vector of variable-length values from a vector of fixed-length values;wherein a first register stores a plurality of fixed-length values from the vector of fixed-length values, wherein each fixed-length value in the vector of fixed-length values is a variable-length value that has been padded, as needed, to achieve a particular fixed length,
wherein a series of single instruction multiple data (SIMD) subregisters of a second register stores a plurality of length values from a vector of lengths, wherein each fixed-length value in the vector of fixed-length values corresponds to a length value in the vector of lengths, and wherein each length value in the vector of lengths indicates an unpadded length of a corresponding fixed-length value in the vector of fixed-length values;
wherein the one or more instructions, when executed by the one or more processors, cause:storing, in a third register, the plurality of fixed-length values into the vector of variable-length values, wherein storing the vector of variable-length values in the third register is based on the vector of lengths, wherein each variable-length value in the vector of variable-length values is unpadded, and wherein a pointer specifies a memory address of the vector of variable length values stored in the third register;
determining, based on the vector of lengths, a particular key in an offset lookup table;
determining that a particular offset is indexed in the offset lookup table by the particular key;
updating the pointer based on the particular offset.
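
The claim above amounts to stripping per-value padding using the vector of lengths and advancing an output pointer by an offset derived from those lengths. A scalar Python stand-in for the SIMD registers; the byte layout and the summed offset are assumptions for illustration.

def compress_fixed_to_variable(fixed_values, lengths, fixed_len=4):
    """Strip per-value padding using the lengths vector and advance an output
    pointer by the total unpadded length, mimicking the offset-lookup step."""
    out = bytearray()
    for value, n in zip(fixed_values, lengths):
        assert len(value) == fixed_len and n <= fixed_len
        out += value[:n]                       # keep only the unpadded bytes
    offset = sum(lengths)                      # what the offset table would yield
    return bytes(out), offset

packed, advance = compress_fixed_to_variable(
    [b"ab\x00\x00", b"cde\x00", b"f\x00\x00\x00"], [2, 3, 1])
print(packed, advance)                         # b'abcdef' 6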


US Pat. No. 11,113,053

DATA ELEMENT COMPARISON PROCESSORS, METHODS, SYSTEMS, AND INSTRUCTIONS

Intel Corporation, Santa...


1. A processor comprising:a decode unit on a die to decode an instruction, the instruction to indicate a first source packed data operand that is to have a first plurality of data elements, a second source packed data operand that is to have a second plurality of data elements, a first mask register, and a second mask register; and
an execution unit on the die and coupled with the decode unit, the execution unit to perform the instruction to generate and store:a first result in the first mask register, the first result to include a different mask element for each corresponding data element in the first source packed data operand in a same relative position, each mask element of the first result to indicate whether the corresponding data element in the first source packed data operand equals any of the data elements in the second source packed data operand; and
a second result in the second mask register, the second result to include a different mask element for each corresponding data element in the second source packed data operand in a same relative position, each mask element of the second result to indicate whether the corresponding data element in the second source packed data operand equals any of the data elements in the first source packed data operand.
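
The instruction in the claim above produces two masks: membership of each element of the first operand in the second, and vice versa. A scalar Python sketch of that semantics, with the packed registers and mask registers modeled as plain lists.

def two_way_membership_masks(a, b):
    """For each element of a: is it anywhere in b?  And vice versa."""
    mask_a = [int(x in b) for x in a]   # first result (one mask element per a[i])
    mask_b = [int(y in a) for y in b]   # second result (one mask element per b[i])
    return mask_a, mask_b

print(two_way_membership_masks([1, 2, 3, 4], [4, 5, 2, 6]))
# -> ([0, 1, 0, 1], [1, 0, 1, 0])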


US Pat. No. 11,113,052

GENERATION APPARATUS, METHOD FOR FIRST MACHINE LANGUAGE INSTRUCTION, AND COMPUTER READABLE MEDIUM

FUJITSU LIMITED, Kawasak...


1. A generation apparatus comprising:a memory configured to store variable value information indicating variable value candidates for each variable name, the variable value candidates being determined for each architecture; and
a processor coupled to the memory and the processor configured to:
generate a first machine language instruction corresponding to a first code in response to receiving designation of the first code included in codes generated by a compiler, and
when the generated first machine language instruction includes a first variable name of a specific type, by reference to the variable value information stored in the memory, perform generation of a plurality of machine language instructions based on a plurality of pieces of variable value information associated with each of one or more variable names included in the generated first machine language instruction,
when the generated first machine language instruction includes a second variable name of a type that is not the specific type, perform generation of a plurality of machine language instructions based on variable value information whose type is not the specific type included in the generated first machine language instruction.

US Pat. No. 11,113,051

PROCESSING CORE WITH METADATA ACTUATED CONDITIONAL GRAPH EXECUTION

Tenstorrent Inc., Toront...


1. A computer-implemented method for a conditional execution of a directed graph comprising:evaluating a set of output data from an execution engine;
generating metadata for a first data tile based on the evaluating of the set of output data;
storing, subsequent to the evaluating of the set of output data, a first data tile in a random access memory, wherein the set of output data is stored as a first set of data elements in the first data tile, wherein the first data tile includes a header, and wherein the metadata is stored in the header in the random access memory;
storing a second data tile in the random access memory, wherein the second data tile includes a second set of data elements;
fetching, subsequent to the storing of the first data tile and metadata in the random access memory, an instruction, for execution by the execution engine, wherein execution of the instruction requires an arithmetic logic operation using: (i) an arithmetic logic unit; (ii) a first data element in the first set of data elements; and (iii) a second data element in the second set of data elements;
evaluating the metadata from the header; and
conditionally executing the arithmetic logic operation based on the evaluating of the metadata;
wherein a conditionally executed output of the arithmetic logic unit resulting from the conditional execution of the arithmetic logic operation is not equal to a standard output of the arithmetic logic unit resulting from a standard execution of the arithmetic logic operation.
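
The claim above gates an arithmetic operation on metadata stored in a tile header, so the conditionally executed output can differ from a full execution. A small Python/NumPy sketch under one assumed policy (skip the multiply when an operand tile is negligibly small); the header fields and threshold are illustrative.

import numpy as np

class Tile:
    """Data tile whose header stores metadata generated when the tile is written."""
    def __init__(self, data):
        self.data = np.asarray(data, dtype=np.float32)
        self.header = {"max_abs": float(np.max(np.abs(self.data)))}   # metadata

def conditional_multiply(a, b, eps=1e-3):
    """Gate the ALU work on header metadata: when one operand tile is known to
    be negligibly small, skip the multiply and emit zeros, so the conditionally
    executed output differs from the standard (fully computed) output."""
    if a.header["max_abs"] < eps or b.header["max_abs"] < eps:
        return np.zeros_like(a.data)               # suppressed arithmetic
    return a.data * b.data                         # standard execution

print(conditional_multiply(Tile([1e-5, 2e-5]), Tile([3.0, 4.0])))   # -> [0. 0.]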

US Pat. No. 11,113,050

APPLICATION ARCHITECTURE GENERATION

ACCENTURE GLOBAL SOLUTION...


1. An application architecture generation apparatus comprising:at least one hardware processor;
an input receiver, executed by the at least one hardware processor, toascertain, for a project, an input that includes project information, component information, and target information;

a command line input analyzer, executed by the at least one hardware processor, toparse the project information to determine whether the project is an existing project or a new project;

a configuration manager, executed by the at least one hardware processor, to generate a component list from the component information;
an architecture modeler, executed by the at least one hardware processor, to ascertain components from the component list based on installation paths;
a mapper, executed by the at least one hardware processor, to map each of the ascertained components to a corresponding target determined from the target information and to a template representing a structure for an application with respect to source code, by attaching, based on the installation paths identified in a configuration file associated with the project, a file associated with each of the ascertained components to the corresponding target;
a dependency manager, executed by the at least one hardware processor, to analyze a dependency for each of the ascertained components relative to at least one other component of the ascertained components; and
an integrated output generator, executed by the at least one hardware processor, to generate, based on the mapping and the analyzed dependency, an integrated output that includes an architecture for the application associated with the project.

US Pat. No. 11,113,049

DEPLOYING APPLICATIONS IN A COMPUTING ENVIRONMENT

Red Hat, Inc., Raleigh, ...


1. A method, comprising:determining, by a processing device, that a first computing device has been added to a computing environment, the computing environment comprising a plurality of computing devices executing a plurality of benchmark applications and a first application, the plurality of benchmark applications to execute one or more tasks performed by the first application, wherein the first computing device comprises a first component that is not included in the plurality of computing devices;
in response to determining that the first computing device has been added to the computing environment, recompiling the plurality of benchmark applications to execute on the first computing device in view of the first component;
determining, by the processing device, whether a functionality of the first application may be improved in view of the plurality of benchmark applications as executed on the first computing device;
in response to determining that the functionality of the first application may be improved, recompiling the first application for execution on the first computing device; and
in response to determining that the functionality of the first application may not be improved, refraining from recompiling the first application for execution on the first computing device.
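
A rough sketch of the final decision step in the claim above: recompile the application for the new device only if the recompiled benchmarks improved, otherwise refrain. The speedup inputs and the 10% threshold are assumptions; the patent does not specify how improvement is measured.

def decide_recompile(benchmark_speedups, threshold=1.10):
    """Recompile the first application for the new device only if any
    recompiled benchmark shows a meaningful improvement; otherwise refrain."""
    improved = any(s >= threshold for s in benchmark_speedups)
    return "recompile" if improved else "refrain"

print(decide_recompile([1.02, 1.25, 0.98]))   # -> 'recompile'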

US Pat. No. 11,113,048

UTILIZING ARTIFICIAL INTELLIGENCE AND MACHINE LEARNING MODELS TO REVERSE ENGINEER AN APPLICATION FROM APPLICATION ARTIFACTS

Accenture Global Solution...


1. A method, comprising:receiving, by a device, input data identifying user stories, test case documents, event logs, and application logs associated with an application;
processing, by the device, the input data, with a machine learning model, to determine application change data and application overview data associated with the application;
performing, by the device using a natural language tool kit, natural language processing on the user stories and the test case documents, identified in the input data, to generate a first state diagram associated with the application;
processing, by the device, the event logs identified in the input data, with a heuristic miner model, to generate a second state diagram associated with the application;
processing, by the device, the application logs identified in the input data, with a clustering model, to generate a volumetric analysis associated with the application;
performing, by the device and using the natural language tool kit, post processing of the first state diagram, the second state diagram, the volumetric analysis, the application change data, and the application overview data, to remove duplicate data and unmeaningful data and to generate modified outputs,wherein the modified outputs include one or more of:a modified first state diagram,
a modified second state diagram,
a modified volumetric analysis,
modified application change data, or
modified application overview data; and


performing, by the device, one or more actions based on the modified outputs,wherein the one or more actions includes modifying the application.


US Pat. No. 11,113,047

SYSTEMS AND PROCESSES OF ACCESSING BACKEND SERVICES WITH A MOBILE APPLICATION

KONY, INC., Orlando, FL ...


1. A method, comprising:providing a singly deployed mobile application by deploying a mobile application a single time on a mobile device during a development lifecycle of the singly deployed mobile application; and
pointing the singly deployed mobile application to selected different environments in different service endpoint destinations during the development lifecycle of the singly deployed mobile application, without recompiling, based on one or more application policies associated with at least the singly deployed mobile application,
wherein the application policies are provided to the singly deployed mobile application by a policy server in a management computing system and are based on a context in which a user operates, without the use of a user identity,
wherein the singly deployed mobile application includes a properties file, stored locally on the mobile device, that contains a list of the service endpoint destinations to allow direct access by the mobile device to the different service endpoint destinations during the development lifecycle of the singly deployed mobile application using the properties file,
wherein the direct access to the different service endpoint destinations is provided via a connection between the mobile device and the different service endpoint destinations which does not include the management computing system,
wherein the singly deployed mobile application uses the application policies provided by the policy server to determine which service endpoint destinations can be used based on context only, without the use of the user identity, from the list of service endpoint destinations contained in the properties file stored locally on the mobile device,
wherein the context includes at least one of time of day and location,
wherein the list of the service endpoint destinations accessible by the singly deployed mobile application and included in the properties file includes QA servers, staging servers, and production servers; and
wherein the singly deployed mobile application moves its way through the service endpoint destinations without being rebuilt or recompiled, and further comprising redefining the application policies to change the list of service endpoint destinations that can be used by the user within the properties file stored locally on the mobile device, to allow the singly deployed mobile application to directly access the changed service endpoint destinations during the development lifecycle of the singly deployed mobile application using the properties file, based on a Unique Device ID of the mobile device, without the use of the user identity.
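
The claim above selects endpoints from a locally stored properties file using only context (time of day, location), never a user identity. A minimal Python sketch of that policy evaluation; the properties-file shape, policy keys, and hostnames are invented for illustration.

from datetime import datetime

# A locally stored properties file would list the endpoint destinations;
# the shape below is an assumption, not the patent's actual format.
PROPERTIES = {"endpoints": {
    "qa": "https://qa.example.internal",
    "staging": "https://staging.example.internal",
    "production": "https://prod.example.internal",
}}

def select_endpoints(policy, now=None, location="office"):
    """Pick service endpoints from the properties file based only on context
    (time of day, location), never on a user identity."""
    now = now or datetime.now()
    if location == "office" and 9 <= now.hour < 18:
        allowed = policy.get("business_hours", ["staging"])
    else:
        allowed = policy.get("off_hours", ["qa"])
    return {name: PROPERTIES["endpoints"][name] for name in allowed}

policy = {"business_hours": ["staging", "production"], "off_hours": ["qa"]}
print(select_endpoints(policy, datetime(2021, 9, 7, 10)))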

US Pat. No. 11,113,046

INTEGRATION AND REMOTE CONTROL OF A PRE-ASSEMBLED COMPUTER SYSTEM INTO A SERVER FOR A VIRTUALIZATION SERVICE

Amazon Technologies, Inc....


1. A server system, comprising:a server chassis;
a pre-assembled computer system mounted in the server chassis, wherein the pre-assembled computer system is pre-assembled and pre-installed in a computer case of the pre-assembled computer system prior to being installed in the server chassis;
a baseboard management controller mounted in the server chassis external to the pre-assembled computer system;
one or more cables coupled at respective first ends to one or more connectors of the baseboard management controller and coupled at respective second ends to one or more ports in the computer case of the pre-assembled computer system,
wherein the baseboard management controller is configured to:receive request messages from a virtualization offloading component of a cloud computing service that provides virtualization control for a plurality of servers;
receive status information or data from one or more system management components included in the server chassis;
receive status information or data from the pre-assembled computer system via the one or more cables coupled to the one or more ports of the pre-assembled computer system; and
generate, based on the request messages, the status information, or the data, output commands to take one or more control actions at the pre-assembled computer system or the one or more system management components included in the server chassis;

wherein the baseboard management controller enables the cloud computing service to remotely control the pre-assembled computer system.

US Pat. No. 11,113,045

IMAGE INSTALL OF A NETWORK APPLIANCE

Red Hat, Inc., Raleigh, ...


8. A non-transitory computer-readable storage medium, having instructions stored therein, which when executed, cause a processor to:initiate an install process for a network appliance;
identify a lack of swapping operations associated with a swap partition of a memory of the network appliance during the install process;
reserve an install staging area in the memory of the network appliance, the install staging area comprising the swap partition of the memory repurposed for the install process due to the lack of swapping operations;
copy a plurality of installation objects from an installation medium to the install staging area of the memory of the network appliance, the installation medium having stored thereon a network appliance image file previously downloaded from a service provider, wherein the plurality of installation objects comprises a collection of software packages in a pre-installed state that can be unpacked to create an executable image and a set of configuration data associated with the collection of software packages;
prior to installing the plurality of installation objects onto the network appliance, generate fingerprints of the plurality of installation objects in the install staging area, the fingerprints comprising object names and version information of the plurality of installation objects, and send, to a server, a first list of the fingerprints of the plurality of installation objects in the install staging area, the first list comprising a set of strings, each string in the set of strings representing at least one of a checksum or a hash of a corresponding one of the plurality of installation objects;
obtain, from the server, installation object data comprising a second list of fingerprints that identifies an outdated installation object from the first list and a new installation object not already on the first list, wherein the installation object data is represented as at least one of serialized data, character data (CDATA) sections in an extensible markup language (XML) document, or a multipart document encoding;
create, by the processor in view of the installation object data, an updated set of installation objects in the install staging area of the memory of the network appliance by deleting the outdated installation object from the install staging area and adding the new installation object to the install staging area;
send, to the server and responsive to creating the updated set of installation objects, a third list of fingerprints of the updated set of installation objects in the install staging area, wherein the third list differs from the first list, the third list comprising an indication of the new installation object having been obtained and added to the install staging area and lacking an indication of the outdated installation object having been deleted from the install staging area;
obtain, from the server, an indication that the updated set of installation objects is complete and up to date;
mark the install staging area as bootable;
reboot the network appliance; and
install, by the processor, the updated set of installation objects from the install staging area onto the network appliance.
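
The reconciliation loop in the claim above exchanges fingerprint lists with a server: send fingerprints of staged objects, drop the ones the server flags as outdated, add the new ones, and report back. A small Python sketch; the fingerprint string format and the reply dictionary are assumed shapes, not the patent's wire format.

import hashlib
from pathlib import Path

def fingerprint(path: Path) -> str:
    """Checksum-style fingerprint string for one installation object."""
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    return f"{path.name}:{digest}"

def reconcile(staging_fingerprints, server_reply):
    """Apply the server's answer: drop outdated objects, add the new ones.
    The result is the third list that gets sent back to the server."""
    keep = [f for f in staging_fingerprints if f not in server_reply["outdated"]]
    return keep + server_reply["new"]

first_list = ["pkg-a:aaa", "pkg-b:bbb"]
reply = {"outdated": ["pkg-b:bbb"], "new": ["pkg-b:ccc"]}
print(reconcile(first_list, reply))        # -> ['pkg-a:aaa', 'pkg-b:ccc']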

US Pat. No. 11,113,044

INFORMATION PROCESSING APPARATUS AND NON-TRANSITORY COMPUTER READABLE MEDIUM STORING PROGRAM

FUJIFILM Business Innovat...


1. An information processing apparatus comprising:a processor configured to:
display a first list which is a list of available software and a second list which is a list of software installed on a target device; and
receive an instruction operation of installing the software displayed in the first list on the target device and an instruction operation of performing a predetermined process on the installed software displayed in the second list,
wherein the processor continues to display the software in the second list, in a case where the software included in the second list is no longer newly provided, and the processor does not display the software in the first list or displays a fact that it is not allowed to install the software, in a case where the software included in the first list is no longer newly provided,
wherein the predetermined process is a process of updating the installed software displayed in the second list,
the processor is configured to, in a case where the processor receives an instruction to update one piece of software which is displayed in the second list and is no longer newly provided, perform a process of installing different software which succeeds the one piece of software and is newly provided on the target device.

US Pat. No. 11,113,043

SPLIT FRONT END FOR FLEXIBLE BACK END CLUSTER PROCESSING

Databricks Inc., San Fra...


1. A system for code development and execution, comprising:a client interface adapted to:receive user code to be executed; and
receive an indication of a server that will perform the execution; and

a client processor adapted to:parse the user code to determine one or more data items referred to during the execution;
provide the server with an inquiry for metadata regarding the one or more data items;
receive the metadata regarding the one or more data items;
determine a logical plan based at least in part on the metadata regarding the one or more data items;
provide the logical plan to the server to be executed;
receive a second server indication of a second server that will perform a second execution;
receive an error message, a general message, or a status message, wherein the error message comprises a server unavailable message or a version compatibility error message; and
in response to a server unavailable error message:indicate server unavailability to a user, automatically switch to a different server; and
prompt for a manual switch to the different server.



US Pat. No. 11,113,042

SYSTEMS FOR DETERMINING REGULATORY COMPLIANCE OF SMART CONTRACTS

CAPITAL ONE SERVICES, LLC...


1. A system, comprising:one or more processors; and
a memory in communication with the one or more processors and storing instructions that, when executed by the one or more processors, are configured to cause the system to:train a first neural network (NN) to classify smart contract sections based on a first set of intermediate representation of code corresponding to one or more positive sections that comply with one or more regulations;
receive a first smart contract comprising one or more first sections;
convert, using the one or more compilers, the one or more first sections into a second set of intermediate representation of code;
classify, by the first NN, the second set of intermediate representation of code as a second classification not corresponding to the first set of intermediate representation of code; and
generate for display a negative indication that the one or more first sections do not comply with the one or more regulations in response to classifying the second set of intermediate representation of code as the second classification.


US Pat. No. 11,113,041

SPREADSHEET-BASED SOFTWARE APPLICATION DEVELOPMENT


1. A computer implemented method for generating an interactive web application comprising at least one web page, the method comprising:determining at least one primary data source within a spreadsheet, wherein the at least one primary data source corresponds to a first worksheet of the spreadsheet;
determining at least one secondary data source within the spreadsheet, wherein the at least one secondary data source corresponds to a different second worksheet of the spreadsheet; determining a relationship between records of the primary data source and records of the secondary data source, wherein determining the relationship between the records of the primary data source and the records of the secondary data source comprises automatically detecting the relationship based on:one or more characteristics of the first worksheet and the second worksheet, or
one or more characteristics of at least one user interface template corresponding to a particular web page of the interactive web application;

generating, automatically and based on the determined relationship, a third worksheet comprising at least a portion of the records of the primary data source and at least a portion of the records of the secondary data source, wherein content of the third worksheet is synchronized with content of the first worksheet and content of the second worksheet, and wherein a first row of the third worksheet comprises at least one first cell selected from the primary data source and at least one different second cell selected from the secondary data source based on the determined relationship;
generating the particular web page of the interactive web application based on the at least one user interface template corresponding to the particular web page and stored within the spreadsheet, wherein the particular web page references records of the third worksheet identified based on the at least one user interface template corresponding to the particular web page, wherein generating the particular web page of the interactive web application comprises extracting content corresponding to the records of the third worksheet from the spreadsheet, and wherein the content corresponding to the records of the third worksheet is identified by the at least one user interface template corresponding to the particular web page;
receiving user input corresponding to a record of the third worksheet via an input control associated with the particular web page of the interactive web application; and
updating at least one record of the secondary data source corresponding to the record of the third worksheet based on the received user input and based on the determined relationship.
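
The heart of the claim above is generating a third worksheet whose rows pair cells from related primary and secondary records. A minimal Python sketch of that join under an assumed one-to-many relationship on an invented order_id column.

primary = [                      # first worksheet (primary data source)
    {"order_id": 1, "customer": "Ada"},
    {"order_id": 2, "customer": "Grace"},
]
secondary = [                    # second worksheet (secondary data source)
    {"order_id": 1, "item": "keyboard"},
    {"order_id": 1, "item": "mouse"},
    {"order_id": 2, "item": "monitor"},
]

def build_third_worksheet(primary, secondary, key="order_id"):
    """Each row pairs cells from the primary record with cells from the
    related secondary record, per the automatically detected relationship."""
    by_key = {row[key]: row for row in primary}
    return [{**by_key[row[key]], **row} for row in secondary if row[key] in by_key]

for row in build_third_worksheet(primary, secondary):
    print(row)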

US Pat. No. 11,113,040

SYSTEMS AND METHODS FOR ORCHESTRATION AND AUTOMATED INPUT HANDLING OF INTERACTIONS RECEIVED VIA A USER INTERFACE

Verizon Patent and Licens...


1. A device, comprising:a non-transitory computer-readable medium storing a set of processor-executable instructions; and
one or more processors configured to execute the set of processor-executable instructions, wherein executing the set of processor-executable instructions causes the one or more processors to:present a graphical user interface (“GUI”) that includes options to define a user interface for presentation at a User Equipment (“UE”), the GUI including a plurality of graphical elements that are selectable via drag and drop operations,wherein a first set of the plurality of graphical elements of the GUI are associated with interactive elements that are available to be placed in the user interface, and
wherein a second set of the plurality of graphical elements of the GUI are associated with respective labels that are available to be associated with respective graphical elements, of the first set of graphical elements,wherein a first label, associated with a first graphical element of the second set of graphical elements, is associated with a first set of actions, the first set of actions including an input validation action,
wherein a second label, associated with a second graphical element of the second set of graphical elements, is associated with a second set of actions,
wherein a combination of the first and second labels is associated with a third set of actions;


receive, via the GUI, a first drag and drop selection of a third graphical element, of the first set of graphical elements of the GUI, to associate a first interactive element with the user interface, wherein the first interactive element includes an option to receive a user input;
receive, via the GUI, a second selection to associate the first label with the first interactive element, the second selection including a second drag and drop selection of the first graphical element of the second set of graphical elements,wherein the association of the first label with the first interactive element associates the first interactive element with an input validation action for user input received via the first interactive element;

receive, via the GUI, a third drag and drop selection of a fourth graphical element, of the first set of graphical elements of the GUI, to associate a second interactive element with the user interface;
receive, via the GUI, a fourth selection to associate the second label with the second interactive element, the fourth selection including a fourth drag and drop selection of the second graphical element of the second set of graphical elements;
receive, from the UE, a first user input that was received via the first interactive element of the user interface presented by the UE,wherein the user interface includes a set of interactive elements, including the first interactive element and the second interactive element,
wherein each interactive element, of the set of interactive elements, is associated with a respective label;

identify, based on the receiving the first user input, which other interactive elements, of the set of interactive elements, have received user input;
identify, based on the receiving the first user input and based on which other interactive elements have received user input, that the first user input is associated with the first label that is associated with the first interactive element;
identify the first set of actions that is associated with the first label associated with the first interactive element via which the first user input was received, wherein identifying the first set of actions includes identifying user information from a user profile;
perform, based on identifying that the first user input is associated with the first label, the first set of actions, wherein performing the first set of actions includes:performing the input validation action on the first user input received via the first interactive element after identifying the user information,
identifying a particular input handling system, from a plurality of input handling systems, that is associated with the first label, to perform additional actions of the first set of actions, and
forwarding the received user information and the first user input to the particular input handling system;

receive, from the UE, a second user input that was received via the first interactive element of the user interface presented by the UE and a third user input that was received via the second interactive element;
identify, based on identifying that the first interactive element has received the second user input and that the second interactive element has received the third user input, that the second user input is associated with the combination of the first and second labels;
identify, based on identifying that the third user input is associated with the combination of the first and second labels, that the third user input is associated with the third set of actions; and
perform the third set of actions based on identifying that the second user input and the third user input have been received via the first and second interactive elements.


US Pat. No. 11,113,039

INTEGRATED NOTE-TAKING FUNCTIONALITY FOR COMPUTING SYSTEM ENTITIES

Microsoft Technology Lice...


1. A computing system comprising:a processor; and
memory storing instructions executable by the processor, wherein the instructions, when executed, configure the computing system to:detect an object type selection input;
based on the object type selection input, select a subset of object types from a set of object types corresponding to a first application;
automatically detect creation of a data object in the first application that is configured to operate on the data object;
in response to detecting the creation of the data object and a determination that the data object has an object type included in the selected subset of object types, automatically send a control instruction to a second application that is distinct from the first application and includes note-taking functionality,wherein the control instruction identifies the data object and instructs the second application to generate a notebook component corresponding to the data object;

receive, from the second application, location information that identifies a storage location of the notebook component in the second application;
store association information that associates the data object and the notebook component, the association information including a location indicator that indicates the storage location of the notebook component in the second application; and
access, by the first application, the notebook component in the second application using the location indicator.


US Pat. No. 11,113,038

GRAND UNIFIED PROCESSOR WITH ADAPTIVE PROCESSING FEATURES

HM Health Solutions Inc.,...


1. A computer-based method for executing a computer process associated with processing data or a computer-based event, the computer-based method comprising:coding, with a computer processor, the computer process a single time to use a common document object model;
breaking down, with the computer processor, the computer process into multiple functional units;
extracting, with the computer processor, metadata associated with each functional unit of the multiple functional units;
representing each functional unit of the multiple functional units by an interface;
coding each functional unit of the multiple functional units with computer-readable instructions which, when executed by the computer processor, direct the functional unit of the multiple functional units to use at least one configuration set defined by at least a portion of the extracted metadata;
executing, by the computer processor, at least one functional unit of the multiple functional units in association with at least one configuration set determined in response to at least one computer system operating parameter;
processing, by the computer processor, at least one event associated with the computer process; and
using at least one artificially intelligent algorithm to self-configure a processing flow for the at least one event associated with the computer process at runtime based on at least one event attribute of the at least one event associated with the computer process.

US Pat. No. 11,113,037

SOFTWARE PERFORMANCE MODIFICATION

International Business Ma...


1. A software performance management and capacity planning generation and modification method comprising:presenting, by a processor of an electronic device via a graphical user interface (GUI), a plurality of graphical images associated with tailoring hardware and software systems for specialized functionality, wherein said specialized functionality comprises parallel sysplex review functionality, central storage usage functionality, and workload manager review functionality;
receiving, by said processor from an authoritative user, a selection for a specified group of images of said plurality of graphical images;
receiving, by said processor from said authoritative user, an order of said specified group of images;
storing, by said processor within a specified portion of an external specialized memory device, said specified group of images with respect to said order;
generating, by said processor within said GUI, a set of movement interface buttons configured to enable software controls to rearrange graphical images within said specified group of images;
generating, by said processor within said GUI, a set of action interface buttons configured to enable software control click movements associated with control of said graphical images within said specified group of images;
generating, by said processor, specialized software code comprising said set of movement interface buttons, said set of action interface buttons, and said specified group of images or previously stored images retrieved from said specified portion of said external specialized memory device;
listing, by said processor, said specified group of images within a single section of a new software function generated within an internal software tool of said specialized software code;
executing, by said processor, said specialized software code;
monitoring, by said processor via a plurality of hardware and software sensors, functionality of said specialized software code; and
tailoring, by said processor in response to said executing and said monitoring, a candidate hardware and software system for said specialized functionality with respect to capturing expert knowledge and best practices.

US Pat. No. 11,113,036

PROGRAMMING DEVICE AND RECORDING MEDIUM, AND PROGRAMMING METHOD

CASIO COMPUTER CO., LTD.,...


1. A programming device comprising:a programming board which is a tangible object that can be directly and physically touched in real space, and which includes:a planar shape indication section which receives at least one first user operation for indicating a planar shape by specifying two or more portions among a plurality of portions arranged at different positions in a planar direction of the planar shape indication section; and
a height reception section which receives at least one second user operation for indicating a height that is a position in a direction intersecting with a plane of the planar shape or a displacement amount of the height in association with a portion of any of the two or more portions;

one or more height indication sections each of which indicates a height in the intersecting direction or a displacement amount of the height, and each of which is a tangible object that can be directly and physically touched in real space and is structured to be stackable on the programming board by a user and detected by the programming board when stacked thereon; and
a hardware processor which generates a command list for moving a control target section along a three-dimensional shape indicated by the planar shape indication section and the height reception section,
wherein the height reception section is provided on any of the portions, and receives the at least one second user operation in response to an operation of arranging the one or more height indication sections to correspond to any of the portions,
wherein the programming board further comprises:a parameter value reception section which receives at least one third user operation for indicating a parameter value for defining a state of the control target section in association with a portion of any of the two or more portions; and
a function reception section which receives at least one fourth user operation for setting a function to be executed by the control target section in association with any of the portions;

wherein the programming device further comprises:one or more parameter value indication sections each of which indicates a parameter value that defines the state of the control target section; and
one or more function setting sections each of which indicates a function that is performed by the control target section,

wherein the parameter value reception section is provided on any of the portions, and receives the at least one third user operation in response to an operation of arranging the one or more parameter value indication sections to correspond to any of the portions, and
wherein the function reception section is provided on any of the portions, and receives the at least one fourth user operation in response to an operation of arranging the one or more function setting sections to correspond to any of the portions.

US Pat. No. 11,113,035

METHOD AND SYSTEM FOR IMPLEMENTING APPLICATION LINEAGE METADATA AND REGISTRATION

JPMorgan Chase Bank, N.A....


1. A system for implementing application lineage metadata and registration, the system comprising:a memory component that stores and manages code;
an interactive interface that communicates with a user via a communication network; and
a computer processor coupled to the memory component and the interactive interface and further configured to perform the steps of:initiating a build process for code, wherein the build process comprises one or more plugins integrated with a build tool;
scanning, via one of the one or more plugins, the code for one or more predefined annotation markers that were added into the code;
identifying a corresponding content for each annotation marker;
automatically generating registry information in the form of a plurality of logical and physical catalogs that identify application lineage data associated with one or more assets or services of an ecosystem;
maintaining and storing the registry information in the memory component;
generating analytics based on the registry information; and
providing, via the interactive interface, a search function, filtering, and visualization data that graphically depicts the application lineage data comprising ownership traceability information.
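
The claim above scans code at build time for predefined annotation markers and folds their content into a registry. A small Python sketch; the @Lineage(...) marker syntax and the registry layout are assumptions made for illustration.

import re

# Assumed marker syntax for illustration; the patent does not specify one.
ANNOTATION = re.compile(r'@Lineage\((?P<key>\w+)\s*=\s*"(?P<value>[^"]+)"\)')

def scan_for_lineage(source_files):
    """Build-time scan: find predefined annotation markers in the code and
    collect their content into a simple registry (logical catalog)."""
    registry = {}
    for name, text in source_files.items():
        for match in ANNOTATION.finditer(text):
            registry.setdefault(name, {})[match["key"]] = match["value"]
    return registry

sources = {"PaymentService.java": 'class P {} // @Lineage(owner="payments-team")'}
print(scan_for_lineage(sources))   # {'PaymentService.java': {'owner': 'payments-team'}}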


US Pat. No. 11,113,034

SMART PROGRAMMING ASSISTANT

EMC IP Holding Company LL...


1. A computer-implemented method comprising steps of:identifying one or more processes corresponding to an application running on a system of a user;
monitoring user input being provided to the application by the user, the user input associated with developing software code in one or more computer programming languages, wherein said monitoring comprises capturing keystroke data of the user in response to said identifying said one or more processes;
identifying a context of said user input relative to said one or more computer programming languages;
obtaining, from a storage system, (i) one or more candidate code completion suggestions that match the identified context, and (ii) information aggregated from a plurality of web sources that is linked to at least a given one of the candidate code completion suggestions, wherein the information comprises programming language documentation information and one or more code samples; and
outputting, in response to said user input, a ranked list of said one or more candidate code completion suggestions and at least a portion of the obtained information to a graphical user interface in real time, wherein the order of the ranked list is based at least in part on one or more characteristics associated with said user;
wherein the steps are performed by at least one processing device comprising a processor coupled to a memory.
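
The output step in the claim above ranks candidate completions using characteristics of the user. A toy Python sketch of one possible ranking; the per-construct preference weights are an assumed stand-in for whatever user characteristics the system actually models.

def rank_suggestions(candidates, user_profile):
    """Order candidate completions by a simple preference weight per language
    construct, a stand-in for user-characteristic-based ranking."""
    prefs = user_profile.get("construct_weights", {})
    return sorted(candidates, key=lambda c: prefs.get(c["construct"], 0), reverse=True)

suggestions = [{"text": "for x in xs:", "construct": "for"},
               {"text": "[f(x) for x in xs]", "construct": "comprehension"}]
print(rank_suggestions(suggestions, {"construct_weights": {"comprehension": 2, "for": 1}}))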

US Pat. No. 11,113,033

DYNAMIC VALIDATION FRAMEWORK EXTENSION

ORACLE INTERNATIONAL CORP...


1. A non-transitory computer-readable medium comprising instructions that, when executed by one or more processors, cause the one or more processors to perform operations comprising:receiving, at a programming language framework, an instance of an object, wherein a definition of the object is annotated with a constraint;
receiving, at the programming language framework, one or more validators that are annotated with an annotation that identifies an attribute in the definition of the object and a value for the attribute, wherein the programming language framework comprises a definition of an abstract class from which each of the one or more validators is inherited, and the abstract class comprises a protected function that validates an object against custom constraints;
identifying, at the programming language framework, a validator in the one or more validators for which a value of the attribute in the instance of the object matches the value for the attribute in the annotation of the validator; and
executing the validator using the instance of the object.
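
The claim above selects a validator whose annotation names an attribute and a value that the object instance must match, then runs its protected validation hook. A Python stand-in for that framework behavior; the class attributes used as "annotations" and the example validator are assumptions.

class AbstractValidator:
    match_attribute = None          # annotation: which attribute to inspect
    match_value = None              # annotation: which value selects this validator

    def _validate(self, obj):       # "protected" custom-constraint hook
        raise NotImplementedError

class UsOrderValidator(AbstractValidator):
    match_attribute, match_value = "country", "US"

    def _validate(self, order):
        assert order["zip"].isdigit(), "US orders need a numeric ZIP code"

def run_matching_validator(obj, validators):
    """Pick the validator whose annotated attribute/value matches the instance."""
    for cls in validators:
        if obj.get(cls.match_attribute) == cls.match_value:
            cls()._validate(obj)
            return cls.__name__
    return None

print(run_matching_validator({"country": "US", "zip": "94301"}, [UsOrderValidator]))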

US Pat. No. 11,113,032

FUNCTION ACCESS SYSTEM

Palantir Technologies Inc...


1. A computer-implemented method comprising:accessing an object definition for a first type of data object stored in a first storage format in a first data store, wherein the object definition comprises at least one or more properties associated with the first type of data object;
generating a first application programming interface associated with the first type of data object for accessing an instance of the first type of data object, based at least partly on the object definition;
receiving a first function associated with the first type of data object, wherein the first function is configured to execute an operation associated with the instance of the first type of data object by using the generated first application programming interface associated with the first type of data object;
storing the first function associated with the first type of data object in a registry of functions;
determining a change to the object definition for the first type of data object;
in response to determining the change, updating the first application programming interface based on the change to the object definition for the first type of data object;
receiving a request to execute the first function associated with the first type of data object; and
executing the first function associated with the first type of data object based on the updated first application programming interface.

US Pat. No. 11,113,031

SYSTEMS AND METHODS FOR LOADING PROJECT DATA

VISA INTERNATIONAL SERVIC...


1. A processor-implemented method of deploying updated computer software projects, the method comprising:identifying, via one or more processors, at least a first project file of a plurality of software project files, the first project file being associated with a first project ID;
determining, via the one or more processors, that at least one reference to programing code of a second project file of the plurality of software project files is included within the at least one first project file, the second project file being associated with a second project ID;
loading, via the one or more processors, the first project ID and the second project ID into a reference table such that the first project ID is associated with the second project ID in the reference table;
providing, via the one or more processors, a graphical user interface with a field for receiving one or more user selections of queried project IDs associated with project files that include updates to programming code;
editing, via the one or more processors, programming code of the second project file so as to change a functionality of the second project file;
receiving, via the field for receiving user selections of queried project IDs, a user selection of the second project ID for the second project file that includes edited programming code;
in response to receiving the user selection of the second project ID, querying, via the one or more processors, the reference table to determine whether any project ID of any of the plurality of software project files is associated with the second project ID in the reference table and thus requires deployment to reflect the edited programming code of the second project file;
in response to the query of the reference table, providing, via the one or more processors, a results table via the graphical user interface, the results table including at least the first project ID and a file location of the first project file; and
based on the results table, deploying, via the one or more processors, the first project file in order to reflect, in the first project file, the edits to the programming code of the second project file associated with the second project ID.
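
The deployment query in the claim above is a lookup in the reference table: given the edited project's ID, return every project that references it and therefore needs redeployment. A minimal Python sketch with made-up project IDs.

# Reference table: which project IDs reference which other project IDs,
# built while scanning project files; the tuples below are invented examples.
reference_table = [("proj-ui", "proj-core"), ("proj-batch", "proj-core")]

def projects_needing_redeploy(edited_project_id, table):
    """After editing a project's code, find every project that references it
    and therefore must be redeployed to pick up the change."""
    return [referrer for referrer, referenced in table
            if referenced == edited_project_id]

print(projects_needing_redeploy("proj-core", reference_table))
# -> ['proj-ui', 'proj-batch']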

US Pat. No. 11,113,030

CONSTRAINTS FOR APPLICATIONS IN A HETEROGENEOUS PROGRAMMING ENVIRONMENT

XILINX, INC., San Jose, ...


1. A method, comprising:receiving graph source code, the graph source code defining a plurality of kernels and a plurality of communication links, wherein each of the plurality of communication links couples a respective pair of the plurality of kernels to form a dataflow graph;
identifying a constraint corresponding to a graph object in the dataflow graph, wherein the graph object comprises at least one of the plurality of kernels and plurality of communication links;
configuring, when compiling the graph source code, the graph object in a heterogeneous processing system in an integrated circuit to satisfy the constraint, wherein the constraint is used to select a particular hardware element, or a group of hardware elements, in the heterogeneous processing system to which the graph object is assigned; and
implementing the dataflow graph on the heterogeneous processing system in the integrated circuit.

US Pat. No. 11,113,029

PROBABILISTIC MATCHING OF WEB APPLICATION PROGRAM INTERFACE CODE USAGE TO SPECIFICATIONS

INTERNATIONAL BUSINESS MA...


1. A computing device comprising:a processor;
a network interface coupled to the processor to enable communication over a network;
a matching engine configured to perform acts comprising:
receiving a program having an application program interface (API) code usage;
extracting features from the API code usage;
extracting features from meta data of a plurality of API specifications;
for each API specification of the plurality of API specifications, determining a match probability with the API code usage;
determining an API specification having a highest probability; and
matching the API code usage with the API specification having the highest probability.
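
The matching engine in the claim above scores each API specification's metadata features against features extracted from the code's API usage and keeps the highest-probability match. A small Python sketch using Jaccard overlap as a stand-in scorer, since the patent does not prescribe how the probability is computed.

def jaccard(a, b):
    """Simple overlap score between two feature sets."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def best_matching_spec(usage_features, specs):
    """Score each specification's features against the extracted usage
    features and keep the highest-scoring specification."""
    scored = {name: jaccard(usage_features, feats) for name, feats in specs.items()}
    best = max(scored, key=scored.get)
    return best, scored[best]

usage = {"GET", "/users/{id}", "Authorization", "json"}
specs = {"users-api": {"GET", "/users/{id}", "json"},
         "billing-api": {"POST", "/invoices", "json"}}
print(best_matching_spec(usage, specs))    # ('users-api', 0.75)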

US Pat. No. 11,113,028

APPARATUS AND METHOD FOR PERFORMING AN INDEX OPERATION

Arm Limited, Cambridge (...


1. An apparatus comprising:vector processing circuitry to perform an index operation in each of a plurality of lanes of parallel processing, the index operation requiring an index value opm to be multiplied by a multiplier value e to produce a multiplication result, where the number of lanes of parallel processing is dependent on a specified element size, and the multiplier value is different, but known, for each lane of parallel processing;
the vector processing circuitry comprising:
mapping circuitry to perform, within each lane, mapping operations on the index value opm in order to generate a plurality of intermediate input values, the plurality of intermediate input values being such that addition of the plurality of intermediate input values produces the multiplication result, within each lane the mapping operations being determined by the multiplier value used for that lane; and
vector adder circuitry to perform, within each lane, an addition of at least the plurality of intermediate input values, in order to produce a result vector providing a result value for the index operation performed in each lane.
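
The claim above replaces a per-lane multiply by a known multiplier with mapping operations that produce intermediate values whose sum equals the product, followed by a vector add. A Python sketch of one such mapping (shifted copies of the index value for each set bit of the multiplier); the lane multipliers 0..3 are an assumed example.

def index_lane(opm, multiplier):
    """Per-lane index op: decompose the known multiplier into powers of two so
    the product opm * multiplier becomes an addition of shifted copies of opm."""
    intermediates = [opm << bit for bit in range(multiplier.bit_length())
                     if (multiplier >> bit) & 1]
    return sum(intermediates)              # vector adder stage

def index_vector(opm, multipliers=(0, 1, 2, 3)):
    """One result element per lane; the multiplier differs per lane but is known."""
    return [index_lane(opm, e) for e in multipliers]

print(index_vector(5))    # -> [0, 5, 10, 15]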

US Pat. No. 11,113,027

APPARATUS, SYSTEM, AND METHOD THAT SUPPORT OPERATION TO SWITCH TO INPUT TERMINAL TO BE ACTIVATED AMONG INPUT TERMINALS INCLUDED IN DISPLAY APPARATUS

SHARP KABUSHIKI KAISHA, ...


1. An operational support apparatus for supporting operation of a display apparatus, the display apparatus including a plurality of input terminals that each receive image information and a display section that displays image information received by an activated input terminal of the plurality of input terminals from an apparatus connected to the activated input terminal, comprising:a speech recognizer that converts speech data into text information; and
a command generator that generates a control command that corresponds to a content of the text information,
wherein in a case where the text information contains a keyword that indicates first application software, the command generator determines which of the plurality of input terminals of the display apparatus is associated with the first application software,
the command generator generates, as the control command, a switching command to activate the input terminal thus determined and a start-up command to start the first application software associated with the input terminal thus determined,
the command generator identifies a certain input terminal of the input terminals of the display apparatus according to a keyword that indicates first application software associated with the certain input terminal, and identifies a certain different input terminal of the input terminals of the display apparatus according to a keyword indicating first application software associated with the certain different input terminal, and
first application software associated with an input terminal of the input terminals activated by the switching command is installed on an apparatus connected to the input terminal activated by the switching command,
after the first application software associated with the input terminal activated by the switching command is started by the start-up command, the apparatus connected to the input terminal activated by the switching command transmits image information generated by the first application software associated with the input terminal activated by the switching command to the input terminal activated by the switching command.

US Pat. No. 11,113,026

SYSTEM AND METHOD FOR VOICE-DIRECTED WEBSITE WALK-THROUGH

Toonimo, Inc., New York,...


6. A method of voice guided website walkthrough comprising:(a) asking an introductory voice question to a user upon the user accessing a website using text to voice conversion or an initial recorded voice question;
(b) receiving an initial voice reply from the user;
(c) converting the initial voice reply to initial text using voice recognition;
(d) analyzing the initial text with an artificial intelligence module to determine intent;
(e) if intent can be determined from the initial reply, presenting a webpage to the user that relates to the intent;
(f) if intent cannot be determined from the initial reply, asking a further voice question using text to voice conversion or a recorded voice question;
(g) receiving a voice answer from the user;
(h) converting the voice reply to response text using voice recognition;
(i) analyzing the response text with the artificial intelligence module to determine intent;
(j) if intent can be determined from the response text, presenting a webpage to the user that relates to the intent;
(k) if intent cannot be determined from the reply text, repeating steps (f)-(k).
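To make the ask/reply/analyze cycle of steps (a)-(k) above concrete, here is a minimal Python sketch of such a loop. The callables `ask`, `transcribe`, `determine_intent`, and `present_webpage`, and the question strings, are hypothetical stand-ins for illustration only, not part of the patented system.

```python
from typing import Callable, Optional

def voice_walkthrough(
    ask: Callable[[str], str],          # asks a voice question, returns the user's spoken reply
    transcribe: Callable[[str], str],   # voice recognition: reply audio -> text
    determine_intent: Callable[[str], Optional[str]],  # AI module: text -> intent, or None
    present_webpage: Callable[[str], None],            # navigates to a page matching the intent
    max_turns: int = 5,
) -> Optional[str]:
    """Hedged sketch of the ask/analyze/repeat loop; not the patented implementation."""
    question = "How can I help you today?"            # introductory question (step a)
    for _ in range(max_turns):
        reply_audio = ask(question)                   # steps (a)/(f) and (b)/(g)
        text = transcribe(reply_audio)                # steps (c)/(h)
        intent = determine_intent(text)               # steps (d)/(i)
        if intent is not None:                        # steps (e)/(j)
            present_webpage(intent)
            return intent
        question = "Could you tell me more about what you are looking for?"  # step (f)
    return None                                       # gave up after max_turns

# Tiny demo with canned stand-ins:
picked = voice_walkthrough(
    ask=lambda q: "audio-bytes",
    transcribe=lambda audio: "I want to compare pricing plans",
    determine_intent=lambda text: "pricing" if "pricing" in text else None,
    present_webpage=lambda intent: print(f"navigating to /{intent}"),
)
print(picked)   # pricing
```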

US Pat. No. 11,113,025

INTERACTION MANAGEMENT DEVICE AND NON-TRANSITORY COMPUTER READABLE RECORDING MEDIUM

TOYOTA JIDOSHA KABUSHIKI ...


1. An interaction management device that fills in each item of one or more service frames with data based on user's speech content, each of the one or more service frames being composed of a plurality of items, the interaction management device comprising a control unit configured to receive a speech content spoken by a user;
analyze the received user's speech content;
identify the one or more service frames based on the user's speech content and identify items to be filled in with the data for the identified one or more service frames;
fill in one identified item with the data corresponding to the user's speech content;
estimate another data with which to fill in a blank item of the identified one or more service frames based on a past action history of the user when the blank item is an item not yet filled in with any data;
inquire of the user whether the estimated another data with which to fill in the blank item is a correct estimated data via a speech recognition device; and
determine that the estimated another data is the correct estimated data with which to fill in the blank item if a response indicating that the estimated another data is correct is received from the user,
wherein the one or more service frames comprise at least one selected from a scheduler, navigation, traffic information, and weather information.
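A minimal, hypothetical Python sketch of the frame-filling behavior recited above: items parsed from the utterance fill the frame directly, a blank item is estimated from a stand-in action history, and the estimate is committed only after the user confirms it. The frame layout, item names, and the `confirm` callback are assumptions made purely for illustration.

```python
from typing import Callable, Dict, Optional

# Example frame definitions (assumed; the claim only requires frames composed of items).
SERVICE_FRAMES = {
    "navigation": ["destination", "departure_time"],
    "weather":    ["location", "date"],
}

def fill_frame(
    service: str,
    parsed_items: Dict[str, str],                        # items extracted from the user's speech
    history_estimate: Callable[[str], Optional[str]],    # past-action-history estimator (stand-in)
    confirm: Callable[[str, str], bool],                 # asks the user "is <value> correct for <item>?"
) -> Dict[str, Optional[str]]:
    frame: Dict[str, Optional[str]] = {item: None for item in SERVICE_FRAMES[service]}
    # Fill items directly supported by the speech content.
    for item, value in parsed_items.items():
        if item in frame:
            frame[item] = value
    # Estimate remaining blank items from the action history and confirm with the user.
    for item, value in frame.items():
        if value is None:
            guess = history_estimate(item)
            if guess is not None and confirm(item, guess):
                frame[item] = guess
    return frame

# Example: destination spoken, departure_time estimated from history and confirmed.
result = fill_frame(
    "navigation",
    {"destination": "Nagoya"},
    history_estimate=lambda item: "08:00" if item == "departure_time" else None,
    confirm=lambda item, value: True,
)
print(result)   # {'destination': 'Nagoya', 'departure_time': '08:00'}
```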

US Pat. No. 11,113,024

ELECTRONIC DEVICE AND METHOD FOR SHARING INFORMATION THEREOF

Samsung Electronics Co., ...


1. An electronic device, comprising:a display;
a communication module comprising communication circuitry;
a processor; and
a memory configured to store information on an application executed by the processor and information on a screen output through the display,
wherein the processor is configured to:
receive an input for execution of a first application related to a communication service while a screen for a second application is displayed on the display,
determine information related to an other party of the communication service based on the received input,
determine whether the information related to the other party is included on the screen for the second application output through the display when the input is received,
if the information related to the other party is included in the screen for the second application, cause the display to display the information related to the other party on a screen of the first application, if the execution screen of the first application is displayed through the display while the communication service is performed,
if the information related to the other party is included in the screen for the second application, cause the display to display the information related to the other party on the screen of the first application after the communication service is terminated, if the execution screen of the first application is not displayed through the display while the communication service is performed, and
transmit the information related to the other party, to the other party, through the communication module after the communication service is terminated.

US Pat. No. 11,113,023

MEDIA CONTENT SYSTEM FOR ENHANCING REST

Spotify AB, Stockholm (S...


1. A method for selecting and playing a song with a mobile device, the method comprising:receiving a request from a user to play a song;
receiving a context selection for a context for playback;
detecting a user heart rate using a sensor and a light source of the mobile device, while the sensor and the light source are directed toward a part of the user's body;
selecting a first song with a first tempo, wherein the first tempo is based on the user heart rate and the context selection;
mixing a binaural beat with the first song;
initiating playback of the first song, with the first tempo, and the binaural beat on the mobile device; and
over a predetermined amount of time, reducing a frequency of the binaural beat.
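As a rough illustration of the tempo selection and binaural-beat reduction steps above, the Python sketch below picks a first tempo from the heart rate and a context offset, then schedules a decreasing beat frequency. The context offsets, starting/ending frequencies, and durations are illustrative assumptions, not values from the patent.

```python
# Assumed mapping from playback context to a tempo offset; the claim only requires the
# first tempo to be based on the user heart rate and the context selection.
CONTEXT_TEMPO_OFFSET = {"sleep": -20, "relax": -10, "focus": 0}

def choose_tempo(heart_rate_bpm: float, context: str) -> float:
    """Pick the first song's tempo from the measured heart rate and selected context."""
    return max(40.0, heart_rate_bpm + CONTEXT_TEMPO_OFFSET.get(context, 0))

def binaural_beat_schedule(start_hz: float, end_hz: float, duration_s: float, steps: int):
    """Yield (elapsed seconds, beat frequency) pairs that reduce the beat frequency
    over the predetermined amount of time."""
    for i in range(steps + 1):
        yield duration_s * i / steps, start_hz + (end_hz - start_hz) * i / steps

tempo = choose_tempo(heart_rate_bpm=72, context="sleep")
print(f"first tempo: {tempo:.0f} BPM")
for t, hz in binaural_beat_schedule(start_hz=10.0, end_hz=4.0, duration_s=600, steps=5):
    print(f"t={t:5.0f}s  beat={hz:.1f} Hz")
```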

US Pat. No. 11,113,022

METHOD, SYSTEM AND INTERFACE FOR CONTROLLING A SUBWOOFER IN A NETWORKED AUDIO SYSTEM


1. A method for controlling a plurality of media rendering devices in a data network with a controller device comprising a processor and a memory configured to store non-transitory instructions for execution by the processor, the plurality of media rendering devices being within a wireless communication network, comprising the steps of:adding an audio rendering device to the data network such that the audio rendering device is visible on the network to the controller;
via a user interface on the controller, displaying an audio rendering device graphical object representing the audio rendering device within a room tile graphical object comprising a plurality of room graphical objects, each room graphical object representing a location of a group comprising one or more of the plurality of rendering devices configured to render a media program comprising one or more audio channels;
receiving a selection of the audio rendering device and a room graphical object of the plurality of room graphical objects by a user of the controller;
as a result of receiving the selection, generating a configuration command to the audio rendering device in the selected room, joining the audio rendering device with the room graphical object group;
assigning a first audio channel associated with the group to the audio rendering device; and
hiding the audio rendering device graphical object from the room tile graphical object,
wherein selection by the user further comprises a drag and drop of the audio rendering device graphical object upon the room graphical object.

US Pat. No. 11,113,021

SYSTEMS AND METHODS FOR SAAS APPLICATION PRESENTATION MODE ON MULTIPLE DISPLAYS

Citrix Systems, Inc., Fo...


1. A method of using an embedded browser for displaying content from a network application in presentation mode on a secondary display, the method comprising:establishing, by an embedded browser of a client application executing on a client device coupled to a first display device and a second display device, a session to a network application accessed via an embedded browser of the client application;
selecting, by the client application, responsive to detection of an interaction with an actionable object included within a web page of the network application displayed within the embedded browser, a presentation mode that is to be initiated in response to the detected interaction with the actionable object;
transmitting, by the client application, data corresponding to the selection of the presentation mode to the embedded browser, the data to cause the embedded browser to display content of the network application to be presented in the presentation mode on the second display device; and
displaying, by the embedded browser responsive to receipt of the data from the client application, content of the network application in the presentation mode on the second display device of the client device.

US Pat. No. 11,113,019

MULTI-DEVICE SELECTIVE INTEGRATION SYSTEM

Truist Bank, Charlotte, ...


1. A system, comprising:a processor coupled to a database having a plurality of memory locations; and
a memory device accessible to the processor and comprising instructions that are executable by the processor to cause the processor to:receive a user request from a user device via a network, the user request being for obtaining secure user information related to a user of the user device; and
in response to receiving the user request:determine that the user device has a device identifier among a plurality of device identifiers stored in a first set of memory locations in the database, wherein the plurality of device identifiers are for a plurality of user devices of the user;
determine a first association between the first set of memory locations and a second set of memory locations in the database;
based on determining the first association between the first set of memory locations and the second set of memory locations, retrieve application information for a plurality of web-based applications from the second set of memory locations in the database, the plurality of web-based applications including at least one banking application;
determine a second association between the second set of memory locations and a third set of memory locations in the database;
based on determining the second association between the second set of memory locations and the third set of memory locations, retrieve customization information for each web-based application among the plurality of web-based applications from the third set of memory locations, the customization information for each web-based application being user customizable and identifying both a location on a dashboard user interface to display the web-based application and a position within the web-based application at which to display a respective portion of the secure user information that is accessible via the web-based application;
request the secure user information from a plurality of different systems corresponding to the plurality of web-based applications, the plurality of different systems being separate from and communicatively coupled to the system for providing the secure user information to the system;
receive the secure user information from the plurality of different systems; and
in response to receiving the secure user information from the plurality of different systems, generate the dashboard user interface for display on the user device, the dashboard user interface comprising the plurality of web-based applications spatially positioned within the dashboard user interface in locations determined based on the customization information and respective types of secure information associated with the plurality of web-based applications, wherein the secure user information is spatially positioned within the plurality of web-based applications in accordance with the customization information.
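A simplified Python sketch of the chained lookups above: recognize the device, follow the associations to the user's applications and their customization, and place the retrieved secure information on a dashboard. The dictionaries stand in for the three associated sets of memory locations; all identifiers and values are hypothetical.

```python
from typing import Dict, List

# Hypothetical stand-ins for the associated sets of memory locations in the claim:
# device identifiers -> web-based applications -> per-application customization.
DEVICE_IDS = {"device-123": "user-42"}
USER_APPS: Dict[str, List[str]] = {"user-42": ["checking", "savings"]}
APP_CUSTOMIZATION = {
    "checking": {"dashboard_slot": 0, "info_position": "top"},
    "savings":  {"dashboard_slot": 1, "info_position": "bottom"},
}

def build_dashboard(device_id: str, secure_info: Dict[str, str]) -> List[dict]:
    """Assemble dashboard tiles once the device is recognized and the associations resolved."""
    if device_id not in DEVICE_IDS:
        raise PermissionError("unknown device")
    user = DEVICE_IDS[device_id]
    tiles = []
    for app in USER_APPS[user]:                       # second set of memory locations
        custom = APP_CUSTOMIZATION[app]               # third set of memory locations
        tiles.append({
            "app": app,
            "slot": custom["dashboard_slot"],
            "info_position": custom["info_position"],
            "secure_info": secure_info.get(app),      # fetched from the app's backing system
        })
    return sorted(tiles, key=lambda t: t["slot"])

print(build_dashboard("device-123", {"checking": "$1,204.00", "savings": "$9,800.00"}))
```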



US Pat. No. 11,113,018

CONTENT DISPLAY SYSTEM AND DISPLAY DEVICE

SHARP KABUSHIKI KAISHA, ...


1. A content display system comprising:a first control device which displays content on a display;
a second control device which receives a voice;
a third control device which generates a command corresponding to the voice received by the second control device, and determines whether the generated command is a change command to change first content being displayed on the display; and
a fourth control device which distributes, to the first control device, second content corresponding to the change command when the third control device determines that the command is the change command, wherein
the first control device changes a material being displayed on the display from the first content to the second content distributed by the fourth control device,
when the voice received by the second control device includes authentication information to authenticate a user, the third control device adds the authentication information to the change command generated corresponding to the voice, and
when the authentication information is not added to the change command, the first control device suspends display of the first content, displays the second content for a certain period of time, and displays the first content again in accordance with the first timetable when the certain period of time ends.

US Pat. No. 11,113,017

ELECTRONIC DEVICE, IMAGE READING METHOD, AND PRINT PROCESSING METHOD

Seiko Epson Corporation, ...


15. An image reading method comprising:providing a first web application programming interface (web API) or a third web API by an electronic device,
receiving a scan start request or a print request from a browser, the scan start request comprising a first predetermined URL associated with a scan process of the first web API and a first parameter that designates a scan setting in an Extensible Markup Language (XML) format, or the print request comprising a second predetermined URL associated with a print process of the third web API and a second parameter that designates a print setting in an XML format, and the browser being configured to communicate with a server system; wherein:the first URL includes an IP address of a scanning device,
the scan setting is associated with a scan item,
the second URL includes an IP address of a printing device,
the print setting is associated with a print item,
the browser is configured to receive a Hypertext Markup Language (HTML) file from the server system,
the HTML file includes (1) the first URL and the first parameter or (2) the second URL and the second parameter,
the HTML causes the browser to display a page having a scan button or a print button; and
when a user presses the scan button or the print button, the browser is caused to send the scan start request or the print request to a communication unit, and

in response to receiving the scan start request or the print request, performing the scan process or the print process at the electronic device, and
transmitting a current status of the print process or the scan process to the web browser, which in turn transmits the current status of the print process or the scan process to the server system wherein the scan process is a process for transmitting scan data acquired by an image reading unit of the electronic device to the server system, and
the print process is a process for acquiring print data to perform printing by performing either one of a process for transmitting print data from the web browser to the electronic device in an invocation of the third web API for print request or a process for requesting print data from the electronic device to the server system in accordance with an invocation of the third web API for print request.
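To illustrate the shape of the claimed scan start request (a URL containing the device's IP address plus an XML-formatted setting parameter), here is a small Python sketch built on the standard library. The endpoint path, element names, and settings are assumptions for illustration and are not taken from the patent or from any real device API.

```python
import urllib.request
from xml.etree.ElementTree import Element, SubElement, tostring

# Hypothetical endpoint; the claim only requires a URL containing the scanner's IP address
# and a parameter that designates the scan setting in XML format.
SCANNER_IP = "192.168.1.50"
SCAN_URL = f"http://{SCANNER_IP}/api/v1/scan"   # assumed path, not from the patent

def build_scan_request(resolution_dpi: int, color_mode: str) -> urllib.request.Request:
    settings = Element("ScanSettings")
    SubElement(settings, "Resolution").text = str(resolution_dpi)
    SubElement(settings, "ColorMode").text = color_mode
    body = tostring(settings, encoding="utf-8")
    return urllib.request.Request(
        SCAN_URL, data=body, method="POST",
        headers={"Content-Type": "application/xml"},
    )

req = build_scan_request(300, "Color")
print(req.full_url)
print(req.data.decode())
# urllib.request.urlopen(req) would send the scan start request to a reachable device.
```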



US Pat. No. 11,113,016

INFORMATION PROCESSING METHOD, INFORMATION PROCESSING APPARATUS, AND NON-TRANSITORY COMPUTER-READABLE STORAGE MEDIUM

Canon Kabushiki Kaisha, ...


13. An information processing method of an information processing apparatus communicable with a plurality of devices, the method comprising:selecting any one of the plurality of devices as a device to be used to perform a predetermined process;
installing to the information processing apparatus, based on that the device to be used to perform a predetermined process is changed from a first device among the plurality of devices to a second device among the plurality of devices by the selecting, a second device driver corresponding to the second device; and
setting, to the second device driver installed by the installing, at least a part of setting data for performing the predetermined process set to a first device driver corresponding to the first device,
wherein in the setting, when a state in which the first device is connected to the information processing apparatus and selected in the selecting and the second device is not connected to the information processing apparatus is switched to a state in which the second device is connected to the information processing apparatus and selected in the selecting and the first device is not connected to the information processing apparatus, at least a part of the setting data set to the first device driver is set to the second device driver.

US Pat. No. 11,113,015

INFORMATION PROCESSING APPARATUS, METHOD OF CONTROLLING THE SAME AND STORAGE MEDIUM

Canon Kabushiki Kaisha, ...


1. An image forming apparatus having at least two image processing functions, comprising:at least one processor; and
at least one memory coupled to the at least one processor and having stored thereon instructions which, when executed by the at least one processor, cause the at least one processor to function as a first activation unit configured to switch, from an inactive state to an active state, a predetermined authentication function that cancels, by performing authentication based on inputted authentication information, a usage restriction for at least one image processing function of the image forming apparatus; and

a second activation unit configured to switch, from an inactive state to an active state, a job operation restriction function that imposes a restriction on performing, by an authenticated user, an operation with respect to a job of a user that has not logged in,
wherein in accordance with accepting from a user a setting for activating the authentication function, the first activation unit switches the authentication function from the inactive state to the active state, and the second activation unit switches the job operation restriction function with respect to the first and second image processing functions of the at least two image processing functions from the inactive state to the active state at a timing of accepting the user setting for activating the authentication function.

US Pat. No. 11,113,014

INFORMATION PROCESSING APPARATUS DETERMINES WHETHER IMAGE PROCESSING DEVICE SUITABLE TO EXECUTE PROCESSING ACCORDING TO RELIABILITY AND CONFIDENTIALITY INFORMATION

FUJIFILM BUSINESS INNOVAT...


1. An information processing apparatus comprising:an execution request obtainer that obtains an execution request to execute processing using an image processing device;
a reliability information obtainer that obtains reliability information concerning reliability of the image processing device, wherein the reliability information includes at least one of:a communication encryption indicating whether communication between the image processing device and the information processing apparatus is encrypted,
a HDD (hard disk drive) encryption indicating whether data stored in a HDD of the image processing device during the execution of processing is encrypted,
a HDD initialization indicating whether a data region in the HDD is initialized after the execution of processing,
a storage of an image log indicating whether the image log is stored in the storage of the image processing apparatus, and
an installation location indicating where the image processing device is installed;

a confidentiality-degree determiner that determines a degree of confidentiality of subject data to be processed in response to the execution request; and
a processing execution judger that judges whether the image processing device is suitable to execute processing in response to the execution request, based on the reliability information and the degree of confidentiality,
wherein the processing execution judger judges whether the image processing device is suitable to execute processing in response to the execution request, based on a judgement standard set in advance for different combinations of values of three or more categories of the reliability information and the degree of confidentiality, the three or more categories of the reliability information including:a first category indicating the communication encryption,
a second category indicating the HDD encryption, and
a third category indicating the HDD initialization.
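A minimal Python sketch of a judgement standard over the reliability categories and the confidentiality degree recited above. The specific rule used here (higher confidentiality requires more satisfied categories) is an assumption for illustration; the claim only requires a standard set in advance for combinations of category values and confidentiality degree.

```python
from typing import NamedTuple

class Reliability(NamedTuple):
    communication_encrypted: bool   # first category
    hdd_encrypted: bool             # second category
    hdd_initialized: bool           # third category

# Assumed judgement standard: confidentiality degree -> number of categories that must hold.
REQUIRED_CATEGORIES = {0: 0, 1: 1, 2: 2, 3: 3}

def device_suitable(reliability: Reliability, confidentiality_degree: int) -> bool:
    satisfied = sum([reliability.communication_encrypted,
                     reliability.hdd_encrypted,
                     reliability.hdd_initialized])
    return satisfied >= REQUIRED_CATEGORIES.get(confidentiality_degree, 3)

device = Reliability(communication_encrypted=True, hdd_encrypted=True, hdd_initialized=False)
print(device_suitable(device, confidentiality_degree=2))   # True
print(device_suitable(device, confidentiality_degree=3))   # False
```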


US Pat. No. 11,113,013

IMAGE FORMING APPARATUS FOR EXECUTING SECURE PRINT JOB

KYOCERA DOCUMENT SOLUTION...


1. An image forming apparatus comprising:a printing device;
a controller that controls the printing device; and
a print job management unit that executes a secure print job by the controller based on a job request from a user,
wherein the controller executes calibration of the printing device when the controller detects that a predetermined parameter measured from a previous calibration exceeds a predetermined value, and
wherein the print job management unit, upon receiving a job request for the secure print job, determines whether or not a difference between a current value of the predetermined parameter and the predetermined value is less than a predetermined threshold, and if the difference is less than the predetermined threshold, the print job management unit determines whether to cancel the secure print job in accordance with a predetermined setting data that indicates whether or not the user selects job execution permission.
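The decision recited above is compact enough to sketch directly. The Python snippet below is a hedged illustration, assuming the "predetermined parameter" grows toward a calibration trigger value and that a single boolean setting captures whether the user permits job execution near calibration; none of these names or numbers come from the patent.

```python
def should_cancel_secure_job(
    current_parameter: float,
    calibration_trigger_value: float,
    threshold: float,
    user_permits_execution: bool,
) -> bool:
    """Decide whether to cancel a secure print job that may be interrupted by calibration.

    If the measured parameter is within `threshold` of the value that triggers calibration,
    the job risks being interrupted; the (assumed) setting `user_permits_execution` then
    decides whether the secure job runs anyway or is cancelled.
    """
    difference = calibration_trigger_value - current_parameter
    if difference < threshold:
        return not user_permits_execution
    return False   # calibration is not imminent; run the secure job normally

print(should_cancel_secure_job(current_parameter=480, calibration_trigger_value=500,
                               threshold=50, user_permits_execution=False))   # True
```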

US Pat. No. 11,113,012

REPROCESSING OF PAGE STRIPS RESPONSIVE TO LOW MEMORY CONDITION

Hewlett-Packard Developme...


1. A method comprising: detecting a low memory condition of a memory buffer storing a plurality of already processed and compressed first strips of a page, the first strips processed according to a rendering technique;
responsive to detecting the low memory condition, decompressing, reprocessing, compressing, and storing the first strips in the memory buffer, the first strips reprocessed according to a memory utilization reduction technique; and
processing, compressing, and storing a plurality of additional, second strips of the page in the memory buffer, the second strips processed according to the rendering technique and then according to the memory utilization reduction technique;
again detecting the low memory condition of the memory buffer;
responsive to again detecting the low memory condition, decompressing, reprocessing, compressing, and storing the first and second strips in the memory buffer, the first and second strips reprocessed according to an additional memory utilization reduction technique; and
processing, compressing, and storing a plurality of additional, third strips of the page in the memory buffer, the third strips processed according to the rendering technique, then according to the memory utilization reduction technique, and then according to the additional memory utilization reduction technique.
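The core loop (buffer strips, detect a low memory condition, then decompress, reprocess, recompress, and re-store what is already buffered) can be sketched briefly in Python. This sketch reapplies a single stand-in reduction technique whenever the buffer overflows, whereas the claim layers an additional technique on the second low-memory event; the buffer limit, strip size, and reduction function are assumptions for illustration.

```python
import os
import zlib

BUFFER_LIMIT = 6_000                 # assumed buffer budget in bytes, purely for illustration

def render(strip_id: int) -> bytes:
    return os.urandom(4_096)         # stand-in for a strip processed by the rendering technique

def reduce_memory(data: bytes) -> bytes:
    return data[::2]                 # stand-in for the memory utilization reduction technique

buffer = {}                          # strip_id -> compressed strip

def buffer_size() -> int:
    return sum(len(v) for v in buffer.values())

def store(strip_id: int, data: bytes) -> None:
    buffer[strip_id] = zlib.compress(data)

def reprocess_all(technique) -> None:
    """Decompress, reprocess, recompress, and store every strip already in the buffer."""
    for strip_id, compressed in list(buffer.items()):
        store(strip_id, technique(zlib.decompress(compressed)))

for strip_id in range(8):            # process and buffer the page strip by strip
    store(strip_id, render(strip_id))
    if buffer_size() > BUFFER_LIMIT:             # low memory condition detected
        reprocess_all(reduce_memory)             # squeeze the already-buffered strips
print(len(buffer), "strips buffered in", buffer_size(), "bytes")
```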

US Pat. No. 11,113,011

INFORMATION PROCESSING SYSTEM, INFORMATION PROCESSING METHOD, AND RECORDING MEDIUM

RICOH COMPANY, LTD., Tok...


1. An information processing system for providing a service via a service providing server residing on a first network, the service being requested by an information processing apparatus residing on a second network different from the first network based on application information, the information processing system comprising:first circuitry configured to obtain state information indicating a state of the service providing server; and
second circuitry configured to transmit the state information of the service providing server to the information processing apparatus in response to reception of a workflow service request to the service providing server from the information processing apparatus, to cause the information processing apparatus to perform a notification operation according to the state information that is received before the requested workflow service is executed.

US Pat. No. 11,113,010

PRINTING USING FIDUCIAL MARKS

Hewlett-Packard Developme...


1. A method, comprising:printing an image and fiducial marks on a first side of a print medium, wherein the fiducial marks comprise:a set of first marks ahead of the image, wherein two of the first marks are end marks at a first distance from respective opposite edges of the print medium, and each of the first marks has a first length and is located at a second distance from a beginning edge of the image; and
a set of second marks along a length of the print medium, wherein each of the second marks is at the first distance from the respective opposite edges, has a second length smaller than the first length, and is at a third distance from another second mark;

scanning, using a sensor, the fiducial marks while the fiducial marks are backlit by a light source; and
printing a registered mirrored copy of the image on the second side of the print medium using respective relative positions of at least some of the scanned fiducial marks to the image.

US Pat. No. 11,113,009

COMPUTING DEVICE FACILITATING PRIORITIZATION OF TASK EXECUTION WITHIN A DISTRIBUTED STORAGE NETWORK (DSN)

Pure Storage, Inc., Moun...


1. A computing device comprising:an interface configured to interface and communicate with a distributed storage network (DSN);
memory that stores operational instructions; and
a processing module operably coupled to the interface and to the memory, wherein the processing module, when operable within the computing device based on the operational instructions, is configured to:generate a prioritized request related to information stored within a storage unit (SU) of a plurality of storage units (SUs) implemented within the DSN;
transmit, to the SU and via the interface, the prioritized request; and
receive, from the SU and via the interface, a response to the prioritized request that includes an execution priority level that is generated by the SU based on one or more conditions corresponding to the SU and that indicates a priority value level of the prioritized request in comparison to at least one other prioritized request.
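A small Python sketch of the request/response exchange above: the storage unit derives an execution priority level from its own conditions and returns it with the response. The particular conditions (queue depth, rebuild activity) and the arithmetic are illustrative assumptions, not taken from the patent.

```python
from dataclasses import dataclass

@dataclass
class PrioritizedRequest:
    request_id: int
    requested_priority: int        # priority the requesting computing device asks for

@dataclass
class StorageUnit:
    """Stand-in storage unit (SU); queue depth and rebuild activity are assumed conditions."""
    queue_depth: int
    rebuilding: bool

    def respond(self, request: PrioritizedRequest) -> dict:
        # Derive an execution priority level from the SU's own conditions: a busy or
        # rebuilding SU demotes the request relative to other pending prioritized requests.
        level = request.requested_priority
        level -= self.queue_depth // 10
        if self.rebuilding:
            level -= 1
        return {"request_id": request.request_id, "execution_priority_level": level}

su = StorageUnit(queue_depth=25, rebuilding=True)
print(su.respond(PrioritizedRequest(request_id=7, requested_priority=5)))
# {'request_id': 7, 'execution_priority_level': 2}
```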


US Pat. No. 11,113,008

DATA RESTORATION USING PARTIALLY ENCODED SLICE REQUESTS

Pure Storage, Inc., Moun...


11. A method for execution by a computing device, the method comprising:determining to restore data stored in a first set of storage units (SUs) within a storage network in accordance with first dispersed storage error coding function parameters to be stored in a second set of SUs within the storage network in accordance with second dispersed storage error coding function parameters, wherein the data includes a data object that is segmented into a plurality of data segments, wherein a data segment of the plurality of data segments is dispersed error encoded in accordance with the first dispersed storage error coding function parameters to produce a set of encoded data slices (EDSs) that are distributedly stored in the first set of SUs, wherein a first decode threshold number of EDSs are needed to recover the data segment;
generating a plurality of partially encoded slice requests; and
issuing via an interface of the computing device that is configured to interface and communicate with the storage network, the partially encoded slice requests to at least a first decode threshold number of SUs of the first set of SUs to instruct a SU of the at least a first decode threshold number of SUs of the first set of SUs to generate a second decode threshold number of partially EDSs based on the first dispersed storage error coding function parameters and the second dispersed storage error coding function parameters to be combined to generate a new set of EDSs to be stored in the second set of SUs.

US Pat. No. 11,113,007

PARTIAL EXECUTION OF A WRITE COMMAND FROM A HOST SYSTEM

Micron Technology, Inc., ...


1. A method, comprising:receiving, in a memory sub-system, a first write command from a host system, the first write command identifying a first input/output size;
identifying, by the memory sub-system based on a media physical layout, a second input/output size for executing the first write command in the memory sub-system, the second input/output size being smaller than the first input/output size;
executing, in the memory sub-system, the first write command according to the second input/output size;
configuring, by the memory sub-system, a response for the first write command to identify the second input/output size;
transmitting, from the memory sub-system to the host system, the response identifying the second input/output size, wherein the host system is configured to generate a second write command to write at least data that is identified in the first write command via the first input/output size but not included in execution of the first write command according to the second input/output size; and
receiving, in the memory sub-system, the second write command from the host system.
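The host/sub-system handshake above lends itself to a short sketch: the sub-system executes only up to a media-layout boundary, reports the smaller I/O size in its response, and the host generates a second write for the remainder. The boundary rule and block-based addressing below are assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass
class WriteCommand:
    lba: int          # starting logical block address (assumed addressing scheme)
    size: int         # input/output size in blocks

class MemorySubsystem:
    def __init__(self, page_boundary: int = 16):
        self.page_boundary = page_boundary

    def execute(self, cmd: WriteCommand) -> int:
        """Execute up to the next media-layout boundary and report the size actually written."""
        executed = min(cmd.size, self.page_boundary - (cmd.lba % self.page_boundary))
        # ... write `executed` blocks starting at cmd.lba ...
        return executed            # the response identifies this (possibly smaller) I/O size

def host_write(subsys: MemorySubsystem, cmd: WriteCommand) -> list[WriteCommand]:
    issued = [cmd]
    executed = subsys.execute(cmd)
    if executed < cmd.size:        # generate a second write for data not yet written
        follow_up = WriteCommand(lba=cmd.lba + executed, size=cmd.size - executed)
        issued.append(follow_up)
        subsys.execute(follow_up)
    return issued

print(host_write(MemorySubsystem(), WriteCommand(lba=10, size=20)))
```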

US Pat. No. 11,113,006

DYNAMIC DATA PLACEMENT FOR COLLISION AVOIDANCE AMONG CONCURRENT WRITE STREAMS

Micron Technology, Inc., ...


1. A method, comprising:receiving, in a memory sub-system, a plurality of streams of write commands, wherein the plurality of streams of write commands are configured to write data in a plurality of erase block sets respectively in the memory sub-system, and wherein each of the plurality of streams of write commands is tagged to store data as a group in a respective one of the erase block sets;
identifying a plurality of media units in the memory sub-system, wherein the plurality of media units are available to write data from the plurality of streams concurrently;
selecting first commands from the plurality of streams for concurrent execution in the plurality of media units;
generating and storing, dynamically in response to the first commands from a plurality of write streams being selected for concurrent execution in the plurality of media units, a portion of a media layout that maps:from logical addresses, identified by the first commands from the plurality of write streams, in the logical address space,
to physical addresses of memory units in the plurality of media units; and

executing the first commands concurrently by storing data into the memory units according to the physical addresses.
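A hedged Python sketch of the dynamic-placement idea above: the portion of the media layout mapping logical to physical addresses is generated only when commands from the write streams are selected for concurrent execution across the available media units. The two-die layout, the round-robin assignment, and the address tuples are illustrative assumptions.

```python
from itertools import cycle

# Hypothetical layout: two media units (dies) available for concurrent writes; each stream
# is tagged to its own erase block set.
MEDIA_UNITS = ["die-0", "die-1"]
next_page = {unit: 0 for unit in MEDIA_UNITS}          # next free page per media unit
media_layout = {}                                      # logical address -> physical address

def place_concurrently(selected_commands):
    """Dynamically extend the media layout for commands selected for concurrent execution."""
    placements = []
    for cmd, unit in zip(selected_commands, cycle(MEDIA_UNITS)):
        physical = (unit, cmd["erase_block_set"], next_page[unit])
        next_page[unit] += 1
        media_layout[cmd["logical_address"]] = physical   # mapping generated only when selected
        placements.append((cmd, physical))
    return placements

streams = {
    "stream-A": [{"logical_address": 100, "erase_block_set": "ebs-A"}],
    "stream-B": [{"logical_address": 200, "erase_block_set": "ebs-B"}],
}
selected = [cmds[0] for cmds in streams.values()]       # one command per stream this round
for cmd, physical in place_concurrently(selected):
    print(cmd["logical_address"], "->", physical)
```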

US Pat. No. 11,113,005

MULTI-PLATFORM DATA STORAGE SYSTEM SUPPORTING CONTAINERS OF VIRTUAL STORAGE RESOURCES

Arrikto Inc., San Mateo,...


1. A multi-platform data storage system for maintaining containers including one or more virtual storage resources, the multi-platform data storage system comprising:a storage interface configured to enable access to a plurality of storage platforms that use different storage access and storage management protocols, the plurality of storage platforms storing data objects in physical data storage;
a storage mobility and management layer providing virtual management of virtual storage resources corresponding to one or more data objects stored in the plurality of storage platforms; and
a virtual resource management sub-system that manages containers configured to contain one or more of the virtual storage resources, wherein the virtual resource management sub-system permits sharing of the containers in a peer-to-peer manner with one or more other peer multi-platform data storage systems via one or more networks,
wherein the one or more virtual storage resources contained in a selected one of the containers comprise files having unique file names, and wherein the virtual resource management sub-system is configured to create a new version of a corresponding one of the files when associated metadata for the corresponding one of the files has changed.

US Pat. No. 11,113,004

MOBILITY AND MANAGEMENT LAYER FOR MULTI-PLATFORM ENTERPRISE DATA STORAGE

Arrikto Inc., San Mateo,...


1. A multi-platform data storage system for accessing a plurality of storage platforms that use different storage access or storage management protocols, the multi-platform data storage system comprising:a storage mobility and management layer providing virtual management of data stored in the plurality of storage platforms; and
a storage protocol converter operatively coupled between the storage mobility and management layer and the plurality of storage platforms, wherein during access or management communication from the storage mobility and management layer to a particular one of the storage platforms, the storage protocol converter operates to convert the access and/or management communication from a layer protocol used by the storage mobility and management layer to the storage access protocol used by the particular one of the storage platforms,
wherein the storage mobility and management layer comprises a composition service that forms a virtual storage resource (VSR) from a plurality of virtual data blocks (VDB), and maintains a mapping of the virtual data blocks to the virtual storage resource, and wherein mapping data pertaining to the mapping is stored in a virtual data block.

US Pat. No. 11,113,003

STORAGE APPARATUS, STORAGE CONTROL DEVICE, AND RECORDING MEDIUM WITH EXECUTION COMMAND PAUSING OR STOPPING

FUJITSU LIMITED, Kawasak...


1. A storage apparatus comprising:a non-volatile storage device configured to store data;
a memory; and
a processor coupled to the memory and configured to control access to the non-volatile storage device, the processor configured to:receive an instruction from external software as a task,
generate a command set for controlling the non-volatile storage device,
execute a command included in the generated command set,
control firmware revision processing of the storage apparatus,
stop execution of the command after performing processing for suppressing abnormality detection of the external software at a timing at which the command is not executable during the firmware revision processing,
provide an instruction to pause the execution of the command at a timing of separating and incorporating a corresponding one of a plurality of control devices and at a timing of synchronizing a cache table with another one of the plurality of control devices, and
provide an instruction to completely stop the execution of the command at a timing of switching the another one of the plurality of control devices to a master control device.


US Pat. No. 11,113,002

COMMAND OVERLAP CHECKING IN A DATA STORAGE DEVICE

Seagate Technology LLC, ...


8. A method comprising:recognizing commands received in a data storage drive from a host as single logical address (LA) commands or multi-LA commands;
providing a command overlap detection table having a plurality of records with each record having a predetermined bit length;
storing multiple LAs associated with different commands in a same record of the command overlap detection table when the different commands are recognized as single LA commands;
storing multiple LAs associated with a same command in the same record of the command overlap detection table when the same command is recognized as a multi-LA command; and
not storing in the same record of the command overlap detection table the multiple LAs associated with the different single LA commands together with the multiple LAs associated with the same multi-LA command at the same time.
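The record-packing rules above can be sketched directly: LAs from different single-LA commands may share a record, a multi-LA command's LAs fill their own record(s), and the two kinds are never mixed in one record. The record capacity and the dict-based record structure below are assumptions for illustration.

```python
RECORD_CAPACITY = 4        # assumed number of LAs that fit in one fixed-length record

class OverlapTable:
    def __init__(self):
        self.records = []                  # each record: {"multi": bool, "las": [...]}

    def add_command(self, las, is_multi):
        if is_multi:
            # A multi-LA command keeps all of its LAs in its own record(s).
            for start in range(0, len(las), RECORD_CAPACITY):
                self.records.append({"multi": True, "las": las[start:start + RECORD_CAPACITY]})
            return
        # Single-LA commands from different commands may share a record, but never a
        # record that already holds a multi-LA command's addresses.
        for record in self.records:
            if not record["multi"] and len(record["las"]) < RECORD_CAPACITY:
                record["las"].extend(las)
                return
        self.records.append({"multi": False, "las": list(las)})

    def overlaps(self, la):
        return any(la in record["las"] for record in self.records)

table = OverlapTable()
table.add_command([10], is_multi=False)
table.add_command([11], is_multi=False)          # shares the record with LA 10
table.add_command([100, 101, 102, 103, 104], is_multi=True)
print(table.records)
print(table.overlaps(102))                        # True
```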

US Pat. No. 11,113,001

FABRIC DRIVEN NON-VOLATILE MEMORY EXPRESS SUBSYSTEM ZONING

Hewlett Packard Enterpris...


1. An apparatus comprising:at least one processor; and
a non-transitory computer readable medium storing machine readable instructions that when executed by the at least one processor cause the at least one processor to:receive, from a non-volatile memory express (NVMe) Name Server (NNS), a zoning specification that includes an indication of a host that is to communicate with a given NVMe subsystem of an NVMe storage domain;
designate, based on the zoning specification, the host as being permitted to connect to the given NVMe subsystem of the NVMe storage domain;
receive, from the host, an NVMe connect command;
establish, based on the designation and an analysis of the NVMe connect command, a connection between the given NVMe subsystem of the NVMe storage domain and the host;
receive, prior to receiving the NVMe connect command, a discovery command from the host; and
forward, in response to the discovery command, a payload to the host, the payload:masking NVMe subsystems of the NVMe storage domain that are different from the given NVMe subsystem of the NVMe storage domain, and
including an Internet Protocol (IP) address associated with an NVMe qualified name (NQN) for the given NVMe subsystem of the NVMe storage domain.
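A short Python sketch of the zoning behavior above: the discovery payload masks subsystems the host is not zoned for, and a connect is accepted only for a designated subsystem. The NQNs, IP addresses, and dictionary layout are purely illustrative assumptions.

```python
# Hypothetical zoning/discovery state; the NQNs and IP addresses are illustrative only.
ZONING = {"host-nqn-1": {"nqn.2021-09.io.example:subsys-a"}}     # host -> permitted subsystems
SUBSYSTEMS = {
    "nqn.2021-09.io.example:subsys-a": "10.0.0.11",
    "nqn.2021-09.io.example:subsys-b": "10.0.0.12",
}

def discovery_payload(host_nqn: str) -> list[dict]:
    """Build a discovery response that masks subsystems the host is not zoned to see."""
    permitted = ZONING.get(host_nqn, set())
    return [{"subsystem_nqn": nqn, "ip_address": ip}
            for nqn, ip in SUBSYSTEMS.items() if nqn in permitted]

def accept_connect(host_nqn: str, target_nqn: str) -> bool:
    """Allow an NVMe connect command only if the host was designated for that subsystem."""
    return target_nqn in ZONING.get(host_nqn, set())

print(discovery_payload("host-nqn-1"))                                   # subsys-b is masked
print(accept_connect("host-nqn-1", "nqn.2021-09.io.example:subsys-b"))   # False
```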



US Pat. No. 11,113,000

TECHNIQUES FOR EFFICIENTLY ACCESSING VALUES SPANNING SLABS OF MEMORY

NETFLIX, INC., Los Gatos...


1. A computer-implemented method, comprising:causing a first data word included in a first slab associated with a memory pool to store a first portion of a value;
causing a second data word included in a second slab associated with the memory pool to store a second portion of the value, wherein the first portion of the value is not stored in the second slab; and
performing a copy operation that copies the second data word to a duplicate data word included in the first slab.
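The slab-spanning idea above is small enough to sketch: a value's two halves land in different slabs, and the second data word is then copied into a duplicate word inside the first slab so the whole value can be read from one slab. The 8-byte word size, 16-byte value, and word positions are assumptions for illustration.

```python
WORD_BYTES = 8

class Slab:
    """A fixed-size slab of 8-byte data words (sizes are assumptions for illustration)."""
    def __init__(self, num_words: int):
        self.words = [bytes(WORD_BYTES)] * num_words

def store_spanning_value(first: Slab, second: Slab, value: bytes) -> int:
    """Store a 16-byte value whose halves land in two slabs, then duplicate the
    second half into the first slab so the whole value can be read from one slab."""
    first_word, second_word = value[:WORD_BYTES], value[WORD_BYTES:]
    first.words[-2] = first_word          # first portion stored in the first slab
    second.words[0] = second_word         # second portion stored in the second slab
    first.words[-1] = second_word         # duplicate data word kept in the first slab
    return len(first.words) - 2           # index of the value's first word in the first slab

first, second = Slab(4), Slab(4)
idx = store_spanning_value(first, second, b"ABCDEFGHabcdefgh")
print(first.words[idx] + first.words[idx + 1])   # b'ABCDEFGHabcdefgh' read from one slab
```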

US Pat. No. 11,112,999

OPTIMIZING I/O LATENCY BY SOFTWARE STACK LATENCY REDUCTION IN A COOPERATIVE THREAD PROCESSING MODEL

EMC IP Holding Company LL...


1. A method for use in a storage node, the method comprising:instantiating a first poller for detecting whether pending storage device operations have been completed;
executing the first poller to identify a first storage device operation that has been completed, wherein executing the first poller includes: (a) executing a first function to detect whether a completion queue corresponding to a storage device driver is empty, the first function being arranged to read a content of a memory location that is associated with the completion queue, the first function being executed in a user space of the storage node, (b) terminating the execution of the first poller when the completion queue is empty, and (c) executing a system call function to the storage device driver only when the completion queue is not empty, wherein the system call function is executed in a kernel space of the storage node, and the system call function is configured to identify one or more operations that are listed in the completion queue;
identifying a first thread that is waiting for the first storage device operation to be completed; and
transitioning the first thread from a waiting state to a ready state.
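To illustrate the poller structure above (a cheap user-space check of the completion queue's memory, with the system call made only when the queue is non-empty), here is a hedged Python sketch in which a deque and a plain function stand in for the memory-mapped queue and the kernel-space system call.

```python
from collections import deque

completion_queue = deque()        # stands in for the driver's memory-mapped completion queue
waiting_threads = {}              # operation id -> thread waiting for that operation

def user_space_peek() -> bool:
    """Cheap user-space check of the completion queue's backing memory (no system call)."""
    return len(completion_queue) > 0

def kernel_drain():
    """Stand-in for the system call into the storage device driver; only made when needed."""
    completed = list(completion_queue)
    completion_queue.clear()
    return completed

def run_poller():
    if not user_space_peek():           # queue empty: terminate the poller, skip the syscall
        return []
    ready = []
    for op_id in kernel_drain():        # system call issued only for a non-empty queue
        thread = waiting_threads.pop(op_id, None)
        if thread is not None:
            ready.append(thread)        # transition the thread from waiting to ready
    return ready

waiting_threads[42] = "thread-A"
print(run_poller())                     # [] -> empty queue, no system call made
completion_queue.append(42)
print(run_poller())                     # ['thread-A']
```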

US Pat. No. 11,112,998

OPERATION INSTRUCTION SCHEDULING METHOD AND APPARATUS FOR NAND FLASH MEMORY DEVICE

DERA CO., LTD., Beijing ...


1. An operation instruction scheduling method for a NAND flash memory device, comprising:performing task decomposition on the operation instruction of the NAND flash memory device, and sending an obtained task to a corresponding task queue;
sending a current task to a corresponding arbitration queue according to a task type of the current task in the task queue, wherein the task type is determined according to time spent by the task in occupying a NAND interface, and a task that spends shorter time in occupying the NAND interface is assigned a higher task priority;
determining a priority of an arbitration queue according to a task priority stored in the arbitration queue; and
scheduling a NAND interface for a to-be-executed task in the arbitration queue according to priority information of the arbitration queue.
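A compact Python sketch of the scheduling idea above: operation instructions are decomposed into tasks, each task's priority follows from how long it occupies the NAND interface (shorter occupancy, higher priority), and the interface is granted to the highest-priority pending task. The task types, occupancy times, and single arbitration heap are illustrative assumptions.

```python
import heapq

# Assumed occupancy times for decomposed task types (shorter NAND-interface time -> higher priority).
NAND_OCCUPANCY_US = {"status_poll": 5, "read_transfer": 80, "program_transfer": 400}

def decompose(operation: str):
    """Stand-in task decomposition: split an operation instruction into interface tasks."""
    if operation == "read_page":
        return ["status_poll", "read_transfer"]
    if operation == "program_page":
        return ["status_poll", "program_transfer"]
    return []

arbitration_queue = []           # min-heap keyed by occupancy time (i.e. by task priority)
sequence = 0                     # tie-breaker so equal-priority tasks stay FIFO

def enqueue(operation: str):
    global sequence
    for task in decompose(operation):
        heapq.heappush(arbitration_queue, (NAND_OCCUPANCY_US[task], sequence, task))
        sequence += 1

def schedule_next():
    """Grant the NAND interface to the highest-priority (shortest-occupancy) pending task."""
    if arbitration_queue:
        _, _, task = heapq.heappop(arbitration_queue)
        return task
    return None

enqueue("program_page")
enqueue("read_page")
print([schedule_next() for _ in range(4)])
# ['status_poll', 'status_poll', 'read_transfer', 'program_transfer']
```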