US Pat. No. 11,068,412

RDMA TRANSPORT WITH HARDWARE INTEGRATION

Microsoft Technology Lice...


1. A remote direct memory access (RDMA) capable network interface card (RNIC) consumer device configured to provide interface functionality between an RNIC and an RNIC consumer, the RNIC consumer device comprising a hardware-based interface configured to interact with an RDMA transport mechanism of the RNIC, the RNIC consumer device comprising hardware-based logic that configures the RNIC consumer device to perform operations comprising:send, via the hardware-based interface to the RNIC, RDMA requests on behalf of the RNIC consumer;
receive, by the hardware-based logic from the RNIC, RDMA responses sent to the RNIC consumer;
detect invalid RDMA responses and requests;
maintain an RDMA connection between the RNIC and the RNIC consumer when an invalid RDMA response or request is detected; and
communicate with a transaction interface configured to transmit inbound upper layer protocol (ULP) responses without reading receive queue (RQ) descriptors or implementing RQ semantics.

US Pat. No. 11,068,411

REDUCING IMPACT OF CONTEXT SWITCHES THROUGH DYNAMIC MEMORY-MAPPING OVERALLOCATION

INTERNATIONAL BUSINESS MA...


1. A computer-implemented method comprising:receiving, via a processor, established upper bounds for dynamic structures in a multi-tenant system;
creating, via the processor, arrays comprising related memory-management unit (MMU) mappings to be placed together; and
placing the dynamic structures within the arrays, the placing comprising for each array:determining that placing a dynamic structure in that element would cause the array to become overcommitted and result in a layout where accessing all elements would impose a translation lookaside buffer (TLB) replacement action; based on the determination, skipping an element of the array; and
scanning for an array-start entry by placing the start of a first element at an address from which an entire array can be placed without TLB contention, and
accessing, via the processor, all non-skipped elements without incurring TLB replacements.
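The placement logic of this claim can be sketched as follows — a minimal model assuming a TLB with a fixed number of entries over fixed-size pages; the constants, function names, and the overcommit test are illustrative, not from the patent:

```python
PAGE_SIZE = 4096
TLB_ENTRIES = 4  # pages the TLB can map without a replacement

def place_elements(base, element_size, count):
    """Place `count` elements of `element_size` starting at `base`, skipping
    any element whose pages would push the set of pages holding placed
    elements past what the TLB can map at once. Skipped elements are never
    accessed, so their pages need no TLB entry during a full traversal."""
    placed, skipped = [], []
    used_pages = set()
    addr = base
    for i in range(count):
        pages = set(range(addr // PAGE_SIZE,
                          (addr + element_size - 1) // PAGE_SIZE + 1))
        if len(used_pages | pages) > TLB_ENTRIES:
            skipped.append(i)          # leave a hole rather than overcommit
        else:
            used_pages |= pages
            placed.append(addr)
        addr += element_size
    return placed, skipped
```

With page-sized elements from address 0, the first four land on pages 0-3 and the remainder are skipped, so scanning the placed elements touches at most `TLB_ENTRIES` pages.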


US Pat. No. 11,068,410

MULTI-CORE COMPUTER SYSTEMS WITH PRIVATE/SHARED CACHE LINE INDICATORS

ETA SCALE AB, Uppsala (S...


1. A computer system comprising:multiple processor cores;
at least one local cache memory associated with, and operatively coupled to, a respective one of the multiple processor cores for storing one or more cache lines of data accessible only by the associated core;
at least one intermediary cache memory which is coupled to a subset of the multiple processor cores and which stores one or more cache lines of data; and
at least one shared memory, the shared memory being operatively coupled to all of the cores and which stores multiple data blocks,
wherein each cache line has a bit that signifies whether this cache line is private or shared in said shared memory,
wherein a common shared level is identified among the at least one intermediary cache memory and the at least one shared memory, the identifying being based on which of the at least one intermediary cache memory and the at least one shared memory is shared between two of the multiple processor cores,
wherein the common shared level is a level within computer system memory where a bit's value for a memory block becomes shared from being private in levels closer to local cache memory,
wherein a cache coherence operation is selected among a plurality of cache coherence operations based on said common shared level being identified, and
wherein said cache coherence operation is performed upon the occurrence of a coherence event.

US Pat. No. 11,068,409

METHOD AND SYSTEM FOR USER-SPACE STORAGE I/O STACK WITH USER-SPACE FLASH TRANSLATION LAYER

Alibaba Group Holding Lim...


1. A computer-implemented method for facilitating a user-space storage I/O stack, the method comprising:generating, by a file system in the user-space, a logical block address associated with an I/O request which indicates data to be read or written;
receiving, by a flash translation layer module in the user-space from the file system, the logical block address, wherein the flash translation layer module operates by communicating with the file system and a block device driver;
mapping, by the flash translation layer module in the user-space, a physical block address corresponding to the logical block address;
notifying the block device driver of the physical block address;
operating, by the block device driver, the mapped physical block address to access or place the data in a non-volatile memory by bypassing a kernel space;
estimating a latency associated with executing the I/O request; and
in response to determining that the estimated latency is greater than or equal to a predetermined threshold, and that the I/O request is a read request, reading the requested data from a location other than the physical block address.
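The read path of this claim can be sketched as a user-space lookup table plus a latency escape hatch; the threshold value, the replica mechanism, and all names here are assumptions for illustration, not details from the patent:

```python
LATENCY_THRESHOLD_US = 500  # assumed predetermined threshold

class UserSpaceFTL:
    """Minimal user-space flash translation layer: maps LBA -> PBA and
    redirects slow reads to an alternate location (e.g. a replica)."""
    def __init__(self):
        self.l2p = {}        # logical block address -> physical block address
        self.replicas = {}   # physical address -> alternate read location

    def map(self, lba, pba, replica=None):
        self.l2p[lba] = pba
        if replica is not None:
            self.replicas[pba] = replica

    def read_target(self, lba, estimated_latency_us):
        pba = self.l2p[lba]
        # If the primary location is expected to be slow (e.g. its die is
        # busy), serve the read from a location other than the mapped PBA.
        if estimated_latency_us >= LATENCY_THRESHOLD_US and pba in self.replicas:
            return self.replicas[pba]
        return pba
```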

US Pat. No. 11,068,408

MEMORY SYSTEM AND OPERATING METHOD THEREOF

SK hynix Inc., Icheon (K...


1. A memory system, comprising:a nonvolatile memory device;
a buffer memory device configured to store logical-physical address mapping information; and
a memory controller configured to control operations of the nonvolatile memory device and the buffer memory device,
wherein the memory controller comprises:
a cache memory;
a host control circuit configured to receive a read command and a read logical address from a host, to read mapping information corresponding to the read logical address from the buffer memory device, and to cache the mapping information in the cache memory, the mapping information corresponding to the logical-physical address mapping information stored in the buffer memory device;
a flash translation section configured to read a read physical address mapped to the read logical address from the mapping information cached in the cache memory; and
a flash control circuit configured to read data corresponding to the read command from the nonvolatile memory device based on the read physical address, and
wherein the buffer memory device includes a DRAM, and the cache memory includes an SRAM, and
wherein when the host control circuit receives a write command, a write logical address, and write data from the host, the flash translation section maps a write physical address to the write logical address, and updates the logical-physical address mapping information stored in the buffer memory device with the write physical address mapped to the write logical address.
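The read and write paths of this claim can be modeled with two dictionaries standing in for the DRAM mapping table and the SRAM mapping cache; this is a sketch of the data flow only, with invalidation on write as an assumed policy:

```python
class MappingCache:
    """SRAM cache over a DRAM-resident logical-to-physical mapping table."""
    def __init__(self, dram_l2p):
        self.dram = dram_l2p   # full L2P table (buffer memory device, DRAM)
        self.sram = {}         # cached mapping entries (cache memory, SRAM)
        self.dram_reads = 0

    def read_pba(self, lba):
        """Host control path: fetch and cache the mapping, then resolve it."""
        if lba not in self.sram:
            self.dram_reads += 1
            self.sram[lba] = self.dram[lba]   # cache the mapping entry
        return self.sram[lba]

    def write(self, lba, new_pba):
        """Write path: the FTL updates the DRAM table with the new mapping."""
        self.dram[lba] = new_pba
        self.sram.pop(lba, None)   # drop any stale cached entry (assumed)
```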

US Pat. No. 11,068,407

SYNCHRONIZED ACCESS TO DATA IN SHARED MEMORY BY PROTECTING THE LOAD TARGET ADDRESS OF A LOAD-RESERVE INSTRUCTION

International Business Ma...


1. A processing unit for a data processing system including multiple processing units all having access to a shared memory via a system interconnect, said processing unit comprising:a processor core that executes within a given hardware thread memory access instructions including, in order, a load-type instruction and a load-reserve instruction, wherein execution of the load-type instruction and load-reserve instruction generates corresponding core load and load-reserve requests that both specify a same target address, wherein the load-reserve request requests a reservation for the target address and the core load request does not request a reservation for the target address;
a cache memory coupled to the processor core, wherein the cache memory includes a directory and at least one read-claim state machine, and wherein the cache memory is configured to perform:based on receipt of the core load request:determining whether the target address specified by the core load request hits in the directory;
based on determining the target address specified by the core load request hits in the directory, refraining from issuing on the system interconnect a memory access request for data identified by the target address;
allocating the at least one read-claim machine to service the core load request and servicing the core load request by the at least one read-claim machine;
initiating, by the at least one read-claim machine for only the target address, a protection interval during which the at least one read-claim machine protects the target address against access by a conflicting memory access request following servicing of the core load request;

thereafter receiving the load-reserve request, allocating the at least one read-claim machine to service the load-reserve request, and servicing the load-reserve request by the at least one read-claim machine establishing a reservation for the target address specified by the load-reserve request;
while the at least one read-claim machine is allocated to service the load-reserve request, the at least one read-claim machine continuing the protection interval initiated based on the core load request; and
thereafter, ending the protection interval for the target address specified by the load-reserve request prior to receipt of a subsequent store-conditional request.


US Pat. No. 11,068,406

MAINTAINING A SINGLE COPY OF DATA WITHIN A READ CACHE

EMC IP Holding Company LL...


1. In data storage circuitry, a method of processing read requests from a set of requesters, the method comprising:while a first data element and a second data element are stored in secondary storage, providing the first data element from the secondary storage to the set of requesters in response to a first request to read the first data element, the first request being received by the data storage circuitry from the set of requesters;
after the first data element is provided to the set of requesters in response to the first request, providing the second data element to the set of requesters in response to a second request to read the second data element, the second request being received by the data storage circuitry from the set of requesters; and
in response to detecting that the first data element and the second data element match, maintaining a single copy of the first and second data elements in a read cache for subsequent read access by the set of requesters.
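The single-copy behavior of this claim can be sketched with a content-addressed index; detecting the match via a content digest is an assumption here (the claim only requires detecting that the elements match):

```python
import hashlib

class DedupReadCache:
    """Keep one cached copy for data elements with identical content."""
    def __init__(self):
        self.by_digest = {}   # content digest -> the single cached copy
        self.index = {}       # requester-visible element id -> digest

    def insert(self, element_id, data: bytes):
        digest = hashlib.sha256(data).hexdigest()
        if digest not in self.by_digest:   # first copy wins; duplicates share it
            self.by_digest[digest] = data
        self.index[element_id] = digest

    def get(self, element_id):
        return self.by_digest[self.index[element_id]]

    def copies(self):
        return len(self.by_digest)
```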

US Pat. No. 11,068,405

COMPRESSION OF HOST I/O DATA IN A STORAGE PROCESSOR OF A DATA STORAGE SYSTEM WITH SELECTION OF DATA COMPRESSION COMPONENTS BASED ON A CURRENT FULLNESS LEVEL OF A PERSISTENT CACHE

EMC IP Holding Company LL...


1. A method of providing data compression in a storage processor of a data storage system, comprising the steps of:in response to detecting a cache flush event by detecting that a predetermined time period has expired since host I/O data was previously stored into a persistent cache located in the storage processor, i) forming an aggregation set of blocks of host I/O data within host I/O data accumulated in the persistent cache, wherein the aggregation set is a set of oldest blocks of host I/O data that are stored in the persistent cache, and ii) determining a current fullness level of the persistent cache, wherein the current fullness of the persistent cache comprises a current percentage of a total size of the persistent cache that is currently used to store host I/O data;
selecting, by a compression selection component in the storage processor in response to the current fullness level of the persistent cache, from a set of available compression components contained in the storage processor, a compression component for compressing the aggregation set, wherein the compression selection component selects compression components implementing compression algorithms having relatively lower compression ratios in response to relatively higher current fullness levels of the persistent cache, and wherein the compression selection component selects compression components implementing compression algorithms having relatively higher compression ratios in response to relatively lower current fullness levels of the persistent cache; and
compressing the aggregation set using the selected compression component to obtain a compressed version of the aggregation set.
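The selection policy of this claim maps cache fullness to compression strength: a fuller cache gets a faster, lower-ratio compressor so flushes drain it sooner. A sketch using zlib levels as the stand-in compression components, with illustrative thresholds:

```python
import zlib

def select_compression_level(fullness_pct):
    """Fuller persistent cache -> lower-ratio (faster) compression.
    The percentage thresholds are assumptions, not from the patent."""
    if fullness_pct >= 80:
        return 1   # fastest, lowest compression ratio
    if fullness_pct >= 50:
        return 6   # balanced
    return 9       # slowest, highest compression ratio

def flush_aggregation_set(blocks, fullness_pct):
    """Compress the aggregation set with the selected component."""
    level = select_compression_level(fullness_pct)
    return zlib.compress(b"".join(blocks), level)
```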

US Pat. No. 11,068,404

EFFICIENT COMPUTER-IMPLEMENTED TECHNIQUES FOR MANAGING GRAPHICS MEMORY

NETFLIX, INC., Los Gatos...


1. A method, comprising:creating in computer memory a memory space that is configured to store visual content items in one or more atlases of visual content items, each visual content item comprising a graphical image, and each atlas comprising a plurality of visual content items and having a configured memory size;
receiving a request from an application to display a row of visual content items on a screen of a computer; and
in response to receiving the request:determining that the row of visual content items includes one or more visual content items stored in a particular atlas of the one or more atlases, and
processing the one or more visual content items by accessing a portion of the memory space corresponding to the particular atlas.


US Pat. No. 11,068,403

DATA PROCESSING SYSTEM FOR PREFETCHING DATA TO BUFFER ENTRIES AND OPERATING METHOD OF THE DATA PROCESSING SYSTEM

SK hynix Inc., Icheon-si...


1. A data processing system comprising:a memory device;
a buffer circuit including a plurality of buffer entries each including a plurality of slabs;
a prefetch circuit configured to control the memory device to prefetch data from the memory device and control the buffer circuit to store the data in the buffer entries; and
a plurality of processing circuits respectively corresponding to the plurality of slabs, each processing circuit being configured to sequentially demand-fetch and process data stored in the corresponding slabs in the buffer entries,
wherein each processing circuit checks, when demand-fetching data from a first slab among corresponding slabs, a prefetch trigger bit of a first buffer entry in which the first slab is included, determines, when it is determined that the prefetch trigger bit is set, whether all data stored in a plurality of slabs included in a second buffer entry is demand-fetched, and triggers, when it is determined that all the data is demand-fetched, the prefetch circuit to perform prefetch of subsequent data to the second buffer entry.

US Pat. No. 11,068,402

EXTERNALIZED CONFIGURATIONS AND CACHING SOLUTION

ADP, LLC, Roseland, NJ (...


1. A computer-implemented method, comprising:storing, into a shared cache, a first version of a structured data format file and a second version of the structured data format file, the first version and the second version each comprising different configuration version data for an application;
setting an expiration time period to a first time period in response to determining that historical updating of versioning of the configuration version data occurs less than a threshold frequency value during the first time period;
in response to a request from a user at run-time for the configuration version data for the application, determining whether run-time format data of the configuration version data is stored in a local cache, wherein the local cache is different from the shared cache, and wherein the run-time format is different from the structured data format and enables a processor to execute the configuration version data at run-time;
in response to determining that the run-time format data of the configuration version data is stored in the local cache, determining a last-checked timestamp value that is associated to the run-time format data stored in the local cache, determining a first last-modified timestamp value that is associated to the run-time format data stored in the local cache, and determining a second last-modified timestamp value that is associated to the structured data format file stored in the shared cache;
returning the data from the configuration version run-time format file stored within the local cache in satisfaction of the request at run-time for the configuration version data of the application in response to determining that a time elapsed from the last-checked timestamp value does not exceed the expiration time period and that the second last-modified timestamp value is not more recent than the first last-modified timestamp value; and
in response to determining that the run-time format data of the configuration version data is not stored in the local cache, during execution of the application:
determining whether an attribute value of the user meets a threshold attribute value;
in response to determining that the attribute value of the user meets the threshold attribute value, reading data from the first version of the structured data format file stored in the shared cache;
in response to determining that the attribute value of the user does not meet the threshold attribute value, reading data from the second version of the structured data format file stored in the shared cache;
translating the read data from the structured data format into the run-time data format;
storing the translated data into the local cache in a run-time format file; and
returning data from the configuration version run-time format file stored within the local cache in satisfaction of the request at run-time for the configuration version data of the application; and
wherein the threshold attribute value is selected from the group consisting of a user identification indicia value and a frequency of access of the user to the shared cache.
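The local-cache decision at the heart of this claim — serve locally only if the entry is unexpired and the shared copy is not newer, otherwise re-read and re-translate — can be sketched as follows; the expiration period, data shapes, and the `translate` stand-in are assumptions:

```python
import time

EXPIRATION_S = 300   # assumed expiration time period

class ConfigCache:
    def __init__(self, shared_cache):
        self.shared = shared_cache   # version -> (last_modified, structured data)
        self.local = {}              # key -> (last_checked, last_modified, runtime)

    def get(self, key, version, now=None):
        now = time.time() if now is None else now
        entry = self.local.get(key)
        if entry:
            last_checked, local_mtime, runtime = entry
            shared_mtime, _ = self.shared[version]
            # Serve from the local cache if it is fresh and the shared
            # structured copy has not been modified more recently.
            if now - last_checked <= EXPIRATION_S and shared_mtime <= local_mtime:
                return runtime
        shared_mtime, raw = self.shared[version]
        runtime = self.translate(raw)          # structured -> run-time format
        self.local[key] = (now, shared_mtime, runtime)
        return runtime

    @staticmethod
    def translate(raw):
        return dict(raw)   # stand-in for the real format translation
```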

US Pat. No. 11,068,401

METHOD AND APPARATUS TO IMPROVE SHARED MEMORY EFFICIENCY

INTEL CORPORATION, Santa...


16. A system comprising:a processor coupled to a shared local memory, the processor to comprise:logic circuitry to compile:a first version of a code to access one or more registers as shared local memory; and
a second version of the code to access a cache as the shared local memory, wherein the logic circuitry to compile is to compile both the first version of the code and the second version of the code prior to execution of either of the first version of the code or the second version of the code; and

logic circuitry to execute the first version of the code in response to a determination that a work group size of the first version of the code is below a threshold value,
wherein the threshold value is a frequently used work group size, wherein the second version of the code comprises a universal binary code to be compiled in accordance with a maximum work group size prior to execution of either of the first version of the code or the second version of the code.
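The claim's ahead-of-time dual compilation and run-time dispatch can be sketched as follows; the threshold value and the tuple representation of a compiled version are illustrative assumptions:

```python
THRESHOLD = 64   # assumed "frequently used work group size"

def compile_both(kernel_src):
    """Both versions are compiled prior to execution, per the claim: one
    backing shared local memory with registers, one with the cache
    (a universal binary built for the maximum work group size)."""
    register_version = ("registers", kernel_src)
    cache_version = ("cache", kernel_src)
    return register_version, cache_version

def dispatch(work_group_size, register_version, cache_version):
    """Small work groups fit the register-backed version; otherwise fall
    back to the cache-backed universal binary."""
    if work_group_size < THRESHOLD:
        return register_version
    return cache_version
```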


US Pat. No. 11,068,400

FAILURE-ATOMIC LOGGING FOR PERSISTENT MEMORY SYSTEMS WITH CACHE-COHERENT FPGAS

VMware, Inc., Palo Alto,...


1. A method for logging changes to cache lines in a set of cache lines into a log for a transaction, the log permitting recovery from a failure, comprising:receiving a signal to begin logging changes to cache lines in the set of cache lines corresponding to a plurality of physical addresses, the plurality of physical addresses derived from a virtual memory region specified by an application, wherein the signal to begin the logging is received in response to the application sending a command to register the virtual memory region to be tracked for the transaction;
in response to the signal to begin the logging, adding a beginning mark for the transaction to the log, wherein the log resides in a persistent memory and the beginning mark indicates that the transaction is active; and
while the transaction is active:tracking each of the cache lines in the set of cache lines while the application is executing;
receiving an indication that a cache line of the cache lines in the set of cache lines has been modified while the application is executing, the indication being a write-back event for the cache line;
creating and adding an undo entry corresponding to the cache line to the log;
receiving a signal to end the logging; and

in response to the signal to end the logging, adding an ending mark for the transaction to the log, the ending mark indicating that the transaction is inactive.
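The begin-mark / undo-entry / end-mark structure of this claim, and the recovery it enables, can be sketched as follows; persistence is simulated with an in-memory list, and the entry format is an assumption:

```python
class UndoLog:
    """Failure-atomic undo log: a BEGIN mark opens a transaction, each
    write-back of a tracked cache line appends its pre-image, and an END
    mark closes the transaction."""
    def __init__(self):
        self.entries = []

    def begin(self, txid):
        self.entries.append(("BEGIN", txid))   # transaction is now active

    def on_writeback(self, txid, addr, old_value):
        # Record the pre-image so a crash mid-transaction can be undone.
        self.entries.append(("UNDO", txid, addr, old_value))

    def end(self, txid):
        self.entries.append(("END", txid))     # transaction is now inactive

    def recover(self, memory):
        """Roll back every transaction with a BEGIN mark but no END mark,
        applying its undo entries in reverse order."""
        open_tx = {e[1] for e in self.entries if e[0] == "BEGIN"}
        open_tx -= {e[1] for e in self.entries if e[0] == "END"}
        for kind, *rest in reversed(self.entries):
            if kind == "UNDO" and rest[0] in open_tx:
                _txid, addr, old = rest
                memory[addr] = old
        return memory
```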

US Pat. No. 11,068,399

TECHNOLOGIES FOR ENFORCING COHERENCE ORDERING IN CONSUMER POLLING INTERACTIONS BY RECEIVING SNOOP REQUEST BY CONTROLLER AND UPDATE VALUE OF CACHE LINE

Intel Corporation, Santa...


1. A target computing device for enforcing coherence ordering in consumer polling interactions, the target computing device comprising:a network interface controller (NIC);
one or more processors; and
one or more data storage devices having stored therein a plurality of instructions that, when executed by the one or more processors, cause the target computing device to:transmit, by the NIC and subsequent to having received a network packet, one or more write requests to a data storage device of the one or more data storage devices, wherein each of the one or more write requests is usable to initiate storage of at least a portion of a payload of the received network packet to the data storage device;
obtain, by the NIC and subsequent to having transmitted a last write request of the one or more write requests, ownership of a flag cache line of a plurality of cache lines in a cache of the target computing device, wherein a value of the flag cache line indicates whether the network packet has been written to the data storage device;
receive, by the NIC, a snoop request from a processor of the one or more processors;
identify, by the NIC, whether the received snoop request corresponds to a read flag snoop request associated with an active request being processed by the NIC;
hold, by the NIC and in response to having identified the received snoop request as the read flag snoop request, the received snoop request for delayed return;
determine, by the NIC, whether each of the one or more write requests has returned successfully;
update, by the NIC and subsequent to having determined that each of the one or more write requests has returned successfully, the value of the flag cache line to indicate the payload has been written to the data storage device; and
issue, by the NIC and subsequent to having updated the value of the flag cache line, a response to the processor responding to the received snoop request.


US Pat. No. 11,068,398

DISTRIBUTED CACHING SYSTEM

Amazon Technologies, Inc....


1. A system comprising:a plurality of local proxy systems, wherein individual local proxy systems from the plurality of local proxy systems include a local cache for storing data;
at least one processor configured to implement a work distribution module, the work distribution module configured to:receive a first request to retrieve a data item;
select, based at least in part on a selection algorithm, a first proxy system from the plurality of local proxy systems to process the first request; and
assign the first request to the first proxy system; and

a plurality of networked caching systems configured to store data items that can be accessed by the plurality of local proxy systems;
wherein the first proxy system of the plurality of local proxy systems is configured to:determine that the local cache of the first proxy system does not store the data item;
identify a networked caching system of the plurality of networked caching systems that is designated to store items associated with the first request;

wherein the networked caching system is configured to:
determine that the networked caching system does not store the data item;
retrieve the data item from a primary storage, the primary storage separate from the networked caching system;
store the data item in a cache of the networked caching system; and
transmit the data item to the first proxy system; and

wherein the first proxy system of the plurality of local proxy systems is further configured to:obtain the data item from the networked caching system;
store the data item in the local cache of the first proxy system; and
provide the data item in response to the first request.
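The miss-handling chain of this claim — local proxy cache, then networked caching system, then primary storage, populating each tier on the way back — can be sketched as a two-tier lookup; the proxy-selection algorithm is omitted and the structure here is an assumption:

```python
class TwoTierCache:
    """A local proxy cache backed by a networked cache, itself backed by
    separate primary storage."""
    def __init__(self, primary):
        self.local = {}        # the proxy system's local cache
        self.networked = {}    # the designated networked caching system
        self.primary = primary
        self.primary_reads = 0

    def get(self, key):
        if key in self.local:
            return self.local[key]
        if key not in self.networked:      # networked-cache miss
            self.primary_reads += 1
            self.networked[key] = self.primary[key]   # fill from primary
        value = self.networked[key]
        self.local[key] = value            # populate the proxy's local cache
        return value
```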


US Pat. No. 11,068,397

ACCELERATOR SHARING

International Business Ma...


1. A device for sharing an accelerator among a plurality of processors, the device comprising:a local store for storing cache allocation information indicating allocation of cache lines to one of a plurality of coherent proxies; and
a cache directory for the cache lines used by the accelerator, the cache directory providing a status of the cache lines and identification information of the coherent proxies to which the cache lines are allocated,
wherein, in response to receiving an operation request, the device determines a presence of a cache line within the cache directory, determines a coherent proxy of the plurality of coherent proxies corresponding to the request based on the cache line, and communicates with the coherent proxy for the request.

US Pat. No. 11,068,396

SYSTEM AND METHOD FOR BALANCE LOCALIZATION AND PRIORITY OF PAGES TO FLUSH IN A SEQUENTIAL LOG

EMC IP Holding Company, L...


1. A computer-implemented method comprising:staging writes into a log in chronological order, wherein each write has a log record of a plurality of log records describing data of the write;
organizing each of the log records of the plurality of log records into a bucket of a plurality of buckets associated with a range of a plurality of ranges within a backing store, wherein each bucket of the plurality of buckets includes a first key and a second key respectively;
creating a tree, wherein the tree is based upon, at least in part, the second key, wherein the second key includes a lowest log sequence number (LSN) of one or more log records of the plurality of log records, wherein the tree includes one or more LSNs of the one or more log records; and
flushing one or more log records of the plurality of log records from one or more buckets of the plurality of buckets to the backing store at a location and in an order determined based upon, at least in part, the first key, the second key, and one or more lowest LSNs from the tree.

US Pat. No. 11,068,395

CACHED VOLUMES AT STORAGE GATEWAYS

Amazon Technologies, Inc....


1. A system, comprising:one or more processors and memories storing program instructions that, when executed by the one or more processors:designate a first storage space of a storage appliance to (a) cache at least a portion of one or more data chunks of a file of a remote storage service and (b) store first metadata for the one or more data chunks;
designate a second storage space of the storage appliance to store second metadata, based on the first metadata, for the one or more data chunks of the file;
update a portion of the second metadata based at least in part on a modification of at least a portion of the first metadata, wherein said update of the portion of the second metadata is performed asynchronously with respect to the modification of the at least a portion of the first metadata; and
determine an offset for, or a state of, at least a portion of a particular data chunk of the one or more data chunks using the portion of the second metadata, and enable client access to the at least the portion of the particular data chunk.


US Pat. No. 11,068,394

NEURAL NETWORK SYSTEM INCLUDING DATA MOVING CONTROLLER

Electronics and Telecommu...


1. A neural network system for processing data transferred from an external memory, comprising:an internal memory configured to store input data transferred from the external memory;
an operator configured to perform a multidimensional matrix operation by using the input data of the internal memory and to transfer a result of the multidimensional array operation to the internal memory as output data; and
a data moving controller configured to control an exchange of the input data or the output data between the external memory and the internal memory,
wherein, for the multidimensional matrix operation, the data moving controller reorders a dimension order with respect to an access address of the external memory to generate an access address of the internal memory;
wherein the data moving controller is configured to:generate a first physical address for reading input data of a first multidimensional array from the external memory;
generate a second physical address for storing the input data in the internal memory in a form of a second multidimensional array;
generate a third physical address for reading output data of the second multidimensional array from the internal memory; and
generate a fourth physical address for storing the output data of the second multidimensional array in the external memory in a form of the first multidimensional array.
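The address generation in this claim — reordering the dimension order between the external-memory layout and the internal-memory layout — can be sketched for row-major flat buffers; the stride arithmetic is standard, but applying it here is an illustration, not the patent's circuit:

```python
def strides(shape):
    """Row-major strides for a flat buffer with the given shape."""
    out, acc = [], 1
    for dim in reversed(shape):
        out.append(acc)
        acc *= dim
    return list(reversed(out))

def reorder_copy(src, src_shape, order):
    """Copy a flat row-major array into a layout whose dimension order is
    permuted by `order` (e.g. (1, 0) transposes a 2-D matrix), generating a
    destination address for every source address."""
    dst_shape = [src_shape[d] for d in order]
    s_str, d_str = strides(src_shape), strides(dst_shape)
    dst = [None] * len(src)
    for src_addr in range(len(src)):
        # decompose the source address into multidimensional indices
        idx = [(src_addr // s_str[d]) % src_shape[d]
               for d in range(len(src_shape))]
        # recompose under the permuted dimension order
        dst_addr = sum(idx[order[k]] * d_str[k] for k in range(len(order)))
        dst[dst_addr] = src[src_addr]
    return dst
```

Running the same permutation with the inverse order restores the original layout, which is how the third and fourth address generators can mirror the first and second.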


US Pat. No. 11,068,393

ENHANCED CONCURRENCY GARBAGE COLLECTION STACK SCANNING

Microsoft Technology Lice...


1. A concurrency-enhanced system, comprising:an execution stack of a program, the execution stack including execution frames over a time period of interest;
a memory, the memory configured by the execution stack, the memory also configured by behavior-driven stack scan optimization (BDSSO) software;
a processor in operable communication with the memory, the processor configured to execute the BDSSO software to perform BDSSO steps which include (a) obtaining execution stack frame occurrence data, (b) determining from the execution stack frame occurrence data, for each of a plurality of execution frames, a respective frame execution likelihood indicating a computed likelihood that a respective execution frame will be accessed while a tracing garbage collector scans the execution stack, (c) selecting a stack scan depth based at least in part on the frame execution likelihoods, the selected stack scan depth being less than a full depth of the entire execution stack, (d) installing a garbage collection scan return barrier at the selected stack scan depth, and then (e) allowing the tracing garbage collector to scan the execution stack below the scan return barrier while the program is running.
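The depth-selection step (c) of this claim can be sketched as a threshold walk over per-frame execution likelihoods; the risk budget and the likelihood model are assumptions, not from the patent:

```python
def select_scan_depth(frame_likelihoods, risk_budget=0.05):
    """Pick the shallowest depth at which frames become unlikely to be
    re-entered while the tracing collector scans; a scan return barrier is
    installed there and frames below it are scanned concurrently.
    frame_likelihoods[i] = likelihood frame i (0 = top of stack) is
    accessed during the scan."""
    for depth, p in enumerate(frame_likelihoods):
        if p <= risk_budget:
            return depth                   # barrier goes here
    return len(frame_likelihoods)          # every frame is hot
```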

US Pat. No. 11,068,392

SYSTEM AND METHOD OF DATA WRITES AND MAPPING OF DATA FOR MULTIPLE SUB-DRIVES

Western Digital Technolog...


1. A machine-implemented method, comprising:providing a list associated with recent data writes for at least one sub-drive of a plurality of sub-drives;
receiving incoming data; and
writing the incoming data to another sub-drive of the plurality of sub-drives when an address of the incoming data is represented in the list, wherein the another sub-drive is associated with a data temperature range hotter than the at least one sub-drive.
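The routing rule of this claim — if an incoming address appears in the recent-writes list, the data is "hot" and goes to a hotter sub-drive — can be sketched with a bounded deque; the capacity and the two-sub-drive model are illustrative:

```python
from collections import deque

class SubDriveRouter:
    """Route writes whose address was recently written to a hotter sub-drive."""
    def __init__(self, recent_capacity=4):
        self.recent = deque(maxlen=recent_capacity)  # recent-writes list
        self.sub_drives = {"cold": [], "hot": []}    # "hot" = hotter range

    def write(self, addr, data):
        target = "hot" if addr in self.recent else "cold"
        self.sub_drives[target].append((addr, data))
        self.recent.append(addr)
        return target
```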

US Pat. No. 11,068,391

MAPPING TABLE UPDATING METHOD FOR DATA STORAGE DEVICE

Silicon Motion, Inc., Jh...


1. A mapping table updating method executable by a data storage device, the data storage device comprising a non-volatile memory and a controller, and the mapping table updating method comprising steps of:step A: configuring the controller to process a command issued by a host, and determine whether to trigger a partial garbage collection procedure when the command is a write command;when it is determined to trigger the partial garbage collection procedure, then determining whether an abnormal event has occurred in a destination block;
when it is determined that the abnormal event has occurred in the destination block, performing step A1: rolling back a logical-to-physical address mapping table according to a logical-to-physical address backup table, and then performing steps B and C; and
when it is determined that the abnormal event has not occurred in the destination block, directly performing the steps B and C;

step B: copying partial valid data in at least one source block to the destination block according to a segmentation condition, and establishing the logical-to-physical address backup table for recording a logical address of the copied partial valid data and a physical address of the partial valid data in the source block; and
step C: updating the logical-to-physical address mapping table of the data storage device according to the logical address of the copied partial valid data and the physical address in the destination block where the partial valid data is located, and returning to perform the step A.

US Pat. No. 11,068,390

SCALABLE GARBAGE COLLECTION FOR DEDUPLICATED STORAGE

EMC IP HOLDING COMPANY LL...


1. A method for cleaning deleted objects from a storage system, the method comprising:identifying impacted similarity groups that are impacted by a garbage collection operation, wherein the impacted similarity groups include segments associated with deleted objects and segments associated with live objects;
determining sizes of the impacted similarity groups individually and determining a size of all the impacted similarity groups;
determining a number of workers to perform the garbage collection operation that cleans the deleted objects stored in the storage system from objects stored in the storage system based on the sizes of the impacted similarity groups individually and a size of all the impacted similarity groups;
assigning each of the workers a range of the impacted similarity groups based on the sizes of the impacted similarity groups and the size of all the impacted similarity groups;
identifying live segments in the impacted similarity groups associated with the live objects;
removing the segments associated with the deleted objects; and
updating the impacted similarity groups to reflect that the segments associated with the deleted objects have been removed.
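The sizing and range-assignment steps can be sketched as follows. This is a minimal illustration under assumptions: `target_bytes_per_worker` is an invented tuning knob, and contiguous ranges of roughly equal total size stand in for whatever assignment policy the patented system uses.

```python
def plan_gc_workers(group_sizes, target_bytes_per_worker):
    """Sketch: pick a worker count from the total size of the impacted
    similarity groups, then give each worker a contiguous range of groups
    whose combined size is roughly total / workers.
    group_sizes: {group id: size in bytes} (assumed shape)."""
    total = sum(group_sizes.values())
    workers = max(1, -(-total // target_bytes_per_worker))  # ceiling division
    per_worker = total / workers
    ranges, current, acc = [], [], 0
    for group_id in sorted(group_sizes):
        current.append(group_id)
        acc += group_sizes[group_id]
        # close this worker's range once it reaches its share,
        # leaving the remainder for the last worker
        if acc >= per_worker and len(ranges) < workers - 1:
            ranges.append(current)
            current, acc = [], 0
    if current:
        ranges.append(current)
    return workers, ranges
```

With groups of 50, 50, and 100 bytes and a 100-byte target per worker, two workers split the groups as [1, 2] and [3].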

US Pat. No. 11,068,389

DATA RESILIENCY WITH HETEROGENEOUS STORAGE

Pure Storage, Inc., Moun...


1. A method, comprising:detecting differing amounts of storage memory on two or more of a plurality of blades of the storage system;
forming a resiliency group comprising a first plurality of blades each having a first amount of storage memory and a second plurality of blades each having a second amount of storage memory;
storing data segments of a first size in the resiliency group of the first plurality of blades and the second plurality of blades; and
storing data segments of a second size in the resiliency group across the second plurality of blades, wherein the data segments of the second size are shorter than the data segments of the first size.
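One reading of the placement rule is that long segments stripe across both blade sets of the resiliency group while shorter segments stripe only across the second set. The sketch below encodes that reading; it is an interpretation of the claim, not the patented placement logic, and all names are invented.

```python
def blades_for_segment(segment_size, first_size, first_blades, second_blades):
    """Sketch of one heterogeneous-placement interpretation: segments of the
    first (longer) size use every blade in the resiliency group, while
    segments of the shorter second size use only the second set of blades."""
    if segment_size >= first_size:
        return list(first_blades) + list(second_blades)
    return list(second_blades)
```

For example, with a first size of 8, an 8-unit segment spans all blades while a 4-unit segment stays on the second set.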

US Pat. No. 11,068,388

VERIFY BEFORE PROGRAM RESUME FOR MEMORY DEVICES

Rambus Inc., San Jose, C...


1. A method of programming a memory device including memory cells, the method comprising:receiving a program command, the program command including a page address for addressing a page of the memory cells for a programming operation to program data in the page of memory cells;
executing the program command by generating fractional program commands, queuing the fractional program commands, and iteratively carrying out a program/verify cycle upon execution of each fractional program command, wherein a subsequent program/verify cycle is not carried out until a corresponding fractional program command is received;
selectively receiving a secondary command after initiating but before completing the programming operation, the selectively receiving occurring while pausing the programming operation, the secondary command inserted into the command queue between any of two adjacent selected fractional program commands; and
selectively resuming the programming operation by first verifying the page of memory cells, then carrying out a further program/verify cycle on the page of memory cells.
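The verify-before-resume behavior can be sketched with a toy cell model. This is illustrative only: real flash programming does not increment integer "levels", and the pause/resume API here is invented; the point is that resume() verifies first, so cells that already reached their target during earlier fractional program commands receive no further pulses.

```python
class PageProgrammer:
    """Sketch of verify-before-program-resume. `page` holds per-cell levels
    (a toy model, assumed here); `targets` the per-cell goal levels. Each
    pulse() is one fractional program command followed by a verify pass."""

    def __init__(self, page, targets):
        self.page, self.targets = page, targets
        self.paused = False

    def _verify(self):
        # verify pass: indices of cells still below their target
        return [i for i, t in enumerate(self.targets) if self.page[i] < t]

    def pulse(self):
        # one program/verify cycle: pulse only failing cells, then re-verify
        for i in self._verify():
            self.page[i] += 1
        return self._verify()

    def pause(self):
        # programming suspended; a secondary command may run here
        self.paused = True

    def resume(self):
        # verify BEFORE resuming, so already-programmed cells are skipped
        self.paused = False
        failing = self._verify()
        while failing:
            failing = self.pulse()
        return self.page
```

A page starting at levels [0, 2] with targets [3, 3] reaches [3, 3] after a pulse, a pause, and a resume, with the second cell untouched once it verifies.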

US Pat. No. 11,068,387

CLASSIFYING A TEST CASE EXECUTED ON A SOFTWARE

WEBOMATES INC., Stamford...


1. A method for classifying a test case executed on a software, the method comprising:executing a test case on a software;
receiving an actual result of the execution of the test case;
determining a probability of the actual result being either a true failure or a false failure based on one of predefined rules, machine learning models, and an aggregation of the predefined rules and the machine learning models, wherein the true failure indicates a bug in the software, and wherein the false failure indicates one of a failure in an execution of the test case and a modification in the test case;
classifying the actual result as one of the true failure or the false failure based on the probability;
recommending a recursive execution of the test case when the actual result is classified as the false failure until the actual result is classified as the true failure or a true pass;
receiving a feedback from a reviewer on the classification; and
recording a deviation between the classification and the feedback to classify results of subsequent test cases as true failures or false failures using an adaptive intelligence technique.
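The aggregation branch of the claim (combining predefined rules with a machine learning model) can be sketched as a weighted blend. The weight and threshold below are invented parameters for illustration, not values from the claim.

```python
def classify_result(rule_prob, model_prob, weight=0.5, threshold=0.5):
    """Sketch: aggregate a rule-derived probability and a model-derived
    probability that the actual result is a true failure, then classify.
    `weight` and `threshold` are assumed tuning parameters."""
    p_true_failure = weight * rule_prob + (1 - weight) * model_prob
    label = "true_failure" if p_true_failure >= threshold else "false_failure"
    return p_true_failure, label
```

With rule and model probabilities of 0.8 and 0.6, the aggregate is 0.7 and the result is classified a true failure, indicating a bug rather than a broken test.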

US Pat. No. 11,068,386

SYSTEMS AND METHODS FOR MAINFRAME BATCH TESTING

STATE FARM MUTUAL AUTOMOB...


1. A method implemented in a test computer environment, wherein the test computer environment contains a finite-state machine (FSM) that replicates a production computer environment in a first state, the method comprising:receiving, via a computer network, a first set of batch data, wherein the first set of batch data is designed to determine whether a first set of properties corresponding to the FSM are properly implemented;
validating, via one or more particularly programmed processors, that data contained in the first set of batch data is in a valid format by confirming any data field contained within the first set of batch data is in a proper format for the corresponding data field, wherein the data fields include at least one of an insurance policy number, a claim identification number, a person associated with an insurance policy, a property owned by a policyholder, a vehicle owned by a policyholder, or a date corresponding to a transaction request;
processing, via the one or more processors, the first set of batch data, wherein the processing of the first set of batch data causes the FSM to enter a second state;
validating, via the one or more processors, that corresponding data fields of the second state of the FSM adhere to the first set of properties under test; and
based upon the validations, generating, via the one or more processors, an indication of whether the first set of batch data and the second state of the FSM are valid.

US Pat. No. 11,068,385

BEHAVIOR DRIVEN DEVELOPMENT TEST FRAMEWORK FOR APPLICATION PROGRAMMING INTERFACES AND WEBSERVICES

JPMORGAN CHASE BANK, N.A....


1. A computer-implemented method for testing an Application Programming Interface (API) or webservice, the method comprising the steps of:receiving a Gherkin comprising an identification of an API or webservice and a plain English description of a test to be executed on the API or webservice;
receiving one or more input files comprising validation information for the API or webservice;
converting the Gherkin into machine-executable code for the test;
testing whether the API or webservice is available;
executing the machine-executable code if the API or webservice is available;
receiving a response output from the API or webservice;
validating the response output based on the validation information of the one or more input files; and
generating a report based on the validation.

US Pat. No. 11,068,384

SYSTEMS AND METHODS FOR TESTING SOFTWARE APPLICATIONS

PayPal, Inc., San Jose, ...


1. A system, comprising:a non-transitory memory; and
one or more hardware processors coupled with the non-transitory memory and configured to read instructions from the non-transitory memory to cause the system to perform operations comprising:configuring a testing environment based on one or more failure configurations to simulate one or more failure modes applicable to a production environment;
receiving a transaction request from a user device;
generating a copy of the transaction request;
transmitting the transaction request to the production environment for processing the transaction request, wherein a first transaction processing engine within the production environment is configured to process the transaction request by accessing one or more production database nodes that are in the production environment;
transmitting the copy of the transaction request to the testing environment for processing the copy of the transaction request, wherein a second transaction processing engine within the testing environment is configured to process the copy of the transaction request by accessing the one or more production database nodes according to the one or more failure configurations;
obtaining, from the second transaction processing engine, a response to the processing of the copy of the transaction request; and
determining a status of the second transaction processing engine based on the response.


US Pat. No. 11,068,383

SERVICE ORIENTED ARCHITECTURE INFRASTRUCTURE FOR BUSINESS PROCESS VERIFICATION AND SYSTEMS INTEGRATED TESTING

INTERNATIONAL BUSINESS MA...


1. A computer system for verifying a particular function of a particular business process, the system comprising:a central processing unit (CPU), a computer readable memory and a non-transitory computer readable storage media; first program instructions to detect, by one or more primary agents, a start of the particular business process;
second program instructions to provide, by one or more action agents, a verification function of the particular business process;
third program instructions to determine, by a service manager, context look-up services for the one or more primary agents;
fourth program instructions to forward a reference of an action agent which is responsible for a particular verification function of the particular business process to a requesting primary agent;
fifth program instructions to invoke the action agent which performs the particular verification function of the particular business process; and
sixth program instructions to add at least one of a new primary agent, new action agent, and new business processes without disrupting an existing test run,
wherein the first, second, third, fourth, fifth, and sixth program instructions are stored on the non-transitory computer readable storage media for execution by the CPU via the computer readable memory.

US Pat. No. 11,068,382

SOFTWARE TESTING AND VERIFICATION

International Business Ma...


1. An automated software testing method comprising:identifying, by a processor of a hardware device of an IT system, software elements of a software test specification derived from a readable domain specific software language;
first mapping, by said processor, existing software objects associated with a software module for testing with said identified software elements of said software test specification;
second mapping, by said processor, said existing software objects with physical operational values of said software module;
extracting, by said processor, software values of said identified software elements via execution of software fabrication code, object lookup code, extraction and mapping code, and data retrieval code;
second executing by said processor based on results of first executing said software values with respect to a library database, said software test specification with respect to said software module;
generating, by said processor based on results of said second executing, software module test software for operationally testing software modules and associated hardware devices resulting in improved operation of said software modules and associated hardware devices, wherein said improved operation of said software modules and said associated hardware devices comprises an improved processing speed for a processor of said associated hardware devices;
validating, by said processor executing said software module test software, software and hardware functionality of said software modules and associated hardware devices with respect to a specified scenario associated with said operationally testing said software modules and associated hardware devices; and
storing, by said processor, said software test specification within a storage device.

US Pat. No. 11,068,381

PROGRAM ANALYSIS DEVICE, PROGRAM ANALYSIS SYSTEM, PROGRAM ANALYSIS METHOD AND COMPUTER READABLE MEDIUM

MITSUBISHI ELECTRIC CORPO...


1. A program analysis device to analyze a first program executed by a first execution device, and a second program executed concurrently with the first program by a second execution device that communicates with the first execution device, wherein the first execution device is physically separate from the second execution device, wherein the first program includes a transmission function to transmit a value to the second program, and the second program includes a reception function to receive a value from the first program,the program analysis device comprising:
processing circuitry configured to:
collect transmission information representing communication performed by the first execution device in accordance with the transmission function, and reception information representing communication performed by the second execution device in accordance with the reception function, and
inspect, by using the collected transmission information and the collected reception information, whether a falsely-detected warning exists in a warning included in an analysis result obtained by analyzing each source code of the first program and the second program by an analysis tool by:comparing a communication time when the first execution device performs communication included in the transmission information with a communication time when the second execution device performs communication included in the reception information, comparing a function name of the transmission function included in the transmission information with a function name of the reception function included in the reception information, and judging whether the transmission function and the reception function are corresponding functions,
extracting, when the transmission function and the reception function are the corresponding functions, a warning under an influence of the communication represented by the transmission information and the communication represented by the reception information, from the warning included in the analysis result, and
judging whether an occurrence condition to issue a warning is met with respect to the extracted warning, based on each source code of the first program and the second program, the transmission information and the reception information, and determining a warning for which the occurrence condition is not met as the falsely-detected warning.
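The correspondence check between transmission and reception records can be sketched as below. The record shape, the interpretation of "corresponding" as equal channel names, and the time-skew window are all assumptions made for the illustration.

```python
def match_comm_pairs(tx_records, rx_records, max_skew):
    """Sketch: a transmission record and a reception record correspond when
    their function (channel) names match and their communication times fall
    within an assumed skew window. Records are (time, name) tuples."""
    pairs = []
    for t_time, t_name in tx_records:
        for r_time, r_name in rx_records:
            if t_name == r_name and abs(t_time - r_time) <= max_skew:
                pairs.append((t_name, t_time, r_time))
    return pairs
```

Warnings influenced by a matched pair would then be re-checked against both programs' source code; unmatched records produce no pair and hence no extraction.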


US Pat. No. 11,068,380

CAPTURING AND ENCODING OF NETWORK TRANSACTIONS FOR PLAYBACK IN A SIMULATION ENVIRONMENT

ServiceNow, Inc., Santa ...


1. A computing system, configured to operate as a remote network management platform, comprising:a plurality of computational instances each containing one or more computing devices and one or more databases;
a load balancer device configured to (i) receive incoming network traffic addressed to the computational instances, and (ii) distribute the incoming network traffic to the computational instances, wherein distribution of the incoming network traffic across the one or more computing devices of each computational instance is in accordance with a load balancing algorithm;
a traffic filtering device, coupled to the load balancer device, configured to: (i) receive, as a first sequence of packets, copies of the incoming network traffic from the load balancer device, and (ii) filter the first sequence of packets to create a second sequence of packets, wherein the second sequence of packets includes only copies of packets that were transmitted to a particular computational instance of the plurality of computational instances;
a storage device, coupled to the traffic filtering device, configured to receive the second sequence of packets from the traffic filtering device and store the second sequence of packets; and
a simulation compiler device, coupled to the storage device, configured to: (i) receive the second sequence of packets from the storage device, (ii) identify a first captured transaction within the second sequence of packets, (iii) receive file system logs from the particular computational instance, (iv) identify a second captured transaction within the file system logs, (v) remove one or more duplicate captured transactions between the file system logs and the second sequence of packets, wherein the one or more removed duplicate captured transactions were captured from the file system logs of the particular computational instance rather than from the second sequence of packets, (vi) encode the first captured transaction as a first playback instruction, wherein the first playback instruction is used to generate a third sequence of packets that, when transmitted to a computational instance used for testing, simulates the first captured transaction, and (vii) encode the second captured transaction as a second playback instruction, wherein the second playback instruction is used to generate a fourth sequence of packets that, when transmitted to the computational instance used for testing, simulates the second captured transaction.
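Step (v), the deduplication rule that prefers the packet-capture copy over the file-system-log copy, can be sketched directly. The `txn_id` key is an invented identifier for the illustration.

```python
def merge_transactions(packet_txns, log_txns):
    """Sketch of step (v): drop a log-derived captured transaction whenever
    the same transaction was already captured from the packet sequence,
    keeping the packet copy. Transactions are dicts keyed by an assumed
    `txn_id` field."""
    seen = {t["txn_id"] for t in packet_txns}
    kept_logs = [t for t in log_txns if t["txn_id"] not in seen]
    return packet_txns + kept_logs
```

A transaction present in both sources survives once, as the packet-capture copy; log-only transactions pass through.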

US Pat. No. 11,068,379

SOFTWARE QUALITY DETERMINATION APPARATUS, SOFTWARE QUALITY DETERMINATION METHOD, AND SOFTWARE QUALITY DETERMINATION PROGRAM

TOYOTA JIDOSHA KABUSHIKI ...


1. A software quality determination apparatus configured to determine convergence of a bug generated in a system, the apparatus comprising:circuitry configured to:
calculate, for each test viewpoint, which is a viewpoint when the system is tested, a detection rate of a bug generated in a test of the test viewpoint;
calculate, for each test viewpoint, an execution reference amount of the test viewpoint, which serves as a reference of a test execution amount of the test viewpoint, in accordance with a development scale of the test viewpoint, the development scale corresponding to one of (i) a size of a source code to be tested and (ii) an index indicating a complexity of the source code to be tested; and
for each test viewpoint:determine whether the test execution amount of the test viewpoint has reached the calculated execution reference amount,
when the test execution amount of the test viewpoint has reached the calculated execution reference amount, determine the bug convergence of the test viewpoint depending on whether the calculated detection rate of the test viewpoint is equal to or smaller than a reference value, which serves as a reference of the detection rate of the test viewpoint, for additional test execution of the test viewpoint after the test execution amount of the test viewpoint has reached the calculated execution reference amount, and
when the test execution amount of the test viewpoint has not reached the calculated execution reference amount, wait until the test execution amount of the test viewpoint is determined to have reached the calculated execution reference amount to determine the bug convergence of the test viewpoint.
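The per-test-viewpoint decision reduces to a small amount of logic, sketched below. The three-way return value is an invented encoding of the claim's branches.

```python
def judge_bug_convergence(test_executed, execution_reference,
                          detection_rate, rate_reference):
    """Sketch: before the execution amount reaches its reference, no
    judgment is made ("wait"); afterwards, the bug is judged converged
    when the detection rate is at or below the reference value."""
    if test_executed < execution_reference:
        return "wait"
    return "converged" if detection_rate <= rate_reference else "not_converged"
```

So a viewpoint at 50 of 100 executions waits, while one past its reference converges only if its bug detection rate has fallen to the reference value or below.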


US Pat. No. 11,068,378

MEMORY VALUE EXPOSURE IN TIME-TRAVEL DEBUGGING TRACES

MICROSOFT TECHNOLOGY LICE...


1. A method, implemented at a computer system that includes at least one processor, for generating data for exposing memory cell values during trace replay at execution times that are prior to execution times corresponding to events that caused the memory cell values to be recorded into a trace, the method comprising:identifying a plurality of trace fragments within a trace that records prior execution of one or more threads, each trace fragment recording an uninterrupted consecutive execution of a plurality of executable instructions on a corresponding thread of the one or more threads, the plurality of trace fragments including at least a first trace fragment and a second trace fragment;
determining at least a partial ordering among the plurality of trace fragments, including determining that the first trace fragment can be ordered prior to the second trace fragment;
determining that a memory cell value can be exposed, during replay of the second trace fragment, at a first execution time that is prior to a second execution time corresponding to an event that caused the memory cell value to be recorded into the trace during trace recording; and
generating output data indicating that the memory cell value can be exposed at the first execution time during replay of the second trace fragment.

US Pat. No. 11,068,377

CLASSIFYING WARNING MESSAGES GENERATED BY SOFTWARE DEVELOPER TOOLS

BlackBerry Limited, Wate...


1. A method, comprising:receiving, by a hardware processor, a first data set, the first data set including a first plurality of data entries, wherein each data entry is associated with a warning message generated based on a first set of software codes, each data entry includes indications for a plurality of features, and each data entry is associated with one of a plurality of class labels;
generating, by the hardware processor, a second data set by sampling the first data set;
based on the second data set, selecting, by the hardware processor, at least one feature from the plurality of features, wherein selecting the at least one feature comprises selecting the at least one feature that is clustered above a cut-off value, wherein the cut-off value is a Spearman value;
generating, by the hardware processor, a third data set by filtering the second data set with the selected at least one feature;
determining, by the hardware processor, a machine learning classifier based on the third data set, wherein determining the machine learning classifier comprises dividing the third data set into a training data set and a testing data set; and
classifying, by the hardware processor, a second warning message generated based on a second set of software codes to one of the plurality of class labels using the machine learning classifier, wherein the second set of software codes is different than the first set of software codes.
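The pipeline stages (sample, project onto selected features, split into training and testing sets) can be sketched as follows. This is a skeleton only: the entry shape is assumed, and a trivial majority-class "classifier" stands in for the real machine learning model and for the Spearman-based feature clustering.

```python
import random

def build_warning_classifier(entries, selected_features, train_ratio=0.8, seed=0):
    """Sketch of the claimed stages: sample the first data set (second data
    set), keep only the selected features (third data set), then divide into
    training and testing data. Entries are assumed to look like
    {"features": {...}, "label": ...}."""
    rng = random.Random(seed)
    sampled = rng.sample(entries, k=max(1, len(entries) // 2))   # second data set
    filtered = [({f: e["features"][f] for f in selected_features}, e["label"])
                for e in sampled]                                 # third data set
    cut = max(1, int(len(filtered) * train_ratio))
    train, test = filtered[:cut], filtered[cut:]
    # stand-in model: predict the majority training label for any warning
    labels = [label for _, label in train]
    majority = max(set(labels), key=labels.count)
    return (lambda features: majority), test
```

A warning from a different code base would then be classified by calling the returned function on its feature vector.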

US Pat. No. 11,068,376

ANALYTICS ENGINE SELECTION MANAGEMENT

International Business Ma...


1. A computer-implemented method for comparing and selecting one analytics engine from a plurality of analytic engines, the method comprising:ingesting and processing a set of reference data by a first analytics engine to compile a first set of characteristic data for the first analytics engine with respect to the set of reference data;
ingesting and processing the set of reference data by a second analytics engine to compile a second set of characteristic data for the second analytics engine with respect to the set of reference data;
compiling the first set of characteristic data for the first analytics engine with respect to processing the set of reference data, wherein the first set of characteristic data comprises a collection of properties, traits, attributes, or factors that characterize the behavior of the first analytics engine with respect to the set of reference data;
compiling the second set of characteristic data for the second analytics engine with respect to processing the set of reference data, wherein the second set of characteristic data comprises a collection of properties, traits, attributes, or factors that characterize the behavior of the second analytics engine with respect to the set of reference data;
comparing the first set of characteristic data against the second set of characteristic data to determine a set of distinct attributes for each of the first and second analytics engines, wherein the respective sets of distinct attributes distinguish the first analytics engine from the second analytics engine with respect to the set of reference data;
selecting one of the first or second analytics engines based on the set of distinct attributes for the first and second analytics engines; and
processing a set of content data to generate a valid results-set using the selected analytics engine.

US Pat. No. 11,068,375

SYSTEM AND METHOD FOR PROVIDING MACHINE LEARNING BASED MEMORY RESILIENCY

ORACLE INTERNATIONAL CORP...


9. A method for providing machine learning based memory resiliency within a service-oriented architecture, cloud computing, or other computing environment, comprising:providing one or more computer servers including a processor, memory, and a virtual machine for operation of one or more services, applications, or other components, wherein each of a plurality of software components that operate in association with the virtual machine can be registered for concurrent process tuning;
determining memory heap usage within the virtual machine based on a collected metrics data describing heap usage, and adjusting or tuning concurrent processing of registered components accordingly, wherein:each component registers with a process tuner that receives instructions from a memory metrics component and communicates with a process tunable component to adjust an amount of threads, and processing rate, performed by the registered component;
a determination is made, based on the collected metrics data, as to a presence of one or more low memory conditions; and
in response to a determination of a low memory condition being present a particular number of instances within a particular period of time, an instruction is issued to slow the amount of threads associated with, and the processing rate performed by, the registered component; and

wherein the method supports a plurality of low memory conditions and components that register for concurrent process tuning.
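The registration-and-tuning loop can be sketched in Python. The halving policy, the 90% heap threshold, and the component dict shape are assumptions for the illustration, not the patented tuning logic.

```python
class ProcessTuner:
    """Sketch: components register for concurrent process tuning; when the
    metrics feed reports a low-memory condition `trigger_count` times, every
    registered component is instructed to slow its thread count and its
    processing rate (halving is an assumed policy)."""

    def __init__(self, trigger_count=3):
        self.trigger_count = trigger_count
        self.low_memory_events = 0
        self.components = []

    def register(self, component):
        self.components.append(component)

    def on_heap_metrics(self, used, capacity, low_threshold=0.9):
        # treat heap usage at or above the threshold as a low-memory condition
        if used / capacity >= low_threshold:
            self.low_memory_events += 1
            if self.low_memory_events >= self.trigger_count:
                for c in self.components:
                    c["threads"] = max(1, c["threads"] // 2)  # fewer threads
                    c["rate"] = c["rate"] / 2                 # slower rate
                self.low_memory_events = 0
```

Two consecutive low-memory samples (with `trigger_count=2`) halve a registered component's threads and rate.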

US Pat. No. 11,068,374

GENERATION, ADMINISTRATION AND ANALYSIS OF USER EXPERIENCE TESTING

USERZOOM TECHNOLOGIES, IN...


1. A method for generating intents from a user experience study comprising:applying at least one screener question to a plurality of participants;
subjecting the screened participants to at least one task;
recording clickstream and success data for each participant engaging in the at least one task, wherein the clickstream data is mapped as a branched tree structure;
aggregating the success and clickstream data for all the screened participants into aggregated results, wherein the aggregated results include a likelihood of a given path taken in the branched tree structure;
filtering the aggregated clickstream data by a configured dimension; and
identifying features of the clickstream data that correspond to failure of the at least one task by feeding the aggregated clickstream data into a neural network.
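The branched-tree aggregation of clickstreams, including the path likelihoods, can be sketched as below. The nested-dict tree and likelihood-per-leaf-path definition are assumptions made for the illustration.

```python
def aggregate_paths(clickstreams):
    """Sketch: fold each participant's clickstream into a branched tree whose
    nodes count traversals, then report each complete path's likelihood as
    traversals / participants."""
    tree = {}
    for path in clickstreams:
        node = tree
        for page in path:
            node = node.setdefault(page, {"count": 0, "children": {}})
            node["count"] += 1
            node = node["children"]
    total = len(clickstreams)
    likelihoods = {}

    def walk(node, prefix):
        for page, info in node.items():
            path = prefix + (page,)
            if not info["children"]:          # a complete path ends at a leaf
                likelihoods[path] = info["count"] / total
            walk(info["children"], path)

    walk(tree, ())
    return likelihoods
```

With three participants, two of whom clicked home → buy and one home → help, the buy path's likelihood is 2/3. The per-path likelihoods (paired with success data) are the kind of aggregate the claim feeds into a neural network.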

US Pat. No. 11,068,373

AUDIT LOGGING DATABASE SYSTEM AND USER INTERFACE

Palantir Technologies Inc...


1. A method for audit logging user interactions, the method comprising:receiving a first logging configuration for an application user interface;
determining, from the first logging configuration, to log an additional data field in addition to a default set of data fields;
receiving user input regarding a user interaction with the application user interface;
determining:a timestamp associated with the user interaction,
a user identifier associated with the user interaction,
a category type of the user interaction,
an application context associated with the user interaction,
an object identifier associated with a first data object and the user interaction,
system output of the application user interface, the system output in response to the user input;

in response to determining to log the additional data field from the first logging configuration, retrieving the additional data field from the first data object using the object identifier;
generating a first logging entry for the user interaction, the first logging entry comprising: the timestamp, the user identifier, the category type, the application context, the user input, the system output, and the additional data field for the first data object;
storing the first logging entry in a structured format to a non-transitory computer storage medium; and
causing the system output of the application user interface.
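Assembling a logging entry from the default fields plus configuration-driven additional fields can be sketched directly. The dict shapes and field names are assumptions for the illustration.

```python
def build_log_entry(config, interaction, data_objects):
    """Sketch: gather the default audit fields from the user interaction and,
    when the logging configuration names additional fields, retrieve them
    from the referenced data object via its object identifier."""
    entry = {k: interaction[k] for k in
             ("timestamp", "user_id", "category", "context",
              "object_id", "user_input", "system_output")}
    for field in config.get("additional_fields", []):
        # additional data field pulled from the first data object
        entry[field] = data_objects[interaction["object_id"]][field]
    return entry
```

A configuration requesting the object's `owner` field yields an entry carrying both the default fields and that extra attribute, ready to be stored in a structured format.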

US Pat. No. 11,068,372

LINKING COMPUTING METRICS DATA AND COMPUTING INVENTORY DATA

Red Hat, Inc., Raleigh, ...


1. An apparatus, comprising:a memory to store computing inventory data associated with one or more components of a computing device; and
a processing device operatively coupled to the memory, the processing device to:receive the computing inventory data from one or more components of the computing device, the computing inventory data collected via a first monitoring device associated with the one or more components of the computing device, wherein the computing inventory data comprises contextual information and identification information of the one or more components of the computing device;
receive computing metrics data from a server device, wherein the computing metrics data comprises time series data collected by the server device via a second monitoring device associated with the one or more components of the computing device;
combine the computing metrics data and the computing inventory data into a relationship indication by appending the contextual information and the identification information of the computing inventory data to the time series data of the computing metrics data; and
provide the relationship indication to be displayed via a graphical user interface of a client device in response to a request, wherein the relationship indication comprises a plurality of links between the computing inventory data and the computing metrics data.
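The combining step (appending inventory context and identification to each time-series point) can be sketched in a couple of lines. Field names here are invented for the illustration.

```python
def link_metrics(inventory, metrics_series):
    """Sketch: append the inventory record's contextual and identification
    information to every point of the metrics time series, producing the
    combined relationship indication."""
    return [{**point,
             "component_id": inventory["component_id"],
             "context": inventory["context"]} for point in metrics_series]
```

Each resulting record then links a metric sample to the component it was measured on, which is what the graphical user interface renders on request.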


US Pat. No. 11,068,371

METHOD AND APPARATUS FOR SIMULATING SLOW STORAGE DISK

EMC IP Holding Company LL...


1. A method for simulating a slow storage disk, comprising:intercepting an input/output (I/O) command to a storage disk; and
simulating the slow storage disk with the storage disk by injecting a delay to a dispatch of the intercepted I/O command by delaying the dispatch of the intercepted I/O command to the storage disk by a desired duration of delay indicated by a predetermined delay injection policy, wherein the delay is injected within a SCSI (Small Computer System Interface) software stack at a middle layer above a driver layer that forwards a package encapsulating the I/O command to a controller of the storage disk via a storage network, wherein the injection policy further indicates injecting delays to dispatches of a predetermined number of I/O commands and a frequency at which the delays are to be injected to the dispatches of the predetermined number of I/O commands, and wherein injecting the delay to the dispatch of the intercepted I/O command includes determining whether to inject the delay to the dispatch of the intercepted I/O command at least in part by using a random algorithm to generate a random number and injecting the delay responsive to the random number exceeding the frequency indicated by the injection policy.
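The random gate in the claim (generate a random number, inject the delay when it exceeds the policy's frequency) can be sketched as follows. Interpreting `frequency` as a value in [0, 1) and the policy dict shape are assumptions; the injectable `rng` and no-op `sleep` exist only to make the sketch testable.

```python
import random

def should_inject_delay(frequency, rng=random):
    """Sketch of the claim's random algorithm: inject the delay when the
    drawn random number exceeds the policy frequency (assumed in [0, 1))."""
    return rng.random() > frequency

def dispatch(io_command, policy, sleep=lambda s: None, rng=random):
    # delay the dispatch of the intercepted I/O command by the policy's
    # desired duration before forwarding it toward the storage disk
    if should_inject_delay(policy["frequency"], rng):
        sleep(policy["delay_seconds"])
        return "delayed"
    return "dispatched"
```

With a stub RNG returning 0.9 against a frequency of 0.5 the dispatch is delayed; returning 0.1, it proceeds immediately.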

US Pat. No. 11,068,370

SYSTEMS AND METHODS FOR INSTANTIATION OF VIRTUAL MACHINES FROM BACKUPS

Nakivo, Inc., Sparks, NV...


11. A method providing a technical solution to the technical problem of recovering virtual machine (VM) operating system (OS) level items from a virtual machine backup involving quickly instantiating the virtual machine backup, the method comprising:(a) creating the virtual machine backup by saving backup VM disk data for a first state of a first virtual machine to a first backup repository;
(b) quickly instantiating a second virtual machine based on the virtual machine backup by(i) exposing, by a data processing engine, the backup VM disk data for the first state of the first virtual machine,
(ii) mounting, by a VM host server, the backup VM disk data as a mapped VM disk,
(iii) restoring the second virtual machine to the first state of the first virtual machine using the mapped VM disk, and
(iv) starting the second virtual machine; and

(d) following instantiation of the second virtual machine based on the virtual machine backup, executing, at the second virtual machine, a recovery request by retrieving one or more OS level items specified by the recovery request from within the operating system of the running second virtual machine;
(e) wherein via performance of the method a technical solution is provided which involves recovering virtual machine (VM) operating system (OS) level items from the virtual machine backup that is instantiated with a quicker restore time than would be required for a full production restore that would involve copying over all data for the first virtual machine from a backup location to a restore location.

US Pat. No. 11,068,369

COMPUTER DEVICE AND TESTING METHOD FOR BASIC INPUT/OUTPUT SYSTEM

Inventec (Pudong) Technol...


1. A test method for a basic input/output system (BIOS), configured to test a computer device which includes the BIOS when a power on self test (POST) of the BIOS fails, comprising:when the BIOS fails to perform the POST after the BIOS is activated, by a debug port of a motherboard of the computer device, enabling the BIOS, and by the BIOS, performing a fixing function related to fixing a memory;
by the debug port, enabling the BIOS, and by the BIOS, enabling a first memory device and disabling a second memory device;
according to the fixing function, turning on the computer device by the first memory device; and
determining whether the computer device is turned on successfully.

US Pat. No. 11,068,368

AUTOMATIC PART TESTING

ADVANCED MICRO DEVICES, I...


1. A method of automatic part testing, the method comprising:booting a part under testing into a first operating environment;
executing, via the first operating environment, one or more test patterns on the part;
performing a comparison between one or more observed characteristics associated with the one or more test patterns and one or more expected characteristics; and
modifying one or more operational parameters of a central processing unit of the part based on the comparison by applying an offset or a modification to a voltage frequency curve.
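The final step of this claim, applying an offset to a voltage/frequency curve based on a test comparison, can be modeled as below. The step size and the direction of adjustment are illustrative heuristics assumed for the sketch, not the patent's method.

```python
def apply_vf_offset(vf_curve, observed, expected, step_mv=25):
    """Return a modified voltage/frequency curve based on a test-pattern
    comparison. vf_curve is a list of (frequency_mhz, voltage_mv)
    points; observed/expected are per-pattern results. A mismatch adds
    a positive voltage offset (more margin); a clean pass subtracts it."""
    mismatches = sum(1 for o, e in zip(observed, expected) if o != e)
    offset = step_mv if mismatches else -step_mv
    return [(freq, volt + offset) for freq, volt in vf_curve]
```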

US Pat. No. 11,068,367

STORAGE SYSTEM AND STORAGE SYSTEM CONTROL METHOD

HITACHI, LTD., Tokyo (JP...


1. A storage system including a plurality of storage nodes for providing storage regions for storing data of a computer in which an application is executed, the storage system comprising a controller configured to:switch setting a process mode, for processing a request for input and output of data, between a normal mode indicating a normal state and an emergency mode in which a predetermined function is suppressed compared with the normal mode;
in response to the occurrence of a failure in a first storage node among the plurality of storage nodes, determine whether or not to switch to the emergency mode for a second storage node in which the failure does not occur; and
based on the determination, switch the process mode to the emergency mode or maintain the process mode in the normal mode.

US Pat. No. 11,068,366

POWER FAIL HANDLING USING STOP COMMANDS

Western Digital Technolog...


1. A machine-implemented method, comprising:detecting whether a power fail event is triggered; and
responsive to the power fail event being triggered:determining that at least one non-volatile memory device among a plurality of non-volatile memory devices is executing a non-critical memory command; and
issuing a suspend or terminate command, to the determined at least one non-volatile memory device among the plurality of non-volatile memory devices, to suspend or terminate execution of the non-critical memory command.
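The control flow of this claim, acting only when the power-fail event is triggered and only on devices executing non-critical commands, is simple enough to sketch directly. The dictionary shape is a hypothetical model; suspension is represented by clearing the in-flight command.

```python
def handle_power_fail(devices, power_fail_triggered):
    """On a triggered power-fail event, issue a suspend-or-terminate
    command to every device executing a non-critical memory command.
    devices is a list of dicts with hypothetical 'name', 'command' and
    'critical' keys. Returns the names of devices that were issued
    the command."""
    if not power_fail_triggered:
        return []
    issued = []
    for dev in devices:
        if dev["command"] is not None and not dev["critical"]:
            dev["command"] = None          # model suspend/terminate
            issued.append(dev["name"])
    return issued
```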


US Pat. No. 11,068,365

DATA RECOVERY WITHIN A MEMORY SUB-SYSTEM WITHOUT MOVING OR PROCESSING THE DATA THROUGH A HOST

Micron Technology, Inc., ...


1. A system, comprising:a memory component of a memory sub-system;
a register to store an address range that is outside of an existing address range associated with the memory component, wherein the existing address range associated with the memory component comprises an address range used by a host system to access the memory component; and
a processing device, operatively coupled with the memory component, to:receive, from the host system, a command to transfer data in a portion of the memory component to a recovery portion of a different memory component within the memory sub-system, wherein the portion of the memory component is associated with a portion of the memory component that has failed; and
recover and transfer, responsive to receipt of the command, the data in the portion of the memory component to the recovery portion of the different memory component without moving or processing the data through the host system by using an address within the address range that is outside of the existing address range associated with the memory component, wherein the address provides an indication that the data is to be recovered and transferred to the recovery portion of the different memory component within the memory sub-system.


US Pat. No. 11,068,364

PREDICTABLE SYNCHRONOUS DATA REPLICATION

INTELLIFLASH BY DDN, INC....


10. A method comprising:determining that a source volume and a destination volume are out of synchronization;
concurrently in response to the determining that a source volume and a destination volume are out of synchronization:creating a first snapshot of the source volume; and
continuing to service write commands directed to the source volume from a host;

determining that the destination volume is available to resume synchronous data replication subsequent to creating the first snapshot;
in response to determining that the destination volume is available to resume synchronous data replication:quiescing write commands from the host directed to the source volume; and
creating a second snapshot of the source volume;

resuming the synchronous data replication between the source volume and the destination volume, subsequent to successful creation of the second snapshot;
identifying a set of one or more inconsistent data blocks between the first snapshot and the second snapshot;
essentially simultaneously:sending the set of one or more inconsistent data blocks from a source node to a destination node; and
performing the synchronous data replication between the source volume and the destination volume;

accessing a data block from among the one or more inconsistent data blocks;
determining that the accessed data block is a most recent version of the data block relative to a corresponding data block at the destination volume; and
applying the accessed data block to the destination volume.
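The two core data steps of this claim (diffing the pre- and post-quiesce snapshots, then applying a block only if it is the most recent version relative to the destination) can be sketched as follows, with volumes modeled as plain dicts of block id to (version, data). The version-number comparison is a simplifying assumption.

```python
def inconsistent_blocks(snap1, snap2):
    """Identify blocks that differ between the first snapshot (taken
    while writes continued) and the second (taken after quiescing)."""
    return {b for b in set(snap1) | set(snap2) if snap1.get(b) != snap2.get(b)}

def apply_if_most_recent(blocks, snap2, destination):
    """Apply each inconsistent block to the destination only when it is
    the most recent version relative to the destination's copy."""
    for b in blocks:
        version, data = snap2[b]
        if b not in destination or destination[b][0] < version:
            destination[b] = (version, data)
    return destination
```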

US Pat. No. 11,068,363

PROACTIVELY REBUILDING DATA IN A STORAGE CLUSTER

Pure Storage, Inc., Moun...


1. A plurality of storage nodes, comprising:the plurality of storage nodes configured to communicate together as a storage cluster;
each storage node, of the plurality of storage nodes, having nonvolatile solid-state memory for data storage, and each storage node having a plurality of authorities, each of the plurality of authorities owning a range of data, each range of data associated with a segment number identifying a configuration of a respective redundant array of independent disks (RAID) stripe;
the plurality of storage nodes configured to distribute the data and metadata throughout the plurality of storage nodes such that the plurality of storage nodes can read the data, using erasure coding; and
the plurality of storage nodes configured to rebuild the user data independent of detection of an error associated with the data, wherein each of the plurality of storage nodes are configured to consult others of the plurality of storage nodes to determine an erasure coding scheme for rebuilding the data.

US Pat. No. 11,068,362

HIGH-AVAILABILITY CLUSTER ARCHITECTURE AND PROTOCOL

Fortinet, Inc., Sunnyval...


1. A method comprising:establishing an active connection between a first interface of a network device within an internal network and a first interface of a first cluster unit of a high-availability (HA) cluster of network security devices, wherein the first interface of the first cluster unit is in an enabled state in which network traffic directed thereto is able to be received, wherein the HA cluster provides the internal network with connectivity to an external network;
concurrent with the active connection, establishing a backup connection between a second interface of the network device and a first interface of a second cluster unit of the HA cluster, wherein the first interface of the second cluster unit is in a disabled state in which network traffic directed thereto is not able to be received and wherein the first interface of the first cluster unit and the first interface of the second cluster unit share a first virtual Internet Protocol (IP) address;
while the first cluster unit remains in an operational state and has connectivity to the external network, receiving and processing, by the first cluster unit via the active connection, network traffic from the network device that is to be transmitted onto the external network; and
upon determining the first cluster unit is in a failed state or the first cluster unit does not have connectivity to the external network, then, causing subsequent network traffic from the network device that is to be transmitted onto the external network to be received and processed by the second cluster unit via the backup connection by putting the first interface of the second cluster unit into the enabled state and putting the first interface of the first cluster unit into the disabled state.
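The failover trigger in this claim, flipping the enable state of the two units' interfaces when the first unit fails or loses external connectivity so that traffic to the shared virtual IP follows the backup connection, can be sketched as a small state machine. The class is an illustrative model, not the patented device.

```python
class ClusterUnit:
    """Minimal model of one HA cluster unit and its first interface."""
    def __init__(self, interface_enabled, operational=True, external_ok=True):
        self.interface_enabled = interface_enabled
        self.operational = operational
        self.external_ok = external_ok

def failover_if_needed(first_unit, second_unit):
    """Enable the second unit's interface and disable the first's when
    the first unit is failed or lacks external connectivity. Returns
    the unit now receiving traffic for the shared virtual IP."""
    if not first_unit.operational or not first_unit.external_ok:
        second_unit.interface_enabled = True
        first_unit.interface_enabled = False
    return first_unit if first_unit.interface_enabled else second_unit
```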

US Pat. No. 11,068,361

CLUSTER FILE SYSTEM SUPPORT FOR EXTENDED NETWORK SERVICE ADDRESSES

International Business Ma...


1. A computer program product for extending network services addresses, the computer program product comprising a computer readable storage medium having program instructions embodied therewith, the program instructions executable by a processor to cause the processor to:identify, by the processor, an event affecting a node, wherein the node provides external access to a network using an Internet Protocol (IP) address;
in response to identifying the event, identify, by the processor, an attribute associated with the IP address; and
based on the attribute associated with the IP address, determine, by the processor, whether to move the IP address to another node.

US Pat. No. 11,068,360

ERROR RECOVERY METHOD AND APPARATUS BASED ON A LOCKUP MECHANISM

HUAWEI TECHNOLOGIES CO., ...


1. An error recovery method, comprising:receiving an interrupt triggered by an error that occurs in a first central processing unit (CPU), when the first CPU and a second CPU are operating in a lockstep mode;
exiting, by the first CPU, the lockstep mode in response to the interrupt;
determining a type of the error; and
in response to determining that the error is a recoverable error, performing error recovery on the first CPU according to a state of the second CPU that is correctly running at a time of triggering the interrupt, including:obtaining, through a hardware channel between the first CPU and the second CPU, a software-visible CPU context of the second CPU at the time of triggering the interrupt, and
updating, according to the software-visible CPU context of the second CPU, a software-visible CPU context of the first CPU, wherein the software-visible CPU context of the second CPU comprises a value of a system register and a value of a general purpose register.


US Pat. No. 11,068,359

STREAM LEVEL UNINTERRUPTED RESTORE OPERATION USING DATA PROBE

EMC IP HOLDING COMPANY LL...


1. A computer-implemented method for restoring data from a target device onto a source device by a restore agent on a storage system, comprising:using cache availability on the source device to emulate a cache disk array to be utilized by the restore agent;
receiving, by a read latch, a first set of data packets from the target device for restore, wherein the first set of data packets comprises a plurality of data chunks;
capturing footprints of the first set of data packets in the cache disk array; and
in response to receiving an acknowledgement from the cache disk array indicating that the footprints of the first set of data packets have been captured:pushing each data chunk of the first set of data packets to a construction container on the storage system for reconstruction of backup data;
in response to receiving an acknowledgement from the construction container indicating the data chunk is successfully pushed, flushing a footprint of the data chunk from the cache disk array; and
in response to receiving an unexpected abort signal from the read latch indicating a read operation is aborted, sending a freeze signal from the restore agent to the cache disk array thereby freezing the cache disk array at a point in time where a last data chunk was successfully pushed to the construction container.


US Pat. No. 11,068,358

METHOD FOR BACKING UP AND RESTORING DISC MANAGEMENT INFORMATION

LITE-ON ELECTRONICS (GUAN...


1. A method for backing up and restoring a disc management information, the method being executed in a disc archive, the disc archive comprising a processing circuit and a storage device, the processing circuit being coupled with an optical disc drive, the method comprising steps of:loading a specified optical disc into the optical disc drive;
reading the latest update of the disc management information of the specified optical disc after the optical disc drive performs a servo calibration process;
if the latest update of the disc management information is read successfully, the optical disc drive entering a normal working state;
if the latest update of the disc management information is not read successfully, acquiring the latest update of the disc management information of the specified optical disc through the processing circuit, so that the optical disc drive enters the normal working state; and
while the normal working state of the optical disc drive is terminated, transmitting the latest update of the disc management information to the processing circuit, recording the latest update of the disc management information into the storage device, and ejecting the optical disc.

US Pat. No. 11,068,357

UNINTERRUPTED RESTORE OPERATION USING A TIME BASED APPROACH

EMC IP HOLDING COMPANY LL...


1. A method of performing restore operations for data packets by a restore agent, comprising:predicting a first time period of completing a first restore operation for the data packets;
determining a second time period of performing the first restore operation until the first restore operation is stopped at a point of time;
identifying an incomplete status of the first restore operation at the point of time based on a comparison between the first time period and the second time period;
collecting information describing the incomplete status; and
starting a second restore operation for the data packets from the incomplete status based on the information.
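The time-comparison idea in this claim can be sketched as below. Deriving the resume point from a linear elapsed/predicted ratio is a simplifying assumption made for illustration; the claim itself only requires comparing the two time periods and collecting the incomplete status.

```python
def incomplete_status(predicted_seconds, elapsed_seconds, total_packets):
    """Estimate where a stopped restore left off by comparing the
    predicted completion time with the time the operation actually ran.
    Returns the status a second restore operation would start from."""
    if elapsed_seconds >= predicted_seconds:
        return {"complete": True, "resume_from": total_packets}
    done = int(total_packets * elapsed_seconds / predicted_seconds)
    return {"complete": False, "resume_from": done}
```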

US Pat. No. 11,068,356

INCREMENTAL EXPORT AND CONVERSION OF VIRTUAL MACHINE SNAPSHOTS

RUBRIK, INC., Palo Alto,...


1. A method for operating a data management system, comprising:detecting that a failback operation should be performed;
identifying a first virtual machine snapshot out of a chain of snapshots for a virtual machine in response to detecting that the failback operation should be performed, the virtual machine has an operating system disk associated with a first operating system;
converting the first virtual machine snapshot such that the operating system disk is associated with a second operating system different from the first operating system;
acquiring a full image snapshot of the converted first virtual machine snapshot;
instantiating a temporary virtual machine using the full image snapshot of the converted first virtual machine snapshot;
creating a linked clone to the temporary virtual machine;
acquiring a second incremental snapshot for a second virtual machine snapshot out of the chain of snapshots for the virtual machine;
applying a second set of data changes from the second incremental snapshot to the temporary virtual machine;
acquiring a third incremental snapshot for a third virtual machine snapshot out of the chain of snapshots for the virtual machine;
applying a third set of data changes from the third incremental snapshot to the linked clone;
capturing and storing a snapshot of the temporary virtual machine subsequent to applying the second set of data changes from the second incremental snapshot to the temporary virtual machine; and
capturing and storing a snapshot of the linked clone subsequent to applying the third set of data changes from the third incremental snapshot to the linked clone.

US Pat. No. 11,068,355

SYSTEMS AND METHODS FOR MAINTAINING VIRTUAL COMPONENT CHECKPOINTS ON AN OFFLOAD DEVICE

AMAZON TECHNOLOGIES, INC....


1. An offload computing device configured to be operably coupled to a physical computing device, wherein the offload computing device and the physical computing device are separate computing devices, the offload computing device comprising:hardware computing resources comprising at least one offload computing device processor, wherein the at least one offload computing device processor executes computer-executable instructions stored in offload computing device memory that configure the offload computing device to:provide instructions to the physical computing device to instantiate at least a portion of a first virtual machine instance using hardware computing resources of the physical computing device; and
instantiate at least a subset of a plurality of virtual input/output (I/O) components of the first virtual machine instance on the offload computing device, wherein at least the subset of the plurality of virtual I/O components are instantiated using the hardware computing resources of the offload computing device.


US Pat. No. 11,068,354

SNAPSHOT BACKUPS OF CLUSTER DATABASES

EMC IP Holding Company LL...


1. A system comprising:one or more processors; and
a non-transitory computer readable medium storing a plurality of instructions, which when executed, cause the one or more processors to:identify at least one of (1) a storage device that stores a database associated with a computer cluster or (2) a storage device that stores a plurality of transaction logs associated with the computer cluster;
create at least one of (1) a snapshot of the storage device that stores the database or (2) a snapshot of the storage device that stores the plurality of transaction logs;
mount, automatically, the at least one snapshot on a host in response to the creation of the at least one snapshot, the host being accessible to a native interface for a management system of the database;
transmit a prompt to the native interface, the prompt including instructions to catalog at least one of (1) a database recovery name created for the snapshot of the storage device that stores the database or (2) a transaction log recovery name created for the snapshot of the storage device that stores the plurality of transaction logs; and
store at least one of (1) a mapping from the database recovery name to a database backup identifier created for the snapshot of the storage device that stores the database or (2) a mapping from the transaction log recovery name to a transaction log backup identifier created for the snapshot of the storage device that stores the plurality of transaction logs, wherein storing the at least one mapping enables recovery of at least one of the database or the plurality of transaction logs.


US Pat. No. 11,068,353

SYSTEMS AND METHODS FOR SELECTIVELY RESTORING FILES FROM VIRTUAL MACHINE BACKUP IMAGES

Veritas Technologies LLC,...


1. A computer-implemented method for selectively restoring files from virtual machine backup images, at least a portion of the method being performed by a computing device comprising at least one processor, the method comprising:exposing a virtual disk image included in a target virtual machine backup image to an operating system of a host computing system;
mounting the virtual disk image included in the target virtual machine backup image to the host computing system;
determining at least one extent of a target file included in a file system included in the virtual disk image at least in part by querying the operating system of the host computing system to identify the extent of the target file;
associating the extent of the target file with a storage location included in the target virtual machine backup image;
generating a catalog that indicates a mapping between extents on disk for the target file and storage locations for fragments included in the target virtual machine backup image; and
restoring the target file from the target virtual machine backup image by:identifying, by using the mapping from the generated catalog, the storage location included in the target virtual machine backup image that is associated with the extent of the target file; and
accessing data stored at the identified storage location included in the target virtual machine backup image;

providing selective file restoration of the fragments from the target virtual machine backup image.

US Pat. No. 11,068,352

AUTOMATIC DISASTER RECOVERY MECHANISM FOR FILE-BASED VERSION CONTROL SYSTEM USING LIGHTWEIGHT BACKUPS

Oracle International Corp...


1. A non-transitory computer readable medium including one or more instructions executable by one or more processors for:querying a data repository that includes a collection of files and directories associated with one or more interrelated projects and a history of the files;
determining a current revision of the data repository;
determining revisions of changes that have occurred since previous revisions of the data repository with respect to the current revision of the data repository;
analyzing the revisions of the changes to determine changes to a data structure and attributes of the data repository since the previous revisions of the data repository;
determining a level of data backup required to recover the data repository to a particular revision;
determining a data backup threshold accuracy configured to set a backup data accuracy to a specified backup data accuracy limit;
determining from the changes to the data structure, the attributes since the previous revisions of the data repository, the level of data backup required, and the specified backup data accuracy limit, which of the attributes and portions of the data structure are required, and which of the attributes of the data repository and the portions of the data structure are not required to restore the data repository to the particular revision;
generating a backup object from the attributes and the portions of the data structure that are required to recover the data repository to the particular revision, wherein the backup object is generated to include the required attributes and the required portions of the data structure without including the attributes and the portions of the data structure that are not required based on a history of the changes to the data structure;
configuring the backup object to provide a human-friendly text description that reports nature of the changes; and storing the backup object as a compressed file.

US Pat. No. 11,068,351

DATA CONSISTENCY WHEN SWITCHING FROM PRIMARY TO BACKUP DATA STORAGE

International Business Ma...


1. A system for switching from primary data storage to backup data storage in a data storage system, the system comprising:a data storage manager configured to maintain a primary copy of a plurality of data sets for directly servicing data transactions, and
a backup copy of the plurality of data sets, wherein a backup protocol specifies synchronously updating the backup copy to reflect changes made to one type of data stored in the primary copy, and
the backup protocol specifies asynchronously updating the backup copy to reflect changes made to another type of data stored in the primary copy; and


a data switchover manager configured to prepare the backup copy of the plurality of data sets by a) identifying any inconsistency between any two interdependent data items in the data sets of the backup copy in accordance with a predefined schema of interdependent data in the data sets, and
b) correcting any identified inconsistency in the data sets of the backup copy in accordance with a predefined inconsistency correction protocol, and

cause the backup copy to be used in place of the primary copy for directly servicing data transactions.


US Pat. No. 11,068,350

RECONCILIATION IN SYNC REPLICATION

NetApp, Inc., San Jose, ...


1. A method comprising:intercepting a write request to write data to a first storage before the write request is received by a file system of a first node managing the first storage, wherein the write request is replicated as a replicated write request to write the data to a second storage;
in response to receiving an error and determining that the write request and the replicated write request are pending requests that are to be aborted, initiating a first abort operation on the first node to abort the write request and a second abort operation on a second node to abort the replicated write request; and
performing a reconciliation operation based upon the first abort operation succeeding and the second abort operation failing or the first abort operation failing and the second abort operation succeeding.

US Pat. No. 11,068,349

SELECTIVE PROCESSING OF FILE SYSTEM OBJECTS FOR IMAGE LEVEL BACKUPS

Veeam Software AG, Baar ...


1. A system for selective processing of file system objects for an image level backup, comprising:a backup engine including:a receiving module configured to receive backup parameters for the image level backup, wherein the backup parameters include a selection of a machine to backup, and a selection of at least one file system object to include in the image level backup;
a connection module configured to connect to production storage corresponding to the selected machine, wherein the connection module is further configured to obtain data from a source disk corresponding to the selected at least one file system object, and wherein the source disk is in the production storage;
a file allocation table (FAT) processing module configured to:fetch FAT blocks from the source disk;
search the fetched FAT blocks to determine a selected set of data blocks of the source disk, wherein the selected set of data blocks corresponds to the selected at least one file system object; and
create a backup FAT from the fetched FAT blocks, wherein the backup FAT comprises only records corresponding to the selected at least one file system object; and

a block processing module configured to:read the determined selected set of data blocks;
for each block in the determined selected set of data blocks:look up block content corresponding to each block;
determine whether the block content is part of the selected at least one file system object;
based on determining whether the block content for each block is not part of the selected at least one file system object:
save a zeroed block in a reconstructed disk image that corresponds to each block that is not part of the selected at least one file system object, wherein saving the zeroed block includes preventing fetching the block contents from the source disk;
write the zeroed block to the backup FAT corresponding to each zeroed block in the reconstructed disk image; and
compress the backup FAT.
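The zeroed-block step of this claim, replacing every block not owned by a selected file system object with a zeroed block so its contents are never fetched from the source disk, can be modeled as below. The `owner_of` lookup is a hypothetical stand-in for the FAT processing module's block-ownership search.

```python
def build_backup_image(blocks, selected_objects, owner_of):
    """Reconstruct a disk image in which every block not belonging to a
    selected file system object becomes a zeroed block. owner_of maps
    a block index to its owning object (hypothetical FAT lookup)."""
    zero = b"\x00" * len(blocks[0])
    return [blk if owner_of(i) in selected_objects else zero
            for i, blk in enumerate(blocks)]
```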




US Pat. No. 11,068,348

METHOD AND ENABLE APPARATUS FOR STARTING PHYSICAL DEVICE

HUAWEI TECHNOLOGIES CO., ...


1. A method for starting a physical device, comprising:detecting that a fault occurs in a first central processing unit (CPU) by an enable apparatus, wherein the enable apparatus is communicatively coupled to at least one platform control hub (PCH) and N CPUs, wherein the N is a positive integer greater than or equal to two, and wherein the N CPUs comprise the first CPU and a second CPU;
determining the second CPU in the N CPUs as a primary CPU when the fault occurs in the first CPU in the N CPUs, wherein the physical device comprises the N CPUs and the at least one PCH, wherein the first CPU is the primary CPU configured to start the physical device before the fault occurs in the first CPU, wherein an alternative CPU set comprises all CPUs of the N CPUs except the first CPU, and wherein determining the second CPU as the primary CPU comprises:determining only one CPU in the alternative CPU set as the second CPU when the alternative CPU set comprises the only one CPU; and
determining, from P CPUs comprised in the alternative CPU set, one CPU as the second CPU when the alternative CPU set comprises the P CPUs, wherein the P is a positive integer greater than one, wherein determining, from the P CPUs comprised in the alternative CPU set, the one CPU as the second CPU comprises determining a CPU corresponding to an intermediate node in a serial link as the second CPU when the P CPUs in the alternative CPU set form the serial link, wherein the serial link comprises P nodes and P−1 sub-links, wherein the P CPUs are in a one-to-one correspondence with the P nodes, wherein the physical device further comprises P−1 electrical couplings among the P CPUs, wherein any one of the P CPUs are electrically coupled to at least one CPU of other CPUs of the P CPUs, and wherein the P−1 electrical couplings are in a one-to-one correspondence with the P−1 sub-links; and

sending an enable signal from the enable apparatus to the second CPU to trigger the second CPU as the primary CPU to, in cooperation with a first PCH, start the physical device, wherein the first PCH is electrically coupled to the second CPU, and wherein the first PCH is one of the at least one PCH.

US Pat. No. 11,068,347

MEMORY CONTROLLERS, MEMORY SYSTEMS INCLUDING THE SAME AND MEMORY MODULES

SAMSUNG ELECTRONICS CO., ...


1. A memory controller configured to control a memory module including a plurality of memory devices which constitute a first channel and a second channel, the memory controller comprising:an error correction code (ECC) engine; and
a control circuit configured to control the ECC engine,
wherein the ECC engine is configured to:
generate a codeword including a plurality of symbols by adaptively constructing, based on device information including mapping information, each of the plurality of symbols from a predetermined number of data bits received via a plurality of input/output pads of each of the plurality of memory devices, and
transmit the codeword to the memory module,
wherein the mapping information indicates whether each of the plurality of input/output pads is mapped to the same symbol among the plurality of symbols or different symbols among the plurality of symbols, and
wherein each of the plurality of symbols corresponds to a unit of error correction of the ECC engine.

US Pat. No. 11,068,346

METHOD AND APPARATUS FOR DATA PROTECTION

EMC IP Holding Company LL...


1. A method of managing storage, comprising:receiving a request to change an initial portion of data, the initial portion of data (i) associated with an initial redundant region and (ii) including a first segment to be changed and a set of other segments not to be changed;updating the first segment in response to the request; and
generating an updated redundant region based on a computation involving the initial redundant region and the first segment but not involving the set of other segments,
wherein generating the updated redundant region includes (i) computing a delta-first segment that indicates differences between the first segment prior to being updated and the first segment after being updated and (ii) producing an intermediate redundant region based on the delta-first segment, and wherein generating the updated redundant region is further based on a difference between the initial redundant region and the intermediate redundant region.


US Pat. No. 11,068,345

METHOD AND SYSTEM FOR ERASURE CODED DATA PLACEMENT IN A LINKED NODE SYSTEM

Dell Products L.P., Roun...


1. A method for storing data, comprising:receiving, by a node, a request to store data from a data source;
dividing the data into a plurality of data chunks and generating a parity chunk using the plurality of data chunks based on an erasure coding scheme; and
storing the plurality of data chunks and the parity chunk across a plurality of nodes using data protection domain (DPD) information, wherein the node is one of the plurality of nodes,
wherein the DPD information specifies a plurality of immediate neighbor nodes of the node, and wherein each of the plurality of data chunks is stored in one of the plurality of immediate neighbor nodes.
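A minimal sketch of the chunking and neighbor placement, using single XOR parity as a stand-in for the claim's erasure coding scheme; the node names and the choice to keep the parity chunk on the receiving node are illustrative assumptions:

```python
def make_chunks(data: bytes, k: int):
    """Split data into k equal-size chunks (zero-padded) and derive one
    XOR parity chunk, a minimal stand-in for an erasure coding scheme."""
    size = -(-len(data) // k)   # ceil(len/k)
    chunks = [data[i * size:(i + 1) * size].ljust(size, b"\x00") for i in range(k)]
    parity = chunks[0]
    for c in chunks[1:]:
        parity = bytes(a ^ b for a, b in zip(parity, c))
    return chunks, parity

def place(chunks, parity, self_node, neighbors):
    """Place each data chunk on one immediate-neighbor node from the DPD
    info; keeping the parity chunk local is an assumption of this sketch."""
    assert len(neighbors) >= len(chunks)
    placement = {neighbors[i]: c for i, c in enumerate(chunks)}
    placement[self_node] = parity
    return placement
```

With this layout, losing any one neighbor's chunk is recoverable by XORing the surviving chunks with the parity.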

US Pat. No. 11,068,344

CANDIDATE BIT DETECTION AND UTILIZATION FOR ERROR CORRECTION

INFINEON TECHNOLOGIES AG,...


14. A non-transitory machine readable medium comprising instructions, which when executed by a machine, cause the machine to:determine that error-correcting code functionality detected a first number of erroneous bits within a memory device, wherein the error-correcting code functionality can correct fewer than the first number of erroneous bits;
evaluate bits within the memory device to identify a first subset of bits within the memory device as candidate bits, wherein a second subset of bits within the memory device are non-candidate bits;
evaluate the candidate bits to determine whether the error-correcting code functionality returns a non-error state where no error correction is performed based upon one or more combinations of candidate bits being inverted;
if the error-correcting code functionality returns the non-error state for one combination of the one or more combinations of candidate bits being inverted, then correcting, within the memory device, the one combination of the one or more combinations of candidate bits; and
if the error-correcting code functionality returns the non-error state for more than one combination of the one or more combinations of candidate bits being inverted, then identifying a new subset of bits within the memory device as new candidate bits, wherein the new subset of bits comprises fewer bits than the first subset of bits.
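The candidate-bit trial can be sketched as a search over inverted bit combinations, with CRC32 standing in for the device's ECC check (an assumption; the real check is whatever the memory's ECC functionality implements). A single combination that restores the check is corrected; multiple matches would instead trigger the claim's candidate-narrowing step:

```python
import itertools
import zlib

def try_candidates(word: bytes, stored_check: int, candidates: list):
    """Invert combinations of candidate bit positions; a trial that makes
    the check pass is the claim's 'non-error state'."""
    hits = []
    for r in range(1, len(candidates) + 1):
        for combo in itertools.combinations(candidates, r):
            trial = bytearray(word)
            for bit in combo:
                trial[bit // 8] ^= 1 << (bit % 8)   # invert candidate bit
            if zlib.crc32(bytes(trial)) == stored_check:
                hits.append(combo)
    return hits   # one hit: correct those bits; several: narrow the candidates

# usage: two bit errors, one more than a single-error-correcting code fixes
good = b"hello world"
bad = bytearray(good)
bad[0] ^= 1 << 5    # corrupt bit 5
bad[2] ^= 1 << 1    # corrupt bit 17
hits = try_candidates(bytes(bad), zlib.crc32(good), [5, 17, 40])
```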

US Pat. No. 11,068,343

DATA STORAGE ERROR PROTECTION

Micron Technology, Inc., ...


1. An apparatus, comprising:an array of memory cells arranged in a first dimension and a second dimension; and
a controller configured to:determine a set of symbols oriented obliquely to the first and second dimensions and corresponding to data stored in the memory cells;
add subsets of the set of symbols to determine a number of parity check symbols; and
selectably protect, based on a same number of parity check symbols for each, a first subset of memory cells oriented parallel to the first dimension and a second subset of memory cells oriented parallel to the second dimension.


US Pat. No. 11,068,342

REDUNDANCY DATA IN INTEGRATED MEMORY ASSEMBLY

Western Digital Technolog...


10. A method of recovering data, the method comprising:accessing, at a plurality of control dies, a set of error correcting code (ECC) codewords that are stored across a plurality of memory dies, including accessing each ECC codeword by way of bond pads that bond one of the control dies to one of the memory dies, wherein each ECC codeword comprises data bits and parity bits;
running an ECC decoder, at each control die, on the ECC codeword that was accessed from the memory die that is bonded to the control die;
sending decoding information from running the ECC decoders from each control die to a memory controller, wherein the decoding information indicates that at least one of the ECC codewords in the set was not decoded;
sending a request for XOR information from the memory controller to a selected control die of the control dies;
sending XOR information that is formed from the set of ECC codewords from the selected control die to the memory controller; and
recovering the data bits of the set of ECC codewords, by the memory controller, based on the XOR information and the decoding information.
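The recovery step reduces to one XOR pass: the stripe-wide XOR information cancels against every codeword that decoded successfully, leaving the failed one. A minimal sketch (die names are illustrative):

```python
import functools

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def recover_failed(decoded: dict, xor_info: bytes) -> bytes:
    """Rebuild the one codeword whose ECC decode failed: XOR the
    stripe-wide XOR information (from the selected control die) against
    every codeword that did decode."""
    acc = xor_info
    for cw in decoded.values():
        acc = xor_bytes(acc, cw)
    return acc

# usage: three memory dies; die1's codeword fails to decode
codewords = {"die0": b"\x01\x02", "die1": b"\xaa\xbb", "die2": b"\x10\x20"}
xor_info = functools.reduce(xor_bytes, codewords.values())   # kept with the selected die
decoded = {d: cw for d, cw in codewords.items() if d != "die1"}
recovered = recover_failed(decoded, xor_info)
```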

US Pat. No. 11,068,341

ERROR TOLERANT MEMORY ARRAY AND METHOD FOR PERFORMING ERROR CORRECTION IN A MEMORY ARRAY

Microchip Technology Inc....


1. A method for providing error correction for a memory array comprising:for each data word stored in a data memory portion of the memory array having at least one bit error, storing in an error programmable read only memory (PROM) an error entry associated with an address of the data word in the data memory portion and including error data identifying a bit position of each bit error, and correct bit data for each bit error, the storing including programming a unique term into a programmable logic device that is a Boolean combination of the address of the data word in the data memory portion and a number of the bit error;
monitoring memory addresses presented to the data memory portion;
if a memory address presented to the data memory portion is associated with a respective error entry in the error PROM, reading from the error PROM the bit position of each bit error and the correct bit data for each bit error; and
substituting the correct bit data into each identified bit position of a memory read circuit arranged to read data from the data memory portion.

US Pat. No. 11,068,340

SEMICONDUCTOR MEMORY DEVICES AND MEMORY SYSTEMS

Samsung Electronics Co., ...


1. A semiconductor memory device, comprising:a memory cell array including a plurality of memory cell rows, each of the plurality of memory cell rows including a plurality of dynamic memory cells;an error correction code (ECC) engine circuit;
an error information register; and
a control logic circuit configured to control the ECC engine circuit,
wherein the control logic circuit is configured to control the ECC engine circuit to cause the ECC engine circuit to generate an error generation signal based on performing a first ECC decoding on first sub-pages in at least one first memory cell row of the plurality of memory cell rows in a scrubbing operation on the at least one first memory cell row and based on performing a second ECC decoding on second sub-pages in at least one second memory cell row of the plurality of memory cell rows in a normal read operation on the at least one second memory cell row,
wherein the control logic circuit is further configured to record error information in the error information register and is configured to control the ECC engine circuit to cause the ECC engine circuit to skip an ECC encoding operation and an ECC decoding operation on at least one selected memory cell row of the at least one first memory cell row and the at least one second memory cell row based on referring to the error information, and
wherein the error information at least indicates a quantity of error occurrences in the first memory cell row and the second memory cell row.


US Pat. No. 11,068,339

READ FROM MEMORY INSTRUCTIONS, PROCESSORS, METHODS, AND SYSTEMS, THAT DO NOT TAKE EXCEPTION ON DEFECTIVE DATA

Intel Corporation, Santa...


1. A processor comprising:a decode unit to decode a read from memory instruction, the read from memory instruction to indicate a source memory operand and a destination storage location; and
an execution unit coupled with the decode unit, the execution unit, in response to the decode of the read from memory instruction, to:read data from the source memory operand;
store an indication of defective data in an architecturally visible storage location, when the data is defective, wherein the architecturally visible storage location is one selected from a group consisting of a general-purpose register and at least one condition code bit; and
complete execution of the read from memory instruction without causing an exceptional condition, when the data is defective.


US Pat. No. 11,068,338

CONSENSUS OF SHARED BLOCKCHAIN DATA STORAGE BASED ON ERROR CORRECTION CODE

Alipay (Hangzhou) Informa...


1. A computer-implemented method for processing blockchain data on a computing device communicably coupled to a blockchain network, the method comprising:retrieving a plurality of blocks from a blockchain node of the blockchain network;
encoding the plurality of blocks using error correction coding (ECC) to generate a plurality of encoded blocks; and
for each encoded block in the plurality of encoded blocks:dividing the encoded block into a plurality of datasets, wherein the plurality of datasets include one or more datasets of information bits and one or more datasets of redundant bits;
calculating hash values of the plurality of datasets;
sending, to each of a plurality of blockchain nodes of the blockchain network, a request that includes at least one of the plurality of datasets, the hash values, and a data storage scheme that provides assignments of the plurality of datasets to the plurality of blockchain nodes;
receiving responses for accepting the request from at least a number of blockchain nodes that equals a number of the one or more datasets of information bits; and
sending, to each of the plurality of blockchain nodes, a notification for adopting the data storage scheme.
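The per-block request construction can be sketched as follows. SHA-256 and the round-robin assignment are assumptions of this sketch; the claim only requires hash values and *a* data storage scheme assigning datasets to nodes:

```python
import hashlib

def build_storage_request(encoded_block: bytes, k_info: int, k_red: int, nodes: list):
    """Split an ECC-encoded block into k_info information-bit datasets and
    k_red redundant-bit datasets, hash each dataset, and assign datasets
    to blockchain nodes round-robin."""
    n = k_info + k_red
    size = -(-len(encoded_block) // n)                     # ceil division
    datasets = [encoded_block[i * size:(i + 1) * size] for i in range(n)]
    hashes = [hashlib.sha256(d).hexdigest() for d in datasets]
    scheme = {i: nodes[i % len(nodes)] for i in range(n)}  # dataset -> node
    return datasets, hashes, scheme
```

Acceptances from at least `k_info` nodes suffice because any `k_info` datasets of an erasure-coded block can reconstruct it.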


US Pat. No. 11,068,337

DATA PROCESSING APPARATUS THAT DISCONNECTS CONTROL CIRCUIT FROM ERROR DETECTION CIRCUIT AND DIAGNOSIS METHOD

FUJITSU LIMITED, Kawasak...


1. A data processing apparatus comprising:a processor; and
a direct memory access (DMA) controller coupled to the processor, the DMA controller including
a control circuit that controls a DMA transfer of data,
an error detection circuit that performs an error detection on the data based on a character assigned in association with the data to output a result of the error detection to the control circuit, and
a diagnosis circuit that disconnects the control circuit from the error detection circuit to diagnose an operation of the error detection circuit and provide a diagnosis result to the processor.

US Pat. No. 11,068,336

GENERATING ERROR CHECKING DATA FOR ERROR DETECTION DURING MODIFICATION OF DATA IN A MEMORY SUB-SYSTEM

MICRON TECHNOLOGY, INC., ...


1. A system comprising:a memory component;
a processing device, operatively coupled with the memory component, to:receive a request to store a first data;
receive the first data and a first error-checking data, the first error-checking data being based on a cyclic redundancy check (CRC) operation of the first data;
generate a second data by modifying the first data;
generate a second error-checking data of the second data by using the first error-checking data and a difference between the first data and the second data; and
determine whether the second data contains an error by comparing the second error-checking data with a third error-checking data generated from a CRC operation of the second data.
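The incremental CRC update works because CRC32 is affine over GF(2): for equal-length inputs, crc(x ^ y ^ z) == crc(x) ^ crc(y) ^ crc(z). Treating the CRC of the old/new difference as the claim's intermediate error-checking value is this sketch's interpretation:

```python
import zlib

def crc_of_modified(old_crc: int, old: bytes, new: bytes) -> int:
    """Derive the CRC of modified data from the old CRC and the old/new
    difference, without re-running the CRC over all of `new` from scratch.
    Uses the GF(2) affinity of CRC32 for equal-length messages."""
    delta = bytes(a ^ b for a, b in zip(old, new))   # difference of the data
    zeros = bytes(len(old))
    return old_crc ^ zlib.crc32(delta) ^ zlib.crc32(zeros)
```

Comparing this derived value against a CRC freshly computed over the second data is then the claimed error check: the two disagree exactly when the modification path corrupted the data.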


US Pat. No. 11,068,335

MEMORY SYSTEM AND OPERATION METHOD THEREOF

SK hynix Inc., Gyeonggi-...


1. A memory system comprising:a first memory device including a first input/output buffer;
a second memory device including a second input/output buffer; and
a controller including a cache memory allocated for either first data or second data,
wherein, during a first partial section of a first program section to program the first data input from a host in the first memory device, the controller receives the first data and stores the first data in the cache memory, and transfers the first data to the first input/output buffer and then releases the first data in the cache memory without any request from the host after the first partial section of the first program section before a second partial section of a second program section starts,
wherein, during the second partial section of the second program section to program the second data input from the host in the second memory device, the controller receives the second data and stores the second data in the cache memory, and transfers the second data to the second input/output buffer and then releases the second data in the cache memory without any request from the host after the second partial section of the second program section before a second operation of the first partial section of the first program section starts, and
wherein, even though the first and second program sections are overlapped with each other, the first and second partial sections are not overlapped with each other, and the controller stores only one of the first and second data in the cache memory for each of the first and second partial sections.

US Pat. No. 11,068,334

METHOD AND SYSTEM FOR ENHANCING COMMUNICATION RELIABILITY WHEN MULTIPLE CORRUPTED FRAMES ARE RECEIVED

Echelon Corporation, San...


1. A communications receiver comprising:an error detector for determining whether a first and second received frame are corrupted, each frame comprising of a plurality of bits;
a filter for determining whether the second received frame should be recovered; and
a frame generator for generating a recovered frame based on the plurality of bits of the first and second received frames and a plurality of probability indicators of the second frame, in response to the error detector determining the first and second received frames are corrupted and the filter determining that the second received frame should be recovered, wherein each of the plurality of probability indicators indicates a probability of a corresponding bit of the second frame being correct.
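The frame generator's merge can be sketched bit-by-bit. The threshold policy below is an assumption; the claim only says the recovered frame is based on both frames' bits and the second frame's per-bit probability indicators:

```python
def recover_frame(frame1, frame2, prob2, threshold=0.5):
    """Merge two corrupted copies of a frame: keep bits where the copies
    agree; on disagreement, take frame2's bit only when its probability
    indicator says that bit is likely correct."""
    out = []
    for b1, b2, p in zip(frame1, frame2, prob2):
        if b1 == b2:
            out.append(b1)                        # both copies agree
        else:
            out.append(b2 if p >= threshold else b1)
    return out
```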

US Pat. No. 11,068,333

DEFECT ANALYSIS AND REMEDIATION TOOL

Bank of America Corporati...


1. An apparatus comprising:a database configured to store:a set of basis functions comprising a set of first functions, a set of second functions, and a set of third functions, the set of first functions comprising functions of one adjustable parameter, the set of second functions comprising functions of two adjustable parameters, the set of third functions comprising functions of three adjustable parameters;
a set of inputs;
a set of input parameters;
a set of primary outputs obtained from a first system, the set of primary outputs comprising a first primary output, a second primary output, and a third primary output; and
a set of secondary outputs obtained from a second system using the set of inputs and the set of input parameters, the set of secondary outputs comprising a first secondary output, a second secondary output, and a third secondary output, the first secondary output the same as the first primary output, the second secondary output different from the second primary output, and the third secondary output different from the third primary output, wherein the second system contains one or more errors such that if the second system did not contain the one or more errors the first secondary output would be the same as the first primary output, the second secondary output would be the same as the second primary output, and the third secondary output would be the same as the third primary output;

a hardware processor configured to:compare the first secondary output to the first primary output and label the first secondary output as accurate in response to determining that the first secondary output is the same as the first primary output;
compare the second secondary output to the second primary output and label the second secondary output as inaccurate in response to determining that the second secondary output is different from the second primary output;
compare the third secondary output to the third primary output and label the third secondary output as inaccurate in response to determining that the third secondary output is different from the third primary output;
form a first matrix of basis functions chosen from the set of basis functions to represent the second system, each row of the first matrix including a first number of functions comprising one or more first functions of the set of first functions, one or more second functions of the set of second functions, and one or more third functions of the set of third functions;
form a first set of equations by setting the first matrix multiplied by a first vector comprising the set of inputs and the set of input parameters equal to a second vector comprising the set of secondary outputs;
determine that a first solution to the first set of equations does not exist;
in response to determining that the first solution to the first set of equations does not exist:form a second matrix of basis functions chosen from the set of basis functions to represent the second system, each row of the second matrix including a second number of basis functions comprising two or more first functions of the set of first functions, two or more second functions of the set of second functions, and two or more third functions of the set of third functions, the second number twice as large as the first number;
form a second set of equations by setting the second matrix multiplied by a third vector comprising the set of inputs, a first copy of the set of inputs, the set of input parameters, and a second copy of the set of input parameters equal to the second vector comprising the set of secondary outputs;
determine that a second solution to the second set of equations does exist;
in response to determining that the second solution to the second set of equations does exist:remove at least one basis function from a first row of the second matrix, the at least one basis function comprising either a first second function of the set of second functions or a first third function of the set of third functions;
form a third set of equations by setting the second matrix multiplied by the third vector equal to the second vector, wherein setting the first row of the second matrix multiplied by the third vector equal to the second secondary output defines a first equation of the third set of equations;
determine that a third solution to the third set of equations does exist;
remove at least one basis function from a second row of the second matrix, the second row different from the first row, the at least one basis function comprising either a second second function of the set of second functions or a second third function of the set of third functions;
form a fourth set of equations by setting the second matrix multiplied by the third vector equal to the second vector, wherein setting the second row of the second matrix multiplied by the third vector equal to the third secondary output defines a second equation of the fourth set of equations;
determine that a fourth solution to the fourth set of equations does not exist; and
in response to determining that the third solution does exist and that the fourth solution does not exist, send the second secondary output as a remediation candidate to an administrator;


receive a set of instructions from the administrator indicating that remediation was performed; and
generate a second set of secondary outputs from the second system using the set of inputs and the set of input parameters.


US Pat. No. 11,068,332

METHOD AND SYSTEM FOR AUTOMATIC RECOVERY OF A SYSTEM BASED ON DISTRIBUTED HEALTH MONITORING

EMC IP Holding Company LL...


1. A method for managing production hosts, the method comprising:obtaining, using a host monitoring agent, production host data associated with a production host of a plurality of production hosts,wherein:the host monitoring agent monitors operations of the production hosts and determines, based on the production host data, whether or not to initiate a recovery operation, and
the production hosts host one or more virtual machines;

making a first determination, by the host monitoring agent and using the production host data, that the production host has violated a health-check rule; and
issuing, to a recovery agent, a notification based on the first determination,

wherein the recovery agent initiates a recovery action in response to receiving at least the notification.

US Pat. No. 11,068,331

PROCESSING SYSTEM, RELATED INTEGRATED CIRCUIT AND METHOD FOR GENERATING INTERRUPT SIGNALS BASED ON MEMORY ADDRESS

STMICROELECTRONICS APPLIC...


1. A processing system, comprising:a processor programmed to generate at least one read request for reading data from a memory with error detection and/or correction, the read request comprising an address signal identifying an address of a given memory area in the memory;
an error handling circuit configured to be connected to the memory and to receive an error signal from the memory, wherein the error signal comprises an error code indicating whether the data read from the memory contains errors, wherein the error handling circuit comprises a hardware circuit configured to:determine whether the address indicated by the address signal belongs to a first address range;
set an error code of a first error signal to the error code of the error signal when the address indicated by the address signal belongs to the first address range;
determine whether the address indicated by the address signal belongs to a second address range; and
set an error code of a second error signal to the error code of the error signal when the address indicated by the address signal belongs to the second address range; and

an interrupt generator circuit configured to generate one or more interrupt signals when the error code of the first error signal corresponds to one or more first reference values, and
generate the one or more interrupt signals when the error code of the second error signal corresponds to one or more second reference values.
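A software model of the claimed hardware path, with address windows and reference error codes chosen purely for illustration: the error code from a read is routed into the error signal of whichever address range the read hit, and an interrupt fires when a routed code matches that range's reference values.

```python
class ErrorRouter:
    """Model of the error handling circuit plus interrupt generator.
    Ranges are half-open (lo, hi) windows; refs are reference error codes."""
    def __init__(self, range1, range2, refs1, refs2):
        self.range1, self.range2 = range1, range2
        self.refs1, self.refs2 = refs1, refs2

    def on_read(self, addr, err_code):
        sig1 = sig2 = None
        if self.range1[0] <= addr < self.range1[1]:
            sig1 = err_code                       # first error signal
        if self.range2[0] <= addr < self.range2[1]:
            sig2 = err_code                       # second error signal
        return (sig1 in self.refs1) or (sig2 in self.refs2)   # interrupt?
```

This per-range routing is what lets one memory error source raise different interrupts depending on which region of memory was being read.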


US Pat. No. 11,068,330

SEMICONDUCTOR DEVICE AND ANALYSIS SYSTEM

RENESAS ELECTRONICS CORPO...


1. A semiconductor device, comprising:a module having a predetermined function;
an error information acquisition circuit that acquires error information about an error occurred in the module;
a stress acquisition circuit that acquires a stress cumulative value as a cumulative value of stress applied to the semiconductor device;
an analysis data storage that stores analysis data as data for analyzing a state of the semiconductor device, the analysis data being data which associates the error information with the stress cumulative value at a time of occurrence of the error; and
an operation time acquisition circuit that acquires a cumulative operation time which is a cumulative value of the operation time of the semiconductor device,
wherein the analysis data storage stores error time data as the analysis data, and
wherein the error time data is data which associates the error information with the stress cumulative value at the time when the error occurred and the cumulative operation time at the time when the error occurred.

US Pat. No. 11,068,329

ALERTING SYSTEM HAVING A NETWORK OF STATEFUL TRANSFORMATION NODES

salesforce.com, inc., Sa...


1. A system, comprising:an events producer comprising one or more hardware processors configured to generate events that are part of an events stream;
an alerting system that is configured to run on a server system comprising processing hardware, the alerting system comprising:a streams transformer comprising: a network of transformation nodes comprising: an input transformation node configured to receive at least some of the events; an output transformation node; and an intermediate transformation node coupled between the input transformation node and the output transformation node, wherein the intermediate transformation node is configured to: determine whether state information for that intermediate transformation node should be updated based on state information for each of the transformation nodes that the intermediate transformation node subscribes to each time a state update is received from at least one of the transformation nodes that the intermediate transformation node subscribes to, and generate a state update when the stored state information for each of the transformation nodes that the intermediate transformation node subscribes to collectively indicates that the state update should be generated,
wherein the output transformation node is configured to: receive state updates from any of the transformation nodes the output transformation node is subscribed to, store each of the state updates as state information, and generate a check result when stored state information for each of the transformation nodes that the output transformation node subscribes to collectively indicates that the check result should be generated, wherein the intermediate transformation node and the output transformation node are both configured to maintain state information for each transformation node that they subscribe to, wherein the state information is updated each time a state update is received from another transformation node; and
a state change processor configured to: perform an action when it is determined that the check result should trigger the action; and

a consumer system that is configured to consume and react to the action generated by the alerting system.

US Pat. No. 11,068,328

CONTROLLING OPERATION OF MICROSERVICES UTILIZING ASSOCIATION RULES DETERMINED FROM MICROSERVICES RUNTIME CALL PATTERN DATA

Dell Products L.P., Roun...


1. An apparatus comprising:at least one processing device comprising a processor coupled to a memory;
the at least one processing device being configured to perform steps of:obtaining runtime call pattern data for a plurality of microservices in an information technology infrastructure;
generating a model of the runtime call pattern data characterizing transitions between a plurality of states of the plurality of microservices;
capturing point of interest events from the runtime call pattern data utilizing the generated model;
determining, for a given sliding window time slot of the runtime call pattern data, association rules between the captured point of interest events, a given one of the association rules characterizing a relationship between a first point of interest event corresponding to a first state transition for a first one of the plurality of microservices occurring during the given sliding window time slot and at least a second point of interest event corresponding to a second state transition for a second one of the plurality of microservices occurring during the given sliding window time slot; and
controlling operation of the plurality of microservices in the information technology infrastructure based at least in part on the determined association rules;

wherein determining, for the given sliding window time slot of the runtime call pattern data, the association rules between the captured point of interest events comprises determining the given one of the association rules based at least in part on identifying at least a threshold difference between (i) actual frequencies of occurrence of the first and second point of interest events and (ii) expected frequencies of occurrence of the first and second point of interest events; and
wherein determining the association rules comprises:computing a first support of the first point of interest event, the first support denoting the actual frequency of occurrence of the first point of interest event in the given sliding window time slot of the runtime call pattern data; and
computing a second support of the second point of interest event, the second support denoting the actual frequency of occurrence of the second point of interest event in the given sliding window time slot of the runtime call pattern data.
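The support computation can be sketched over slotted call-pattern data. The timestamp slotting and event encoding below are assumptions; the claim only requires per-slot actual frequencies compared against expected frequencies:

```python
def association_strength(events, window, e1, e2):
    """Per sliding-window slot, compute the support (actual frequency of
    occurrence) of two point-of-interest events and their joint frequency,
    against the frequency expected if the events were independent.
    `events` is a list of (timestamp, event) pairs."""
    slots = {}
    for t, e in events:
        slots.setdefault(t // window, set()).add(e)
    n = len(slots)
    s1 = sum(e1 in evs for evs in slots.values()) / n    # first support
    s2 = sum(e2 in evs for evs in slots.values()) / n    # second support
    joint = sum(e1 in evs and e2 in evs for evs in slots.values()) / n
    return s1, s2, joint, s1 * s2                         # actual vs expected
```

An association rule would be emitted when `joint` departs from the independence baseline `s1 * s2` by at least a threshold, matching the claim's actual-versus-expected comparison.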


US Pat. No. 11,068,327

API MANAGER

Google LLC, Mountain Vie...


8. A system comprising:one or more computers and one or more storage devices storing instructions that are operable, when executed by the one or more computers, to cause the one or more computers to perform operations comprising:receiving, by an application programming interface (API) manager and from a third-party service provider, a condition associated with an API key;
storing the condition for the API key in a repository that includes conditions for API keys;
generating a reference key that maps to the API key;
receiving, by the API manager and from a mobile application executing on a mobile device, an API call for a third party API that includes the reference key that maps to the API key;
identifying the API key and the condition associated with the API key in the repository based on the API call that includes the reference key;
determining, by the API manager, that the condition associated with the API key that was identified from the repository is satisfied by the API call that includes the reference key; and
in response to determining, by the API manager, that the condition associated with the API key that was identified from the repository is satisfied by the API call that includes the reference key, relaying, by the API manager, the API call for the third party API to the third party API.
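The indirection in this claim (the app holds only a reference key, never the real API key) can be sketched with a small manager. The reference-key derivation, condition form, and call shape below are all illustrative assumptions:

```python
class ApiManager:
    """Maps reference keys to real API keys, checks the provider's stored
    condition, and relays the call only when the condition is satisfied."""
    def __init__(self):
        self.repo = {}      # api_key -> condition(call) -> bool
        self.ref_map = {}   # reference key -> api_key

    def register(self, api_key, condition):
        self.repo[api_key] = condition
        ref = "ref-" + api_key[::-1]      # illustrative mapping only
        self.ref_map[ref] = api_key
        return ref

    def handle(self, call):
        api_key = self.ref_map.get(call["ref_key"])
        if api_key is None or not self.repo[api_key](call):
            return None                   # drop: unknown key or unmet condition
        return {"relayed_to": call["api"], "key": api_key}
```

Because the real key never ships inside the mobile app, a leaked reference key is useless without also satisfying the provider's condition at the manager.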


US Pat. No. 11,068,326

METHODS AND APPARATUS FOR TRANSMITTING TIME SENSITIVE DATA OVER A TUNNELED BUS INTERFACE

Apple Inc., Cupertino, C...


1. A method for transmitting time-sensitive data, the method comprising:adjusting a hybrid processor transaction performed in conjunction with a first independently operable processor apparatus and a second independently operable processor apparatus, the adjusting of the hybrid processor transaction comprising compensating for a clock difference between (i) a first time reference associated with the first independently operable processor apparatus and (ii) a second time reference associated with the second independently operable processor apparatus, each of the first independently operable processor apparatus and the second independently operable processor apparatus being configured to operate independently of each other, wherein the compensating comprises a fine adjustment scheme based on the clock difference being a first value, and wherein the compensating comprises a coarse adjustment scheme different from the fine adjustment scheme based on the clock difference being a second value larger than the first value; and
causing provision of the adjusted hybrid processor transaction to the second independently operable processor apparatus from the first independently operable processor apparatus.

US Pat. No. 11,068,325

EXTENSIBLE COMMAND PATTERN

DreamWorks Animation LLC,...


1. A method for implementing a command stack for an application, the method comprising:receiving an input for executing a first command of the application;
initiating execution of the first command;
executing one or more second commands which are set to execute based on execution of the first command;
completing execution of the first command;
generating a first nested command stack associated with the first command;
including the one or more second commands in the first nested command stack; and
including the first command in the command stack and the first nested command stack in the command stack, wherein the first nested command stack in the command stack includes the one or more second commands and the one or more second commands is not directly added to the command stack.
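The nesting described above can be sketched with a stack whose entries carry their own sub-stacks; the command representation is illustrative:

```python
class CommandStack:
    """Commands triggered by a first command go into a nested stack
    attached to that command; only the first command (carrying its
    nested stack) lands on the main command stack."""
    def __init__(self):
        self.stack = []

    def execute(self, first_command, triggered=()):
        # the triggered "second commands" execute as part of the first
        # command and are recorded only in its nested stack
        nested = list(triggered)
        self.stack.append({"command": first_command, "nested": nested})
        return nested
```

An undo of the first command can then replay the nested stack in reverse without the second commands ever appearing as separate entries on the main stack.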

US Pat. No. 11,068,324

ADVANCED NOTIFICATION SYSTEM

AppDirect, Inc., San Fra...


1. A computer-implemented method for notifying users of events, comprising:creating a template of a plurality of templates that corresponds to an event of a plurality of events, the template comprising a name, a description, and conditions for selection of the template, each template utilized for generating a notification;
previewing the template during creation of the template to ensure proper rendering of the template in the notification;
generating a test notification that satisfies parameters specified through a user interface, each of the parameters selected from one of several predetermined parameters;
in response to generating the test notification, sending the test notification to users to ensure proper delivery and rendering of the notification;
specifying an audience for the template, the audience comprising a description of at least one user that will receive the notification generated from the template; and
initiating, through a marketplace platform comprising a plurality of micro services, sending of the notification, the initiating comprising pushing the notification of the event to at least a first queue, the first queue comprising a messaging queue comprising notifications accumulated in an order of occurrence.

US Pat. No. 11,068,323

AUTOMATIC REGISTRATION OF EMPTY POINTERS

SPLUNK Inc., San Francis...


1. A method for using a registry to facilitate app development, comprising sending one or more libraries of code to a client device, wherein the one or more libraries cause the client device to perform, during execution of application code that references the one or more libraries of code, operations comprising:detecting an attempted utilization, by a first app feature, of a pointer that has not been registered in the registry;
responsive to detecting the attempted utilization, automatically registering the pointer in the registry without a pointer definition for the pointer;
detecting, subsequent to registering the pointer in the registry, the pointer definition for the pointer, wherein the pointer definition associates the pointer with a second app feature; and
responsive to detecting the pointer definition, automatically registering the first app feature in the registry to receive notifications about events pertaining to the second app feature.
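The registry behavior claimed above could be sketched as follows (an illustrative sketch under assumed semantics; the class and field names are hypothetical): a pointer used before it is defined is auto-registered with no definition, and once a definition arrives, every earlier user is subscribed to events about the defining feature.

```python
# Hypothetical sketch of a registry that auto-registers an undefined
# pointer on first use and wires up notifications once its definition
# appears. Not the patented implementation.

class Registry:
    def __init__(self):
        self.pointers = {}       # pointer -> defining feature, or None
        self.pending = {}        # pointer -> features that used it pre-definition
        self.subscriptions = {}  # defining feature -> subscribed features

    def use_pointer(self, first_feature, pointer):
        # First use of an unregistered pointer: register it without a definition.
        if pointer not in self.pointers:
            self.pointers[pointer] = None
        self.pending.setdefault(pointer, set()).add(first_feature)

    def define_pointer(self, pointer, second_feature):
        # A definition arrived: associate the pointer with its feature and
        # subscribe every earlier user to events about that feature.
        self.pointers[pointer] = second_feature
        for user in self.pending.pop(pointer, set()):
            self.subscriptions.setdefault(second_feature, set()).add(user)
```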

US Pat. No. 11,068,322

METHODS OF OPERATING COMPUTER SYSTEM WITH DATA AVAILABILITY MANAGEMENT SOFTWARE


13. A method comprising:receiving, at a data availability manager (DAM) application from one or more executable first applications, an indication that a first data item generated by the one or more executable first applications is available for processing;
in response to the indication, changing a status of the first data item within the DAM application to indicate the first data item is available;
determining, using one or more of multiple data dependency rules, that a second data item to be generated by an executable second application is dependent upon an availability of the first data item, wherein the data dependency rules indicate that different ones of multiple second data items are processed by different ones of multiple second applications and are dependent on different ones of multiple first data items;
after the determination that the second data item is dependent upon the availability of the first data item, determining whether the second data item is runnable and, if runnable, changing a status of the second data item to runnable; and
publishing, by the DAM application, the runnable status of the second data item to enable the executable second application to be informed of the runnable status of the second data item.
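The availability-manager flow above can be illustrated with a small sketch (data and rule names are invented for the example; this is not the patented DAM): dependency rules map each second data item to the first data items it requires, and a second item is published as runnable once all its prerequisites are available.

```python
# Illustrative sketch of the claimed availability/runnable status flow.
# All item names are hypothetical.

def mark_available(status, item):
    # A first application reports that it generated this data item.
    status[item] = "available"

def update_runnable(status, rules):
    # rules: second item -> set of first items it depends on.
    published = []
    for second, deps in rules.items():
        if all(status.get(d) == "available" for d in deps):
            if status.get(second) != "runnable":
                status[second] = "runnable"
                published.append(second)  # publish to the second application
    return published
```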

US Pat. No. 11,068,321

SYSTEM AND METHOD FOR DYNAMICALLY DELIVERING CONTENT

AMADEUS S.A.S., Biot (FR...


23. A computer program product comprising:a non-transitory computer-readable storage medium; and
instructions stored on the non-transitory computer-readable storage medium that, when executed by a processor, cause the processor to deliver content from a content provider system to a user device, the content provider system being configured to deliver the content in response to a request having one or more predefined formats, the user device comprising an application executing on the user device, the application being associated with an application interface, the application being configured to receive input data in one or more input data blocks from at least one user through the application interface, the input data comprising a plurality of data items, and the application comprising an application extension, the instructions comprising:
dynamically connecting the application to the content provider system via a bridging device during execution of the application extension;
activating, by the application extension, a connection to the bridging device in response to detecting an activation condition;
transmitting at least some of the data items comprised in each input data block from the application to the bridging device during the connection to the bridging device;
generating, by the bridging device, the request for the content according to one of the one or more predefined formats using each data item transmitted from the application to the bridging device; and
transmitting the request from the bridging device to the content provider system.

US Pat. No. 11,068,320

INTERFACE COMBINING MULTIPLE SYSTEMS INTO ONE

Firestone Industrial Prod...


1. A vehicle system, comprising:a vehicle having a chassis;
a vehicle control unit (VCU) supported by said chassis;
a plurality of data modules, each said data module configured for providing an output signal with a specific data type, and said plurality of data modules comprising a tire pressure monitoring sensor and at least one of a trip computer, a radio-frequency identification emitter, a load sensor, a height controller, a vehicle maintenance sensor, and a tire tread sensor; and
a data conversion interface connected in between said VCU and each of said plurality of data modules, said data conversion interface configured to receive said output signal from each of said data modules, convert said output signal from each of said data modules to a common data format, and transmit each of said converted output signals to said VCU,
wherein at least one of said plurality of data modules includes a memory, said memory having a program that enables communication between said at least one of said plurality of data modules and said data conversion interface, said data conversion interface further configured to receive instruction from said memory on how to convert said specific data type to said common data format,
wherein said data conversion interface is further configured to enable two-way communication between said plurality of data modules and said VCU.

US Pat. No. 11,068,319

CRITICAL SECTION SPEEDUP USING HELP-ENABLED LOCKS

Oracle International Corp...


1. A method, comprising:performing, at one or more computing devices:acquiring, by a first data accessor of a plurality of data accessors, a first lock associated with a first critical section, wherein the first critical section comprises one or more operations including a first operation;
initiating, by the first data accessor, a first help session associated with the first operation, wherein the first help session comprises implementing, by at least a second data accessor which (a) has requested the first lock and (b) has not been granted the first lock, one or more sub-operations of the first operation;
wherein the second data accessor implements the one or more sub-operations of the first operation while waiting to acquire the first lock; and
releasing the first lock by the first data accessor after at least the first operation has been completed.
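The help-session idea above can be sketched deterministically, without real threads (an illustrative sketch under assumed semantics; the class and method names are hypothetical): a waiter that requested but was not granted the lock drains sub-operations offered by the holder instead of spinning idly.

```python
# Hypothetical single-threaded sketch of a help-enabled lock. Not the
# patented implementation; real versions coordinate concurrent accessors.
from collections import deque

class HelpLock:
    def __init__(self):
        self.holder = None
        self.help_queue = deque()  # sub-operations the holder offers to waiters

    def acquire(self, accessor):
        if self.holder is None:
            self.holder = accessor
            return True
        return False  # lock busy; the caller may help instead of waiting idly

    def offer_help(self, sub_ops):
        # The holder publishes sub-operations of its critical-section work.
        self.help_queue.extend(sub_ops)

    def help_while_waiting(self, accessor):
        # A waiter runs the offered sub-operations while the lock is held.
        done = []
        while self.help_queue:
            done.append(self.help_queue.popleft()())
        return done

    def release(self, accessor):
        assert self.holder == accessor
        self.holder = None
```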


US Pat. No. 11,068,318

DYNAMIC THREAD STATUS RETRIEVAL USING INTER-THREAD COMMUNICATION

INTERNATIONAL BUSINESS MA...


1. A method for retrieving a status for a slave hardware thread in a processing system comprising a plurality of hardware threads configured in a plurality of interconnected integrated processor blocks, the method comprising:receiving a status request for the slave hardware thread from a master hardware thread at an inbox;
in response to receiving the status request at the inbox, determining a status of the slave hardware thread using status logic and a status register associated with the slave hardware thread, wherein the status logic is configured to determine the status of the slave hardware thread without interrupting processing of the slave hardware thread; and
communicating a status response based on the status to the master hardware thread,
wherein the status register is configured to store status information for the slave hardware thread, wherein determining the status of the slave hardware thread comprises analyzing the status register by the status logic to determine the status of the slave hardware thread, and wherein the status information includes a performance counter for the slave hardware thread.

US Pat. No. 11,068,317

INFORMATION PROCESSING SYSTEM AND RESOURCE ALLOCATION METHOD

HITACHI, LTD., Tokyo (JP...


1. An information processing system comprising:a management server; and
a plurality of processing servers,
the information processing system providing execution of an application program through a network in accordance with a request from a user,
wherein each of the processing servers includes one or more processors for executing the application program,
the management server manages computer resources that are capable of being executed as each of the processing servers, a usage state of the computer resource, and a cluster usage situation or request information from the user, and information relevant to an execution time of the application program, for each cluster executing one application program, and
in a case where no computer resources managed by the management server are available, and an expected execution time of the application program that is a target of an execution request is less than or equal to a reference value, at the time of receiving a degree of parallelism and the expected execution time of the execution from the user, along with an execution request of the application program,
a cluster generation unit of the management server selects a computer resource executing an application program having a small degree of deviation from an initial expected execution time from the computer resources in use that are managed by the management server, even in a case where a plurality of application programs are temporarily simultaneously executed, and
arranges and executes the application program that is the target of the execution request with respect to the selected computer resource.

US Pat. No. 11,068,316

SYSTEMS AND METHOD FOR MANAGING MEMORY RESOURCES USED BY SMART CONTRACTS OF A BLOCKCHAIN

LiquidApps Ltd, Tel-Aviv...


1. A system for managing a limited allocated memory resource (LAMR) storing data used by a smart contract code of a plurality of smart contracts of a LAMR-based blockchain dataset stored in a blockchain storage system, wherein the LAMR defines storage space for storing data for each of the plurality of smart contracts, the system comprising:at least one hardware processor of the blockchain storage system executing a code for:
receiving, from a client terminal by a gateway service node, a request for performing at least one action by the smart contract code stored in the blockchain storage system, wherein the gateway service node acts as a proxy for the blockchain storage system;
simulating, by the gateway service node, the request to perform the at least one action to determine whether the smart contract code requires data for performing the at least one action;
based on the simulation that performing the at least one action requires data, providing, by a gateway service node to the blockchain storage system, a request to identify the data for execution of the at least one action in the LAMR of a blockchain storage system;
when the requested data is not found in the LAMR by the blockchain storage system, providing by the blockchain storage system to the gateway service node, an indication that the requested data is not found in the LAMR, wherein in response to the indication the gateway service node acquires a cryptographic proof of the requested data from the LAMR of the blockchain storage system;
wherein the cryptographic proof is used by the gateway service node for acquiring a copy of the requested data from a virtual allocated memory resource (VAMR);
providing the copy of the requested data to the blockchain storage system;
storing, by the blockchain storage system, the copy of the requested data in the LAMR for performing the at least one action by the blockchain storage system using the stored copy, the performance of the at least one action updates the stored copy in the LAMR;
performing, by the smart contract code, the at least one action using the stored copy to produce an updated stored copy;
replacing, by the blockchain storage system, the cryptographic proof with a new cryptographic proof created by processing the updated stored copy in the LAMR; and
providing the updated stored copy by the blockchain storage system to the gateway service node, for storing the updated stored copy in the VAMR.

US Pat. No. 11,068,315

HYPERVISOR ATTACHED VOLUME GROUP LOAD BALANCING

Nutanix, Inc., San Jose,...


1. A method comprising:partitioning a volume group having multiple logical unit numbers (LUNs) into multiple shards;
assigning a first shard of the multiple shards from a second controller executing on a second node to a first controller executing on a first node and assigning a second shard of the multiple shards from the first controller to the second controller, based at least in part on the second controller having a lower load than the first controller;
establishing a connection between a virtual machine on the second node and the first controller for the virtual machine to access data within the volume group;
directing, by the first controller, an access request from the virtual machine to access the first shard via the connection; and
storing information describing assignment of the first and second shards.
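The shard partitioning and load-based reassignment above can be illustrated with a small sketch (controller names and the round-robin split are assumptions for the example, not the patented method):

```python
# Illustrative sketch: split a volume group's LUNs into shards, then move
# a shard from the busier controller to the less-loaded one.

def partition(luns, num_shards):
    # Round-robin the volume group's LUNs into shards (assumed policy).
    shards = [[] for _ in range(num_shards)]
    for i, lun in enumerate(luns):
        shards[i % num_shards].append(lun)
    return shards

def rebalance(assignment, loads):
    # assignment: controller -> list of shard ids; loads: controller -> load.
    busy = max(loads, key=loads.get)
    idle = min(loads, key=loads.get)
    if busy != idle and assignment[busy]:
        shard = assignment[busy].pop()
        assignment[idle].append(shard)  # reassign toward the lower load
    return assignment
```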

US Pat. No. 11,068,314

MICRO-LEVEL MONITORING, VISIBILITY AND CONTROL OF SHARED RESOURCES INTERNAL TO A PROCESSOR OF A HOST MACHINE FOR A VIRTUAL ENVIRONMENT

Juniper Networks, Inc., ...


1. A method comprising:generating, by a policy controller, a policy to apply within a data center;
distributing, by the policy controller, the policy to a policy agent executing on a computing device included within the data center, wherein the computing device includes processing circuitry having a plurality of processor cores;
monitoring, by the policy agent, usage metrics relating to a resource shared by the plurality of processor cores;
mapping, by the policy agent, a first subset of the usage metrics to a first process executing on a first virtual computing environment and mapping a second subset of the usage metrics to a second process executing on the first virtual computing environment by correlating process identifiers associated with each of the first process and the second process to the first virtual computing environment and to information published by the plurality of processor cores about individual threads executing on each of the plurality of processor cores;
mapping, by the policy agent, a third subset of the usage metrics to a third process executing on a second virtual computing environment and mapping a fourth subset of the usage metrics to a fourth process executing on the second virtual computing environment by correlating process identifiers associated with each of the third process and the fourth process to the second virtual computing environment and to information published by the plurality of processor cores about individual threads executing on each of the plurality of processor cores, wherein the first virtual computing environment and the second virtual computing environment execute on the processing circuitry;
determining, by the policy agent and based on the mapped usage metrics and the policy, that the first process executing on the first virtual computing environment is using the resource in a manner that adversely affects the performance of the second process executing on the first virtual computing environment;
responsive to determining that the first process is using the resource in a manner that adversely affects the performance of the second process, restricting, by the policy agent, access to the resource by the first process while the first process is executing on the first virtual computing environment and without restricting access to the resource by the second process executing on the first virtual computing environment;
determining, by the policy agent and based on the mapped usage metrics and the policy, that the third process executing on the second virtual computing environment is using the resource in a manner that adversely affects the performance of the fourth process executing on the second virtual computing environment;
responsive to determining that the third process is using the resource in a manner that adversely affects the performance of the fourth process, restricting, by the policy agent, access to the resource by the third process while the third process is executing on the second virtual computing environment and without restricting access to the resource by the fourth process executing on the second virtual computing environment;
reporting, by the policy agent, information about resource restrictions placed on the first process and the third process by the policy agent; and
generating, by the policy controller and based on the reported information, a notification.

US Pat. No. 11,068,313

CLOUD BROKERAGE APPLICATION DECOMPOSITION AND MIGRATION

International Business Ma...


1. A computer implemented method for cloud brokerage application decomposition and migration, the method comprising:receiving, by one or more computer processors, a candidate application;
identifying, by the one or more computer processors, components of the candidate application;
identifying, by the one or more computer processors, cloud service provider offerings;
mapping, by the one or more computer processors, the cloud service provider offerings to technology categories of a technology offering database;
analyzing, by the one or more computer processors, component granularity of one or more components according to the technology categories of the technology offering database;
decomposing, by the one or more computer processors, a component into sub-components according to the component granularity;
collecting, by the one or more computer processors, performance information on the cloud service provider offerings;
generating, by the one or more computer processors, a plurality of simulated performance scenarios associated with deploying sub-components across the cloud service provider offerings;
calculating, by the one or more computer processors, a score associated with a degree of matching between service provider offerings and sub-component needs;
estimating, by the one or more computer processors, a cost associated with each simulated performance scenario of the plurality of simulated performance scenarios;
ranking, by the one or more computer processors, the plurality of performance scenarios according to the performance information, the score, and the cost, yielding a ranked set of performance scenarios; and
deploying, by the one or more computer processors, sub-components according to an automated decision tree and the ranked set of performance scenarios, wherein the automated decision tree is based on a user's specified thresholds and precedence selections.

US Pat. No. 11,068,312

OPTIMIZING HARDWARE PLATFORM UTILIZATION FOR HETEROGENEOUS WORKLOADS IN A DISTRIBUTED COMPUTING ENVIRONMENT


1. A computer-implemented method comprising:hosting a first workload at least partly using a first virtual computing resource that is provisioned on a hardware resource of a service provider network;
receiving first utilization data indicating a first amount of the hardware resource that is utilized by the first workload;
based at least in part on the first utilization data, mapping the first workload to a first workload category from a group of predefined workload categories;
receiving second utilization data indicating a second amount of hardware resources that are utilized by a second workload;
based at least in part on the second utilization data, mapping the second workload to a second workload category from the group of predefined workload categories;
determining that the first workload category and the second workload category are predefined complementary workload categories that are computationally complementary to be hosted on a same hardware device;
provisioning a second virtual computing resource on the hardware resource based at least in part on the first workload category and the second workload category being predefined complementary workload categories; and
hosting the second workload at least partly using the second virtual computing resource.
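The category mapping and co-location decision above can be sketched as follows (the "cpu-bound"/"memory-bound" categories and the dominant-resource heuristic are invented for illustration; the claim does not specify them):

```python
# Hypothetical sketch: classify workloads by their dominant resource, then
# co-locate pairs whose categories are predefined as complementary.

COMPLEMENTARY = {("cpu-bound", "memory-bound"), ("memory-bound", "cpu-bound")}

def categorize(utilization):
    # utilization: resource name -> fraction used by the workload.
    return max(utilization, key=utilization.get) + "-bound"

def can_colocate(util_a, util_b):
    # True when the two workloads' categories are predefined as complementary.
    return (categorize(util_a), categorize(util_b)) in COMPLEMENTARY
```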

US Pat. No. 11,068,311

ALLOCATING COMPUTING RESOURCES TO A CONTAINER IN A COMPUTING ENVIRONMENT

Red Hat, Inc., Raleigh, ...


1. A method comprising:determining, by a processing device, that a dependent computing resource is to be allocated to a software component, wherein the dependent computing resource depends on another computing resource being allocated to the software component before the dependent computing resource is allocated to the software component;
determining, by the processing device, a parameter value for a backoff process for checking an availability of the dependent computing resource, the parameter value being determined using another parameter value for another backoff process for checking the availability of the other computing resource;
determining, by the processing device, that the dependent computing resource is available by executing the backoff process using the parameter value; and
in response to determining that the dependent computing resource is available, allocating, by the processing device, the dependent computing resource to the software component.
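The derived-parameter backoff above can be illustrated with a sketch (the exponential schedule and the "double the parent's initial delay" rule are assumptions for the example, not the claimed parameter derivation):

```python
# Hypothetical sketch: the backoff parameter for checking a dependent
# resource is derived from the parent resource's backoff parameter, since
# the dependent resource cannot become available before its parent.

def backoff_delays(initial, factor=2.0, attempts=4):
    # Exponential backoff schedule (assumed policy).
    return [initial * factor ** i for i in range(attempts)]

def allocate_dependent(parent_initial, is_available):
    # Derive the dependent check's initial delay from the parent's (assumed rule).
    dependent_initial = parent_initial * 2
    for delay in backoff_delays(dependent_initial):
        if is_available(delay):  # a real system would sleep(delay) first
            return "allocated"
    return "failed"
```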

US Pat. No. 11,068,310

SECURE STORAGE QUERY AND DONATION

INTERNATIONAL BUSINESS MA...


1. A method comprising:receiving a query for an amount of storage in memory of a computer system to be donated to a secure interface control of the computer system, wherein the query is received from a hypervisor;
determining, by the secure interface control, the amount of storage to be donated based on a plurality of secure entities supported by the secure interface control as a plurality of predetermined values;
returning, by the secure interface control, a response to the query indicative of the amount of storage as a response to the query;
receiving a donation of storage to secure for use by the secure interface control based on the response to the query, wherein the donation of storage is performed by the hypervisor; and
verifying, by the secure interface control, a command sequence and completion status of a sequence of interactions for the donation of storage.

US Pat. No. 11,068,309

PER REQUEST COMPUTER SYSTEM INSTANCES

Amazon Technologies, Inc....


7. A system, comprising:one or more processors; and
memory that stores computer-executable instructions that, as a result of being executed, cause the one or more processors to:obtain a request to process data by a request instance;
instantiate the request instance, based at least in part on a request instance configuration indicating a language runtime and information indicating a first portion of an application code to initially execute based at least in part on an entry point generated by at least copying the first portion of the application code from a shared memory region into a memory region allocated to the request instance, where the request instance executes the first portion of the application code and the application code is supported by the language runtime indicated in the request instance configuration;
cause the request instance to process the data in accordance with the request to generate a response; and
provide, based at least in part on a set of factors, the response to a computer system associated with the request.


US Pat. No. 11,068,308

THREAD SCHEDULING FOR MULTITHREADED DATA PROCESSING ENVIRONMENTS

TEXAS INSTRUMENTS INCORPO...


1. A system comprising:a buffer manager circuit configured to:determine availability of buffers to be acquired for respective processing threads; and
identify a first one and a second one of the respective processing threads as stalled due to unavailability of a first buffer in the buffers to be acquired for the first one and the second one of the processing threads; and
store respective states for each of the buffers in memory, a first one of the respective states corresponding to the first buffer, the first one of the states identifies the first one and the second one of the processing threads that are stalled due to unavailability of the first one of the buffers; and

a thread execution manager circuit configured to initiate execution of a third one of the processing threads that is not identified as stalled.
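The buffer-manager bookkeeping above can be sketched in software (an illustrative sketch only; the claim describes hardware circuits, and the names here are hypothetical): each buffer's state records which threads stalled on it, and the scheduler picks only threads not recorded as stalled.

```python
# Hypothetical software sketch of the claimed buffer-manager bookkeeping.

class BufferManager:
    def __init__(self, free_buffers):
        self.free = set(free_buffers)
        self.stalled_on = {}  # per-buffer state: threads stalled on that buffer

    def acquire(self, thread, buffer):
        if buffer in self.free:
            self.free.discard(buffer)
            return True
        # Buffer unavailable: record this thread as stalled on it.
        self.stalled_on.setdefault(buffer, set()).add(thread)
        return False

    def runnable(self, threads):
        # Initiate execution only for threads not identified as stalled.
        stalled = set().union(*self.stalled_on.values()) if self.stalled_on else set()
        return [t for t in threads if t not in stalled]
```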

US Pat. No. 11,068,307

COMPUTING NODE JOB ASSIGNMENT USING MULTIPLE SCHEDULERS

Capital One Services, LLC...


1. A method, comprising:receiving, by a computing node included in a set of computing nodes, a set of heartbeat messages that originated from the set of computing nodes,wherein the set of heartbeat messages is related to selecting, among the set of computing nodes, a leader computing node to process one or more jobs associated with the set of computing nodes,

determining, by the computing node and based on the set of heartbeat messages, a set of scores representing how qualified each computing node is to process the one or more jobs based on processing capabilities of corresponding computing nodes, of the set of computing nodes, and one or more processing constraints associated with the one or more jobs;
determining, by the computing node and based on the set of scores, whether the computing node is to be selected as the leader computing node based on whether a unique identifier associated with the computing node corresponds to a unique identifier in one of the set of heartbeat messages associated with a highest score in the set of scores; and
selectively processing, by the computing node, the one or more jobs based on determining whether the computing node is to be selected as the leader computing node.
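The leader-selection step above can be illustrated with a sketch (the capacity-minus-demand score and the id tiebreak are assumptions for the example, not the claimed scoring): because every node scores the same heartbeat set the same way, each node can independently decide whether its own identifier won.

```python
# Hypothetical sketch of heartbeat-based leader selection.

def score(heartbeat, constraints):
    # Assumed qualification score: capacity left after the jobs' demand.
    return heartbeat["capacity"] - constraints["demand"]

def am_i_leader(my_id, heartbeats, constraints):
    # Deterministic winner: highest score, unique id as tiebreaker.
    best = max(heartbeats, key=lambda hb: (score(hb, constraints), hb["id"]))
    return best["id"] == my_id
```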

US Pat. No. 11,068,306

RE-USING DATA STRUCTURES BEYOND THE LIFE OF AN IN-MEMORY PROCESSING SESSION

ORACLE FINANCIAL SERVICES...


1. A method comprising:receiving, by a data processing system, a request to execute a first run comprising a first set of tasks;
creating, by a data processing system, a first session to execute the first run;
executing, by the data processing system, the first run in the first session, wherein the execution of the first run comprises: identifying a task of the first set of tasks as requiring use of a dataset, obtaining a dataframe for the dataset, processing the task of the first set of tasks using the dataframe, and generating an updated dataframe based on the processing of the task;
receiving, by the data processing system, a request to execute a second run comprising a second set of tasks, wherein a dependency exists between the first run and the second run based on a requirement that the first run and the second run use the same dataset;
creating, by the data processing system, a second session to execute the second run; and
executing, by the data processing system, the second run in the second session, wherein the execution of the second run comprises: identifying a task of the second set of tasks as requiring use of the same dataset, loading the updated dataframe for the same dataset, and processing the task of the second set of tasks using the updated dataframe.
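The cross-session dataframe reuse above can be sketched as follows (the module-level cache and function names are invented for illustration, not the patented mechanism): a second run's session loads the dataframe the first run already updated instead of rebuilding it from the raw dataset.

```python
# Hypothetical sketch: a cache keyed by dataset lets a dataframe outlive
# the session that created it, so dependent runs reuse the updated frame.

dataframe_cache = {}

def run_session(run_id, dataset, tasks, load_dataset):
    # Reuse the cached (possibly updated) dataframe if a prior run left one.
    frame = dataframe_cache.get(dataset)
    if frame is None:
        frame = load_dataset(dataset)
    for task in tasks:
        frame = task(frame)           # each task returns an updated dataframe
    dataframe_cache[dataset] = frame  # persist beyond this session's life
    return frame
```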

US Pat. No. 11,068,305

SYSTEM CALL MANAGEMENT IN A USER-MODE, MULTI-THREADED, SELF-SCHEDULING PROCESSOR

Micron Technology, Inc., ...


1. A processor coupleable to an interconnection network in a system having a host processor, comprising:a processor core adapted to execute a plurality of instructions; and
a core control circuit coupled to the processor core, the core control circuit comprising:
an interconnection network interface coupleable to the interconnection network to transmit one or more system call messages to the host processor and to receive a work descriptor data packet, the interconnection network interface adapted to decode the received work descriptor data packet into an initial program count and a received argument for an execution thread;
a thread control memory comprising a plurality of registers, the plurality of registers comprising a thread identifier pool register storing a plurality of thread identifiers, a program count register storing the initial program count, a data cache, and a general purpose register storing the received argument;
an execution queue coupled to the thread control memory;
a control logic and thread selection circuit coupled to the execution queue, the control logic and thread selection circuit adapted, in response to the received work descriptor data packet, to assign a thread identifier of the plurality of thread identifiers to the execution thread, to automatically place the thread identifier in the execution queue, and to automatically and periodically select the thread identifier for execution by the processor core of an instruction of the execution thread, of the plurality of instructions, the processor core using data stored in the data cache or general purpose register; and
system call circuitry adapted to generate the one or more system call messages and to modulate a number of the one or more system call messages transmitted to the host processor in a predetermined period of time.

US Pat. No. 11,068,304

INTELLIGENT SCHEDULING TOOL

Microsoft Technology Lice...


1. A system for intelligent scheduling, the system comprising:a telephone unit;
a processor coupled to the telephone unit; and
a computer-readable medium storing instructions that are operative when executed by the processor to:determine, using a connectivity prediction model, first call connectivity rate predictions;
determine timeslot resources for a first time period;
allocate, based at least on the first call connectivity rate predictions and timeslot resources for the first time period, leads to timeslots in the first time period using contextual bandit learning, the contextual bandit learning providing an option to exploit a current solution or to explore a new solution in order to identify a global optimal solution, wherein the contextual bandit learning analyzes context vectors to identify the global optimal solution;
determine, within a timeslot in the first time period and using a lead scoring model, a first lead prioritization among leads within the timeslot in the first time period;
configure, based at least on the first lead prioritization, the telephone unit with lead information for placing a phone call to a selected lead; and
update the connectivity prediction model using the contextual bandit learning.


US Pat. No. 11,068,303

ADJUSTING THREAD BALANCING IN RESPONSE TO DISRUPTIVE COMPLEX INSTRUCTION

INTERNATIONAL BUSINESS MA...


1. A computer system, comprising:a memory storing a computer-executable instruction; and
a processor configured to:allocate the computer-executable instruction to a first thread;
decode the computer-executable instruction;
determine a type of the computer-executable instruction based on information obtained by decoding the computer-executable instruction; and
based on determining that the computer-executable instruction is a disruptive complex instruction, change a mode of allocating hardware resources to an instruction-based allocation mode,
wherein, in the instruction-based allocation mode, the processor adjusts allocation of the hardware resources among the first thread and a second thread to predetermined fixed amounts, for the duration from start to end of processing of the disruptive complex instruction by the processor, based on types of instructions allocated to the first and second threads,
wherein the processor adjusts the allocation of the hardware resources to the first thread by restricting a number of completion table entries allowed to the first thread in the instruction-based allocation mode, and
wherein the disruptive complex instruction includes an instruction composed of several micro operations, and some of the micro operations take place after an instruction completion boundary.


US Pat. No. 11,068,302

METHOD FOR REGULATING SYSTEM MANAGEMENT MODE FUNCTION CALLS AND SYSTEM THEREFOR

Dell Products L.P., Roun...


1. A method comprising:receiving a system management interrupt (SMI);
invoking a system management mode (SMM) master function in response to receiving the SMI, the SMM master function to:save state information, the state information including a first value retrieved from a first register and a second value retrieved from a second register;
determine a first function associated with the SMI based on the first value;
determine a first calling address associated with the SMI based on the second value;
increment a first counter corresponding to a unique combination of the first function and the first calling address; and
selectively invoking the first function based on the value of the first counter and based on a first predetermined threshold.
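The counting-and-gating logic of this claim — one counter per unique (function, calling address) pair, with invocation gated on a threshold — can be sketched in a few lines. All names, the threshold value, and the dispatch-table shape are illustrative assumptions, not the patented SMM implementation.

```python
from collections import defaultdict

THRESHOLD = 3  # first predetermined threshold (illustrative value)

call_counts = defaultdict(int)  # counter per (function, calling address) pair

def handle_smi(first_value, second_value, dispatch_table):
    """Resolve function and calling address from register values, count, gate."""
    function = dispatch_table[first_value]   # first value selects the function
    calling_address = second_value           # second value gives the caller
    key = (function.__name__, calling_address)
    call_counts[key] += 1
    if call_counts[key] <= THRESHOLD:        # selectively invoke
        return function()
    return None                              # suppressed: threshold exceeded

def flash_write():
    return "written"

table = {0x01: flash_write}
results = [handle_smi(0x01, 0xDEAD, table) for _ in range(5)]
print(results)  # first 3 calls invoked, remaining 2 suppressed
```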


US Pat. No. 11,068,301

APPLICATION HOSTING IN A DISTRIBUTED APPLICATION EXECUTION SYSTEM

Google LLC, Mountain Vie...


1. A method for executing applications in a distributed computing system, the method comprising:storing a plurality of applications for distribution among a plurality of application servers in the distributed computing system;
receiving one or more requests related to at least one of the plurality of applications;
executing, by the plurality of application servers, the at least one of the plurality of applications in response to the one or more requests;
obtaining usage information for the at least one of the plurality of applications, the usage information indicating a frequency with which data for the at least one of the plurality of applications is accessed in response to the one or more requests; and
storing, in volatile memory, based on the usage information for the at least one of the plurality of applications, data for the at least one of the plurality of applications.
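The usage-driven placement step can be pictured as a hot/cold split: application data whose access frequency meets a cutoff is kept resident in volatile memory. This sketch assumes a simple count-based cutoff; the threshold and data structures are invented for illustration.

```python
def place_app_data(usage_counts, hot_cutoff):
    """Return the set of apps whose data should be stored in volatile memory,
    based on how frequently each app's data was accessed."""
    return {app for app, count in usage_counts.items() if count >= hot_cutoff}

usage = {"app_a": 120, "app_b": 3, "app_c": 45}
volatile_resident = place_app_data(usage, hot_cutoff=40)
print(sorted(volatile_resident))  # ['app_a', 'app_c']
```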

US Pat. No. 11,068,300

CROSS-DOMAIN TRANSACTION CONTEXTUALIZATION OF EVENT INFORMATION

CA, Inc., New York, NY (...


1. A method comprising:correlating a first topology map for a distributed application with a second topology map for the distributed application, wherein the first topology map comprises software nodes that represent software components of the distributed application and the second topology map comprises infrastructure nodes that represent infrastructure components that support the distributed application; and
for a first aggregate of distributed traces for a first transaction type of the distributed application:detecting first event information of a first infrastructure node at a first time,
detecting second event information of a first software node at a second time,
determining that the first event information corresponds to the second event information based on (i) the correlation of the first topology map with the second topology map identifying an association between the first infrastructure node and the first software node, and (ii) the first time and the second time being within a defined range of time of each other, and

associating the first event information with the second event information in the first aggregate.
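The correspondence test in this claim has two conditions: the infrastructure node and software node must be linked by the topology correlation, and the two event timestamps must fall within a defined window. A minimal sketch, with invented node names and window:

```python
def events_correspond(infra_event, sw_event, node_links, window_s):
    """True when the nodes are associated by the topology correlation and the
    event times are within the defined range of each other."""
    node_ok = (infra_event["node"], sw_event["node"]) in node_links
    time_ok = abs(infra_event["time"] - sw_event["time"]) <= window_s
    return node_ok and time_ok

links = {("db-host-1", "orders-service")}  # from correlating the two maps
e1 = {"node": "db-host-1", "time": 100.0}
e2 = {"node": "orders-service", "time": 102.5}
print(events_correspond(e1, e2, links, window_s=5.0))  # True
```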

US Pat. No. 11,068,299

MANAGING FILE SYSTEM METADATA USING PERSISTENT CACHE

EMC IP Holding Company LL...


1. A method of managing metadata in a data storage system, the method comprising:receiving an I/O (input/output) request that specifies a set of data to be written to a file system in the data storage system, the file system backed by a set of non-volatile storage devices;
computing values of multiple metadata blocks that the file system will use to organize the set of data in the file system, the multiple metadata blocks including at least one of a virtual block map (VBM), an allocation bitmap, and a superblock;
aggregating the computed values of the metadata blocks into a single transaction; and
atomically issuing the transaction to a persistent cache, such that values of all of the metadata blocks are written to the persistent cache or none of them are, the persistent cache thereafter flushing the values of the metadata blocks, or updated versions thereof, to the set of non-volatile storage devices backing the file system,
wherein the persistent cache provides multiple flushing policies, and wherein the method further comprises specifying a delayed flushing policy for pages in the persistent cache that store the values of the multiple metadata blocks, the delayed flushing policy specifying flushing at a slower rate than another flushing policy provided by the persistent cache,
wherein the method further comprises:receiving a second I/O request specifying a second set of data to be written to the file system;
computing values of a second set of metadata blocks that the file system will use to organize the second set of data in the file system, the second set of metadata blocks including a common metadata block that is also one of the multiple metadata blocks, a value of the common metadata block stored in a cache page of the persistent cache; and
atomically issuing a second transaction to the persistent cache, such that the values of all of the second set of metadata blocks are written to the persistent cache or none of them are, the persistent cache updating the cache page that stores the value of the common metadata block to reflect a change in the value of the common metadata block for incorporating the second set of data,

wherein the data storage system further includes a set of internal volumes operatively disposed between the persistent cache and the set of non-volatile storage devices, and wherein writing the transaction to the persistent cache includes writing the values of the multiple metadata blocks to pages of the persistent cache using an addressing scheme that addresses pages by identifier of one of the set of internal volumes and offset into that internal volume.
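The all-or-nothing transaction semantics and the volume-plus-offset addressing scheme can be sketched with an in-memory stand-in for the persistent cache: validate every block in the transaction, then commit all pages in one step. This is a toy model of the claimed behavior, not EMC's implementation.

```python
class PersistentCacheSim:
    """Stand-in persistent cache; pages addressed by (volume id, offset)."""
    def __init__(self):
        self.pages = {}

    def issue_transaction(self, blocks):
        """blocks: {(volume_id, offset): value}. All values are written, or
        none are: validation happens before any page is touched."""
        if any(v is None for v in blocks.values()):
            raise ValueError("invalid block value; transaction aborted")
        self.pages.update(blocks)  # single atomic commit in this sketch

cache = PersistentCacheSim()
txn = {("vol0", 0): "superblock-v1", ("vol0", 8): "alloc-bitmap-v1"}
cache.issue_transaction(txn)
print(cache.pages[("vol0", 0)])  # superblock-v1
```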

US Pat. No. 11,068,298

HARDWARE ACCELERATION METHOD AND RELATED DEVICE

Huawei Technologies Co., ...


1. A hardware acceleration method, comprising:receiving, by a virtualized infrastructure manager (VIM), first request information from a network functions virtualization orchestrator (NFVO), wherein the first request information is configured to request the VIM to deploy a to-be-accelerated virtualized network function (VNF) onto a host in a management domain of the VIM, wherein a hardware resource of the host meets a requirement of the to-be-accelerated VNF, and the requirement of the to-be-accelerated VNF includes information indicating a type of a required hardware acceleration resource, and indicating a size of the required hardware acceleration resource in the to-be-accelerated VNF; and
deploying, by the VIM, the to-be-accelerated VNF onto the host in the management domain of the VIM.
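The VIM-side placement decision — find a host in the management domain whose hardware acceleration resources match the requested type and size — can be sketched as a simple capacity check. The resource model and host names are invented for illustration; real NFV placement involves far more constraints.

```python
def pick_host(hosts, accel_type, accel_size):
    """Return the first host whose acceleration resource of the required type
    meets the required size, or None if no host qualifies."""
    for name, resources in hosts.items():
        if resources.get(accel_type, 0) >= accel_size:
            return name
    return None

hosts = {"host-1": {"fpga": 2}, "host-2": {"fpga": 8, "gpu": 4}}
print(pick_host(hosts, "fpga", 4))  # host-2
```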

US Pat. No. 11,068,297

OUTBOARD MOTOR AND METHODS OF USE THEREOF

ROBBY GALLETTA ENTERPRISE...


1. An outboard motor for attachment to a transom of a craft, said outboard motor comprising:a powerhead affixed to the transom of the craft;
a telescopic drive shaft, said telescopic drive shaft having a first shaft section rotationally connected to said powerhead and a second shaft section rotationally connected to a propeller shaft, said second shaft section slidably interlinked to said first shaft section;

a telescopic drive shaft housing, said telescopic drive shaft rotationally positioned therein said telescopic drive shaft housing;a first housing bearing to rotationally support a first drive shaft housing end and a second housing bearing to rotationally support a second drive shaft housing end; and
a steering mechanism affixed thereto the transom and coupled to said drive shaft housing;
wherein said steering mechanism is configured to provide independent rotation of said drive shaft housing and said propeller shaft relative to said powerhead to facilitate navigational control of the craft.

US Pat. No. 11,068,296

VIRTUALISED SOFTWARE APPLICATION PERFORMANCE

British Telecommunication...


1. A method of improving performance of a software application executing with a virtualized computing infrastructure wherein the application has associated a hypervisor profile of characteristics of a hypervisor in the infrastructure, a network communication profile of characteristics of network communication for the application, a data storage profile of characteristics of data storage for the infrastructure, and an application profile defined collectively by the other profiles, the method comprising:training a classifier to generate a first classification using training data sets based on application profiles so as to classify an application profile as underperforming;
training the classifier to generate a set of second classifications and a set of third classifications using training data sets based on at least one of:an application profile, a hypervisor profile, a network communication profile, or a data storage profile,
wherein the classifications in the second set classify a profile for which optimization will provide improved application performance, and
wherein the classifications in the third set classify a profile for which additional infrastructure resource is required to provide improved application performance; and

in response to applying the first classifications, the second classifications and the third classifications to data sets based on the application profile, implementing one or more of the following for the application:optimization of a hypervisor, network or data storage, or
deploying an additional resource for the hypervisor, network or data storage.


US Pat. No. 11,068,295

DEVICE OPERATION ACROSS MULTIPLE OPERATING SYSTEM MODALITIES

Ghost Locomotion Inc., M...


1. A method for device operation across multiple operating system modalities, comprising:performing, by a first operating system executing in a first virtual machine that is supported by a hypervisor, one or more device initialization operations for a device;
determining, by a second operating system executing in a second virtual machine that is supported by the hypervisor, that the device is in an initialized state;
indicating, by the second operating system to the hypervisor, that the device is in an initialized state; and
performing, by the second operating system, one or more device operations of the device in the initialized state.

US Pat. No. 11,068,294

BALANCING PROCESSING LOADS OF VIRTUAL MACHINES

Telefonaktiebolaget LM Er...


1. A method performed by a load balancer of a network arrangement configured to balance processing loads to a first and a second virtual machine (VM) instance contained within a single virtual machine, the network arrangement comprising at least a processing circuitry to run the first and second VM instances, and the load balancer connected to the first and second VM instances via a network interface controller (NIC), the method comprising:obtaining first information about usage of the processing circuitry of the network arrangement when a first application runs on the first VM instance;
obtaining second information about usage of the processing circuitry when a second application runs on the second VM instance, wherein the first and second VM instances execute on the same processing circuitry of the network arrangement;
receiving first incoming data corresponding to a first plurality of jobs to be executed by the first VM instance;
receiving second incoming data corresponding to a second plurality of jobs to be executed by the second VM instance;
determining a first job count limit for how many of the first plurality of jobs the first VM instance is allowed to execute using the processing circuitry before releasing the processing circuitry, based on a combination of the obtained first information, the obtained second information, a total number of jobs in the first plurality of jobs indicated by the received first incoming data to be executed by the first VM instance, a total number of jobs in the second plurality of jobs indicated by the received second incoming data to be executed by the second VM instance, an amount of time each job of the first plurality of jobs will take to execute on the first VM instance and an amount of time each job of the second plurality of jobs will take to execute on the second VM instance;
determining a second job count limit for how many of the second plurality of jobs the second VM instance is allowed to execute using the processing circuitry before releasing the processing circuitry, based on a combination of the obtained first information, the obtained second information, the total number of jobs in the first plurality of jobs indicated by the received first incoming data to be executed by the first VM instance, the total number of jobs in the second plurality of jobs indicated by the received second incoming data to be executed by the second VM instance, an amount of time each job of the second plurality of jobs will take to execute on the second VM instance and an amount of time each job of the first plurality of jobs will take to execute on the first VM instance;
instructing the first VM instance to release the processing circuitry after the first VM instance completes execution of the first job count limit of the first plurality of jobs and before completing remaining jobs of the first plurality of jobs when the first job count limit is less than the total number of jobs in the first plurality of jobs, to allow the second VM instance to use the processing circuitry for execution of jobs in the second plurality of jobs;
instructing the second VM instance to release the processing circuitry after the second VM instance completes execution of the second job count limit of the second plurality of jobs and before completing remaining jobs of the second plurality of jobs when the second job count limit is less than the total number of jobs in the second plurality of jobs, to allow the first VM instance to use the processing circuitry for execution of remaining jobs in the first plurality of jobs;
responsive to determining the first VM instance has completed execution of the first job count limit of the first plurality of jobs, instructing the second VM instance to execute the second plurality of jobs; and
responsive to determining the second VM instance has completed execution of the second job count limit of the second plurality of jobs, instructing the first VM instance to execute the remaining jobs of the first plurality of jobs.
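One plausible reading of the job-count-limit computation is a time-quantum bound: each VM instance may run only as many jobs as fit in a fixed slice before releasing the shared processing circuitry, unless nothing else is waiting. The formula below is an illustrative assumption, not the patented method; the quantum and inputs are invented.

```python
def job_count_limit(own_jobs, own_time_per_job, other_jobs, quantum_s):
    """How many jobs this VM instance may execute before releasing the
    processing circuitry. If the other instance has no pending jobs, run all."""
    if other_jobs == 0:
        return own_jobs
    limit = int(quantum_s // own_time_per_job)  # jobs fitting in one slice
    return max(1, min(limit, own_jobs))

first_limit = job_count_limit(own_jobs=10, own_time_per_job=0.5,
                              other_jobs=4, quantum_s=2.0)
print(first_limit)  # 4 jobs, then release for the second instance
```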

US Pat. No. 11,068,293

PARALLEL HARDWARE HYPERVISOR FOR VIRTUALIZING APPLICATION-SPECIFIC SUPERCOMPUTERS

GLOBAL SUPERCOMPUTING COR...


1. A hypervisor system for virtualizing application-specific supercomputers, the system comprising:(a) at least one software-virtual hardware pair consisting of a software application, and an application-specific virtual supercomputer for accelerating the software application, where:i. the application-specific virtual supercomputer comprises a plurality of virtual tiles; and
ii. the software application and the virtual tiles communicate among themselves with communication messages;
(b) a plurality of reconfigurable physical tiles, where each virtual tile of each application-specific virtual supercomputer can be implemented on at least one reconfigurable physical tile, by configuring the reconfigurable physical tile to perform the virtual tile's function; and
(c) a scheduler implemented in hardware, for parallel pre-emptive scheduling of the virtual tiles on the reconfigurable physical tiles;
where the scheduler is separate from the reconfigurable physical tiles;
and where the scheduler is able to simultaneously perform a plurality of pre-emptive scheduling actions, where a pre-emptive scheduling action comprises pre-empting a virtual tile operating on a reconfigurable physical tile, letting the virtual tile remain pre-empted for a period of time, and then resuming operation of the pre-empted virtual tile on a possibly different reconfigurable physical tile;
and where the virtual tile retains ability to receive a message when pre-empted;
and where the scheduler includes hardware for ensuring that incoming messages to a virtual tile being pre-empted are not lost or misdelivered as a result of one of the simultaneous pre-emptive scheduling actions, where in each of the plurality of reconfigurable physical tiles, the hardware comprises hardware counters to count messages sent to a virtual tile but not yet received by the virtual tile;
and before the scheduler pre-empts a virtual tile v1, for each virtual tile v0 sending messages to the virtual tile v1, the scheduler temporarily prevents further messages from being sent from the virtual tile v0 to the virtual tile v1 and waits for all messages already sent from the virtual tile v0 to the virtual tile v1 to arrive at the virtual tile v1, by utilizing the hardware counters.
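The message-drain protocol at the end of this claim — pause each sender v0, then wait until the in-flight counter for v1 reaches zero before migrating it — can be modeled with software queues standing in for the hardware counters and links. This is a behavioral sketch under those stand-in assumptions, not the patented hardware.

```python
from collections import deque

class Tile:
    def __init__(self, name):
        self.name = name
        self.inbox = deque()
        self.paused = False   # when True, this tile may not send

def send(src, dst, msg, in_flight):
    """Deliver a message and bump the destination's in-flight counter."""
    if not src.paused:
        dst.inbox.append(msg)
        in_flight[dst.name] += 1

def receive(tile, in_flight):
    in_flight[tile.name] -= 1
    return tile.inbox.popleft()

def preempt(v1, senders, in_flight):
    """Pause all senders to v1, drain in-flight messages, then it is safe to
    migrate v1 to a different physical tile."""
    for v0 in senders:
        v0.paused = True                 # temporarily prevent further sends
    while in_flight[v1.name] > 0:        # wait for sent messages to arrive
        receive(v1, in_flight)
    return True

v0, v1 = Tile("v0"), Tile("v1")
in_flight = {"v0": 0, "v1": 0}
send(v0, v1, "m1", in_flight)
print(preempt(v1, [v0], in_flight))  # True: counters drained, v1 migratable
```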

US Pat. No. 11,068,292

COMPUTING SYSTEM TRANSLATION TO PROMOTE EFFICIENCY

Core Scientific, Inc.


1. A computing system, comprising:a first computing device; and
a second computing device, including:a processor; and
a compiler;

wherein the first computing device is configured to obtain a profile of the processor of the second computing device, select an algorithm from a plurality of algorithms for the processor of the second computing device to compute while the second computing device is in an idle power mode, and provide instructions for computing the selected algorithm from the plurality of algorithms;
wherein the selected algorithm from the plurality of algorithms corresponds to a maximum expected efficiency value of the plurality of algorithms;
wherein the first computing device is further configured to test the plurality of algorithms for the second computing device prior to selecting the algorithm from the plurality of algorithms; and
wherein the compiler of the second computing device is configured to automatically convert the provided instructions to a different language that is compatible with the processor of the second computing device.

US Pat. No. 11,068,291

SPOOFING CPUID FOR BACKWARDS COMPATIBILITY

Sony Interactive Entertai...


1. A method, comprising:in a computing device, responding to a call from an application for information regarding a processor on the computing device by returning information regarding a different processor than the processor on the computing device wherein responding to a call from the application includes use of microcode stored on a memory in a core of the processor, wherein the microcode returns processor ID data including at least one of processor model, processor family, cache capabilities, translation lookaside buffer capabilities, processor serial number, processor brand, processor manufacturer, thread/core topology, cache topology, extended features, virtual address size, or physical address size that differs depending on whether the processor determines that the application is a new device application or a legacy device application.
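Stripped of the microcode machinery, the claimed behavior is a conditional identity: the processor ID data returned depends on whether the caller is detected as a legacy application. The sketch below invents all field values purely for illustration; it is not how any real CPUID microcode works.

```python
# Invented processor ID data for the two cases the claim distinguishes.
REAL_ID = {"family": 0x19, "model": 0x40, "brand": "NewChip"}
LEGACY_ID = {"family": 0x06, "model": 0x2A, "brand": "OldChip"}

def cpuid(is_legacy_app):
    """Return spoofed processor ID data for legacy applications, and the
    actual processor's data for new-device applications."""
    return LEGACY_ID if is_legacy_app else REAL_ID

print(cpuid(True)["brand"])   # OldChip
print(cpuid(False)["brand"])  # NewChip
```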

US Pat. No. 11,068,290

SYSTEMS AND METHODS FOR PROVIDING INTERACTIVE STREAMING MEDIA

Google LLC, Mountain Vie...


1. A method comprising:executing, by a client simulation system comprising a processor, a simulation of a client device, the simulation of the client device including a virtual application stack comprising a runtime layer and an operating framework;
executing, by the client simulation system, an application in the simulation of the client device, wherein the application generates (1) image data stored in the data storage, and (2) a request to access the runtime layer or the operating framework of the virtual application stack of the simulation of the client device;
transmitting, by the client simulation system, to a first client device via a network interface, a media stream generated using the image data stored in the data storage;
providing, by the client simulation system, responsive to the application generating the request to access the runtime layer or the operating framework, the request to the virtual application stack of the simulation of the client device;
intercepting, using a messaging interface provided by the client simulation system to the simulation of the client device, the request to access the runtime layer or the operating framework of the virtual application stack;
transmitting, by the client simulation system, via the network interface, responsive to intercepting the request to access the runtime layer or the operating framework of the virtual application stack, the request to a corresponding messaging interface forming part of a corresponding application stack of the first client device using the messaging interface, causing the first client device to:access, using a corresponding operating framework or a corresponding runtime layer of the corresponding application stack, sensor information from sensors of the first client device via a hardware abstraction layer of the corresponding application stack;
generate a response to the request that includes the sensor information; and
transmit, using the corresponding messaging interface, the response to the messaging interface of the simulation of the client device;

receiving, by the client simulation system, using the messaging interface for the simulation of the client device, from the first client device, the response to the request; and
providing, by the client simulation system, to the application in the simulation of the client device, the sensor information included in the response received from the first client device.

US Pat. No. 11,068,289

INSTALLATION ASSIST APPARATUS, INSTALLATION ASSIST METHOD, AND COMPUTER PROGRAM PRODUCT

TAIYO YUDEN CO., LTD., T...


1. An installation assist apparatus comprising:a memory; and
a hardware processor coupled to the memory and configured to:receive: an input of installation positions of a first optical wireless communication device and a second optical wireless communication device that perform optical wireless communication; and an input of an angle of elevation representing an inclination of an optical axis center line to a horizontal line, the optical axis center line connecting the first and second optical wireless communication devices;
determine whether each of the first and second optical wireless communication devices is affected by solar light, the determination being carried out based on the installation positions of the first and second optical wireless communication devices, the angle of elevation, an influence angle representing a maximum value of an incident angle of solar light affecting the optical wireless communication devices, and solar positions through a whole year; and
cause a display device to display a result of the determination on whether each of the first and second optical wireless communication devices is affected by solar light,

wherein the hardware processor receives an input of the influence angle and carries out the determination by using the input influence angle.

US Pat. No. 11,068,288

METHOD OF CONTROLLING COMMUNICATION SYSTEM INCLUDING MODE SWITCHING BETWEEN MODES FOR RECEIVING A TOUCH INPUT OR AN AUDIO INPUT, COMMUNICATION SYSTEM, AND STORAGE MEDIUM

HITACHI, LTD., Tokyo (JP...


1. A method of controlling a communication system including a processor, a memory, an audio input apparatus configured to process speech input as audio input, an audio output apparatus, and a touch panel including a display unit configured to process touch input, the method comprising:an inquiry step of generating, by the processor, inquiry information including at least one option, and outputting the inquiry information from one of the audio output apparatus and the touch panel;
an input step of receiving, by the processor, an answer to the inquiry information through one of the audio input apparatus and the touch panel;
a guidance step of generating, by the processor, candidates for guidance information that correspond to the answer, and outputting the candidates for the guidance information from one of the audio output apparatus and the touch panel; and
a mode selection step of choosing, by the processor, a mode suitable for a running status of the communication system from a touch communication mode, in which a user is prompted to use the touch input, and an audio communication mode, in which the user is prompted to use the audio input,
each of the inquiry step and the guidance step comprising using one of the touch communication mode and the audio communication mode that is chosen in the mode selection step,
wherein the running status of the communication system comprises a value of noise detected by the audio input apparatus,
wherein the mode selection step comprises a first step of choosing, by the processor, the audio communication mode when the value of noise is equal to or less than a first threshold, and choosing, by the processor, the touch communication mode when the value of noise exceeds the first threshold,
wherein the running status of the communication system further comprises a number of candidates for the guidance information, and
wherein when the value of noise is equal to or less than the first threshold and the audio communication mode is chosen, the mode selection step further comprises a second step of switching, by the processor, to the touch communication mode when the number of candidates for the guidance information is equal to or less than a second threshold, and
wherein when the value of noise is equal to or less than the first threshold and the audio communication mode is chosen, the mode selection step further comprises a third step of continuing to choose, by the processor, the audio communication mode when the number of candidates for the guidance information exceeds the second threshold.
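The three mode-selection steps of this claim reduce to a two-threshold decision: noise above the first threshold forces touch mode; otherwise audio mode is chosen, but is switched back to touch when the number of guidance candidates is at or below the second threshold. Threshold values below are illustrative only.

```python
def select_mode(noise, num_candidates, noise_threshold, candidate_threshold):
    """Choose between touch and audio communication modes per the claim."""
    if noise > noise_threshold:
        return "touch"                        # first step: too noisy for audio
    if num_candidates <= candidate_threshold:
        return "touch"                        # second step: few candidates
    return "audio"                            # third step: keep audio mode

print(select_mode(noise=10, num_candidates=8,
                  noise_threshold=40, candidate_threshold=3))  # audio
```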

US Pat. No. 11,068,287

REAL-TIME GENERATION OF TAILORED RECOMMENDATIONS ASSOCIATED WITH CLIENT INTERACTIONS

BANK OF AMERICA CORPORATI...


1. A system for real-time generation of tailored recommendations associated with client interactions, the system comprising:at least one non-transitory storage device; and
at least one processing device coupled to the at least one non-transitory storage device, wherein the at least one processing device is configured to:identify a first interaction associated with an associate, wherein the first interaction is an upcoming interaction with a first user;
extract information associated with the first user;
transfer the extracted information to an associate device;
transmit a first set of control signals to the associate device, wherein the first set of control signals cause a graphical user interface of the associate device to display the extracted information to the associate;
identify a type of the first interaction;
generate one or more topics associated with the first interaction based on the type of the first interaction;
transfer the one or more topics to the associate device;
transmit a second set of control signals to the associate device, wherein the second set of control signals cause the graphical user interface of the associate device to display the one or more topics to the associate;
generate, in real-time, one or more tips during the first interaction with the first user based on one or more responses received during the first interaction with the first user;
transmit a third set of control signals to the associate device, wherein the third set of control signals cause the graphical user interface of the associate device to display the one or more tips;
generate one or more tone recommendations in real-time based on the one or more responses received during the first interaction with the first user, wherein the one or more tone recommendations comprise at least one of a neutral tone, a joyful tone, and a formal tone;
transmit a fourth set of control signals to the associate device, wherein the fourth set of control signals cause the graphical user interface of the associate device to display the one or more tone recommendations;
receive screen sharing instructions from the associate;
project, via a communication channel, one or more areas of the graphical user interface of the associate device onto a user device of the first user based on the screen sharing instructions received from the associate;
reprioritize the one or more topics in real-time based on the one or more responses received during the first interaction with the first user; and
transmit a fifth set of control signals to the associate device, wherein the fifth set of control signals transform the graphical user interface of the associate device excluding the one or more areas projected onto the user device to display reprioritization of the one or more topics.


US Pat. No. 11,068,286

SMART CONTEXT AWARE SUPPORT ENGINE FOR APPLICATIONS

Oracle International Corp...


1. A non-transitory computer-readable medium storing computer-executable instructions that when executed by a processor of a computing device causes the processor to:monitor user interaction with a user interface to detect an occurrence of a condition indicative of the user interaction requiring assistance with a user interface element of the user interface, wherein the condition includes detecting an invalid input into the user interface element;
access executable code of the user interface for the user interface element to extract parameters from the executable code of the user interface element, wherein the extracted parameters from the executable code define expected values or valid input values for the user interface element;
evaluate the user interaction and the extracted parameters from the executable code to identify one or more first entity objects;
in response to receiving a query submitted by the user, execute a natural language process upon the query to extract one or more second entity objects by controlling the natural language process to:identify grammar tokens from the query;
perform part of speech tagging upon the grammar tokens to assign the grammar tokens to parts of speech; and
perform lemmatization upon the grammar tokens based upon the parts of speech to create the one or more second entity objects;

query a documentation database using a combination of the one or more first entity objects and the one or more second entity objects to identify one or more documentation topics that are mapped to the one or more first entity objects and the one or more second entity objects;
rank each of the one or more documentation topics to determine which document topics provide a solution to the condition, wherein each rank is based at least in part upon a strength of a correspondence between content within each documentation topic to the combination of the one or more first entity objects and the one or more second entity objects; and
control the user interface to render content of a documentation topic that is selected based upon the ranks of the documentation topics to provide a solution to assist the user interaction;
wherein the solution includes instructions to assist the user to correctly interact with the user interface element and input valid values into the user interface element.
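The ranking step — score each documentation topic by the strength of its correspondence to the combined first and second entity objects — can be approximated with a simple entity-overlap count. The scoring function is an assumption for illustration; the claim does not specify how correspondence strength is computed.

```python
def rank_topics(topic_entities, query_entities):
    """Rank documentation topics by how many of the combined entity objects
    each topic's mapped entities match (highest overlap first)."""
    scores = {topic: len(set(ents) & set(query_entities))
              for topic, ents in topic_entities.items()}
    return sorted(scores, key=scores.get, reverse=True)

topics = {
    "date-format-help": ["date_field", "format", "invalid_input"],
    "login-help": ["password", "username"],
}
ranked = rank_topics(topics, ["date_field", "invalid_input"])
print(ranked[0])  # date-format-help
```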

US Pat. No. 11,068,285

MACHINE-LEARNING MODELS APPLIED TO INTERACTION DATA FOR DETERMINING INTERACTION GOALS AND FACILITATING EXPERIENCE-BASED MODIFICATIONS TO INTERFACE ELEMENTS IN ONLINE ENVIRONMENTS

Adobe Inc., San Jose, CA...


11. A non-transitory computer-readable medium having program code that is stored thereon, the program code executable by one or more processing devices for performing operations comprising:identifying interaction data associated with user interactions with a user interface of an interactive computing environment;
computing goal clusters of the interaction data by performing an n-gram clustering operation based on sequences of the user interactions and time spent in interaction states associated with each of the user interactions, wherein the goal clusters represent interaction goals comprising a goal action or a goal interface display of a set of users performing the sequences of the user interactions;
performing inverse reinforcement learning on the goal clusters to return rewards and policies corresponding to each of the goal clusters;
computing likelihood values of additional sequences of user interactions falling within the goal clusters based on the policies corresponding to each of the goal clusters;
assigning the additional sequences to the goal clusters with greatest likelihood values;
computing interface experience metrics of the additional sequences using the rewards and the policies corresponding to the goal clusters of the additional sequences; and
transmitting the interface experience metrics to an online platform, wherein the interface experience metrics are usable for changing arrangements of interface elements to improve the interface experience metrics.
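
The assignment step in the claim (new sequences go to the goal cluster whose policy gives them the greatest likelihood) can be sketched as below; the policies and sequence are made-up stand-ins for the inverse-reinforcement-learning outputs:

```python
import math

# Illustrative sketch: each goal cluster has a policy (state -> action ->
# probability); a new interaction sequence is assigned to the cluster whose
# policy gives it the greatest likelihood. Names and numbers are invented.

def sequence_log_likelihood(policy, sequence):
    return sum(math.log(policy[state][action]) for state, action in sequence)

def assign_to_cluster(policies, sequence):
    return max(policies, key=lambda c: sequence_log_likelihood(policies[c], sequence))

policies = {
    "checkout-goal": {"cart": {"pay": 0.9, "browse": 0.1}},
    "research-goal": {"cart": {"pay": 0.2, "browse": 0.8}},
}
seq = [("cart", "pay"), ("cart", "pay")]
print(assign_to_cluster(policies, seq))  # -> checkout-goal
```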

US Pat. No. 11,068,284

SYSTEM FOR MANAGING USER EXPERIENCE AND METHOD THEREFOR

Huuuge Global Ltd., Larn...


1. A system for managing a user experience in a gaming application portal hosting a plurality of different gaming applications including a first gaming application and a second gaming application, the system comprising:an application recorder embodied on a non-transitory computer readable storage device and configured to capture data corresponding to actions of a first user in a playing of the first gaming application of the gaming application portal, wherein the data captured by the application recorder comprises skills executed by the first user to perform each of the actions in the first gaming application and at least one of a duration of time taken by the first user to perform each of the actions in the first gaming application and a number of levels successfully completed by the first user in the first gaming application; and
a hardware processor communicatively coupled to the application recorder, the hardware processor comprising:a first module configured to:acquire the captured data from the application recorder; and
train a first digital entity based on the captured data to simulate the actions of the first user, wherein the first module implements machine learning algorithms for training of the first digital entity; and

a second module configured to:implement the first digital entity at an initial level in an active state of the second gaming application, the first digital entity being configured to execute the simulated actions of the first user in a playing of the second gaming application, the playing of the second gaming application being independent of the playing of the first gaming application; and

control a point of entry of the first user in the second gaming application based on the actions performed by the first digital entity during the playing of the second gaming application by:determining a level achieved by the first digital entity during the playing of the second gaming application;
discontinuing a playing action of the first digital entity in the second gaming application at the determined level; and
enabling the first user to enter and play the second gaming application starting at a next level subsequent to the determined level.



US Pat. No. 11,068,283

SEMICONDUCTOR APPARATUS, OPERATION METHOD THEREOF, AND STACKED MEMORY APPARATUS HAVING THE SAME

SK hynix Inc., Icheon (K...


1. A semiconductor apparatus comprising:a storage device including a data area and a code area and storing program codes provided from a host device in the code area; and
a controller including a plurality of unit processors, each of the plurality of unit processors including an internal memory; and
wherein the controller is configured to receive an operation policy, which includes one or more of a processor ID, a code ID, a code address, and an option number, from the host device and to control the plurality of unit processors based on the operation policy,
wherein the processor ID is an identifier for each of the plurality of unit processors, the code ID is an identifier for each of the program codes, the code address indicates a position of the code area where each of the program codes is stored, and the option number represents a specific combination of the processor ID, the code ID, and the code address, and
wherein the controller is configured to receive the operation policy including only the option number when the operation policy is not changed, and receive the operation policy including the processor ID, the code ID, the code address, and the option number when the operation policy is changed, from the host device.
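
The option-number shortcut in the claim amounts to the controller caching each full (processor ID, code ID, code address) combination under its option number, so an unchanged policy needs only the option number on the wire. A minimal sketch, with invented field names:

```python
# Sketch of the claimed host/controller protocol: when the operation policy is
# unchanged, the host sends only the option number and the controller looks up
# the full combination it cached earlier. All names are illustrative.

class Controller:
    def __init__(self):
        self.options = {}  # option number -> (processor_id, code_id, code_address)

    def receive_policy(self, option, processor_id=None, code_id=None, code_address=None):
        if processor_id is not None:  # changed policy: full tuple plus option number
            self.options[option] = (processor_id, code_id, code_address)
        return self.options[option]   # unchanged policy: option number alone suffices

ctrl = Controller()
ctrl.receive_policy(7, processor_id=2, code_id=5, code_address=0x1000)
print(ctrl.receive_policy(7))  # -> (2, 5, 4096)
```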

US Pat. No. 11,068,282

APPARATUSES, METHODS AND SYSTEMS FOR PERSISTING VALUES IN A COMPUTING ENVIRONMENT

Refinitiv US Organization...


1. A method for persisting values in a computing environment, comprising:loading a plurality of classes associated with a computer program into memory via processor by way of a class loader;
scanning at least one class of said plurality of classes loaded into memory via processor for at least one persistence-annotated field within said at least one class, said at least one persistence-annotated field having a persistence annotation indicating that a data value associated with the annotated field is to be persisted; and
writing byte code into said at least one class containing said at least one persistence-annotated field via processor, said byte code causing a first object instantiated from said at least one class to have said at least one persistence-annotated field therein.
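
The claim targets JVM byte-code weaving, but the scanning step has a rough Python analog: mark fields with a sentinel annotation and collect the annotated ones. This sketch only mirrors the scan; it does not model the byte-code-writing step, and all names (`Persist`, `persisted_fields`) are invented:

```python
# Rough Python analog of the claimed scan: find fields carrying a persistence
# annotation on a loaded class. The sentinel class and helper are illustrative.

class Persist:
    """Sentinel annotation marking a field whose value should be persisted."""

def persisted_fields(cls):
    # Mirror of the scanning step: collect fields annotated for persistence.
    return sorted(name for name, ann in getattr(cls, "__annotations__", {}).items()
                  if ann is Persist)

class Session:
    user_id: Persist
    token: Persist
    scratch: int  # not annotated, so not persisted

print(persisted_fields(Session))  # -> ['token', 'user_id']
```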

US Pat. No. 11,068,281

ISOLATING APPLICATIONS AT THE EDGE

Fastly, Inc., San Franci...


1. A method comprising:identifying, in an isolation runtime environment, a request from a Hypertext Transfer Protocol (HTTP) accelerator service to be processed by an application;
identifying an isolation resource from a plurality of isolation resources reserved in advance of the request;
identifying an artifact associated with the application;
initiating execution of code for the application and passing context to the code;
after initiating execution of the code, copying data from the artifact to the isolation resource using the context; and
returning control to the HTTP accelerator service upon executing the code.

US Pat. No. 11,068,280

METHODS AND SYSTEMS FOR PERFORMING AN EARLY RETRIEVAL PROCESS DURING THE USER-MODE STARTUP OF AN OPERATING SYSTEM

HyTrust, Inc., Mountain ...


1. A method for a computing system, the computing system comprising a processor, a memory and a data storage device, the method comprising:loading, by a boot loader, an operating system of the computing system from a boot partition into the memory;
during a user-mode startup of the operating system of the computing system that occurs after the loading of the operating system from the boot partition into the memory, the user-mode startup performed by an execution of one or more user-mode processes, and prior to an execution of a service control manager process, pausing the user-mode startup of the operating system, and executing an early retrieval process, the execution of the early retrieval process including: (i) retrieving, with network services other than that initialized by the service control manager process and from a network-attached key management server, one or more of a decryption key corresponding to an encrypted file, a decryption key corresponding to an encrypted folder, a decryption key corresponding to an encrypted data partition, or an access control policy, and (ii) if the decryption key corresponding to the encrypted file is retrieved, transmitting the decryption key corresponding to the encrypted file to an access control driver of the operating system, if the decryption key corresponding to the encrypted folder is retrieved, transmitting the decryption key corresponding to the encrypted folder to the access control driver, if the decryption key corresponding to the encrypted data partition is retrieved, transmitting the decryption key corresponding to the encrypted data partition to a disk filter driver of the operating system, and if the access control policy is retrieved, transmitting the access control policy to the access control driver, wherein the execution of the early retrieval process begins at any time during or after an execution of a master session manager process and prior to the execution of the service control manager process; and
resuming the user-mode startup of the operating system with at least one of the encrypted file, the encrypted folder, the encrypted data partition or the access control policy accessible to the operating system.

US Pat. No. 11,068,279

CONCURRENT REPLACEMENT OF DISTRIBUTED CONVERSION AND CONTROL ASSEMBLY

INTERNATIONAL BUSINESS MA...


1. A method, comprising:identifying, by a computing device, a first distributed conversion and control assembly (DCCA) in a central electronics complex (CEC) of a computer system, the CEC containing the first DCCA and a second DCCA, each of the first DCCA and the second DCCA having a flexible service processor (FSP);
determining, by the computing device, that the computer system satisfies preconditions for concurrent replacement of the first DCCA;
disabling, by the computing device, control software for a thermal and power management device (TPMD) of the first DCCA;
fencing off, by the computing device, the first DCCA;
depowering, by the computing device, the first DCCA;
receiving, by the computing device, a new media access control (MAC) address of a replacement DCCA;
reconfiguring, by the computing device, an operating system of the CEC to recognize the new MAC address of the replacement DCCA;
powering on, by the computing device, the replacement DCCA;
removing, by the computing device, the fencing off of the first DCCA; and
resetting, by the computing device, an FSP of the replacement DCCA.

US Pat. No. 11,068,278

DUAL INLINE MEMORY MODULE WITH MULTIPLE BOOT PROCESSES BASED ON FIRST AND SECOND ENVIRONMENTAL CONDITIONS

Dell Products L.P., Roun...


1. An information handling system, comprising:a memory controller;
a dual in-line memory module (DIMM) coupled to the memory controller via a memory channel; and
a processor configured:during a first in time boot process of the information handling system, to determine a first environmental condition of the information handling system, and to initialize the memory controller and the DIMM to determine a first set of initialization parameters for the memory controller and the DIMM;
store the first environmental condition and the first set of initialization parameters in a non-volatile memory; and
during a second in time boot process of the information handling system, to determine if a second environmental condition is different than the first environmental condition, if the second environmental condition is not different then to continue the second in time boot process initializing the memory controller and the DIMM using the first set of initialization parameters, and if the second environmental condition is different then to initialize the memory controller and the DIMM to determine a second set of initialization parameters for the memory controller and the DIMM.
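
The two-boot flow in the claim is essentially a cache of memory-training results keyed on the environmental condition. A minimal sketch, where `train_memory` and the condition values are illustrative stand-ins for full memory-controller/DIMM initialization:

```python
# Sketch of the claimed flow: initialization parameters are reused on a later
# boot only when the environmental condition (e.g. a temperature band) matches
# the one stored in non-volatile memory. Names and values are illustrative.

def train_memory(condition):
    return {"trained_for": condition}  # stand-in for full memory training

nvram = {}  # survives across boots

def boot(condition):
    if nvram.get("condition") == condition:
        return nvram["params"], "fast"    # reuse stored initialization parameters
    params = train_memory(condition)      # retrain and store for the next boot
    nvram.update(condition=condition, params=params)
    return params, "full"

boot("cold")            # first boot: full training
_, path = boot("cold")  # second boot, same condition: reuse
print(path)  # -> fast
```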


US Pat. No. 11,068,277

MEMORY ALLOCATION TECHNIQUES AT PARTIALLY-OFFLOADED VIRTUALIZATION MANAGERS

Amazon Technologies, Inc....


1. A method, comprising:transmitting, from an offloaded virtualization management component of a virtualization host to a hypervisor of the virtualization host, a memory inventory request;
receiving, at the offloaded virtualization management component from the hypervisor, a response to the memory inventory request, wherein the response identifies a first portion of a memory of the virtualization host; and
assigning, by the offloaded virtualization management component, at least a subset of the first portion to a first virtual machine.
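
The three claimed steps form a simple handshake, sketched below with invented message shapes (base/size fields and class names are illustrative, not from the patent):

```python
# Minimal sketch of the claimed handshake: the offloaded component asks the
# hypervisor which portion of host memory it may manage, then assigns a
# subset of that portion to a virtual machine.

class Hypervisor:
    def handle_inventory_request(self):
        # Identify a portion of host memory (base address, size in bytes).
        return {"base": 0x1000_0000, "size": 1 << 30}

class OffloadedManager:
    def __init__(self, hypervisor):
        self.region = hypervisor.handle_inventory_request()  # inventory response
        self.next_free = self.region["base"]

    def assign_to_vm(self, vm, size):
        base = self.next_free
        assert base + size <= self.region["base"] + self.region["size"]
        self.next_free += size
        return {"vm": vm, "base": base, "size": size}

mgr = OffloadedManager(Hypervisor())
grant = mgr.assign_to_vm("vm-1", 256 << 20)
print(hex(grant["base"]))  # -> 0x10000000
```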

US Pat. No. 11,068,276

CONTROLLED CUSTOMIZATION OF SILICON INITIALIZATION

Intel Corporation, Santa...


1. A boot code device, comprising:initial boot block (IBB) circuitry to store non-customizable initialization data for at least one functional circuit in communication with the IBB circuitry; and
global platform database (GPD) circuitry to store at least one customizable parameter associated with the at least one functional circuit in communication with the IBB circuitry;
wherein, the IBB circuitry is also to generate a pointer that points to a location where the at least one customizable parameter is stored in the GPD circuitry; and the IBB circuitry is further to determine if the at least one customizable parameter has been updated, to modify the pointer to point to the updated customizable parameter of the GPD circuitry, and to utilize the updated customizable parameter of the GPD circuitry.
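
The pointer indirection in the claim can be modeled in a few lines: the boot block retargets its pointer when a parameter has been updated, so initialization always reads the current value without modifying the non-customizable boot block itself. The slot layout and parameter name here are invented:

```python
# Sketch of the claimed indirection: the IBB-like boot block keeps a pointer
# into a global platform database and retargets it when a customizable
# parameter has been updated. Names and values are illustrative.

gpd = {0: {"uart_baud": 115200}, 1: {"uart_baud": 921600}}  # parameter slots

class BootBlock:
    def __init__(self):
        self.pointer = 0  # points at the original parameter slot

    def check_for_update(self, latest_slot):
        if latest_slot != self.pointer:  # parameter was updated
            self.pointer = latest_slot   # modify the pointer, not the boot block data
        return gpd[self.pointer]["uart_baud"]

ibb = BootBlock()
print(ibb.check_for_update(1))  # -> 921600
```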

US Pat. No. 11,068,275

PROVIDING A TRUSTWORTHY INDICATION OF THE CURRENT STATE OF A MULTI-PROCESSOR DATA PROCESSING APPARATUS

ARM Limited, Cambridge (...


1. A data processing apparatus comprising:a plurality of processing devices; and
power control circuitry configured to:
measure a current workload of a processing device of the plurality of processing devices; and
based on a comparison of the processing device workload to an expected value, reduce the power consumption of the processing device and allocate a workload to one or more other processing devices of the plurality of processing devices,
wherein the power control circuitry is configured to reduce power consumption by reducing power consumption for the processing device if it is not performing a high priority task.

US Pat. No. 11,068,274

PRIORITIZED INSTRUCTIONS IN AN INSTRUCTION COMPLETION TABLE OF A SIMULTANEOUS MULTITHREADING PROCESSOR

International Business Ma...


1. A method of operating a simultaneous multithreading processor (SMP) configured to execute a plurality of threads, the method comprising:executing, using a dedicated prioritization resource of the SMP dedicated to executing instructions included in a prioritized thread, instructions included in a first thread of the plurality of threads that is designated as the prioritized thread;
updating an instruction completion table based on information from hang detection logic of the SMP, wherein the hang detection logic is configured to identify one or more hung threads of the plurality of threads by determining, based on a respective timer for each of the plurality of threads, whether a respective instruction of the respective thread has, within a given number of processor cycles, neither been completed nor flushed, wherein the respective timer is respectively reset upon each of completion and flush of a previous instruction of the respective thread, the information identifying the one or more hung threads;
selecting a second thread of the plurality of threads according to a predefined scheme that cycles through the plurality of threads in a predefined order;
determining, using the instruction completion table, whether any instructions of the second thread have been dispatched;
upon determining that no instructions of the second thread have been dispatched, determining that the second thread is ineligible to have an instruction prioritized;
selecting a third thread of the plurality of threads according to the predefined scheme;
accessing the instruction completion table to determine whether the third thread is eligible to have a first instruction of the third thread prioritized;
responsive to determining that the third thread is eligible, designating the third thread as the prioritized thread;
performing a next-to-complete plus one (NTC+1) flush of the third thread; and
executing the first instruction of the third thread using the dedicated prioritization resource.
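
The hang-detection logic recited above (a per-thread timer reset on either completion or flush) can be sketched independently of the rest of the claim; the cycle threshold and names are illustrative:

```python
# Sketch of the claimed hang detection: each thread has a timer that resets
# whenever one of its instructions completes or is flushed; a thread whose
# timer exceeds the cycle budget is reported as hung. Names are illustrative.

HANG_CYCLES = 100

class HangDetector:
    def __init__(self, threads):
        self.timers = {t: 0 for t in threads}

    def tick(self):
        for t in self.timers:
            self.timers[t] += 1

    def on_complete_or_flush(self, thread):
        self.timers[thread] = 0  # reset on either completion or flush

    def hung_threads(self):
        return [t for t, cycles in self.timers.items() if cycles > HANG_CYCLES]

hd = HangDetector(["T0", "T1"])
for _ in range(150):
    hd.tick()
    hd.on_complete_or_flush("T0")  # T0 keeps making progress; T1 never does
print(hd.hung_threads())  # -> ['T1']
```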

US Pat. No. 11,068,273

SWAPPING AND RESTORING CONTEXT-SPECIFIC BRANCH PREDICTOR STATES ON CONTEXT SWITCHES IN A PROCESSOR

Microsoft Technology Lice...


1. A branch prediction circuit, comprising:a private branch prediction memory configured to store at least one branch prediction state for a current context of a current process executing in an instruction processing circuit of a processor;
the branch prediction circuit configured to:speculatively predict an outcome of a branch instruction in the current process executing in the instruction processing circuit, based on a branch prediction state among the at least one branch prediction state in the current context in the private branch prediction memory associated with the branch instruction;
receive a process identifier identifying a new context swapped into the instruction processing circuit; and
in response to the process identifier indicating the new context different from the current context swapped into the instruction processing circuit:cause at least one branch prediction state associated with the new context to be stored as at least one branch prediction state in the private branch prediction memory.
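
The swap-and-restore behavior can be modeled with a per-process store of predictor state; the 2-bit-counter representation and names below are illustrative, not the patent's implementation:

```python
# Sketch of the claimed swap: the private predictor memory holds state for the
# current context only; when a new process identifier arrives, that context's
# saved state is restored so one process never observes another's history.

class BranchPredictor:
    def __init__(self):
        self.saved = {}          # per-process-ID branch prediction state
        self.current_pid = None
        self.private = {}        # private memory: branch PC -> 2-bit counter

    def on_context_switch(self, pid):
        if pid == self.current_pid:
            return               # same context: keep private state as-is
        if self.current_pid is not None:
            self.saved[self.current_pid] = self.private
        self.private = self.saved.get(pid, {})  # restore the new context's state
        self.current_pid = pid

    def predict(self, pc):
        return self.private.get(pc, 1) >= 2     # weakly-not-taken default

bp = BranchPredictor()
bp.on_context_switch(10)
bp.private[0x40] = 3              # train: strongly taken in process 10
bp.on_context_switch(20)
assert bp.predict(0x40) is False  # process 20 sees no stale state
bp.on_context_switch(10)
print(bp.predict(0x40))  # -> True
```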



US Pat. No. 11,068,272

TRACKING AND COMMUNICATION OF DIRECT/INDIRECT SOURCE DEPENDENCIES OF PRODUCER INSTRUCTIONS EXECUTED IN A PROCESSOR TO SOURCE DEPENDENT CONSUMER INSTRUCTIONS TO FACILITATE PROCESSOR OPTIMIZATIONS

Microsoft Technology Lice...


1. A processor, comprising:an instruction processing circuit comprising one or more instruction pipelines and configured to fetch a plurality of instructions from a memory into an instruction pipeline among the one or more instruction pipelines;
an instruction dependency tracking table circuit comprising a plurality of source entries each associated with a respective source, and each configured to store a source dependency indicator indicating a producer instruction of the associated source; and
an instruction dependency tracking circuit configured to:receive an instruction from an instruction pipeline among the one or more instruction pipelines;
determine if the received instruction comprises a source as a target operand; and
in response to determining the received instruction comprises the source as the target operand:store an instruction identifier of the received instruction in an updated source dependency indicator in a source entry associated with the source in the instruction dependency tracking table circuit; and
communicate the updated source dependency indicators stored in the plurality of source entries to at least one processing circuit in the instruction processing circuit configured to process an instruction having a source dependency in the updated source dependency indicators.



US Pat. No. 11,068,271

ZERO CYCLE MOVE USING FREE LIST COUNTS

Apple Inc., Cupertino, C...


1. A processor comprising:a free list comprising a plurality of entries with a number of the plurality of entries being less than or equal to a number of rename registers in the processor, including:one or more first entries for rename registers that are not currently assigned;
one or more second entries for rename registers that are currently assigned and unduplicated; and
one or more third entries for rename registers that are currently assigned and duplicated;
wherein at least one of each of the first entries, the second entries, and the third entries:is associated with a corresponding rename register identifier (ID); and
is configured to store a count of a number of mappings for the corresponding rename register ID;


a register file separate from the free list; and
a register rename unit configured to:determine that both a source operand and a destination operand of a given move instruction are registers;
identify a given rename register ID associated with the source operand; and
based at least in part on determining a count of a number of mappings in the free list for the given rename register ID being less than a maximum value:assign the given rename register ID to the destination operand of the given move instruction; and
convey the given rename register ID from a reorder buffer to instructions younger in program order than the move instruction that have a data dependency on the move instruction.
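
The count check at the heart of the claim can be sketched as a rename map plus a per-physical-register mapping count: a register-to-register move is "executed" by duplicating the mapping, consuming no execution cycle. The maximum count and map layout are illustrative:

```python
# Sketch of the claimed zero-cycle move: when both operands of a move are
# registers and the source's rename register has spare mapping count, the
# destination is mapped to the same rename register instead of issuing a real
# move. MAX_DUPS and the table shapes are invented for illustration.

MAX_DUPS = 3

rename_map = {"r1": "p7"}  # architectural -> rename (physical) register
map_count = {"p7": 1}      # free-list count of mappings per rename register

def rename_move(dst, src):
    phys = rename_map[src]
    if map_count[phys] < MAX_DUPS:
        rename_map[dst] = phys  # duplicate the mapping: zero-cycle move
        map_count[phys] += 1
        return True             # no execution unit needed
    return False                # fall back to a real move operation

rename_move("r2", "r1")
print(rename_map["r2"], map_count["p7"])  # -> p7 2
```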



US Pat. No. 11,068,270

APPARATUS AND METHOD FOR GENERATING AND PROCESSING A TRACE STREAM INDICATIVE OF INSTRUCTION EXECUTION BY PROCESSING CIRCUITRY

ARM LIMITED, Cambridge (...


26. A method of processing a trace stream generated to indicate instruction execution by processing circuitry, comprising:receiving the trace stream comprising a plurality of trace elements indicative of execution by the processing circuitry of predetermined instructions within a sequence of instructions executed by the processing circuitry, said sequence including a branch-future instruction that indicates an identified instruction following said branch-future instruction within said sequence, execution of the branch-future instruction being such that subsequent encountering of said identified instruction in said sequence by the processing circuitry causes execution of the identified instruction to be prevented and instead causes the processing circuitry to branch to a target address identified by the branch-future instruction irrespective of a form of the identified instruction;
traversing, responsive to each trace element, a program image from a current instruction address until a next one of the predetermined instructions is detected within said program image, and producing from the program image information indicative of the instructions between said current instruction address and said next one of the predetermined instructions;
responsive to detecting the branch-future instruction when traversing said program image, storing within a branch control cache branch control information derived from the branch-future instruction; and
when detecting with reference to the branch control information that the identified instruction has been reached during traversal of the program image, treating said identified instruction as the next one of said predetermined instructions.

US Pat. No. 11,068,269

INSTRUCTION DECODING USING HASH TABLES

Parallels International G...


1. A method, comprising:generating, by a computer system, an aggregated vector of differentiating bit scores representing at least a subset of a set of processor instructions;
identifying, based on the aggregated vector of differentiating bit scores, one or more opcode bit positions; and
constructing a hash table implementing a current level of a decoding tree representing the subset of the set of processor instructions, wherein the hash table is indexed by one or more opcode bits identified by the one or more opcode bit positions.
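
The three claimed steps can be sketched end to end; the scoring rule below (product of zero-count and one-count, so balanced splits score highest) is an invented stand-in for the patent's "differentiating bit scores":

```python
# Rough sketch: score each bit position by how well it differentiates the
# instruction encodings, pick the highest-scoring positions as opcode bits,
# and index a hash table by those bits. Names and the scoring rule are invented.

def differentiating_scores(encodings, width=8):
    scores = []
    for bit in range(width):
        values = [(e >> bit) & 1 for e in encodings.values()]
        scores.append(values.count(0) * values.count(1))  # balanced splits score high
    return scores

def build_decode_table(encodings, num_bits=2, width=8):
    scores = differentiating_scores(encodings, width)
    positions = sorted(range(width), key=lambda b: scores[b], reverse=True)[:num_bits]
    table = {}
    for name, enc in encodings.items():
        key = tuple((enc >> b) & 1 for b in positions)
        table.setdefault(key, []).append(name)
    return positions, table

encodings = {"ADD": 0b0000_0001, "SUB": 0b0100_0001, "LD": 0b1000_0001, "ST": 0b1100_0001}
positions, table = build_decode_table(encodings)
key = tuple((encodings["SUB"] >> b) & 1 for b in positions)
print(table[key])  # -> ['SUB']
```

In a full decoder, buckets holding more than one instruction would recurse into the next level of the decoding tree.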

US Pat. No. 11,068,268

DATA STRUCTURE PROCESSING

Arm Limited, Cambridge (...


1. An apparatus comprising:an instruction decoder to decode instructions; and
processing circuitry to perform data processing in response to the instructions decoded by the instruction decoder; in which:
in response to a data structure processing instruction specifying at least one input data structure identifier and an output data structure identifier, the instruction decoder is configured to control the processing circuitry to perform a processing operation on at least one input data structure identified by the at least one input data structure identifier to generate an output data structure identified by the output data structure identifier;
the at least one input data structure and the output data structure each comprise an arrangement of data corresponding to a plurality of memory addresses; and

the apparatus comprises a plurality of sets of one or more data structure metadata registers, each set of one or more data structure metadata registers associated with a corresponding data structure identifier and designated to hold address-indicating metadata for identifying the plurality of memory addresses for the data structure identified by the corresponding data structure identifier, in which
said data structure metadata registers associated with a given data structure identifier comprise one of:a fixed subset of a plurality of general purpose registers which are also accessible in response to non-data-structure-processing instructions supported by the instruction decoder and the processing circuitry; and
a dedicated set of data structure metadata registers, separate from said plurality of general purpose registers.

US Pat. No. 11,068,267

HIGH BANDWIDTH LOGICAL REGISTER FLUSH RECOVERY

INTERNATIONAL BUSINESS MA...


1. A method comprising:receiving a flush request at a processing unit, the processing unit in a current state defined by contents of registers in a register file, the processing unit comprising a plurality of slices, each of the plurality of slices comprising an execution unit, and the flush request including an identifier of a previously issued instruction; and
restoring the processing unit to a previous state defined by contents of the registers in the register file prior to the previously issued instruction being issued, the restoring comprising:searching previous state buffers in at least two of the plurality of slices to locate data describing the contents of the registers in the register file prior to the previously issued instruction being issued;
combining the located data to generate results of the searching, wherein a first portion of the located data is received from one of the at least two of the plurality of slices via a first data path and a second portion of the located data is received from another of the at least two of the plurality of slices via a second data path different than the first data path; and
updating the contents of the registers in the register file based at least in part on the results, the updating via a single port into the register file, thereby allowing the single port into the register file to be shared between the at least two of the plurality of slices.


US Pat. No. 11,068,266

HANDLING AN INPUT/OUTPUT STORE INSTRUCTION

INTERNATIONAL BUSINESS MA...


1. A data processing system for handling an input/output store instruction, the data processing system comprising:a data processing unit configured to perform a method, the method comprising:identifying an input/output function by an address specified using the input/output store instruction, the input/output store instruction specifying at least the input/output function with an offset through the address, at least one of data to be transferred and a pointer to data to be transferred, and a length of the data;
verifying whether access to the input/output function is allowed on an address space and on a guest instance level;
completing the input/output store instruction before an execution of the input/output store instruction in a selected component of the data processing system different from the data processing unit is completed, the selected component configured to asynchronously load from and store data to at least one external device;
providing notification through an interrupt, based on detecting an error during an asynchronous execution of the input/output store instruction in the data processing unit;
separately detecting, by hardware, errors to ensure that the input/output store instruction has not yet been forwarded to an input/output bus;
keeping store information for retries of executing the input/output store instruction;
analyzing errors and checking for a retry possibility; and
triggering one or more retries of executing the input/output store instruction.


US Pat. No. 11,068,265

SEQUENCE ALIGNMENT METHOD OF VECTOR PROCESSOR

Samsung Electronics Co., ...


1. A sequence alignment method of a vector processor, the sequence alignment method comprising:loading a sequence, the sequence being an instance of vector data, the instance of vector data including N elements, wherein N=2^n, n being a natural number;
repeatedly performing, for each case when n=1, 2, . . . , log2(N),dividing the sequence into a set of two groups, each group including a common quantity of i elements, wherein i is a natural number, and
aligning respective i-th elements of each pair of adjacent groups to generate a sequence of sorted elements according to a single instruction multiple data (SIMD) mode,
wherein the dividing and aligning includes generating a copy sequence in a different order from the sequence by using a permutation operation, and
performing “minmax” operations on the sequence and the copy sequence,

wherein the dividing and the aligning is repeatedly performed to generate a new sequence of sorted elements,
wherein the repeatedly performing the dividing and the aligning includes performing the dividing and the aligning only once for a case when n=1, and
performing the dividing and the aligning 2^(n-1) times for a case when n=2, . . . , log2(N); and


transmitting the new sequence of sorted elements as output data.
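
The permute-then-minmax pattern recited above is the shape of a SIMD bitonic sorting network. The sketch below models a vector register as a Python list and uses the standard bitonic schedule, which is assumed (not stated by the patent) to match the claim's n = 1 .. log2(N) passes:

```python
# Sketch of the claimed data-parallel alignment: each pass builds a permuted
# copy of the vector and takes element-wise min/max ("minmax") against it, as
# a SIMD bitonic sorting network does. Pure-Python model; names illustrative.

def minmax_pass(vec, partner_of, keep_min):
    copy = [vec[partner_of(i)] for i in range(len(vec))]  # permutation operation
    return [min(v, c) if keep_min(i) else max(v, c)       # "minmax" operation
            for i, (v, c) in enumerate(zip(vec, copy))]

def bitonic_sort(vec):
    n = len(vec)  # must be a power of two, N = 2^n elements
    k = 2
    while k <= n:            # outer stages, one per case of n
        j = k // 2
        while j >= 1:        # later stages repeat the inner pass
            vec = minmax_pass(
                vec,
                partner_of=lambda i, j=j: i ^ j,
                keep_min=lambda i, j=j, k=k: ((i & j) == 0) == ((i & k) == 0),
            )
            j //= 2
        k *= 2
    return vec

print(bitonic_sort([7, 3, 6, 2, 5, 1, 8, 4]))  # -> [1, 2, 3, 4, 5, 6, 7, 8]
```

On real SIMD hardware each `minmax_pass` is a shuffle plus vector min/max instructions over all lanes at once.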

US Pat. No. 11,068,264

PROCESSORS, METHODS, SYSTEMS, AND INSTRUCTIONS TO LOAD MULTIPLE DATA ELEMENTS TO DESTINATION STORAGE LOCATIONS OTHER THAN PACKED DATA REGISTERS

Intel Corporation, Santa...


1. A processor comprising:a cache;
a decode unit to decode an instruction, the instruction to indicate a packed data register of a plurality of packed data registers that is to store a source packed memory address information, the source packed memory address information to include a plurality of memory address information data elements; and
an execution unit coupled with the decode unit and the cache, the execution unit to execute the decoded instruction, to:load a plurality of data elements from a plurality of memory addresses that are each to correspond to a different one of the plurality of memory address information data elements;
configure a cache line in the cache that corresponds to a destination location of the instruction to be unreadable and unevictable;
store the plurality of loaded data elements in the cache line; and
configure the cache line as readable after the plurality of loaded data elements have been stored in the cache line.


US Pat. No. 11,068,263

SYSTEMS AND METHODS FOR PERFORMING INSTRUCTIONS TO CONVERT TO 16-BIT FLOATING-POINT FORMAT

Intel Corporation, Santa...


1. A processor comprising:a control register to specify a rounding mode;
fetch circuitry to fetch a format conversion instruction;
a decode unit to decode the format conversion instruction, the format conversion instruction having an opcode, a first field to specify a source vector register, a second field to specify a destination vector register, the source vector register to store a source vector having a plurality of 32-bit single-precision floating point data elements; and
execution circuitry coupled with the decode unit, the execution circuitry to execute the decoded format conversion instruction to:convert the 32-bit single-precision floating point data elements of the source vector to corresponding 16-bit floating point data elements, according to the rounding mode specified by the control register, the 16-bit floating point data elements having a format, the format including a sign bit, an 8-bit exponent, seven explicit mantissa bits, and one implicit mantissa bit; and
store the 16-bit floating point data elements in a first half of a result in the destination vector register, wherein the format conversion instruction is to specify whether a second half of the result is to be zeroes.
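
The claimed 16-bit format (sign bit, 8-bit exponent, seven explicit mantissa bits) is the bfloat16 layout, so the conversion amounts to dropping the low 16 mantissa bits of the IEEE 754 single-precision encoding. This sketch models one rounding mode the control register could specify, round-to-nearest-even; it is a software illustration, not the patent's circuitry:

```python
import struct

# Sketch: narrow a 32-bit single-precision float to the 16-bit bfloat16-style
# format (sign, 8-bit exponent, 7 explicit mantissa bits) with
# round-to-nearest-even, by rounding away the low 16 mantissa bits.

def f32_to_bf16_bits(x):
    bits = struct.unpack("<I", struct.pack("<f", x))[0]
    # Round to nearest even: bias by 0x7FFF plus the LSB that will survive.
    rounding_bias = 0x7FFF + ((bits >> 16) & 1)
    return ((bits + rounding_bias) >> 16) & 0xFFFF

def bf16_bits_to_f32(h):
    # Widening back is exact: the dropped mantissa bits are simply zero.
    return struct.unpack("<f", struct.pack("<I", h << 16))[0]

print(hex(f32_to_bf16_bits(1.0)))  # -> 0x3f80
print(bf16_bits_to_f32(f32_to_bf16_bits(3.140625)))  # exactly representable
```

(This sketch does not special-case NaN inputs, which the related claim below handles by forcing the most significant mantissa bit to one.)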


US Pat. No. 11,068,262

SYSTEMS AND METHODS FOR PERFORMING INSTRUCTIONS TO CONVERT TO 16-BIT FLOATING-POINT FORMAT

Intel Corporation, Santa...


1. A chip comprising:a plurality of memory controllers;
a level-two (L2) cache memory coupled to the plurality of memory controllers;
a processor coupled to the plurality of memory controllers, and coupled to the L2 cache memory, the processor having a plurality of cores, including a core that, in response to a format conversion instruction having a first source operand including a first 32-bit single-precision floating point data element, and a second source operand including a second 32-bit single-precision floating point data element, is to:convert the first 32-bit single-precision floating point data element to a first 16-bit floating point data element, wherein, when the first 32-bit single-precision floating point data element is a normal data element, conversion is to be performed according to a rounding mode specified by the format conversion instruction, and the first 16-bit floating point data element is to have a sign bit, an 8-bit exponent, seven explicit mantissa bits, and one implicit mantissa bit, and wherein, when the first 32-bit single-precision floating point data element is a not-a-number (NaN) data element, the first 16-bit floating point data element is to have a mantissa with a most significant bit set to one;
convert the second 32-bit single-precision floating point data element to a second 16-bit floating point data element, wherein, when the second 32-bit single-precision floating point data element is a normal data element, conversion is to be performed according to the rounding mode, and the second 16-bit floating point data element is to have a sign bit, an 8-bit exponent, seven explicit mantissa bits, and one implicit mantissa bit, and wherein when the second 32-bit single-precision floating point data element is a NaN data element, the second 16-bit floating point data element is to have a mantissa with a most significant bit set to one; and
store the first 16-bit floating point data element in a lower order half of a destination register and the second 16-bit floating point data element in a higher order half of the destination register;

an interconnect coupled to the processor; and
a bus controller coupled to the processor.
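The 16-bit format claimed above (sign bit, 8-bit exponent, seven explicit mantissa bits) is the bfloat16 layout, and the conversion can be sketched in Python; the bit-twiddling for round-to-nearest-even and the helper names are illustrative, not the instruction's definitive implementation:

```python
import struct

def f32_bits(value):
    """Reinterpret an FP32 value as its 32-bit pattern."""
    return struct.unpack("<I", struct.pack("<f", value))[0]

def convert_to_bf16(value):
    """Convert one FP32 element to a 16-bit element with a sign bit,
    8-bit exponent, and seven explicit mantissa bits. Normal values are
    rounded to nearest even; a NaN input yields a 16-bit NaN whose
    mantissa most significant bit is set to one, as the claim requires."""
    bits = f32_bits(value)
    exp = (bits >> 23) & 0xFF
    mant = bits & 0x7FFFFF
    if exp == 0xFF and mant != 0:              # NaN: force the quiet bit
        return ((bits >> 16) | 0x0040) & 0xFFFF
    lsb = (bits >> 16) & 1                     # round to nearest even
    return ((bits + 0x7FFF + lsb) >> 16) & 0xFFFF

def pack_pair(first, second):
    """Store the first converted element in the lower order half of a
    32-bit destination and the second in the higher order half."""
    return convert_to_bf16(first) | (convert_to_bf16(second) << 16)
```

For example, `pack_pair(1.0, -2.0)` yields `0xC0003F80`: `0x3F80` (1.0) in the low half and `0xC000` (-2.0) in the high half.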

US Pat. No. 11,068,261

METHOD AND SYSTEM FOR DEVELOPING MICROSERVICES AT SCALE

JPMORGAN CHASE BANK, N.A....


1. A method for providing a development accelerator for microservices, the method being implemented by at least one processor, the method comprising:obtaining, by the at least one processor, at least one code set that includes a first plurality of computer program codes representing a framework for developing a plurality of microservices in a network environment, the framework including the development accelerator to facilitate development of the plurality of microservices;
obtaining, by the at least one processor from the network environment, a plurality of runtime routines relating to a second plurality of computer program codes of network functions with respect to the plurality of microservices;
compiling, by the at least one processor in a data package, the plurality of runtime routines, the at least one code set, and at least one instruction set relating to textual directions for developing the plurality of microservices, the at least one instruction set including an assembly manual in a human-readable language with guidance for implementing the at least one code set to develop the plurality of microservices; and
storing, by the at least one processor, the data package in a central repository.

US Pat. No. 11,068,260

CONTAINERIZING SOURCE CODE FOR EXECUTION IN DIFFERENT LANGUAGE USING DRAG-AND-DROP OPERATION

HYPERNET LABS, INC., Red...


1. A method comprising:detecting a command to navigate a user interface to a machine station;
responsive to detecting the command, generating for display using the user interface a station identifier corresponding to the machine station and a drag-and-drop interface;
receiving a source code file by way of a drag-and-drop operation being performed with respect to the drag-and-drop interface;
selecting a machine of the machine station to execute the source code file;
containerizing the source code file based on a language used by the selected machine;
commanding the selected machine to execute the containerized source code file; and
generating for display results of the executed containerized source code file using the user interface.
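The containerizing step above can be illustrated by mapping the dropped file's extension to a language runtime; the base-image names and the Dockerfile recipe are assumptions for illustration only:

```python
from pathlib import Path

# Hypothetical mapping from source-file extension to a base image and
# interpreter; the claimed method derives the language from the
# selected machine.
BASE_IMAGES = {".py": "python:3.12-slim", ".js": "node:20-alpine"}
RUN_COMMANDS = {".py": "python", ".js": "node"}

def containerize(source_path):
    """Emit a minimal Dockerfile wrapping the dropped source file so the
    selected machine can execute it in its own language runtime."""
    ext = Path(source_path).suffix
    if ext not in BASE_IMAGES:
        raise ValueError(f"no container recipe for {ext} files")
    name = Path(source_path).name
    return "\n".join([
        f"FROM {BASE_IMAGES[ext]}",
        f"COPY {source_path} /app/{name}",
        f'CMD ["{RUN_COMMANDS[ext]}", "/app/{name}"]',
    ])
```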

US Pat. No. 11,068,259

MICROSERVICE-BASED DYNAMIC CONTENT RENDERING

T-Mobile USA, Inc., Bell...


1. A computer-implemented method, comprising:executing, at an application server, an application code to generate application web page code for an application web page;
invoking, at the application server, an application framework according to a script in the application code, the application framework including logic components that direct the application server to retrieve feature codes that generate feature presentation codes of features from a Feature as a Service (FaaS) data store and content codes from a Content as a Service (CaaS) data store via corresponding Representational State Transfer (REST)ful web services, and RESTful application program interfaces (APIs) for accessing the corresponding RESTful web services of the FaaS data store and the CaaS data store;
retrieving, via the application server, a feature code of a feature via a corresponding RESTful web service from the FaaS data store according to an additional script in the application code using the application framework;
executing, at the application server, the feature code to generate feature presentation code that is incorporated into the application web page code of the application web page, the feature presentation code dictating a layout of the feature in the application web page;
retrieving, via the application server, content code that is associated with the feature from the CaaS data store via a corresponding RESTful web service according to the additional script in the application code using the application framework;
executing, at the application server, the content code to retrieve specific content for populating the feature from a content data store, and incorporating the specific content with the feature presentation code in the application web page code; and
sending, via the application server, the application web page code to a web browser on a computing device for rendering into the application web page.
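The retrieve-execute-incorporate flow above can be sketched as follows; the in-memory dictionaries stand in for the FaaS and CaaS RESTful web services, and the endpoint names and feature code are hypothetical:

```python
# In-memory stand-ins for the FaaS and CaaS data stores; in the claimed
# method these are reached via RESTful web services and APIs.
FAAS_STORE = {"/faas/banner": "lambda: '<b>{content}</b>'"}
CAAS_STORE = {"/caas/banner": "Welcome back!"}

def render_page(feature):
    feature_code = FAAS_STORE[f"/faas/{feature}"]   # retrieve feature code
    layout = eval(feature_code)()                   # execute -> feature presentation code
    content = CAAS_STORE[f"/caas/{feature}"]        # retrieve associated content
    return layout.format(content=content)           # incorporate into page code
```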

US Pat. No. 11,068,258

ASSEMBLING DATA DELTAS IN CONTROLLERS AND MANAGING INTERDEPENDENCIES BETWEEN SOFTWARE VERSIONS IN CONTROLLERS USING TOOL CHAIN

Aurora Labs Ltd., Tel Av...


1. A non-transitory computer readable medium including instructions that, when executed by at least one processor, cause the at least one processor to perform operations for receiving and integrating a delta file into a first controller to address a security vulnerability, comprising:receiving, at the first controller from a production toolchain, the delta file for addressing the security vulnerability of the first controller, the delta file comprising a plurality of deltas corresponding to a software change for the first controller, at least a portion of the delta file having code:update a program counter of the first controller to execute an instruction contained in the delta file;
extract data from the delta file for storage on the first controller; and
link execution of at least one code segment of the delta file to execution of current controller software on the first controller;

storing the delta file at a first memory location in a single memory of the first controller;
executing the delta file at the first memory location of the first controller; and
updating memory addresses in the first controller to correspond to the plurality of deltas from the delta file while allowing the first controller to execute operations at a second memory location of the first controller.

US Pat. No. 11,068,257

METHOD FOR PROCESSING A SOFTWARE PROJECT

Beckhoff Automation GmbH,...


1. A method for processing a software project comprising a primary code and of a machine code on a first processing station by a first user, comprising the following method steps carried out in order:downloading a first copy of the primary code from a first memory to the first processing station;
modifying the first copy of the primary code;
generating a first program version of the machine code, wherein the first program version of the machine code is generated from the first copy of the primary code;
uploading the first program version of the machine code to a second memory; and

further comprising the following method steps:checking whether, between the downloading of the first copy of the primary code and the uploading of the first program version of the machine code into the second memory, a second program version of the machine code generated from a second copy of the primary code has been uploaded into the second memory;
downloading a modified second copy of the primary code from the first memory if it is determined during checking that a second program version of the machine code has been uploaded into the second memory;
issuing a request to the first user to merge the first copy of the primary code and the second copy of the primary code into a third copy of the primary code; and

further comprising one of the two following method steps:generating a third program version of the machine code from the third copy of the primary code, uploading the third program version of the machine code to the second memory, and automatically uploading the third copy of the primary code into the first memory, triggered by the upload of the third program version of the machine code into the second memory; or
overwriting the second program version of the machine code with the first program version of the machine code, triggered by a response of the first user to the issued request, and automatically uploading the first copy of the primary code into the first memory, triggered by the upload of the first program version of the machine code into the second memory.
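The check-before-upload step above can be sketched with a small workspace model; the class and field names are illustrative, and the merge is delegated to the user-supplied function the claim's request step implies:

```python
# Minimal sketch: if a second program version appeared in the second
# memory while the first copy was being modified, merge before uploading.
class Workspace:
    def __init__(self, primary_memory, machine_memory):
        self.primary_memory = primary_memory
        self.machine_memory = machine_memory
        self.baseline = machine_memory["version"]   # recorded at download time
        self.copy = primary_memory["code"]          # first copy of the primary code

    def upload(self, new_machine_code, merge):
        # Check whether a second program version was uploaded meanwhile.
        if self.machine_memory["version"] != self.baseline:
            merged = merge(self.copy, self.primary_memory["code"])
            self.primary_memory["code"] = merged    # third copy of the primary code
        self.machine_memory["version"] += 1
        self.machine_memory["code"] = new_machine_code
```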

US Pat. No. 11,068,256

SYSTEMS AND METHODS FOR PUSHING FIRMWARE BINARIES USING NESTED MULTI-THREADER OPERATIONS

Bank of Montreal, Toront...


1. A computer-implemented method comprising:during a first parent thread executed by a computer:querying, by the computer, a first on-board administration module of a first server enclosure using a first internet protocol address of the first server enclosure to retrieve a first set of data records containing hardware and firmware information of a first set of blade servers in the first server enclosure;
spawning, by the computer, a first set of child threads corresponding to the first set of blade servers, wherein each child thread of the first set of child threads is nested within the first parent thread;
pushing, by the computer, a firmware upgrade binary for at least a subset of the first set of blade servers using corresponding subset of child threads of the first set of child threads;

during a second parent thread executed by the computer contemporaneously with the first parent thread:querying, by the computer, a second on-board administration module of a second server enclosure using a second internet protocol address of the second server enclosure to retrieve a second set of data records containing hardware and firmware information of a second set of blade servers in the second server enclosure;
spawning, by the computer, a second set of child threads corresponding to the second set of blade servers, wherein each child thread of the second set of child threads is nested within the second parent thread; and
pushing, by the computer, a firmware upgrade binary for at least a subset of the second set of blade servers using corresponding subset of child threads of the second set of child threads.
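The nested parent/child thread layout of this claim can be sketched with thread pools; the enclosure query is simulated here, whereas the claimed method queries each enclosure's on-board administration module over HTTP by IP address:

```python
from concurrent.futures import ThreadPoolExecutor

# Simulated enclosure inventory: IP address -> blade servers.
ENCLOSURES = {
    "10.0.0.1": ["blade-a", "blade-b"],
    "10.0.0.2": ["blade-c"],
}

def push_firmware(blade):
    """Child thread: push the firmware upgrade binary to one blade."""
    return f"{blade}: upgraded"

def process_enclosure(ip):
    """Parent thread: query the enclosure, then spawn one nested child
    thread per blade server to push the binary."""
    blades = ENCLOSURES[ip]
    with ThreadPoolExecutor(max_workers=len(blades)) as children:
        return list(children.map(push_firmware, blades))

def upgrade_all():
    """Run both parent threads contemporaneously."""
    with ThreadPoolExecutor(max_workers=2) as parents:
        return list(parents.map(process_enclosure, ENCLOSURES))
```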


US Pat. No. 11,068,255

PROCESSING SYSTEM, RELATED INTEGRATED CIRCUIT, DEVICE AND METHOD

STMicroelectronics Applic...


1. A processing system comprising:a digital processing unit;
one or more non-volatile memories configured to store a firmware to be executed by the digital processing unit,
a diagnostic circuit configured to execute a self-test operation of the processing system in response to a diagnostic mode enable signal; and
a reset circuit configured to perform a complex reset of the processing system by:generating a first reset of the processing system in response to a given event, wherein the processing system is configured to set the diagnostic mode enable signal in response to the first reset, thereby activating execution of the self-test operation;
generating a second reset of the processing system once the self-test operation has been executed;

wherein the one or more non-volatile memories comprise a first programmable memory area for storing a first updateable firmware image, a second programmable memory area for storing a second updateable firmware image and a third memory area for storing signature data;
wherein the processing system is configured to execute the first firmware image or the second firmware image as a function of a firmware selection signal;
wherein the processing system comprises a signature search circuit configured to generate the firmware selection signal as a function of the signature data; and
wherein the signature search circuit is configured to read the signature data in response to the first reset and to, in response to the second reset, generate the firmware selection signal as a function of the signature data read in response to the first reset.

US Pat. No. 11,068,254

SYSTEMS AND METHODS FOR GENERATING AND MANAGING DOMAIN-BASED TECHNOLOGY ARCHITECTURE

Cigna Intellectual Proper...


1. A domain-based technology deployment and management system, comprising:a plurality of application systems, at least one of the plurality of application systems comprising a system processor and a system memory; and
a technology management server comprising a processor and a memory, wherein the technology management server is in communication with the plurality of application systems, wherein the processor is configured to:receive an architecture definition file created from a prior snapshot of the application systems wherein the architecture definition file identifies a prior system status for each snapshotted application system;
scan the application systems to determine a present system status for each application system;
classify each of the scanned application systems into an associated technology domain using a domain classification algorithm;
identify each scanned application system with a changed system status by comparing the associated prior system status to the associated present system statuses;
obtain a system update for each scanned application system with a changed system status, wherein the system update is obtained based on the technology domain, wherein the system updates define implementation characteristics of each changed scanned application system;
redefine the architecture definition file with the system updates; and
apply the architecture definition file to the application systems to update the application systems based, in part, on the system updates.


US Pat. No. 11,068,253

SOFTWARE UPGRADE AND DOWNGRADE USING GHOST ENTRIES

Hewlett Packard Enterpris...


1. A method to modify a software program, comprising:extracting, from a configuration program file, a future list of one or more future active entries relating to a future version of the software program;
extracting, from the configuration program file, a future list of one or more future ghost entries relating to the future version of the software program;
comparing the lists of one or more future active entries and one or more future ghost entries to a current list of one or more current active entries and a current list of one or more current ghost entries of a current version of the software program; and
performing at least one upgrade or at least one downgrade of the current version of the software program in response to at least one comparison of the current and future lists so as to produce the future version of the software program.
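The list comparison above reduces to set arithmetic over the active and ghost entries; a minimal sketch, with entry names made up for illustration:

```python
# Compare future active/ghost entry lists against current ones to decide
# what an upgrade or downgrade must touch.
def plan_modification(current_active, current_ghost, future_active, future_ghost):
    to_activate = set(future_active) - set(current_active)   # entries to bring in
    to_ghost = set(future_ghost) - set(current_ghost)        # entries to retire
    to_drop = (set(current_active) | set(current_ghost)) \
              - (set(future_active) | set(future_ghost))     # entries gone entirely
    return to_activate, to_ghost, to_drop
```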

US Pat. No. 11,068,252

SYSTEM AND METHOD FOR DYNAMICALLY DETERMINING APPLICABLE UPDATES

Dell Products L.P., Roun...


1. A method for dynamically determining applicable updates for an information handling system, the method comprising:downloading, by a processor, an update package that includes an update installer for updating the information handling system;
retrieving an operating system build number of the information handling system;
parsing a metadata file included in the update package to determine a device group based on the operating system build number, wherein the device group includes the applicable updates for the information handling system;
determining a mode of installation of the applicable updates based on a supported operating system build number of the device group, wherein the device group includes the applicable updates, wherein the applicable updates include a base driver and an extension driver, and wherein the mode of installation includes a declarative componentized hardware (DCH) compliant installation mode and non-DCH compliant installation mode;
determining a sequence of installation of the applicable updates, wherein the sequence of installation includes a staging delay between installation of the base driver and the extension driver; and
installing the applicable updates according to the sequence of installation and the mode of installation.
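The device-group selection and staged installation sequence above can be sketched as follows; the metadata layout, build-number threshold, and delay value are assumptions, not the patented package format:

```python
# Hypothetical parsed metadata: device groups keyed by a minimum OS
# build number, each carrying an installation mode and its updates.
METADATA = {
    "device_groups": [
        {"min_build": 22000, "mode": "DCH",
         "updates": ["base_driver", "extension_driver"]},
        {"min_build": 0, "mode": "non-DCH", "updates": ["base_driver"]},
    ]
}

def select_device_group(os_build):
    """Pick the first device group whose build requirement is met."""
    for group in METADATA["device_groups"]:
        if os_build >= group["min_build"]:
            return group
    raise LookupError("no applicable device group")

def installation_plan(os_build, staging_delay_s=30):
    """Sequence the updates: base driver first, then a staging delay
    before the extension driver, in the group's installation mode."""
    group = select_device_group(os_build)
    return [(update, 0 if i == 0 else staging_delay_s, group["mode"])
            for i, update in enumerate(group["updates"])]
```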

US Pat. No. 11,068,251

METHOD FOR DISTRIBUTING SOFTWARE UPGRADE IN A COMMUNICATION NETWORK

LumenRadio AB, Gothenbur...


1. A method for distributing a software upgrade in a meshed communication network that includes a plurality of nodes, each node is configured to execute a node specific version of a software and is configured to communicate with one or more neighbouring nodes, the method comprising:a) storing, on each node, software upgrade information used to upgrade software to the node specific version of the software executed on each node, the software upgrade information including patch files for all versions of the software previously executed on each node;
b) transmitting from a first node, to the one or more neighbouring nodes, version information representing the node specific version currently executed on the first node;
c) comparing the node specific version currently executed on the first node with the node specific version of the software executed on each of the one or more neighbouring nodes; and
d) receiving, in the first node, software upgrade information from each neighbouring node when the node specific version currently executed on the first node represents an older version of the software executed on the neighbouring node, wherein the software upgrade information includes at least one additional patch file to upgrade the node specific version currently executed on the first node, wherein the at least one additional patch file is a version specific patch file;
d1) upgrading, when software upgrade information is received from the neighbouring node, the software to be executed on the first node based on the received upgrade information and storing the upgrade information on the first node;
wherein the method further comprises selecting, in the first node, a version specific patch file based on the version information, and upgrading the software in step d1) using the version specific patch file.
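Steps b) through d1) above can be sketched with a small node model: an older node receives the version-specific patch files it is missing from a newer neighbour, stores them, and applies them in order. The patch contents here are illustrative:

```python
class Node:
    def __init__(self, version, patches):
        self.version = version
        self.patches = dict(patches)   # version -> version-specific patch file

    def sync_from(self, neighbour):
        """Upgrade this node using the neighbour's patch files when the
        neighbour runs a newer node-specific version of the software."""
        while self.version < neighbour.version:
            next_version = self.version + 1
            patch = neighbour.patches[next_version]   # select version-specific patch
            self.patches[next_version] = patch        # store the upgrade information
            self.version = next_version               # upgrade the executed software
```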

US Pat. No. 11,068,250

CROWDSOURCED API RESOURCE CONSUMPTION INFORMATION FOR INTEGRATED DEVELOPMENT ENVIRONMENTS

MICROSOFT TECHNOLOGY LICE...


1. A first computer system comprising:a processor;
a computer readable storage medium having stored thereon program code that, when executed by the processor, causes the processor to:receive a specification of a target computing device through an integrated development environment (IDE) operating on the first computer system, wherein the IDE comprises a graphical user interface (GUI) comprising a plurality of panels;
receive input referencing an application programming interface (API) call through a first panel in the plurality of panels of the GUI of the IDE operating on the first computer system;
in response to the input, send a second computer system a request for statistics data associated with resource consumption during execution of the API call by a set of source devices, the set of source devices corresponding to the specification of the target computing device,
wherein the API call is compiled to include a first set of instructions for starting measurement of resource consumption before a second set of instructions implementing operations of the API call and a third set of instructions for stopping the measurement of resource consumption after the second set of instructions implementing the operations of the API call;
receive the statistics data associated with the resource consumption during execution of the API call by the set of source devices, wherein the data comprises statistics data based on resource consumption data from the set of source devices, the statistics data comprising data selected from the group consisting of:
(a) a latency between a start of the execution of the API call on the source device and an end of the execution of the API call on the source device,
(b) an amount of processing power consumed by the API call between a start of the execution of the API call on the source device and an end of the execution of the API call on the source device,
(c) an amount of memory consumed by the API call between a start of the execution of the API call on the source device and an end of the execution of the API call on the source device,
(d) an amount of secondary storage utilization consumed by the API call between a start of the execution of the API call on the source device and an end of the execution of the API call on the source device, and
(e) an amount of network bandwidth consumed by the API call between a start of the execution of the API call on the source device and an end of the execution of the API call on the source device; and


present the data through a second panel in the plurality of panels of the GUI of the IDE.
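The compiled-in measurement the claim describes — instructions that start measuring before the API call's operations and stop afterwards — can be sketched with a decorator; the stats record layout is an assumption:

```python
import time
from functools import wraps

STATS = []  # collected resource-consumption records

def measured(api_call):
    """Wrap an API call so measurement starts before its operations
    and stops after them, mirroring the claim's first and third
    instruction sets."""
    @wraps(api_call)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()              # start measurement
        result = api_call(*args, **kwargs)       # operations of the API call
        latency = time.perf_counter() - start    # stop measurement
        STATS.append({"call": api_call.__name__, "latency_s": latency})
        return result
    return wrapper

@measured
def example_api_call(n):
    return sum(range(n))
```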

US Pat. No. 11,068,249

DOWNLOADING AND LAUNCHING AN APP ON A SECOND DEVICE FROM A FIRST DEVICE

Samsung Electronics Co., ...


1. A non-transitory computer-readable medium that includes a program that when executed by a computer performs a method comprising:receiving, at a first application executing on a first device, information indicative of whether a second application is installed on a second device communicatively connected to the first device;
determining whether a first setting is saved if the received information indicates the second application is not installed on the second device, wherein the first setting indicates a previous user selection not to install the second application on the second device;
if the first setting is not saved, providing a first screen for display within the first application, wherein the first screen presents an installation action performable on the second device that involves the second application, and wherein the first setting is saved in response to a user selection, via the first screen, not to install the second application on the second device; and
if the first setting is saved, discovering, by the first application, a third device that can execute the second application, wherein the third device is communicatively connected to the first device and is different from the second device.

US Pat. No. 11,068,248

STAGGERING A STREAM APPLICATION'S DEPLOYMENT

International Business Ma...


1. A computer-implemented method, comprising:staggering a stream application's deployment on one or more computers, by:providing one or more configuration settings that define a plurality of delays for instantiation or initialization of at least one target processing element of the stream application based on the stream application's run-time conditions or events, wherein the one or more configuration settings define a first delay to wait until a first signal is received from one or more other processing elements and a second signal is received from a stream manager, wherein the one or more configuration settings define a second delay to wait until an amount of data processed by the one or more other processing elements exceeds a threshold value, wherein the stream application is represented by an operator graph, and wherein the one or more processing elements of the operator graph are instantiated when the operator graph is executed, and after the initialization is complete, a processing element invokes call-back logic that signals a preceding processing element or data source that the processing element can receive data; and
instantiating or initializing the target processing element of the stream application when the plurality of delays defined by the configuration settings have been satisfied.
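The two delays above combine into a single readiness predicate: the target processing element is only instantiated once the peer and stream-manager signals have arrived and the processed-data threshold is exceeded. A minimal sketch, with the state-field names made up:

```python
def ready_to_instantiate(state, threshold=1000):
    """Both configured delays must be satisfied before the target
    processing element is instantiated or initialized."""
    first_delay = state["peer_signal"] and state["manager_signal"]
    second_delay = state["tuples_processed"] > threshold
    return first_delay and second_delay
```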


US Pat. No. 11,068,247

VECTORIZING CONDITIONAL MIN-MAX SEQUENCE REDUCTION LOOPS

Microsoft Technology Lice...


1. A software compilation process performed by a translator which is a software program executing on computer hardware, the process comprising:the translator receiving a source code which contains a loop;
the translator automatically determining that the loop satisfies the following conditions:
(a) the loop has a loop index, a loop condition which refers to the loop index, and a loop body,
(b) the loop body has an extremum test for identifying an extremum value,
(c) the loop body has an extremum value assignment which is configured to assign the extremum value to an extremum variable when the extremum test is satisfied,
(d) the extremum value assigned is based on the loop index,
(e) the loop body also has an extremum index assignment which is configured to assign a value of an index expression to an extremum index variable when the extremum test is satisfied; and
(f) the index expression is based on the loop index; and
the translator automatically producing a non-empty at least partially vectorized translation of the loop which includes a vectorization of the extremum index assignment and also includes a vectorization of the extremum value assignment, wherein the vectorizations comprise channel data structures in a digital memory, each channel data structure has a respective extremum value and corresponding loop index value, and the producing produces a translation of the loop which comprises wind-down code that is configured to perform extremum value assignment and extremum index assignment at least in part by gathering across the channel data structures one or more candidates for the extremum value and one or more corresponding candidates for the loop index value.
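The vectorized translation above can be modeled in pure Python: the loop is split across "channels" (vector lanes), each channel keeps its own extremum value and corresponding loop index, and wind-down code gathers candidates across the channels. This argmin sketch is an illustration of the transformation, not the compiler's emitted code:

```python
def argmin_vectorized(values, lanes=4):
    """Find the minimum value and its first loop index using per-lane
    channel data structures plus wind-down gathering."""
    channels = [(float("inf"), -1)] * lanes   # (extremum value, loop index)
    for i, v in enumerate(values):
        lane = i % lanes
        if v < channels[lane][0]:             # per-lane extremum test
            channels[lane] = (v, i)           # extremum value + index assignment
    # Wind-down: gather candidates across the channel data structures;
    # ties on value resolve to the smallest loop index, matching the
    # scalar loop's strict '<' test.
    return min((c for c in channels if c[1] >= 0),
               key=lambda c: (c[0], c[1]))
```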

US Pat. No. 11,068,246

CONTROL FLOW GRAPH ANALYSIS

INTERNATIONAL BUSINESS MA...


1. A computer executable method for analyzing a control flow graph by an abstract interpretation of a program comprising:extracting a current program state of the program from a program state buffer, the current program state including a program counter and a virtual address register, the virtual address register including a branch instruction address to one or more branch targets;
generating an edge of a control flow graph from the branch instruction address to each of the one or more branch targets of the virtual address register;
adding a new program state to the program state buffer, having one of the one or more branch targets as a new program counter and having a virtual address of the current program state as a new virtual address register; and
assigning a visit flag to the program counter of the current program state.
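The buffer-driven construction above can be sketched with a worklist: program states are popped from a buffer, an edge is generated from the branch instruction address to each branch target, new states are pushed, and visited program counters are flagged. The compact program encoding (a map from branch address to targets) is an assumption:

```python
from collections import deque

def build_cfg(branches, entry):
    """branches maps a branch instruction address to its target list."""
    edges, visited = set(), set()
    buffer = deque([entry])                 # program state buffer
    while buffer:
        pc = buffer.popleft()               # extract current program state
        if pc in visited:
            continue
        visited.add(pc)                     # assign the visit flag
        for target in branches.get(pc, []):
            edges.add((pc, target))         # edge: branch address -> target
            buffer.append(target)           # add new program state
    return edges
```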

US Pat. No. 11,068,245

CONTAINERIZED DEPLOYMENT OF MICROSERVICES BASED ON MONOLITHIC LEGACY APPLICATIONS

LZLABS GMBH, Zurich (CH)...


1. A scalable container-based system implemented in computer instructions stored in a non-transitory medium, the system comprising:a source code repository containing the source code of a monolithic legacy application containing a plurality of programs executable in a legacy computing environment to perform a plurality of transactions;
a source code analyzer operable to parse the source code and to identify, for each transaction in the plurality of transactions, a transaction definition vector identifying each program potentially called during the transaction, to create a plurality of transaction definition vectors;
a transaction state definition repository operable to store the plurality of transaction definition vectors;
an activity log analyzer operable to create a dynamic definition repository identifying which programs are actually used by the monolithic legacy application in performing in at least a subset of the plurality of transactions by creating a plurality of dynamic definition vectors that correspond to at least a portion of the plurality of transaction definition vectors;
a microservice definition optimizer operable to compare the plurality of transaction definition vectors to the dynamic definition repository by comparing one or more of the plurality of dynamic definition vectors to a corresponding transaction definition vector and remove unused programs from the transaction definition vectors to create a plurality of microservice definition vectors defining a plurality of microservices;
a microservice image builder operable to, for each microservice definition vector of the plurality of microservice definition vectors, locate for each program identified by the microservice definition vector compiled source code binaries compiled to run in the legacy computing environment to form a plurality of microservice images corresponding to the microservice definition vectors;
a microservice image repository operable to store the plurality of microservice images;
a complementary component repository operable to store a set of binary images of emulator elements of a legacy emulator that, together, are less than a complete legacy emulator, said images corresponding to a plurality of functions or sets of functions of said legacy computing environment, and said images executable in a distinct computer environment characterized by an instruction set distinct from the instruction set of the legacy environment;
a container builder operable to form a container image for each microservice or a set of microservices in the plurality of microservices using the corresponding microservice image or images from the microservice image repository and using image files from the complementary component repository for the emulator elements of the legacy emulator corresponding to functions or sets of functions employed by the microservice or set of microservices when executed, as identified by signatures of calls in the binaries in the microservice or set of microservices, to create a plurality of container images;
a container image repository operable to store the plurality of container images executable in the distinct computing environment; and
a container management system operable to create at least one container for execution in the distinct computing environment and to run at least one microservice stored in the container image repository in the at least one container.
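The microservice definition optimizer above intersects each transaction definition vector with the programs the activity log shows were actually used; a minimal sketch, with transaction and program names made up:

```python
def optimize_definitions(transaction_defs, dynamic_defs):
    """Remove unused programs from each transaction definition vector to
    form the microservice definition vectors."""
    microservice_defs = {}
    for txn, programs in transaction_defs.items():
        used = dynamic_defs.get(txn, set())   # dynamic definition vector
        microservice_defs[txn] = [p for p in programs if p in used]
    return microservice_defs
```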

US Pat. No. 11,068,244

OPTIMIZED TRANSPILATION

salesforce.com, inc., Sa...


1. An apparatus, comprising: a processing device; and a memory device coupled to the processing device, the memory device having instructions stored thereon that, in response to execution by the processing device, cause the processing device to:parse input source code and generate a tree representing the input source code;
optimize the tree by recursively traversing the tree starting with a root node of the tree as a current node;
if the current node represents a reusable sub-tree already encountered during traversal, replace the current node with a first leaf node assigned to a variable; and
if the current node does not represent a reusable sub-tree already encountered during traversal, and the current node has already been encountered during traversal, assign a new variable, replace the current node with a second leaf node referencing the new variable, and replace a previous instance of the current node with a third leaf node referencing the new variable; and
transpile the optimized tree to generate output source code.
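The optimization this claim describes is common-subexpression elimination on the parse tree: repeated sub-trees are replaced by leaf nodes referencing a shared variable. A simplified two-pass sketch, with tuples standing in for tree nodes and the variable naming scheme assumed:

```python
def optimize(tree):
    """Count each sub-tree, then replace every occurrence of a repeated
    sub-tree with a leaf referencing a fresh variable, returning the
    optimized tree and the variable definitions."""
    counts = {}
    def count(node):
        if isinstance(node, tuple):
            counts[node] = counts.get(node, 0) + 1
            for child in node[1:]:
                count(child)
    count(tree)

    names = {}
    def rewrite(node):
        if not isinstance(node, tuple):
            return node                         # plain leaf
        if counts[node] > 1:                    # reusable sub-tree
            if node not in names:
                names[node] = f"t{len(names)}"
            return ("var", names[node])         # leaf referencing the variable
        return (node[0],) + tuple(rewrite(c) for c in node[1:])

    optimized = rewrite(tree)
    defs = {name: (sub[0],) + tuple(rewrite(c) for c in sub[1:])
            for sub, name in names.items()}
    return optimized, defs
```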

US Pat. No. 11,068,243

APPLICATION STACK BUILDER BASED ON NODE FEATURES

RED HAT, INC., Raleigh, ...


1. A method, comprising:determining a set of node features of a node executable on a computer system, wherein a node feature of the set of node features specifies a first hardware component that is abstracted by the node;
determining application dependencies of an application;
creating a builder image on the node, the builder image being based on the application, a combination of the application dependencies of the application, and the set of node features, wherein the application and the application dependencies are compatible with the first hardware component;
determining a set of optimized libraries corresponding to the combination of application dependencies and to the node;
creating, based on the builder image and the set of optimized libraries, an application runtime container, wherein the application runtime container has a set of kernel features that supports the first hardware component; and
running the application and the set of optimized libraries in the application runtime container.

US Pat. No. 11,068,242

METHOD AND SYSTEM FOR GENERATING AND EXECUTING CLIENT/SERVER APPLICATIONS

Naver Corporation


1. A method for generating an application for use by an end user, the end user being associated with at least one of a first device and a second device where the at least one of the first device and the second device possess respective functional capability sets, comprising:producing an application behavior model (ABM), the ABM including information regarding organization of the application and actions available for request by the end user;
storing the ABM in an application server, the application server including an application execution engine;
the application server communicating remotely with the at least one of the first device and the second device to which the end user is associated, wherein the ABM is made available to the at least one of the first device and the second device to which the end user is associated when the application is to be executed;
producing instructions for modelling one or more aspects of generating a client application, said producing of the instructions for modelling including providing a Generation Model output (GMo) where the GMo corresponds with the at least one of the first device and the second device to which the end user is associated;
using the GMo to select an application template from a repository including a first application template and a second application template, the first and second application templates corresponding respectively with the first and second devices, wherein the functional capability set of the first device differs from the functional capability set of the second device, and wherein the first application template is configured to accommodate the functional capability set of the first device and the second application template is configured to accommodate the functional capability set of the second device;
instantiating the selected one of the first and second application templates to produce program code, the program code corresponding with a client application;
wherein the program code of the client application, which is launchable at the at least one of the first device and the second device to which the end user is associated, permits selected input to be communicated from the at least one of the first device and the second device to which the end user is associated to the application server in accordance with the ABM;
and wherein, responsive to receiving the selected input from the at least one of the first device and second device to which the end user is associated, a sequence of processing steps is performed by the application execution engine.

US Pat. No. 11,068,241

GUIDED DEFINITION OF AN APPLICATION PROGRAMMING INTERFACE ACTION FOR A WORKFLOW

ServiceNow, Inc., Santa ...


1. A system for building workflows, the system comprising:one or more hardware processors; and
a non-transitory memory storing instructions that, when executed by the one or more hardware processors, cause the one or more hardware processors to perform operations comprising:receiving a specification of an application programming interface (API), wherein the specification defines a function of the API, an input to the function, an output from the function, and a uniform resource locator (URL) that addresses the API;
receiving, via a graphical user interface (GUI), a first selection defining a first action that invokes the function of the API, an input to the first action, and an output from the first action;
generating a first mapping between the input to the first action and the input to the function based on the specification;
generating a second mapping between the output from the function and the output from the first action based on the specification;
receiving, via the GUI, a second selection defining a second action and an input to the second action;
generating an association between the output from the first action and the input to the second action; and
generating a workflow that comprises the first mapping and the second mapping as part of the workflow, wherein the function of the API is invokable by a user without the user referencing script associated with the API, wherein a value of the output of the first action is passed from the first action to the second action during execution of the workflow.
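The chaining recited above — the first action's output passed to the second action's input via generated mappings — can be sketched as a toy execution engine; `Action`, `input_map`, and the mapping format are illustrative assumptions, not the patented implementation:

```python
from dataclasses import dataclass, field
from typing import Callable, Dict

@dataclass
class Action:
    name: str
    func: Callable
    # Generated mapping: parameter name -> name of the action whose
    # output supplies the value.
    input_map: Dict[str, str] = field(default_factory=dict)

def run_workflow(actions):
    """Run actions in order, wiring each action's inputs to the recorded
    outputs of earlier actions via its input_map."""
    outputs = {}
    for a in actions:
        kwargs = {param: outputs[src] for param, src in a.input_map.items()}
        outputs[a.name] = a.func(**kwargs)
    return outputs
```

For example, an action invoking an API function and a second action consuming its output would be wired as `Action("double", f, {"x": "fetch"})`, without the workflow author touching any API script.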


US Pat. No. 11,068,240

APERIODIC PSEUDO-RANDOM NUMBER GENERATOR USING BIG PSEUDO-RANDOM NUMBERS


1. A computer-implemented method of generating aperiodic pseudo-random numbers, the method comprising:using a pseudo-random number generator module to generate a first sequence of large pseudo-random numbers, where for each large pseudo-random number in the first sequence of large pseudo-random numbers the pseudo-random number generator module (a) multiplies a first constant with a second constant and with a third constant to calculate a seed number, (b) sets the first constant as a current constant, and (c) for each large pseudo-random number multiplies the seed number with the current constant and sets (i) the current constant as the next constant in an ordered set of constants containing the first constant, the second constant, and the third constant, and (ii) the pseudo-random number as the seed number; and
using a module for splitting big numbers output by the pseudo-random number generator module for (d) creating a second sequence of pseudo-random numbers by (i) selectively splitting each large pseudo-random number in the first sequence of large pseudo-random numbers into a plurality of groups of digits of the large pseudo-random number, and (ii) associating each group of digits of each large pseudo-random number in the first sequence of large pseudo-random numbers with a pseudo-random number in the second sequence of pseudo-random numbers, and (e) outputting the second sequence of pseudo-random numbers.
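Steps (a)–(c) and the digit-splitting of steps (d)–(e) can be sketched directly; the constants, the fixed-width grouping, and the function names are illustrative assumptions:

```python
from itertools import cycle

def big_prng(c1, c2, c3, count):
    """Steps (a)-(c): seed = c1*c2*c3, then repeatedly multiply the seed
    by the constants taken in rotation; each product is both an output
    and the next seed."""
    seed = c1 * c2 * c3
    consts = cycle((c1, c2, c3))   # ordered set of constants, cycled
    out = []
    for _ in range(count):
        seed = seed * next(consts)
        out.append(seed)
    return out

def split_digits(big_numbers, group_len):
    """Steps (d)-(e): split each large number into fixed-width digit
    groups to form the second (smaller) pseudo-random sequence."""
    small = []
    for n in big_numbers:
        s = str(n)
        small.extend(int(s[i:i + group_len])
                     for i in range(0, len(s), group_len))
    return small
```

With tiny constants (3, 5, 7) the first sequence begins 315, 1575, …; splitting 1575 into two-digit groups yields 15 and 75.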

US Pat. No. 11,068,239

CURVE FUNCTION DEVICE AND OPERATION METHOD THEREOF

NEUCHIPS CORPORATION, Hs...


1. A curve function device configured to calculate an approximate value of a curve function by using an input value, the curve function device comprising:a lookup table having at least one bias value field;
a weight calculation circuit extracting a bias value of a current segment and a bias value of a next segment from the bias value field of the lookup table according to first partial bits of the input value and calculating a weight value of the current segment according to the bias value of the current segment and the bias value of the next segment; and
a linear function circuit coupled to the weight calculation circuit to receive the weight value of the current segment, wherein the linear function circuit extracts the bias value of the current segment from the bias value field of the lookup table according to the first partial bits of the input value, and the linear function circuit calculates a linear function value as the approximate value of the curve function by using the bias value of the current segment, the weight value of the current segment, and second partial bits of the input value.
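The device described above amounts to piecewise-linear interpolation from a bias-value table: the first (high) partial bits of the input select a segment, the weight is derived from the biases of the current and next segments, and the second (low) partial bits interpolate within the segment. A software sketch under an assumed table layout and fixed-point split (names illustrative):

```python
def curve_approx(x, bias, frac_bits):
    """Approximate a curve at integer input x using a bias table.

    bias      -- lookup-table bias values, one per segment boundary
    frac_bits -- number of low (second partial) bits of x
    """
    seg = x >> frac_bits                  # first partial bits -> segment index
    frac = x & ((1 << frac_bits) - 1)     # second partial bits
    weight = bias[seg + 1] - bias[seg]    # weight of the current segment
    # Linear function: bias + weight * (fractional position in segment)
    return bias[seg] + weight * frac / (1 << frac_bits)
```

For a table [0, 10, 40, 90, 160] approximating x², the input 40 (segment 2, fraction 8/16) interpolates to 65.0, halfway between the biases 40 and 90.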

US Pat. No. 11,068,238

MULTIPLIER CIRCUIT

Arm Limited, Cambridge (...


1. A multiplier circuit comprising:a carry-save adder (CSA) network comprising a plurality of carry-save adders to perform partial product additions to reduce a plurality of partial products to a redundant result value represented using a carry-save representation, the CSA network comprising:a first stage of carry-save adders to perform a first subset of the partial product additions using selected portions of the partial products to generate a plurality of sub-products; and
at least one further stage of carry-save adders to perform a further subset of the partial product additions using the plurality of sub-products generated by the first stage and remaining portions of the partial products, to generate the redundant result value;

sub-product processing circuitry to apply a processing function to the plurality of sub-products generated by the first stage of carry-save adders, to generate processed sub-products represented using the carry-save representation, said processing function comprising at least one operation other than addition; and
input control circuitry to inject the processed sub-products as inputs to a subset of carry-save adders of said at least one further stage, to provide a sum-of-processed-sub-products mode in which the redundant result value generated by said at least one further stage represents a sum of the processed sub-products generated by the sub-product processing circuitry.
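A carry-save adder compresses three operands into a redundant (sum, carry) pair — the carry-save representation of the claim — without propagating carries; chaining CSAs reduces the partial products of a multiply to a redundant result that a single final carry-propagate add resolves. A bit-level sketch of that reduction (the claimed sub-product processing stage and injection circuitry are omitted; names are illustrative):

```python
def csa(a, b, c):
    """One carry-save adder: a + b + c == s + 2*carry, computed
    bitwise with no carry propagation between bit positions."""
    s = a ^ b ^ c
    carry = (a & b) | (a & c) | (b & c)
    return s, carry

def multiply_csa(x, y, width=8):
    """Reduce the partial products of x*y through a chain of CSAs,
    keeping the running total in redundant (s, c) form, then resolve
    it with one final carry-propagate addition."""
    pps = [(x << i) if (y >> i) & 1 else 0 for i in range(width)]
    s, c = 0, 0
    for pp in pps:                 # CSA network (here a linear chain)
        s, c = csa(s, c << 1, pp)  # invariant: s + 2*c == sum so far
    return s + (c << 1)            # final carry-propagate add
```

Real multipliers arrange the CSAs as a tree (e.g. Wallace or Dadda) rather than a chain, but the redundant-representation invariant is the same.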